Compositional Pattern-Producing Networks (CPPNs) are networks that, among other things, can encode images at (theoretically) infinite resolution. While other generative networks output an entire image at once, a CPPN takes in individual $x$ and $y$ coordinates and outputs one pixel at a time. In this demo, a CPPN is used to generate random images. The architecture of the CPPN is as follows:

- An input layer takes in a random latent vector with values in the range $[-1, 1]$, as well as $x$ and $y$ coordinates of the pixel and a radius $r = \sqrt{x^2 + y^2}$.
- The inputs then flow through several hidden layers with tanh activations.
- Finally, there is an output layer of size 1 (for greyscale images) or size 3 (for RGB images) with sigmoid activation (this ensures all outputs are in the range $[0, 1]$).
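The architecture above can be sketched in plain JavaScript (without TensorFlow.js, so it runs anywhere). This is a minimal illustration, not the demo's actual implementation: the latent dimension, hidden width, and number of hidden layers here are arbitrary choices, and the helper names (`randn`, `denseLayer`, `pixel`) are my own.

```javascript
const LATENT_DIM = 8; // hypothetical latent size
const HIDDEN = 32;    // hypothetical hidden-layer width
const CHANNELS = 1;   // 1 for greyscale, 3 for RGB

// Sample from a unit normal distribution via the Box-Muller transform.
function randn() {
  const u = 1 - Math.random(); // avoid log(0)
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// A dense layer whose weights and biases are drawn from N(0, 1).
function denseLayer(inSize, outSize) {
  return {
    w: Array.from({ length: outSize }, () =>
      Array.from({ length: inSize }, randn)),
    b: Array.from({ length: outSize }, randn),
  };
}

// Apply one dense layer: activation(W * input + b).
function apply(layer, input, activation) {
  return layer.w.map((row, i) =>
    activation(row.reduce((s, wij, j) => s + wij * input[j], layer.b[i])));
}

const sigmoid = (t) => 1 / (1 + Math.exp(-t));

// Input is [z..., x, y, r]; two tanh hidden layers; sigmoid output.
const layers = [
  denseLayer(LATENT_DIM + 3, HIDDEN),
  denseLayer(HIDDEN, HIDDEN),
  denseLayer(HIDDEN, CHANNELS),
];

// Evaluate one pixel at coordinates (x, y) for a fixed latent vector z.
function pixel(z, x, y) {
  const r = Math.sqrt(x * x + y * y);
  let h = [...z, x, y, r];
  h = apply(layers[0], h, Math.tanh);
  h = apply(layers[1], h, Math.tanh);
  return apply(layers[2], h, sigmoid); // each channel lands in [0, 1]
}
```

Calling `pixel(z, x, y)` repeatedly over a coordinate grid with the same `z` produces one image; rebuilding `layers` (as the `Restart` button does with the real network) produces a different one.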

All weights are initially drawn from a unit normal distribution. When you press the `Restart` button, these weights will be re-initialized. To get the final image, pixel values are calculated on a grid of `width` uniformly spaced coordinates in the $x$ direction and `height` uniformly spaced coordinates in the $y$ direction.
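The grid construction can be sketched as follows. The function name is my own, and the coordinate range $[-1, 1]$ is an assumption (the post does not state the range the demo actually uses):

```javascript
// Build a list of width x height uniformly spaced (x, y) coordinates,
// assuming each axis spans [-1, 1].
function coordinateGrid(width, height) {
  const coords = [];
  for (let i = 0; i < height; i++) {
    const y = height > 1 ? -1 + (2 * i) / (height - 1) : 0;
    for (let j = 0; j < width; j++) {
      const x = width > 1 ? -1 + (2 * j) / (width - 1) : 0;
      coords.push([x, y]); // each pair is fed to the CPPN one at a time
    }
  }
  return coords;
}
```

Each coordinate pair is then passed through the network independently, and the resulting pixel values are assembled into the final image.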

For more information about CPPNs, I highly recommend David Ha's blog post. This demo is based on that blog post and its corresponding demo. The network in this demo was implemented with TensorFlow.js. The source code is located in the following places: