Compositional Pattern-Producing Networks (CPPNs) are networks that, among other things, can be used to encode images at (theoretically) infinite resolution. While other generative networks output all pixels of an image simultaneously, a CPPN takes a single $(x, y)$ coordinate as input and outputs one pixel at a time. This demo generates random images with a CPPN. The CPPN architecture is as follows:

- An input layer takes in a random latent vector with values in the range $[-1, 1]$, as well as $x$ and $y$ coordinates of the pixel and a radius $r = \sqrt{x^2 + y^2}$.
- The inputs flow through several fully connected layers with tanh activations.
- Finally, there is an output layer of size 1 (for greyscale images) or size 3 (for RGB images) with sigmoid activation (this ensures all outputs are in the range $[0, 1]$).
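The architecture above can be sketched in a few lines. This is a minimal NumPy illustration, not the demo's actual TensorFlow.js code; the latent size, hidden width, and layer count below are made-up placeholders (the demo's real values are not specified here).

```python
import numpy as np

# Hypothetical sizes -- the demo's actual dimensions may differ.
LATENT_DIM = 8   # length of the random latent vector
HIDDEN = 32      # width of each fully connected hidden layer
N_LAYERS = 3     # number of hidden layers
CHANNELS = 1     # 1 for greyscale, 3 for RGB

rng = np.random.default_rng(0)

def init_weights():
    """Draw all weights from a unit normal distribution, as in the demo."""
    dims = [LATENT_DIM + 3] + [HIDDEN] * N_LAYERS + [CHANNELS]
    return [rng.standard_normal((dims[i], dims[i + 1])) for i in range(len(dims) - 1)]

def cppn(weights, z, x, y):
    """Map one (x, y) coordinate, plus latent z and radius r, to one pixel."""
    r = np.sqrt(x ** 2 + y ** 2)
    h = np.concatenate([z, [x, y, r]])     # input layer: latent + x, y, r
    for w in weights[:-1]:
        h = np.tanh(h @ w)                 # hidden layers with tanh activations
    return 1.0 / (1.0 + np.exp(-(h @ weights[-1])))  # sigmoid output in [0, 1]
```

Note that biases are omitted for brevity; a full implementation would typically include them in each layer.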

All weights are initially drawn from a unit normal distribution. When you press the `Restart` button, these weights are re-initialized. To produce the final image, pixel values are computed on a grid of `width` uniformly spaced coordinates in the $x$ direction and `height` uniformly spaced coordinates in the $y$ direction.
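The grid evaluation can be sketched as follows. This assumes coordinates are scaled to $[-1, 1]$ in each direction, which is a common choice (the demo's exact scaling may differ); `pixel_fn` stands in for the per-coordinate CPPN call.

```python
import numpy as np

def render(pixel_fn, width, height):
    """Evaluate pixel_fn(x, y) on a uniformly spaced width x height grid.

    Coordinates are assumed to span [-1, 1]; pixel_fn returns a greyscale
    value in [0, 1] for a single coordinate, as a CPPN would.
    """
    xs = np.linspace(-1.0, 1.0, width)
    ys = np.linspace(-1.0, 1.0, height)
    img = np.zeros((height, width))
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            img[j, i] = pixel_fn(x, y)
    return img

# Usage with a dummy pixel function standing in for the CPPN:
image = render(lambda x, y: 0.5 * (np.tanh(x + y) + 1.0), width=64, height=48)
```

In practice (and in the demo), the per-pixel loop would be vectorized: all $(x, y, r)$ inputs are stacked into one batch and pushed through the network in a single forward pass.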

For more information about CPPNs, I recommend David Ha’s blog post. This demo is based on that post and its accompanying demo. The network here is implemented with TensorFlow.js, and the implementation is contained in the HTML for this page (i.e. you can view it with “inspect element”).