Mozzacella Automato Salad

Generating Random Numbers with Cellular Automata
By Martijn van Beest, Jaro Camphuijsen & Rahiel Kasim

Why care about Random Numbers?

Random numbers are an essential ingredient of all computer programs involving probabilities. They are used in stochastic simulations like the Monte Carlo method, to implement randomized algorithms, and even in cryptography, where they secure online communication with, for example, your bank. The most generic way to generate random numbers is to measure a truly random physical process like radioactive decay or thermal noise. We cannot, however, always depend on such sources of randomness, and sometimes you want to be able to reproduce your sequence of random numbers. For these reasons, and because of their speed, we will look at pseudo-random number generators (PRNGs). PRNGs are not completely random like physical RNGs: someone with a computer that can run exponential-time algorithms could distinguish a stream of numbers produced by a PRNG from one produced by a true RNG. Because such computers do not exist (yet?), good PRNGs are indistinguishable from real RNGs in practice.

It is, however, not easy to construct a good PRNG. Not only does it need a truly random seed to start from, it is also not trivial to find an algorithm that derives a sequence of random numbers from that seed. Malfunctioning PRNGs lead to spurious results: they can invalidate the statistics of your experiment by introducing a bias, or destroy the security guarantees of cryptography by being easily predictable. It is thus of the utmost importance that the random numbers in use are actually random. Cellular automata (CA) have been used as random number generators before, and the full rule space of the elementary CAs has been explored. We think we can improve on these results, and we also explore rule spaces beyond the elementary CAs.

Cellular Automata

A cellular automaton is a discrete model where each cell can be in one of \(k\) states and evolves according to a fixed rule that maps each cell and its \(2dr\) closest neighbours to a new state. CAs can be defined in an arbitrary number of dimensions \(d\). A CA rule consists of \(k^{2 d r + 1}\) sub-rules, one for each possible neighbourhood of length \(2 d r + 1\). Since each sub-rule can map to any of the \(k\) states, the number of possible rules is $$ n_{rules} = k^{k^{2 d r + 1}}. $$

Stephen Wolfram developed a naming system for CA rules: sort the sub-rules by the base-\(k\) representation of their in-states; the out-states, read in that order, form a base-\(k\) number whose decimal value is the rule's name. Below you can find a visualization of a one-dimensional CA where you can choose the number of states \(k\), the radius \(r\) and the rule number.
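As a concrete sketch of this naming scheme (in Python; the function names and representation are ours, not the project's code), a Wolfram code can be decoded into a lookup table of out-states and used to step a one-dimensional CA with periodic boundaries:

```python
def make_rule(code, k=2, r=1):
    """Decode a Wolfram code into a lookup table: table[i] is the
    out-state for the neighbourhood whose base-k value is i."""
    table = []
    for _ in range(k ** (2 * r + 1)):
        table.append(code % k)
        code //= k
    return table

def step(cells, table, k=2, r=1):
    """Advance a 1D CA one time step with periodic boundaries."""
    n = len(cells)
    new = []
    for i in range(n):
        # Read the neighbourhood as a base-k number, leftmost cell first.
        value = 0
        for j in range(i - r, i + r + 1):
            value = value * k + cells[j % n]
        new.append(table[value])
    return new
```

For the elementary CAs (\(k = 2\), \(r = 1\)), `make_rule(30)` yields the eight out-states of rule 30, and repeatedly applying `step` to a row with a single 1 produces the familiar rule 30 triangle.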

[Interactive visualization: a one-dimensional cellular automaton with adjustable number of states \(k\), radius \(r\) and Wolfram code]

Wolfram also conducted a systematic study of a specific kind of one-dimensional cellular automaton with radius \(r = 1\) and \(k = 2\) states, called the elementary CAs. By noting how some rules end up in a loop over a finite set of states or in one single state, while others show chaotic or complex behaviour, he classified the rules into four classes according to their behaviour.


The hypothesis for this project is that non-elementary CAs are better suited to random number generation than the elementary ones. A supporting argument is that we expect more chaos from a CA when we introduce more degrees of freedom.

Random numbers from CAs

The existence of Wolfram classes 3 and 4 (chaotic and complex, respectively) indicates that we can use cellular automata to generate random numbers. Generating a bit from a CA with \(k = 2\) is easy: select one cell and use the new state it takes on at each time step as the next bit. When we want to choose a random integer in a given range we need to make sure we take enough bits, but we also need to keep the sample space uniform when mapping the collected states to an integer.

In the case of \(k = 3\) we generate trits. An integer between 0 and 8 fits exactly in two trits, but an interval of size 10 does not, so we would have to sample three trits. We then have 27 possible values and need to decide what to do with the 17 values of 10 and above. We could take the ternary number modulo 10 (the interval size), but then the numbers 0 to 6 would be over-represented. We could instead throw away every number of 10 and above and generate a new one, but then we would discard almost two thirds of the numbers we find, which is very wasteful.
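A middle ground between the two options, assuming a `next_trit()` source of trits (the helper below is our own sketch), is to reject only the values above the largest multiple of the interval size and then reduce modulo that size; for an interval of 10 this discards 7 of the 27 draws instead of 17, while staying uniform:

```python
def random_int(next_trit, n):
    """Draw a uniform integer in [0, n) from a trit stream.
    next_trit() is assumed to return 0, 1 or 2."""
    # Smallest number of trits whose 3**m values cover [0, n).
    m = 1
    while 3 ** m < n:
        m += 1
    limit = (3 ** m // n) * n  # largest multiple of n that fits
    while True:
        value = 0
        for _ in range(m):
            value = value * 3 + next_trit()
        if value < limit:      # accept: value % n is uniform on [0, n)
            return value % n
        # otherwise discard the draw and sample fresh trits
```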

There are of course other, more advanced methods to generate numbers from the CA states, but the ones above are easy and fast in most cases. Wolfram proposed that, among the elementary CAs, rule 30 is a proper random number generator. Later research by Coe, Ahnert and Fink [2], who searched through all elementary CA rules, confirmed this and produced a list of elementary CA rules that generate random numbers.

To test our random numbers we needed a byte stream, so we had to convert our series of bits, trits or digits of other bases to bytes. We did this by gathering enough digits to cover a byte (8 bits, 6 trits, etc.) and taking the resulting base-\(k\) number modulo 256. If the number was, however, greater than or equal to \(n \times 256\), with \(n\) the number of times 256 fully fits in the range of possible values, it was discarded and a new number was generated; this keeps the bytes uniformly distributed. Rule 30 was also used as the default PRNG in Mathematica versions prior to 6.0 [4]. The authors of [2] used a CA of size 261; we will use the same size for the 1D CAs we test.
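The byte-conversion scheme can be sketched as follows (our own minimal Python version; `to_bytes` and its argument names are not from the original code):

```python
def to_bytes(digits, k):
    """Convert a stream of base-k digits into bytes: gather enough
    digits to cover a byte, accept the value only if it lies below
    the largest multiple of 256 that fits, then reduce modulo 256."""
    # Smallest m with k**m >= 256 (8 bits, 6 trits, ...).
    m = 1
    while k ** m < 256:
        m += 1
    limit = (k ** m // 256) * 256
    out = []
    value, count = 0, 0
    for d in digits:
        value = value * k + d
        count += 1
        if count == m:
            if value < limit:   # accept; otherwise discard and retry
                out.append(value % 256)
            value, count = 0, 0
    return out
```

For \(k = 2\) every group of 8 bits is accepted; for \(k = 3\) six trits give 729 values, of which those at or above 512 are discarded.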

Rule 30 Generating Bits


Randomness tests

Randomness can be defined in various ways; one definition does not necessarily imply another, and different kinds of randomness suit different purposes. For example, when we run a Monte Carlo simulation we want the random numbers to be evenly spread over the domain, whereas true random numbers often show large gaps. For cryptography, on the other hand, we need random numbers whose generating algorithm cannot easily be derived, but which can be reproduced if we know the algorithm and the seed. We used various measures, all bundled in the program Ent [3]. Ent takes a byte stream as input, which is why we wanted to produce random bytes.

Langton proposed a way to order CA rules by a parameter derived from the base-\(k\) representation of the rule number: the fraction of sub-rules that do not map to an arbitrarily chosen quiescent state. With \(n_q\) the number of sub-rules mapping to the quiescent state, $$\lambda = \frac{k^{2r+1} - n_q }{k^{2r+1}}. $$ This lambda parameter seems to be a nice measure of the complexity of a rule and we will use it to sort our results. We traversed the full rule space of the elementary CAs and performed numerous statistical tests on each rule. The results of these tests are plotted against the Langton parameter, with the set we selected as random enough shown in red.
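Computed directly from the rule number, the parameter looks like this for a 1D CA (a sketch; we take state 0 as the quiescent state):

```python
def langton_lambda(code, k, r):
    """Fraction of sub-rules whose out-state is not the quiescent
    state (state 0), read off the base-k digits of the rule number."""
    n_subrules = k ** (2 * r + 1)
    n_quiescent = 0
    for _ in range(n_subrules):
        if code % k == 0:
            n_quiescent += 1
        code //= k
    return (n_subrules - n_quiescent) / n_subrules
```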


A first, easy measure of randomness is the mean bit. A random bit sequence should have a mean bit value of \(0.5\), since we want equal probability for the 0 and 1 states. If the mean deviates from \(0.5\), one of the two states is consistently over-represented. In the image below we see the results for all 256 elementary CA rules.
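The test itself is tiny (the function names are ours); the deviation from \(0.5\) is what the plot shows:

```python
def mean_bit(bits):
    """Mean value of a bit sequence; ideally 0.5 for a random stream."""
    return sum(bits) / len(bits)

def mean_bit_deviation(bits):
    """Absolute deviation of the mean from the ideal 0.5."""
    return abs(mean_bit(bits) - 0.5)
```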

Mean against the Langton parameter

Monte-Carlo Pi

A second measure is the stochastic determination of the number \(\pi\): samples are placed randomly in a square domain and we check whether each falls within a circle centred on the square with a radius of half the side length. The fraction of hits estimates the area ratio \(\pi/4\), from which \(\pi\) is calculated.
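A sketch of this estimator (our own; it samples the unit square against the inscribed quarter circle, which gives the same area ratio), parameterized over any `rand()` that returns uniform floats in [0, 1):

```python
import random

def monte_carlo_pi(rand, n_samples):
    """Estimate pi by sampling points in the unit square and counting
    hits inside the quarter circle of radius 1 (area ratio pi/4, the
    same as a circle inscribed in a square)."""
    hits = 0
    for _ in range(n_samples):
        x, y = rand(), rand()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n_samples

# Example with Python's built-in PRNG as the source:
# monte_carlo_pi(random.random, 100_000)
```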

Monte-Carlo pi against the Langton parameter


The entropy of a rule is defined as the average number of bits of information per character it creates; it tells us how much the produced byte stream could be compressed. For a normalized measure of the entropy between 0 and 1 we divide by 8 (bits per byte). The results for the 256 elementary CA rules are plotted below. All of our selected random rules have an entropy of nearly one.
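A minimal Shannon-entropy computation over a byte stream (our sketch of the quantity Ent reports, in bits per byte):

```python
import math

def byte_entropy(data):
    """Shannon entropy of a byte sequence in bits per byte;
    divide by 8 to normalize between 0 and 1."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    entropy = 0.0
    for c in counts:
        if c:
            p = c / n
            entropy -= p * math.log2(p)
    return entropy
```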

Entropy against the Langton parameter


The chi-square test is the final test, and is said to be the most sensitive to errors in pseudo-random number generators. It computes the chi-square statistic of the stream of bytes and returns it as a \(\chi^{2}\) value with an accompanying p-value. Even though we eventually use the p-value to select our rules, the \(\chi^{2}\) distribution is interesting to see, since we find a large gap between the selected values and the rest.
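The statistic itself is simple to compute over byte counts (a sketch; Ent's exact implementation may differ):

```python
def chi_square_bytes(data):
    """Chi-square statistic of a byte stream against the uniform
    distribution over the 256 byte values (255 degrees of freedom)."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    expected = len(data) / 256
    return sum((c - expected) ** 2 / expected for c in counts)
```

The p-value then follows from the \(\chi^{2}\) distribution with 255 degrees of freedom, e.g. `scipy.stats.chi2.sf(stat, 255)`.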

Chi Square against the Langton parameter


The p-value is calculated from the \(\chi^{2}\) value using the \(\chi^{2}\) distribution. It tells us how often a truly random sequence would deviate as much as the measured value, which we interpret as the degree to which the sequence is probably non-random. Values outside the range 0.1 - 0.9 are suspected of being non-random. We used this p-value to select our set of 19 proper random elementary CA rules, shown in red in all of the preceding graphs.

P-value against the Langton parameter

We see that the rules selected by p-value also score best on the other tests, and that the remaining rules have a p-value very close to either one or zero.

What came out?

The earlier mentioned paper [2] found a set of 28 random rules:

Using our Chi-square test results, we found a set of 19 random rules:

This set has 17 rules in common with the paper; only rules 122 and 161 are "new". The rules {101, 166, 170, 15, 240, 210, 180, 85, 120, 89, 154} found by the paper did not pass our chi-square test, so we reject them as good random number generators.

Top 10 rules based on the Chi-square test:

rule   pi_deviation   mean_deviation   p_value_deviation
195    0.001276       0.082446         0.013847
86     0.008420       0.109408         0.045251
149    0.010588       0.205130         0.052170
225    0.000460       0.028304         0.054775
150    0.009388       0.132046         0.073749
135    0.017645       0.111598         0.165868
60     0.003140       0.049358         0.215533
75     0.004052       0.026974         0.224544
122    0.004828       0.114394         0.227844
30     0.007324       0.009032         0.270481

Three Colours

We also explored other rule spaces, for example the \(r = 1\), \(k = 3\) rules. The number of possible rules then increases from 256 to about \(7.6 \times 10^{12}\), which makes it impossible to explore every rule. We sampled 1400 random rules from the possibilities and again plotted the entropy and chi-square against the Langton parameter.

Entropy against the Langton parameter
Chi square against the Langton parameter

Although we now have almost six times as much data, we must keep in mind that we only sampled a \(10^{-10}\) fraction of the full rule space, which is hardly enough to say anything conclusive about the tests. Still, we see that the entropy distribution has a similar shape and comes quite close to 1. However, the p-value of every single rule was zero, and the chi-square distribution shows that we never reach a small enough \(\chi^{2}\) value: compared to the 2-colour case, we have only sampled the top part of the plot.


The hypothesis that more degrees of freedom give better or more random generators can be neither accepted nor rejected, since we do not have enough results for the larger rule spaces. After a run time of 48 hours we concluded that a full exploration would take too long for our two-week project. We did, however, narrow down the set of elementary random number generators and add two new rules. Rule 30 has been used by Wolfram as the random number generator in his scientific computing software Mathematica, but it does not perform extraordinarily well compared to other rules: in our tests, rule 195 has a much better p-value, and we would prefer it as the standard random generator rule. Rule 30 does, however, seem to have really good Monte Carlo properties and could therefore be of use for stochastic simulation.

Lessons learned & Recommendations

We wanted to statistically test random number generators. From the literature [1] we learned that to do this properly, you'd run your RNG through both the Diehard [6] and NIST [7] statistical test suites. Our implementation was however very slow, generating only about 1.4 KiB/s, while the tests require more than 2 GiB of random numbers. Given more time, we'd reimplement the PRNG in C.

\(3^{3^{3}}=7625597484987 \approx 10^{13} \) is a lot to sample from...


[1] Random Number Generation
[2] When are cellular automata random?, J. B. Coe, S. E. Ahnert and T. M. A. Fink.
[3] ENT, A Pseudorandom Number Sequence Test Program
[4] Mathematica Reference
[5] Cellular Automata: Is Rule 30 Random?, Dustin Gage, Elizabeth Laub, Briana Mcgarry.
[6] Diehard Tests
[7] A Statistical Test Suite for the Validation of Random Number Generators and Pseudo Random Number Generators for Cryptographic Applications