Fast Randomness Tests
When examining whether a signal is random, most regular waveforms are easily rejected simply by looking at them. When the number of signals to monitor is large, however, it is desirable to automate the task with a computer, and then a less subjective criterion is needed.
A large number of randomness tests can be devised. The definitive test must be the autocorrelation function: it will even expose many pseudo-random sequences. (If the number of bits per word is large, it may be worth repeating the calculation on the low-order bits only, to show any periodicity in them more prominently.)
The drawback of the autocorrelation function becomes apparent when the number of samples is large: the computational effort is proportional to the square of the number of samples for a full-valued autocorrelation function (one which is calculated for all phase differences up to the interval corresponding to half the number of samples).
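As a sketch of what such a test might look like (in Python; the function name and test waveforms are illustrative, not from the original), the nested loops below make the quadratic cost explicit:

```python
import random

def autocorrelation(x):
    """Normalised autocorrelation for all lags up to half the window.

    The nested summations make the cost proportional to the square of
    the number of samples, which is the drawback noted above.
    """
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]
    var = sum(v * v for v in d)
    return [sum(d[i] * d[i + k] for i in range(n - k)) / var
            for k in range(n // 2)]

random.seed(0)
noise = [random.random() for _ in range(1024)]
square = [i % 2 for i in range(1024)]  # period-2 square wave

# Noise decorrelates immediately; the square wave repeats every 2 lags.
print(max(abs(r) for r in autocorrelation(noise)[1:]))  # stays small
print(autocorrelation(square)[2])                       # close to 1
```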

It would be nice to dismiss most phoney contenders using a faster test:
- Subintervals of equal width should get the same number of hits, more or less.
- The average magnitude difference function (AMDF) is calculated between successive samples. (This is linear in the number of samples, in contrast to a full-valued autocorrelation function.) For most smooth waveforms, even after sampling, the AMDF will collapse, but noise scores just above 30% of the interval width very consistently; a full-scale square wave at the Nyquist frequency, on the other hand, will score 100%.
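Both quick tests can be sketched in a few lines of Python (the function names and the bin count of 8 are assumptions for illustration):

```python
import random

def bin_counts(x, bins=8):
    """Hits per equal-width subinterval; noise should spread them evenly."""
    counts = [0] * bins
    for v in x:
        counts[min(int(v * bins), bins - 1)] += 1
    return counts

def amdf(x):
    """Average magnitude difference between successive samples."""
    return sum(abs(b - a) for a, b in zip(x, x[1:])) / (len(x) - 1)

random.seed(0)
noise = [random.random() for _ in range(4096)]   # uniform in [0, 1)
square = [i % 2 for i in range(4096)]            # full-scale square at Nyquist

print(bin_counts(noise))  # roughly 512 hits in each bin
print(amdf(noise))        # just above 0.3, i.e. about a third of the interval
print(amdf(square))       # 1.0, i.e. 100% of the interval
```

The AMDF of uniform noise settles near 1/3 of the interval width because the expected distance between two independent uniform samples is 1/3, which matches the "just above 30%" figure quoted above.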
There are a few high-frequency square waves which will record scores similar to noise, but these will flunk the first (uniform distribution) test.
Suppose the details of the algorithm are leaked: is it easy to fabricate a periodic waveform to fool the computer? Well, a high-frequency square wave superimposed upon a low-frequency triangular wave seems to fit the bill. It is also easy to see that a full-scale continuous waveform, sampled about, but not exactly, six times per cycle will foil both tests: on average it traverses roughly a third of the full range between successive samples, mimicking the noise score.
How to cope with these eventualities? Repeating the calculation at half the sampling frequency will demolish (or at least change notably) the AMDF, even for the exceptions just mentioned, embarrassing the imposters. The algorithm now looks more robust, even if its implementation is compromised.
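The half-rate repetition amounts to keeping every other sample and recomputing the AMDF. A sketch, with an assumed impostor (a square wave at the Nyquist frequency riding on a slow triangle, its amplitudes chosen so it scores like noise at the full rate):

```python
import random

def amdf(x):
    """Average magnitude difference between successive samples."""
    return sum(abs(b - a) for a, b in zip(x, x[1:])) / (len(x) - 1)

N = 4096
# Impostor: 1/3-amplitude square wave at the Nyquist frequency on top of
# a slow triangular wave (period 1024 samples).
impostor = [(i % 2) / 3 + 2 / 3 * abs((i / 512) % 2 - 1) for i in range(N)]
random.seed(0)
noise = [random.random() for _ in range(N)]

for name, x in (("impostor", impostor), ("noise", noise)):
    # x[::2] keeps every other sample: sampling at half the rate.
    print(name, round(amdf(x), 3), round(amdf(x[::2]), 3))
# At half the rate the square wave becomes a constant, so the impostor's
# AMDF collapses; the noise's score barely moves.
```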
Now most signals aspiring to masquerade as noise have been cleared away. The complete armoury of the full-valued autocorrelation function (and additional randomness tests) need only be used to sort truly random from pseudo-random signals.
(Verifying that approximately half the samples are larger than the next sample looks useful, but regrettably this does not distinguish noise from most regular signals, sawtooth waveforms excepted.)
Back to figure 24: could that be noise? Well, it is a composite signal, consisting of about 12% (peak-to-peak amplitude) sinewave, the remaining 88% being noise. Using anything up to 50% of the full-valued cross-correlation function (but typically only 25%), the phase of the sine signal (with respect to the beginning of the window, in degrees!) is determined without ambiguity. How is that for detecting signals pretending to be noise without actually being noise! The comment concluding the first paragraph of the correlation page is thus amplified effortlessly.
(The window captures an entire cycle of the sinewave. Cross correlation is with a pure sinewave. Below the threshold percentage indicated, successful placements dwindle rapidly. The number of samples was 256, but the image is shown compressed here.)
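A sketch of the idea in Python (the 12%/88% mix and the window of 256 samples follow the description above; the true phase of 40 degrees is an assumption for the demo, and for simplicity the correlation is taken over the whole window rather than a fraction of it):

```python
import math
import random

N = 256                  # one full cycle of the sinewave in the window
random.seed(0)
true_phase = 40          # degrees, chosen arbitrarily for this demo
signal = [0.12 * math.sin(2 * math.pi * i / N + math.radians(true_phase))
          + 0.88 * (random.random() - 0.5)   # ~12% sine, 88% noise
          for i in range(N)]

def correlation(phase_deg):
    """Cross-correlate the signal with a pure sinewave at a trial phase."""
    return sum(s * math.sin(2 * math.pi * i / N + math.radians(phase_deg))
               for i, s in enumerate(signal))

# Scan all trial phases and keep the best match.
estimate = max(range(360), key=correlation)
print(estimate)  # should land near true_phase
```

Even though the sine component is buried under noise of several times its amplitude, the noise projects only weakly onto a pure sinewave over a full cycle, so the correlation peak still points at the right phase.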