Computing with a Low Memory Budget

Back when I began working with computers, they had a lot less memory than they do now. Most had a kilobyte of RAM or less (or even no RAM at all, just a few shift registers) and used magnetic tape for all significant data storage. Computers had only the smarts we programmers could squeeze into them! (I'm sure many other old-timers had a chuckle over the patent fight about year 'windowing' to fix Y2K problems. All of us did things like that every day; in my case, by the late 60's, cutting year storage to 3 bits for an accounting system.)

One of the first significant things computers were used for was statistics. And, one of the most common scientific statistics is trend analysis - how one variable relates to another. (Commonly, one variable is time.) Well, there is a simple way of calculating any first-order trend from any amount of data: sum your data, orders 0, 1 and 2. Six storage locations are all you need, no matter how much data you have.

Call your two variables X and Y. Then read through your data, summing the number of points (N: order 0), X and Y (X1, Y1: order 1), and X^2, Y^2 and X*Y (X2, Y2, C2: order 2). If your data is weighted, multiply each quantity you add to each sum by its weight. From these six sums you can calculate any trend statistic you need. Here are the most important ones:

mean X: X1/N
mean Y: Y1/N
X moment X9: X2-X1^2/N
Y moment Y9: Y2-Y1^2/N
XY moment C9: C2-X1*Y1/N
trend A (y=a*x+b): C9/X9
trend B (y=a*x+b): Y1/N-A*X1/N
standard error S of Y given X: sqrt((Y9-A*C9)/(N-2))
standard error of A: S/sqrt(X9)
standard error of B: S*sqrt((X1/N)^2/X9+1/N)
correlation coefficient: C9/sqrt(X9*Y9)
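
The six running sums and the statistics above can be sketched as follows (a minimal illustration in modern Python, not the original code; the function name trend_stats is mine, and the variable names follow the article):

```python
import math

def trend_stats(pairs):
    """Stream (x, y) pairs through six accumulators, then derive
    the trend statistics from them. Memory use is constant no
    matter how many pairs are supplied."""
    N = X1 = Y1 = X2 = Y2 = C2 = 0.0
    for x, y in pairs:
        N  += 1          # order 0
        X1 += x          # order 1
        Y1 += y
        X2 += x * x      # order 2
        Y2 += y * y
        C2 += x * y
    X9 = X2 - X1 * X1 / N            # X moment
    Y9 = Y2 - Y1 * Y1 / N            # Y moment
    C9 = C2 - X1 * Y1 / N            # XY moment
    A = C9 / X9                      # slope of y = a*x + b
    B = Y1 / N - A * X1 / N          # intercept
    S = math.sqrt((Y9 - A * C9) / (N - 2))  # std. error of Y given X
    return {
        'A': A, 'B': B, 'S': S,
        'SE_A': S / math.sqrt(X9),
        'SE_B': S * math.sqrt((X1 / N) ** 2 / X9 + 1 / N),
        'r': C9 / math.sqrt(X9 * Y9),
    }
```

For weighted data, as noted above, each quantity added to a sum would be multiplied by its weight (including the 1 added to N). One caution with limited precision: subtracting X1^2/N from X2 can lose significant digits when the mean is large compared to the spread, so on modern hardware a shifted or incremental formulation is sometimes preferred.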

A reference text: Freund & Walpole, Mathematical Statistics

John Sankey