Lab Talk

1930s EEG Machine

Down a Rabbit Hole? A History of EEG Analysis

The analysis of the EEG signal in terms of frequency bands has its origins in the technical workarounds of the 1930s, when there were no computers – workarounds acknowledged even then as suboptimal. Yet it has persisted.

When Hans Berger first reported in 1929 that he could measure electrical activity from the brain using surface electrodes and a Siemens galvanometer, no one believed him (the image above shows an early 1930s EEG machine). Once it was finally acknowledged in the 1930s that these were indeed electrical potentials of the brain, the big challenge was to make any sense of them.

See related post Decoding the Electric Brain

Visual Inspection and Naming Signal Components

In the 1930s the state of the art was to capture the output of the EEG on an ink-writing oscillograph (expensive and not easily affordable) and the primary analytical method available was ‘visual inspection’ – essentially you simply looked at the signal and, perhaps with a ruler, measured some aspect of it. The most obvious feature you could find this way was stretches of roughly periodic activity around 10 Hz in parts of the signal. Berger called these periods alpha waves. Everything else, typically non-periodic with complex fluctuations at higher frequencies that could not be characterized, was called beta waves. Slower frequencies were first reported by Hoagland, Rubin, and Cameron in 1936, who gave them the name delta waves. That same year Jasper and Andrews claimed to have seen frequencies higher than 30 Hz, which they called gamma waves, but given technical limitations this was initially met with skepticism. The ink-writing oscillograph could not write faster than 30 to 50 Hz, which capped the frequencies that could be recorded. Furthermore, the equipment could not detect fluctuations smaller than 5 microvolts, and the higher frequencies were of very small amplitude. And as you can imagine, the method of ‘visual inspection’ misses quite a bit.

Attempting Mathematical Analysis of the EEG

In the 1930s harmonic analysis of signals was also becoming important in various signal transmission problems, from radio to astronomy, and it was therefore the most natural mathematical approach to consider. Fourier analysis was well known, but it had to be done by hand – laborious, impractical, and limited in the frequencies it could address. Some nonetheless took it on: in 1932 a German researcher, G. Dietsch, published the first Fourier analysis of the EEG signal, consisting of a methodology and a table of results for a handful of frequencies. I imagine this was worth a PhD in itself at the time.
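To appreciate the labor involved, each Fourier coefficient is a long sum of products that, before computers, had to be tabulated term by term. A minimal numpy sketch of that direct evaluation at a handful of frequencies (the signal, sampling rate, and frequency choices here are illustrative, not from Dietsch's paper):

```python
import numpy as np

def dft_at_freq(x, f, fs):
    """Directly evaluate one Fourier coefficient at frequency f (Hz)
    for a signal x sampled at fs Hz - the sum that once had to be
    worked out by hand, one term per sample."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * f * n / fs)) / len(x)

fs = 128                       # sampling rate (Hz), an arbitrary choice
t = np.arange(fs) / fs         # one second of synthetic "signal"
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

# A small table of amplitudes at a handful of frequencies,
# much like the tables published in early hand analyses
for f in (5, 10, 20):
    amp = 2 * abs(dft_at_freq(x, f, fs))
    print(f"{f:>2} Hz: amplitude ~ {amp:.2f}")
```

Each of those sums has one multiply-accumulate per sample, which is exactly why hand analysis was restricted to a few frequencies over short stretches of record.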

Mechanical Analyzers

In 1935 a roboticist by the name of William Grey Walter visited Berger’s lab and subsequently set about developing better methods for characterizing these EEG potentials. Given the tedious challenge of post facto analysis of the signal, Walter set about building a device to photomechanically parse the signal by frequency. Called the Walter Analyzer, this involved hooking up the recording apparatus to oscillators of different frequencies, each producing as output a sine wave of the selected frequency with an amplitude roughly representative of the ‘power’ of that frequency in the EEG – essentially mechanical band-pass filters. Obviously it was not practical to have such an analyzer for every frequency, so it was necessary to select just a few. The first Walter Analyzer had four; later versions had ten. Below is William Grey Walter alongside his analyzer in 1944 (image taken from Barlow, 1997).
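A modern software stand-in for such a filter bank makes the idea concrete: pass the signal through one band-pass channel per band and read off the energy in each. This sketch uses a crude FFT-based filter and the conventional band edges (both assumptions of this example, not a description of Walter's hardware):

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Crude band-pass filter: zero every FFT bin outside [lo, hi] Hz.
    A software analogue of one resonator channel in a mechanical analyzer."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(X, n=len(x))

fs = 256
t = np.arange(2 * fs) / fs                        # two seconds of synthetic "EEG"
x = np.sin(2 * np.pi * 10 * t) + 0.4 * np.sin(2 * np.pi * 3 * t)

# A small filter bank; the band edges are the conventional ones, assumed here
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    y = bandpass_fft(x, fs, lo, hi)
    print(f"{name:>5}: RMS {np.sqrt(np.mean(y ** 2)):.3f}")
```

The output concentrates in the delta and alpha channels, mirroring how an analyzer's pen deflections summarized the record a few bands at a time.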

Walter was not the only one who took this on – others did as well, and over time these devices evolved and improved. The output of one such frequency analyzer is shown below (from Kozhevnikov, 1957).


In 1963 Edmund Kaiser and colleagues built the Kaiser filter (image below from Kaiser et al., 1964), composed of six high-Q resonators: one delta filter, one theta filter, two alpha filters and two beta filters. It gained some popularity in the community and in fact reduced the number of filters relative to others in use. However, there was also the practical question of which analyzer would be manufactured and marketed rather than remain a single-lab novelty – a factor that would influence the direction of the literature.

It is also important to note that it was not until the 1960s that mainframe computers came into accessible academic use, and it was 1965 by the time Cooley and Tukey came up with the FFT algorithm that we all use today. Nonetheless, even in the sixties it was still no simple task to calculate the power spectrum of the EEG. The oscillograph output had to be carefully digitized and shipped off, along with code, to a mainframe computer, often far away. From the practical standpoint of computational time and efficiency, if you wanted any result at all it still made better sense to focus on specific frequency bands.
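That once-formidable computation is now a few lines. A minimal sketch of an FFT-based power spectrum on a synthetic signal (the sampling rate, duration, and noise level are arbitrary choices for illustration):

```python
import numpy as np

# Power spectrum via the FFT - the computation that in the 1960s meant
# digitizing chart paper and shipping it off to a distant mainframe.
fs = 256                                   # sampling rate (Hz)
t = np.arange(4 * fs) / fs                 # four seconds of signal
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
power = np.abs(X) ** 2 / x.size            # unnormalized periodogram

peak = freqs[np.argmax(power)]
print(f"spectral peak at {peak:.2f} Hz")   # the 10 Hz component dominates
```

The full spectrum is available essentially for free, which is precisely what removes the old rationale for reducing the signal to a handful of band powers.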

The Challenges of Harmonic Analysis

Walter himself, when describing his analyzer, discussed its limitations at length. He expended considerable effort explaining how different phase relationships among the component frequencies can produce fundamentally different waveforms, and fretted over the challenge of keeping his oscillators in phase with one another. Indeed, he questioned the very premise of such frequency-based separation of a signal that appeared so obviously non-periodic. (Left image from Dawson and Walter, 1944)

He wrote:

The chief limitation of automatic e.e.g. analysis with instruments at present available lies in the fact that whilst they will separate and measure mixed and modulated rhythms in the e.e.g. they give no information about the relative phases of the waves making up these rhythms. When harmonically related higher frequencies are added to a fundamental frequency, the shape of the resulting waves depends entirely on the phase relations of the harmonics to the fundamental. Therefore, two compound waveforms which have components identical in frequency and size, and so will show the same analysis, may yet have entirely different shapes. In the visual examination of an e.e.g. record it is important to know some of the forms a given set of rhythms may produce by phase…. A series of harmonically related components may produce an infinite variety of waveforms.
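Walter's point is easy to demonstrate numerically: build a fundamental plus one harmonic, shift only the harmonic's phase, and the amplitude spectrum is unchanged while the waveform is not. A minimal numpy sketch (the frequencies and phase offset are arbitrary choices):

```python
import numpy as np

fs = 256
t = np.arange(fs) / fs

# Fundamental plus one harmonic: identical components, two phase relations
def wave(phase):
    return np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 10 * t + phase)

a = wave(0.0)
b = wave(np.pi / 2)

# The amplitude spectra are numerically identical...
spec_a = np.abs(np.fft.rfft(a))
spec_b = np.abs(np.fft.rfft(b))
print("max spectral difference:", np.max(np.abs(spec_a - spec_b)))

# ...yet the waveforms themselves differ substantially
print("max waveform difference:", np.max(np.abs(a - b)))
```

Any analysis that reports only band amplitudes treats `a` and `b` as the same signal, which is exactly the loss of information Walter was describing.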

See related post The Blue Frog in the EEG

Deep in the Rabbit Hole

That brings us to today. Considerable advances in computing and algorithms allow us to shed the workarounds of the pre-computing era of the 1930s–50s. Yet EEG as a field has not done so, choosing instead to maintain its constrained legacy by collapsing the considerable information in the signal into frequency bands that were defined in the 1930s based on visual inspection and perpetuated based on equipment limitations. Indeed, tens of thousands of papers published today simply report changes in broad frequency bands with no regard for either the overall power spectrum or the complex phase relationships and time-domain structure that modern computing gives us access to.

Perhaps as a field we should pay heed to Walter’s concerns and abandon the legacy of the technology constraints of the 1930s.  It’s time to emerge from the depths of the alpha-beta rabbit hole.
