Building The Brain With Modules

Models in computational neuroscience that solve for one outcome cannot be combined with models that solve for another. Can a modular approach fix this? The short answer is no. Here's why.

In my preceding post I discussed the problem of knowledge superposition in computational neuroscience. The problem is that computational explanations cannot be superposed: the explanation of one phenomenon cannot be laid over the explanation of another. That is, it is not possible to combine simulations of various individual brain and mind phenomena into one big meta-simulation that exhibits the behaviors of all of these individual simulations.

A straightforward way to address the superposition problem is to distribute functions across modules. If we have two separate modules, each with its own function and structure—that is, each with its own connectivity of synaptic weights—the two are likely to work without interfering with each other. We just need to ensure that module 1 is active only when we need to accomplish function 1, and that module 2 is active only when we need function 2. However, when it comes to explaining the mind, this is not so easy.
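To make the idea concrete, here is a minimal sketch of such a division of labor (in Python; all names, sizes and numbers are invented for illustration): two modules with independent weights, and a gate that activates exactly one of them per task, so that learning in one cannot interfere with the other.

```python
import numpy as np

# Illustrative sketch: two modules, each with its own "synaptic" weight
# matrix, plus a gate that routes the input to exactly one module per task.
rng = np.random.default_rng(0)

modules = {
    "function_1": rng.normal(size=(4, 8)),  # module 1: its own weights
    "function_2": rng.normal(size=(4, 8)),  # module 2: independent weights
}

def run(task, x):
    """Activate only the module assigned to the current task."""
    w = modules[task]   # gating: select one module; the other stays silent
    return w @ x        # neither module ever touches the other's weights

x = rng.normal(size=8)
y1 = run("function_1", x)   # uses module 1 only
y2 = run("function_2", x)   # uses module 2 only; no interference
```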

Brain theorists like modules

There have been many module-based explanatory efforts in the science of mind. Perhaps it all started with phrenology, which had to be abandoned over time. Later, better module-based theories were developed. For example, Samsonovich & De Jong (2005) suggest which modules need to interact in order to achieve self-awareness (see Figure 1). Another example is Hewlett et al. (1998), who attempt to explain speech production in children. There is even a general theory of how modules (called agents) need to be organized in order to achieve human-like intelligence (Minsky, 1988). And there is a general theory of modules themselves: what they are and what properties they have (Fodor, 1985).

Figure 1. Modules needed to achieve self-awareness, according to Samsonovich & De Jong (2005).

Making modules work in actual computer simulations is tricky, but possible. One successful example is adaptive resonance theory (ART), which postulates specific types of interactions between modules for short-term memory, gain control, and an orienting subsystem (Carpenter & Grossberg, 1988).
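To give a flavor of how such module interactions can be simulated, here is a heavily simplified, ART-1-style sketch for binary inputs. It is illustrative only, not Carpenter & Grossberg's full model: the vigilance parameter `rho` and the reset-and-search loop stand in for the orienting subsystem's interaction with short-term memory.

```python
import numpy as np

def art1_step(x, prototypes, rho=0.7, beta=0.5):
    """One presentation of a binary input x to a simplified ART-1-style net.

    Short-term memory holds the candidate match; the orienting subsystem
    resets it whenever the vigilance test fails. Assumes x has at least
    one active bit. Illustrative code only.
    """
    # Rank stored categories by how well they match the input (choice function).
    order = sorted(range(len(prototypes)),
                   key=lambda j: -np.sum(np.minimum(x, prototypes[j]))
                                 / (beta + prototypes[j].sum()))
    for j in order:
        match = np.minimum(x, prototypes[j])   # candidate enters STM
        if match.sum() / x.sum() >= rho:       # vigilance test passes:
            prototypes[j] = match              #   resonance -> learn
            return j
        # vigilance fails: orienting subsystem resets STM; try next category
    prototypes.append(x.copy())                # no category fits: recruit new one
    return len(prototypes) - 1
```

Presenting inputs one by one either recruits an existing category (resonance) or, after enough resets, creates a new one; a higher `rho` yields finer-grained categories.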

How far can we push module-based theories of brain and mind?

The key question is: Can the use of modules fully explain the mind and satisfactorily solve the knowledge superposition problem? Encouraging precedents come from the tech world. We have been able to create great pieces of technology by combining specialized modules. Your computer has hardware modules (CPU, RAM, WiFi, etc.) and software modules (word processor, media player, video conferencing, etc.). A car has modules (engine, brakes, clutch, gearbox). Your TV has modules. And so do your toaster, electric toothbrush and refrigerator. Our AI solutions also rely heavily on modules. For example, for a machine to win against humans in the game of Jeopardy, it was necessary to process the question through multiple modules until a correct answer could be spit out (see Figure 2).

Figure 2. The modules of an AI named Watson, which in 2011 beat top human competitors in the game of Jeopardy.

So, can we explain the mind by stacking modules? Clearly, biology is no stranger to modules. The separation of organs into liver, kidney, gut, heart, etc. is a testament to the modularity of the physiology of living systems. The fact that biological cells come in different types is also a clear case of modularity. Even within cells, there are modules. Our brains enjoy a considerable degree of modularity. I will turn to the brain's modular structure shortly. But first, let me discuss two main problems of modular solutions.

Key limitations of module-based solutions

Knowledge sharing problem: By their very nature, modules are encapsulated and inaccessible (Fodor, 1985). This means that they share only the end results of their computations, not the knowledge of how those results were achieved. Thus, if one module learns something new about recognizing a grandmother (e.g., that grandmothers tend to have grey hair), the other modules do not get access to this knowledge. It is very difficult to transfer knowledge across modules.
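A toy sketch of what this encapsulation looks like in software (all classes and names are invented for the example): only the label crosses the module boundary, never the knowledge that produced it.

```python
class GrandmotherRecognizer:
    """Module 1: recognizes grandmothers."""
    def __init__(self):
        self._knows_grey_hair = False  # internal knowledge, hidden from outside

    def learn(self):
        self._knows_grey_hair = True   # "grandmothers tend to have grey hair"

    def recognize(self, image):
        # Only the end result leaves the module...
        return "grandmother" if self._knows_grey_hair else "unknown"

class SceneDescriber:
    """Module 2: describes scenes from labels."""
    def describe(self, label):
        # ...so this module receives a bare label; the grey-hair knowledge
        # behind it is encapsulated and cannot be reused or generalized here.
        return f"A photo of: {label}"
```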

In contrast, the mind enjoys a tremendous transfer of knowledge across mental “modules”. If I learn how to unscrew a nut on my bike, I can transfer this knowledge to all kinds of other nuts and bolts, for example on my car. But it gets even better. I can abstractly understand the principles by which nuts and bolts work. I can then understand a metaphor or a joke connecting the shapes of a nut and a bolt to the differences between female and male anatomy. Something like this is nearly impossible for today's explanations of how the brain works. And this is what our minds excel at: we do not just learn by rote; we understand the world. Understanding implies applying a piece of knowledge to a broad set of different situations. Somehow, our minds transcend the knowledge encapsulation inherent to modularized systems. We need a theory that explains how this works.

Module scheduling problem: Even if one found an effective way of sharing knowledge among modules, another issue would remain: modules need to be scheduled and prioritized. If I encounter a book, should I invoke the module for reading books, or the module for storing books on a shelf? Or should I ignore the book completely and focus on something else? Out of the thousands of possible things I could do at any given moment, which one should I actually do?

The simplest architecture that overcomes this problem is modules connected into a pipeline, as in the Watson example in Figure 2. Here, the output of the first module triggers the second module, and so on. Today's highly popular deep learning networks are strictly pipelined.
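In code, a pipeline is nothing more than function composition (a sketch; the module functions are placeholders):

```python
# Illustrative: the output of each module is the input of the next.
# There are no scheduling decisions anywhere; the order and the set of
# modules are fixed in advance.
def pipeline(x, modules):
    for module in modules:   # module i's output triggers module i+1
        x = module(x)
    return x

# A deep network is exactly such a chain of layer functions, e.g.:
# y = pipeline(x, [layer_1, layer_2, layer_3])
```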

But pipelines are not flexible. This is why other architectures introduce more flexibility by adding scheduling and prioritizing modules. For example, in the ART theory mentioned above (Carpenter & Grossberg, 1988), the specific function of the orienting subsystem (a module) is to reset short-term memory (another module). Minsky postulated censors and suppressors—modules that have the power to prevent other activities (Minsky, 1988). But then, who schedules the scheduling modules?
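The regress is easy to see in a sketch (again illustrative; `priority` is an assumed scoring function, not part of any particular theory): the scheduler below decides which module runs next, but nothing decides when the scheduler itself should run, or with which priorities.

```python
# Illustrative: a scheduling module that picks the next module to run.
# It resolves conflicts among modules, but it is itself just another
# module -- the architecture is silent about who schedules *it*.
def schedule(state, modules, priority):
    chosen = max(modules, key=lambda m: priority(m, state))
    return chosen(state)
```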

Modules in the brain

Let’s now go back to the modules of the brain. Our brains clearly have modules with different functions. There are older, subcortical parts of the brain responsible for things like the regulation of physiological functions, and a newer, cortical part for higher-order functions such as language. Within the cerebral cortex, there are many functional differences among regions: the left and right hemispheres have somewhat different functions, and so do frontal and posterior regions, much as there are differences between the dorsal and ventral streams of information processing. Moreover, sensory inputs and motor outputs are clearly delineated, and so are the different sensory modalities—smell, vision, touch, etc. By one classical anatomical parcellation (Brodmann's), there are 52 anatomically separable areas in the cortex.

But how does this relate to nuts and bolts? Fifty-two brain areas is a small number compared to all the different things I can do with nuts and bolts. Even the most general faculties of our minds do not have specialized modules. I have already discussed that visual working memory and attention operate within the same brain areas. Similarly, language processing and concept processing overlap in the brain (e.g., Martin, 2007). The same is true for doing math, driving a car, tidying up your room, making friends or falling in love. All these things happen through a massive superposition of knowledge within our minds and brains.

Global workspace

One of the key problems for computational neuroscience to solve is: How is superposition of knowledge achieved in the brain? How does the brain break the spell of encapsulation and inaccessibility? The brain achieves the apparently impossible: it starts with a small number of modules and then unifies them into one big knowledge “field” capable of forming a seemingly unlimited number of thoughts, ideas and mental contents. When you learn a new skill (e.g., a new dance), new functions are achieved. But no new modules are added. Rather, knowledge is integrated across the existing modules.

This ability of the brain to defy the limits of modules has a name: Bernard Baars called it the global workspace. How does the brain achieve a global workspace? How can information flow across brain areas in a seemingly unlimited way? What does the brain have that our computational models lack? What are we missing?

The author, Danko Nikolić, is affiliated with savedroid AG, the Frankfurt Institute for Advanced Studies, and the Max Planck Institute for Brain Research.

References:

Fodor, J. A. (1985). Précis of The Modularity of Mind. Behavioral and Brain Sciences, 8(1), 1-5.

Carpenter, G. A., & Grossberg, S. (1988). The ART of adaptive pattern recognition by a self-organizing neural network. Computer, 21(3), 77-88.

Martin, A. (2007). The representation of object concepts in the brain. Annual Review of Psychology, 58, 25-45.

Minsky, M. (1988). The Society of Mind. Simon and Schuster.

Samsonovich, A. V., & De Jong, K. A. (2005). Designing a self-aware neuromorphic hybrid. In AAAI-05 Workshop on Modular Construction of Human-Like Intelligence: AAAI Technical Report (Vol. 5, p. 08).

Hewlett, N., Gibbon, F., & Cohen-McKenzie, W. (1998). When is a velar an alveolar? Evidence supporting a revised psycholinguistic model of speech production in children. International Journal of Language & Communication Disorders, 33(2), 161-176.
