What are the necessary elements for a genuine theory of the brain and what will it take to construct one?
In my previous posts I talked about a number of functions and behaviors of the human brain that we still cannot explain. I also listed problems with the approaches we currently undertake to create brain theories.
The next question is then: How can we bring all these aspects together into a satisfactory theory of brain and mind? What would one expect from a successful theory? If we had a good one, how could we recognize it?
In this post, I would like to describe several criteria that have guided my own theoretical work on the brain-mind problem, criteria that I have come to regard as ones a good theory must unquestionably satisfy.
No competition in explanations
The basic assumptions of the theory should be such that they allow explanations of cognitive phenomena without any competition between explanations, as explained in the previous post. When one “implements” an explanation of attention, that same explanation must remain intact when one implements an explanation of working memory, and so on. Explanations should be cumulative rather than mutually exclusive.
Let us call this wonderful new theory Theory X. If it existed, it would explain a large set of phenomena with the same set of assumptions. The absence of new assumptions would be immediately visible in the methods sections of publications. Today’s papers contain a section explaining the unique assumptions made in that particular study, tuned for that particular problem (whether stated mathematically or verbally). Future papers would ideally only need to state something like: “We investigate the properties of Theory X. In the present study we examine how Theory X behaves under circumstances xyz.” The methods sections would describe only the circumstances under which the theory was tested and would lack any list of new assumptions. Such papers would resemble experimental papers: they would describe stimuli, tasks, and so on, and then report what they observed as a result.
This would immediately allow new predictions for empirical studies, something that today’s connectionism-based brain-mind theories are utterly lacking.
An explanatory domino effect
But of course, occasionally, one has to make changes in the assumptions of a theory. No theory is perfect, and no theory should be ‘frozen’ and prevented from improvements. The same holds for Theory X.
However, such changes should be permitted to become part of the theory only if they satisfy much more stringent criteria than are applied today. Every new fundamental assumption should allow the theory to explain much more than it could without that assumption. That is, even with the new assumption in place, the theory should retain the ability to explain everything it explained before, and in addition it should explain a great deal more.
There is a simple way to know that we have done it right: we should observe an explanatory domino effect. We should clearly see a potential to provide explanations for many additional phenomena, and this should occur without any additional assumptions and therefore without any superposition problems.
Moreover, the domino effect should not work only for problems that traditional theories could already solve. It should also work for the “difficult” problems, those with which current theories really struggle, i.e., those for which no solution has been found even by imposing extreme forms of specialized assumptions. Therefore, it is not only that working memory, attention, perception, and the generation of spontaneous activity should be explained with the same set of assumptions. The difficult ones, such as the global workspace, should also fall within this long chain of the domino effect.
Each new assumption needs to open the door to a huge new space of explanations with which we can navigate the universe of brain and mind phenomena. New assumptions must facilitate understanding of brain-mind phenomena instead of introducing superposition problems. New fundamental assumptions are thus most welcome, but by definition they should be rare events: a new assumption is a big deal in Theory X and would amount to a small revolution.
Strong predictions
Theory X should make predictions for empirical investigations. Connectionist models, those that simulate the behaviour of a network of neurons, have a poor track record of making useful predictions. In fact, I am not aware of a single significant prediction that such models have made in the entire history of neuroscience, and this despite thousands of publications. It may be that there have been one or two successful predictions that I have missed, but my point still stands: connectionist models do a terrible job at predicting. They work well only as post-hoc explanations of empirical findings discovered in some other way, often by chance. But as a guide to which experiments should be performed next, connectionist models have failed.
If we take an assumption from a connectionist model, for example a new specific type of connection, and treat it as a prediction, we very quickly find that it does not fit what we know about brain anatomy. But then connectionist modellers are quick to find excuses, such as: “we do not simulate individual neurons; we simulate populations of neurons”, or “we simulate only some parts of the network to understand how the pieces work”, or “my model is not designed to address this problem”. These are weak predictions, because they are not allowed to count as predictions that could refute the theory.
At the end of any theoretical work, you must be able to list your strong predictions. Ideally, the predictions would be specified in such a way that they can be immediately turned into an experiment. Moreover, it must be clear that if the specified experiments fail to find the stated results, the theory has to be abandoned.
If you cannot specify experimental predictions that have the power to refute your theory, you do not have a theory.
Given that connectionist models do not make predictions but rather model a narrow set of empirical phenomena, they are rarely, if ever, refuted. Instead, they somehow survive conflicting data, lingering on regardless of new experimental findings that contradict them. We never see the major step of a particular model being thrown out and the palette of acceptable models being narrowed down. In my opinion, a palette of models that keeps growing rather than shrinking is the opposite of progress.
In conclusion, to make significant progress in explaining the phenomena of the brain and to create a successful explanatory connection between mind and brain, we need theories that make strong predictions and do so without the problems of superposition or competition among assumptions. This is fundamentally different from the ‘engineering approach’ that presently dominates computational neuroscience.
Danko Nikolić is affiliated with savedroid AG, the Frankfurt Institute for Advanced Studies, and the Max Planck Institute for Brain Research.
Read the entire blog series here.