There is a great deal of hype about neuroimaging technologies being able to read our thoughts. And yet, we don’t really know what a thought actually is.
The difficulty of defining a thought
Merriam-Webster unsatisfactorily defines a thought as ‘the action or process of thinking’, where thinking is to ‘have something in mind’. Another, perhaps better, definition suggests that a thought is an ‘aim-oriented flow of ideas and associations that can lead to a reality-oriented conclusion’, where ideas are mental representations of some sort. But what are the necessary and sufficient conditions for a thought?
Does a thought require retrieval of a memory? Must it be accompanied by conscious awareness? Does it require language? Is thought unique to humans? Or living organisms? Or complex open systems in general?
Even though we have an implicit sense of our own thoughts, the answers to these questions are far from resolved. The best answer we currently have to any and all of them is ‘Uh, maybe?’.
Some linguists argue that a thought must necessarily contain language or some form of symbolic representation, which would suggest that thought is uniquely human. On the other hand, some raise the possibility that plants may, in fact, be able to think. It may not be anything like human thought, but plants have a sense of smell (they respond to chemicals), a sense of sight (they can detect light and dark), and the ability to form memories, and they respond to stimuli in ways that could suggest they are thinking. So which is it? Is there a general definition of thought that can manifest in different ways in different systems? Or is thought something very specific to the human mind? And how would we arrive at these answers?
Is thought a physiological object?
This brings us to the question: how would we recognize a thought if we saw one? Is it even a measurable physiological object? Or does it exist in some realm of mind that defies measurement? Neuroscientists assume that a thought is represented in the electrical activity of the brain. But if the linguists are right, then the electrical activity in animal brains would not carry thoughts, nor would that of babies – and yet the electrical activity of babies and animals has many similarities to that of language-enabled humans. Then there is the trouble that every effort to computationally reconstruct features of thinking ends up being only a partial solution, and therefore does not represent what is actually going on (see The Crisis of Computational Neuroscience). What if a thought is not constructed of electrical activity after all? It is entirely possible. This does not mean electrical activity has no influence on a thought; it may well. But it could be outright impossible to construct or reconstruct a thought out of a pattern of electrical activity alone.
Reading thoughts in the brain
This brings us to the big question of whether we are on the path to reading human thought.
In 2008 the Gallant lab published the widely publicized paper Identifying natural images from human brain activity in Nature, which led to headlines like Brain decoding: Reading minds and Scan a brain, read a mind?. Was it an example of mind reading? In this study, people were shown natural scenes, and the images they viewed were identified from activity in the early visual areas of the brain using fMRI, which measures differences in blood-oxygen levels. While this is in and of itself quite a feat, does it mean they were able to read a thought? In their paper the authors conclude: “Our results suggest that it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone”. The early visual region is the part of the cortex where the visual signal first arrives from the retina via the thalamus. What this means is that they are essentially constructing a representation of the visual signal arriving in a spatially distributed way into the cortex – a representation of the input signal.
At the other end, there are studies like the one titled Real-time decoding of question-and-answer speech dialogue using human cortical activity by Moses et al., published in Nature Communications, which garnered headlines like Mind-reading AI turns thoughts into words using a brain implant. Here the researchers used an ECoG array placed over the auditory and sensorimotor areas of the cortex, using gamma-band activity to detect which response a person would make to a question from among a small set of possible answers. They were able to predict with up to 76% accuracy which sound the participant was played (from auditory cortex) and with up to 61% accuracy which sound they produced (from sensorimotor cortex). Here again, the auditory cortex is where sound input first arrives in the cortex, and the sensorimotor area is where motor output is essentially initiated – i.e. where the decision made gets transmitted to the muscles of the face and body to execute speech and movement, which, in the end, is our only form of output.
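To build intuition for what this kind of decoding involves, here is a toy sketch – emphatically not the authors’ actual method, which used recurrent neural networks on real ECoG recordings. Everything below is synthetic and illustrative: the “electrodes”, the answer set, and the activity patterns are all made up. The point is only the logic of the task: learn a template of activity for each candidate answer, then classify a new trial by whichever template it most resembles.

```python
# Toy decoding sketch (illustrative only; all data are synthetic).
# Each candidate answer is assigned a fake "gamma-band" activity pattern,
# noisy trials are generated around it, and new trials are classified
# by nearest template. This mirrors the *shape* of the task, not the
# actual model used by Moses et al.

import random

random.seed(0)  # deterministic synthetic data

ANSWERS = ["yes", "no", "maybe"]   # hypothetical answer set
N_FEATURES = 12                    # e.g. power at 12 electrodes (made up)

def true_pattern(label_idx):
    # Idealized activity pattern for one answer (purely invented).
    return [1.0 if f % len(ANSWERS) == label_idx else 0.0
            for f in range(N_FEATURES)]

def synth_trial(label_idx, noise=0.4):
    # One trial = the answer's pattern plus per-electrode noise.
    return [v + random.gauss(0, noise) for v in true_pattern(label_idx)]

# "Training": average many trials per answer to estimate a template.
templates = {}
for i, ans in enumerate(ANSWERS):
    trials = [synth_trial(i) for _ in range(30)]
    templates[ans] = [sum(t[f] for t in trials) / len(trials)
                      for f in range(N_FEATURES)]

def decode(features):
    # Classify a trial by nearest template (squared Euclidean distance).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda ans: dist(templates[ans], features))

# Evaluate on fresh synthetic trials.
n_test = 60
correct = sum(decode(synth_trial(k % len(ANSWERS))) == ANSWERS[k % len(ANSWERS)]
              for k in range(n_test))
accuracy = correct / n_test
print(f"decoding accuracy on synthetic trials: {accuracy:.2f}")
```

Even this crude template-matching scheme decodes the synthetic trials well, which underlines the point in the text: picking one answer out of a small, fixed set from input or output signals is a tractable classification problem, and is a long way from reading an open-ended thought.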
Altogether, what we have so far is quite remarkable decoding of the input and output signals – what is going in and what is coming out. But thought? Thought is what happens between the input and the output. And we don’t even know what that is yet.