Why is the Hard Problem of Consciousness so Hard? Intentionality.

Someone I know sent me some material related to their research on consciousness, asking for some feedback. What follows is an edited version of what I wrote to them.
[Image: Flint spear and arrow-heads]
Solving the easy problem of consciousness — by explaining the causes and neural correlates of particular conscious experiences — is challenging but at least conceivable. Not so for any claim to have solved the hard problem of consciousness: such a claim is going to evoke skepticism from both scientists and philosophers. I’m one of the many neuroscientists who do not think the hard problem is solvable — to us it is not clear that phenomenal consciousness is a scientifically tractable phenomenon in the first place.

I’ve been reading Terrence Deacon’s book Incomplete Nature, where he stresses a key problem for any theory of consciousness: intentionality, the ability of mental states to be about things or concepts. In my experience, most functionalist stories about consciousness presuppose the very thing to be explained. The clue is the use of intentional language. Words like ‘model’, ‘map’, ‘representation’, ‘agent’, ‘report’, and even ‘control’ presuppose mind-like entities doing the representing and controlling. What many neuroscientific theories of consciousness assume is that certain brain networks are already entities to which it is meaningful to attribute a ‘point of view’. But a fully physicalist theory of the emergence of phenomenal consciousness would only be persuasive to other adherents of physicalism if it avoids starting with intentionality-laden terms like ‘representation’, ‘model’ or ‘point of view’. Such a theory would need to start with the non-intentional language used by physicists and chemists: it would have to specify how something like ‘representing’ (and the accompanying point of view) emerges from matter and becomes an unavoidable description of the phenomenon.
I don’t actually think such a theory is possible, but it seems like that’s what people interested in physicalist theories of mind want. A possible exception is Daniel Dennett, who is both a physicalist and someone who wants to naturalize intentional language, claiming that goals/purposes etc. are ‘out there’ and fundamental to nature.
Panpsychists claim that points of view are generic features of matter. (The philosopher Philip Goff recently wrote a book that makes some interesting arguments for this.) The panpsychist has major problems to address: how can a unified point of view emerge from the myriad points of view of individual atoms or cells? And why does this unified point of view not incorporate all the phenomena occurring in the body?
[Image: Four turtles seen as though through the glass of an aquarium]

Theories that start with ‘agents’, ‘actors’, ‘modelers’ and so on often seem like anthropomorphizations — one gets the impression that it is agents all the way down. For many theoretical purposes, such as discussing meditation or mental health, this is not necessarily a problem. If (as I would argue) a fully physicalist theory of emergent points of view is impossible, then there is nothing wrong with discussing how the ‘quasi-agents’ inside a person’s body/mind interact with each other. Anthropomorphic explanations may actually turn out to be the best we can hope for in some situations. (I myself have dabbled in a little half-serious anthropomorphization of brain processes.) But I do not really see such explanations as addressing the age-old question of what mind is or how it originates. I can imagine an analytic philosopher bringing up the mereological fallacy, and perhaps even the double-subject fallacy.
A consciousness researcher interested in selling a viable product in the marketplace of physicalist ideas must specify how intentionality arises. Otherwise, many of their statements will end up sounding circular. A parody example I’ve cooked up:
Consciousness arises through a brain’s representations of itself in the act of representing reality. In other words, consciousness is a map of mappings, or a model of modeling.
The concept of a map or a model already implies experience (since it presupposes intentionality/aboutness, as well as a notion of similarity, which must be assessed by something that is mind-like). So such statements can seem to be asserting tautologies.
In other words, a theory of how consciousness arises is incomplete if it does not tell us what experience is in the first place. And in attempting this it must use entirely non-intentional terminology borrowed from physics and chemistry (e.g., matter, energy, movement, chemical bond, phase transformation etc.). People who clearly see the paradoxical, quixotic nature of this quest may eventually gravitate towards substance dualism, mental monism, panpsychism, and/or mysterianism. These seemingly crazy ontologies become attractive as one comes to recognize that there is a large blind spot at the center of science: experience.
____
Further reading
These 3QD essays I wrote are also relevant:
