There are many things we can say about brains. A brain is a collection of particles; a brain extends over a certain area of space at any given time; a brain has a certain amount of kinetic energy; a brain undergoes chemical changes; and so on. These are all the sorts of things that we can say about any other physical entity, such as a boulder.
None of these things can be said about minds. A mind – whatever it is – is not a collection of particles; it does not extend over a certain area of space at any given time; in short, it has very little in common with brains or boulders or any other physical thing.
At this point, you might wonder whether or not I am begging the question. I would ask that you resist that intuition and consider the extent of the problem.
What would we need in order to give a complete description of an atom (or, if you prefer, an even more fundamental particle)? Such a description might require a record of position, electric charge, mass, and other physical characteristics. (For the purposes of my argument, which physical characteristics we choose to list is largely irrelevant.)
Consider, in turn, a complete description of a macroscopic object such as a boulder. Such an object could be described merely in terms of atoms and their relations with one another.
What about a human being? Can a human being be described merely in terms of relations among atoms – in terms of position, electric charge, mass, and other physical characteristics? A human body (or, more bluntly, a corpse) can, but a human being cannot. Human beings, after all, are self-aware, perceptive – conscious.
Is consciousness a physical characteristic? If it is, of what physical entity is it a characteristic? No one says that the individual atoms in our brains are conscious, and few (if any) people say that the individual neurons in our brains are conscious. Are our entire brains conscious? But a brain, physically speaking, is just a collection of neurons, none of which is conscious; each is merely an (extremely) specific configuration of matter.
Perhaps we can say that consciousness arises from brains in some way – in other (more technical) words, that mental states supervene on physical states: there can be no change in our mental states without some corresponding change in our physical states. The firing of some specific neuron causes some specific piece of my mental experience (say, the perception of the color red).
As I see it, this point of view (call it the supervenience account) raises two main problems for the atheist.
The first is that other people’s mental states are empirically unobservable. There is a fundamental difference between a third-person perspective and a first-person perspective on the world. (This is a very important observation, because no scientific theory, to my knowledge, has been able to give a sufficient account of this distinction.) I do not know what it is like to be a bat, nor do I even know what it is like to be you – even if I am a neuroscientist who knows everything about your brain. The supervenience account, then, is a far cry from the neat monism that atheists might prefer.
The second problem is this: There is nothing physically special about brains such that an entirely new category of being known as consciousness (or self-awareness, or perception, or what-have-you) would supervene on brain states. Brains, to a physicist, are hardly different from boulders. Thus, the supervenience account posits a relationship between brains and minds that appears completely arbitrary. How is the scientist (qua scientist) to explain this arbitrariness? I am not sure.
Much, of course, remains to be said on this matter – this post offers only a few introductory remarks on my part – but I hope that the problem for the atheist has been made clear.