It Is Impossible to Breach Our Mental Privacy Using AI and fMRI


2300 words

Introduction

Recent headlines on AI and so-called mind reading have been extraordinary. “AI can now read minds, Japanese scientists’ experiment sparks ethical debate”, “‘Mind-reading’ AI: Japan study sparks ethical debate”, “Goodbye privacy: AI’s next terrifying advancement is reading your mind”, “Scientists in Texas developed a GPT-like AI system that reads minds”, and “A Brain Scanner Combined with an AI Language Model Can Provide a Glimpse into Your Thoughts” are some titles of recent articles that make this outlandish claim. Claims like these are clearly ridiculous. They assume that by reading neuroimages of our brains, we can then see what one is thinking. This is hopelessly confused. I will argue here that these claims don’t pass muster, and that this is due to the irreducibility of the mental.

A new article was published yesterday in Nature Neuroscience with the title Semantic reconstruction of continuous language from non-invasive brain recordings (Tang et al., 2023). AI hype has been growing over the past few months due to ChatGPT, and this new undertaking uses AI and fMRI to “read thoughts” by translating brain activity into semantic reconstructions. This is a gross reductionism of mind to the physiological activity of the brain and CNS. But since it’s impossible to localize cognitive processes in the brain, and since the mental is private, these undertakings are bound to fail. I will argue that it’s impossible for AI to mind-read and that our mental privacy will never be breached.
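To make concrete what such a “decoder” actually does, here is a minimal, hypothetical sketch of the general encoding-model approach these studies rely on; the variable names, parameters, and data below are invented for illustration and are not the authors’ code.

```python
# Hypothetical sketch of the encoding-model approach behind such "decoders";
# data and names are invented for illustration. The idea: fit a mapping from
# language-model features of word sequences to fMRI responses, then score
# candidate sentences by how well their predicted responses match a new scan.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_features, n_voxels = 200, 64, 1000

train_features = rng.normal(size=(n_train, n_features))  # LLM embeddings of presented sentences (simulated)
train_bold = rng.normal(size=(n_train, n_voxels))        # recorded BOLD responses (simulated)

# 1. Fit a linear "encoding model": sentence features -> predicted voxel responses.
encoder = Ridge(alpha=1.0).fit(train_features, train_bold)

# 2. "Decode" by ranking candidate sentences: whichever candidate's predicted
#    response best correlates with the observed scan gets selected.
def score(candidate_features, observed_bold):
    predicted = encoder.predict(candidate_features[None, :])[0]
    return np.corrcoef(predicted, observed_bold)[0, 1]

observed = rng.normal(size=n_voxels)            # a new scan (simulated)
candidates = rng.normal(size=(5, n_features))   # embeddings of five candidate sentences
best = int(np.argmax([score(c, observed) for c in candidates]))
print("selected candidate:", best)
```

Note that nothing in this kind of pipeline inspects a thought: it only ranks pre-generated word sequences by their statistical fit to voxel activity.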

fMRI and AI

fMRI measures changes in blood flow and oxygenation in different brain regions, which allows researchers to see which areas of the brain are more active during cognizing. The assumptions behind using fMRI to localize cognitive processes, however, fail (Uttal, 2001, 2012, 2014). They fail for a number of reasons: individual differences in brain images aren’t stable, so averaging (pooling) disparate studies obscures inter- and intra-subject variation, and such studies are merely reporting random and quasi-random fluctuations in a complex system. Thus, if individual brain physiology differs second to second, minute to minute, hour to hour, how can we logically state that by pooling these images together we can derive where these cognitive processes are occurring in the brain? So the claim that fMRI can localize cognitive processes is false.
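As a toy illustration of the pooling worry (with invented numbers, not real imaging data): if two subjects activate entirely different voxels during the same task, the group average shows moderate “activation” everywhere and strong activation nowhere, a pattern that describes neither brain.

```python
# Toy illustration with invented data: averaging across subjects whose
# activation patterns differ yields a group map that matches neither subject.
import numpy as np

n_voxels = 10
subject_a = np.zeros(n_voxels)
subject_a[:5] = 1.0   # subject A: only the first five voxels are "active"
subject_b = np.zeros(n_voxels)
subject_b[5:] = 1.0   # subject B: only the last five voxels are "active"

group_mean = (subject_a + subject_b) / 2
print(group_mean)     # 0.5 everywhere: diffuse "activation" no individual actually shows
```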

It looks like the AI hype train won’t end soon. Like with LaMDA and ChatGPT, this looks like it will make headlines for a while. But is it true? I will argue that it isn’t, since the mental is private. We have privileged access to our intentional states.

The Guardian article titled AI makes non-invasive mind-reading possible by turning thoughts into text is the newest piece reporting on such studies and making these outlandish claims. (In 2018, Mind Matters covered similar AI hype.) The article quite clearly assumes that thoughts are a physical process or a function of physical processes. The fact of the matter is, the paper does not in any way show that AI large language models (LLMs) can read minds. Thoughts are not something that can merely be read off by looking at brain physiology. The Guardian article states:

An AI-based decoder that can translate brain activity into a continuous stream of text has been developed, in a breakthrough that allows a person’s thoughts to be read non-invasively for the first time.

This claim, however, fails, and it fails due to a priori considerations. In his paper Immaterial Aspects of Thought, James Ross (1992) (also see Feser, 2013) argued that formal thinking is determinate in a way that no physical process or function of physical processes can be, so thoughts aren’t a physical or functional process and no physical process is formal thinking, which refutes functionalism and physicalism. Here is how Ross (1992: 137) puts it:

Some thinking (judgment) is determinate in a way no physical process can be. Consequently, such thinking cannot be (wholly) a physical process. If all thinking, all judgment, is determinate in that way, no physical process can be (the whole of) any judgment at all. Furthermore, “functions” among physical states cannot be determinate enough to be such judgments, either. Hence some judgments can be neither wholly physical processes nor wholly functions among physical processes.

This is clearly a form of substance dualism. So thinking and judgment are mental processes which can’t be reduced to physical or functional processes and explanations. This argument therefore has implications for claims that we can use AI and fMRI to read minds. For if cognition can’t be localized to certain parts of the brain, and if thoughts aren’t a physical or functional process, then the endeavor to read minds will ultimately fail.

fMRI can, of course, detect brain functioning. However, it can’t detect mental functioning, since the mental is irreducible to the physical (meaning states of the brain and CNS). Mind reading, then, would consist in detecting the content of one’s mental states. This, of course, would include one’s subjective states like beliefs, desires, and intentions. So brain imaging detects brain functioning, but since mind isn’t identical to the brain or its states (that is, since the mental is irreducible to the physical), such reductive materialism and mind-brain identity theories are bound to fail (see Glannon, 2017). Philosopher of mind Ed Feser puts it like this in his article Mindreading?:

Might the detection of some other kind of neural pattern amount to “reading” someone’s thoughts? No, for (among other things) the reasons outlined in my series of posts on short arguments for dualism. In particular (as I argued here), given a mechanistic (i.e. final causality-denying) conception of the material world, any material process must be devoid of intentionality. But thoughts are inherently intentional. Hence nothing detectable in any purely material processes (again, where “material” is understood in mechanistic terms) could possibly reveal the content of any thought.

This leaves it open that, at least given certain background assumptions, we might guess with some measure of probability what someone is thinking. Indeed, we can do that already, just by observing a person’s behavior and interpreting it in light of what we know about him in particular, his circumstances, human nature in general, and so forth. And of course, further knowledge of the brain might give us even further, and more refined, resources for making inferences of this sort. But what it cannot do even in principle is fix a single determinate interpretation of those thoughts, or reduce them entirely to neural activity. So, no entirely empirical methods could, even in principle, allow us to “read” someone’s thoughts in anything more than the loose and familiar sense in which we can already do so.

These outrageous claims assume that thinking is a physical process or a function of physical processes, when it’s quite simply impossible for it to be. These kinds of studies assume a kind of mind-brain identity, which is falsified by multiple realizability arguments. (It should also be noted that computational models of the mind are invalid; Tallis and Aleksander, 2008.)
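A loose software analogy for multiple realizability (offered only as an illustration, not as anything from the cited papers): one and the same abstract function can be realized by structurally very different procedures, so identifying the function with any one implementation would be a mistake; the argument makes the parallel point about mental states and brain states.

```python
# Illustrative analogy only: the same abstract function (sorting) is realized
# by two structurally different procedures. Multiple realizability arguments
# make the parallel point about mental states and brain states.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)       # place x among the already-sorted elements
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:      # repeatedly take the smaller front element
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [5, 2, 9, 1]
print(insertion_sort(data) == merge_sort(data) == sorted(data))  # True: one function, many realizations
```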

The fact of the matter is, we have private access to the contents of our minds; they are completely internal. Mind privacy is different from brain privacy: of course we can look at the brain’s neurophysiology, but since mind and brain are not identical, it’s impossible to read minds just by looking at brain states (Gilead, 2014). Gilead concludes:

If the mental is irreducible to the physical, brain privacy does not entail mental privacy. Moreover, if the mental is irreducible to the physical, there is certainly more to persons than their bodies.

My arguments above clearly show that brain imaging allows no access to our minds and that mind privacy is quite different from brain privacy, as the latter can be breached by brain imaging whereas the former cannot. We should not worry about whether brain imaging can or will be able to read our minds. We have nothing to worry about regarding our mental privacy, for there is no external access to one’s mind. Each of us has exclusive access to his or her own mind. I also show above that a reduction of the mind to the body inescapably fails, as there is a difference of categories between mind and body or brain, which is compatible with their inseparability.

The mental (including, of course, thinking) is irreducible to the physical, and science (which is third-personal) can’t study mind (first-personal subjective states), so these claims fail outright on a priori grounds.

Arguments for mind privacy

Using fMRI and AI to read minds isn’t possible now, and it won’t ever be possible.

Here is an argument that mind reading itself isn’t possible:

P1: If mind-reading were possible, then people would be able to read others’ thoughts accurately.
P2: People cannot read others’ thoughts accurately.
C: Therefore, mind-reading is impossible.

Premise 1 is the basic definition of mind-reading. It refers to the ability to accurately perceive the thoughts of others. If it were possible, then people would be able to accurately ascertain the thoughts of others. So the accuracy of mind-reading is a necessary condition for it to be possible.

Premise 2: While we can infer what others are thinking based on their behavior, language, and certain other cues, we cannot accurately perceive another’s thoughts, since they are not directly accessible to us. People also have different interpretations of the same cues.

So the conclusion follows that mind-reading is impossible: since accurate mind-reading is a necessary condition for its possibility, the lack of that ability makes it impossible.
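For readers who want the logical form made fully explicit, here is the argument’s modus tollens structure in a minimal Lean sketch; the proposition names are mine, chosen only to mirror the premises above. The arguments that follow share the same form.

```lean
-- Modus tollens form of the argument above (proposition names are illustrative).
-- P1 : MindReadingPossible → AccurateReading
-- P2 : ¬ AccurateReading
-- C  : ¬ MindReadingPossible
example (MindReadingPossible AccurateReading : Prop)
    (P1 : MindReadingPossible → AccurateReading)
    (P2 : ¬ AccurateReading) : ¬ MindReadingPossible :=
  fun h => P2 (P1 h)
```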


P1: If it were possible to read minds using AI and fMRI, then we would have clear and consistent evidence of this ability.
P2: We do not have clear and consistent evidence of this ability.
C: Therefore, it’s impossible to read minds using AI and fMRI.


P1: If it were possible to breach one’s subjective mental states, then someone would be able to access another person’s thoughts or mental processes without their consent.
P2: It is not possible for someone to access another person’s thoughts or mental processes without their consent.
C: Therefore, it is impossible to breach one’s subjective mental states.

Mental privacy refers to one’s right to keep one’s thoughts private, and breaching this privacy would require accessing those thoughts from the outside. But it’s not possible to access one’s thoughts in this way, and brain imaging technologies don’t do this, since mind isn’t identical to brain.


P1: If mind-reading using AI and fMRI were possible, then there would be consistent and reliable patterns in the brain that correspond to different thoughts.
P2: If there were consistent and reliable patterns in the brain that correspond to different thoughts, then AI algorithms would be able to accurately interpret them.
P3: There are no consistent patterns in the brain that correspond to different thoughts, since mind-brain identity is false.
C: Therefore, mind-reading using AI and fMRI is impossible.

The irreducibility of mind to brain and the falsity of mind-brain identity theory means that there can be no consistent and reliable brain patterns that correspond to different thoughts.


Case 1: If it were possible to read minds using AI and fMRI, then there would be physical evidence in the brain that corresponds to specific thoughts or mental processes.
Case 2: If it were not possible to detect physical evidence in the brain that corresponds to specific thoughts or mental processes, then it would not be possible to read minds using AI and fMRI.
Case 3: There is no physical evidence in the brain that corresponds to specific thoughts or mental processes (due to what we know about the multiple realizability of psychological traits).
C: Therefore, it is impossible to read minds using AI and fMRI (by modus tollens from Case 1 and Case 3).

There is no empirical evidence to support Case 1, and we know that it’s not possible to detect thoughts or mental processes based on brain physiology alone. As for Case 2, the absence of physical evidence linking brain states and mental states one-to-one means that AI/fMRI cannot detect them. This also suggests that the brain isn’t a purely mechanistic system that can be fully understood and predicted using computational models. This is similar to the Libet experiments, in which it was claimed that unconscious brain activity preceded the conscious intention to move; the brain does not initiate freely-willed processes (Radder and Meynen, 2012). Lastly, for the third case, neuroimaging studies consistently fail to detect specific thoughts or mental states from brain states alone. And even if patterns of brain activity can be associated with certain mental states, it’s impossible to determine with certainty what specific thoughts or mental processes a person is experiencing.

Conclusion

While our technology is advancing quickly, a priori arguments show that the explanatory gap between science and subjective mental states is impossible to close. Due to the radically different properties the mental and the physical have, we can’t use science to study our subjective mental states. While there has been a ton of recent fanfare about LLMs and their ability, combined with fMRI, to make mind-reading possible, these claims are nothing but hot air. For if the mental were reducible to the physical, then it would be possible in principle to read minds based on neurophysiology and brain images. However, since the mental is irreducible, we can’t use these technologies to read minds.

These claims, though, will increase in frequency, since physicalist views are held by the supermajority. However, the arguments here show that mind-reading using AI and fMRI is impossible, since mind and brain are not identical.

Thus, our mental privacy is safe from physical systems that attempt, in vain, to breach it.
