Brain Imaging and Diagnosis Ethical Implications in Disorders of Consciousness
Abstract
Current theories of the mind hold that it is physical in nature; however, much debate exists about whether the mind can be interpreted with brain imaging research. The purpose of this paper is to provide a review of theories of the mind and how they relate to current debates on disorders of consciousness, and to explore the ethical implications of medical diagnoses and misdiagnoses.
Questions concerning the nature of the mind and consciousness have been raised since ancient times, and scientists are now turning to cognitive neuroscience for answers. Empirical research and brain imaging studies provide information to interpret states of consciousness, particularly in patients who have suffered brain injuries. Research about the nature of the mind and consciousness is important because it may affect ethical, legal, and medical decision-making.
In 2005 a 15-year legal battle between the family and husband of Terri Schiavo, a woman diagnosed as being in a persistent vegetative state (PVS), came to an end. The conclusion: the courts ruled that the feeding tube keeping Schiavo alive should be removed.[i] The Schiavo case gained notoriety as the ethical, scientific, and legal arguments surrounding it were broadcast, in great detail, by media outlets. The case highlights the conflicting opinions over whether Terri was conscious or could ever hope to regain consciousness. Michael Schiavo, Terri’s husband, believed the medical prognoses stating that Terri had no hope of ever recovering consciousness; Terri’s parents thought otherwise.
Terri was diagnosed as being in a persistent vegetative state when, after awaking from a coma, she showed no signs of awareness of herself or her environment although she maintained sleep and wake cycles and some involuntary reflexes.[ii] Despite evidence from brain imaging tests, which found that Terri suffered from severe brain damage and a lack of activity in her cerebral cortex, Terri’s parents maintained that she should be kept alive since they interpreted Terri’s unconscious reflex behaviors as meaningful and intended actions.
The Schiavo case shows that the nature of consciousness is of practical concern today and has far-reaching implications. In this paper, I will briefly describe several theories of the mind, how they have evolved with respect to scientific discoveries, and how these theories can be applied to current research on consciousness within vegetative state (VS) and minimally conscious state (MCS) patients.
Brief Overview of the Mind-Body Problem
Dualist theories of the mind claim that the mind is separate from the body and is nonphysical in nature.[iii] Today, many people, especially those in the scientific community, support a physical theory of the mind, which holds that the mind is physical in nature; however, there is still conflict within the physicalist community. Physicalists disagree on whether the mind is identical to brain states, as identity theorists hold, or whether the mind is the functioning of brain states, as functionalists believe. Debates surrounding these two theories, identity theory and functionalism, express differing opinions on whether the mind can exist in non-carbon-based matter, such as computers. Functionalists believe that the mind can be realized in other substances, and identity theorists believe that the mind, being a collection of brain states, cannot.
Functionalism Versus Identity Theory
Turing Test
Alan Turing, a functionalist, proposed the Turing Test, also known as the Imitation Game, in 1950.[iv] The Turing Test is a hypothetical test to determine whether computers possess intelligence. To summarize this test: A human and a computer communicate with a second human through written language, as opposed to verbal language. If the human observer cannot differentiate between the human and the computer based on the responses, then the computer has passed the Turing Test—it has fooled a human into believing that the computer is another human. In this hypothetical situation, Turing operationally defines human intelligence as passing a Turing Test.
The Turing Test has relevance for diagnosing brain-damaged individuals. Humans signal understanding to other humans through behavioral responses, such as head nodding, or verbal responses. Brain-damaged individuals may be incapable of such behaviors. When a person suffers brain damage and cannot communicate through overt behavior, it may still be possible to determine whether understanding exists within this person by comparing her brain scans to those of healthy individuals. The Turing Test has inspired many debates and criticisms, including philosopher John Searle’s “Chinese Room” thought experiment.
Chinese Room Overview
Searle, as an identity theorist, believes that the mind is defined as brain states and processes. As such, a mind cannot exist in matter that is different from the carbon-based matter of our brains. The Turing Test can never prove intelligence or understanding in a computer because a computer is not made of the same matter as our brains. In the Chinese Room thought experiment, a native English speaker is placed in a room and, through a hole in the wall, is presented with a piece of paper depicting Chinese symbols.[v] The English speaker does not speak Chinese, but he consults a rulebook that tells him which Chinese symbols should be used to reply to the Chinese symbols received. Because the English speaker provides the correct response to the Chinese symbols, a native Chinese speaker, standing on the other side of the wall, assumes that the person responding understands Chinese. We, as observers, know that the native English speaker does not understand Chinese. Searle uses this thought experiment to show that the intelligence attributed to a computer that passes the Turing Test does not reflect true understanding; instead, the computer is merely creating responses through formal symbol manipulation according to specified rules.
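The formal symbol manipulation Searle describes can be illustrated in a few lines of code. The "rulebook" below is a hypothetical two-entry lookup table invented for illustration; the point is that the program maps input symbols to output symbols without containing any representation of what the symbols mean:

```python
# Illustrative sketch of Searle's Chinese Room: replies are produced by
# pure symbol lookup.  Nothing in the program represents the *meaning*
# of the Chinese symbols, which is Searle's point.
RULEBOOK = {
    "你好": "你好",          # greeting -> greeting
    "你会说中文吗": "会",    # "do you speak Chinese?" -> "yes"
}

def chinese_room(symbols: str) -> str:
    """Return the rulebook's response for the input symbols.

    The lookup is formal symbol manipulation; to an outside observer
    the replies look fluent, yet no understanding is involved.
    """
    return RULEBOOK.get(symbols, "？")  # unknown input -> placeholder reply
```

From the Chinese speaker's side of the wall, `chinese_room("你好")` returns a perfectly appropriate greeting; from inside, it is only a table lookup.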
Chinese Room Conclusions
Searle’s thought experiment demonstrates the difficulty in proving that computers may possess human intelligence. Nonetheless, the doubt that Searle casts upon computer intelligence may be overshadowed by Searle’s biases. He does not apply the same standard to interpreting human understanding that he does to computers. With humans, we assess understanding in another person, not by proving it, but by perceiving a behavioral or verbal response. This is because we, as observers, do not have access to another person’s inner thoughts. Thus, Turing might reply to Searle by pointing out that Searle’s doubts about whether computers that pass a Turing Test possess true understanding also apply to human understanding. In essence, Turing would claim that Searle should apply the same standard measure of understanding to computers as he does to humans.
Is Brain Activation Enough to Prove Consciousness in VS Patients?
Connections Between the Turing Test and Current fMRI Studies
The ability to measure understanding by observing inner mental processes in an individual, instead of behavior, might sound far-fetched. In recent years, however, technology has advanced to the point where scientists are finding that some VS patients may understand commands and can respond through brain activation, as measured by functional magnetic resonance imaging (fMRI). John Stins discusses how current studies on VS patients are like modern-day Turing Tests and believes that criticisms of these studies are similar to those raised against Searle’s Chinese Room thought experiment.[vi] The Turing Test, which tested for intelligence or understanding in computers, is comparable to testing for consciousness in VS patients for three reasons:
(1) Both the Turing Test and fMRI studies of VS patients apply external stimuli to an object under observation (computer or VS patient); the external stimuli consist of questions for the computer and verbal commands for the VS patient.
(2) Both the computer and the VS patient respond to the stimuli (input); the computer responds by answering questions and the VS patient responds through brain activation.
(3) The output that the computer and VS patient produce is analyzed by an external observer, or experimenter, for conscious intelligence; for the computer to have intelligence it must respond in a way that is indistinguishable from a human’s response, and for the VS patient to demonstrate conscious intelligence, her brain activation, recorded through fMRI, must be indistinguishable from the brain activation of healthy individuals.
So, if a VS patient responds to external stimuli in such a way that her brain scans are the same as the brain scans of a healthy individual, then she passes a modern-day Turing Test for conscious understanding.
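The decision procedure in this modern-day Turing Test can be made concrete with a toy sketch: a patient "passes" if her activation pattern is statistically indistinguishable from a healthy reference pattern. The per-region values and the similarity threshold below are invented for illustration; real fMRI analyses use far more sophisticated statistics:

```python
# Toy sketch of the imaging "Turing Test" logic: compare a patient's
# per-region activation to a healthy reference and call the patterns
# indistinguishable if their correlation exceeds a (crude) threshold.
from math import sqrt

def correlation(xs, ys):
    """Pearson correlation between two equal-length activation vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def passes_imaging_turing_test(patient, healthy, threshold=0.9):
    """'Indistinguishable from healthy', operationalized very crudely."""
    return correlation(patient, healthy) >= threshold

# Hypothetical activation levels in three task-relevant regions:
healthy_scan = [0.82, 0.31, 0.66]
patient_scan = [0.79, 0.35, 0.61]
```

Here `passes_imaging_turing_test(patient_scan, healthy_scan)` plays the role of the external observer judging whether the two responses can be told apart.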
Brain Activation Suggests Consciousness in VS Patients
Adrian Owen and his colleagues have conducted fMRI research on VS patients.[vii] They wondered whether fMRI studies are capable of identifying pockets of consciousness in VS patients. If so, then VS patients might be capable of communicating their thoughts via neural activation instead of motor activation, which, by definition, VS patients cannot control. Patients are diagnosed as being in a vegetative state when, after awaking from a coma (“an unarousable unresponsiveness state”), they do not show any signs of conscious intelligence. VS patients are not aware of their environment or themselves, nor do they show observable voluntary reactions to stimuli presented to them. Although VS patients do not display an external awareness of their environment, Owen theorizes that they may still possess residual cognitive abilities, which could only be discovered through brain imaging studies.
Misdiagnoses of disorders of consciousness are relatively common; some studies report that up to 43 percent of patients are misdiagnosed. Owen believes that fMRI studies could provide a supplementary method of measuring cognitive functions.[viii] Several disorders of consciousness may be misdiagnosed as VS. These include the minimally conscious state (MCS), a disorder of consciousness in which a patient displays a small amount of awareness of self or environment, and locked-in syndrome (LIS), in which patients’ bodies are paralyzed but they may retain sufficient control over eye movements to communicate through blinking.
Summary of Owen’s Study
Method
In a 2006 study, Owen used fMRI to measure brain activation in VS patients.[ix] Studies using fMRI allow researchers to observe oxygenated blood flow in a patient’s brain over time. With this information, researchers hope to determine whether patients are engaging in conscious processing, which produces sustained brain activation, or unconscious processing, which produces only transient activation. In Owen’s study, two experiments were conducted on a 23-year-old woman who was diagnosed as being in a vegetative state.
In the first experiment, researchers used fMRI scans to measure the patient’s response to verbal sentences and to “acoustically matched noise sequences.” They found that there was activation in the patient’s middle and superior temporal gyri when the patient was listening to verbal speech. Further, when this activation was compared to the activation in a healthy individual’s brain, there was no significant difference in the brain scans. In addition, the patient showed increased activation when ambiguous sentences were presented, which suggests that the patient was processing the meaning of the speech, although researchers were unsure if this was a conscious or unconscious process.
In order to determine whether this patient could consciously respond to external stimuli, a second experiment was conducted. fMRI images of the patient’s brain were recorded after the patient was told to imagine either playing tennis or walking through her house. Researchers used both spatial navigation tasks (walking through the house) and motor imagery tasks (playing tennis) because these tasks have been found to produce the most robust activation in the brain compared to other mental imagery tasks.[x] Results indicate that the activated areas of the patient’s brain were the same areas that were activated when a healthy individual engaged in the mental imagery tasks. So, when the patient was asked to imagine playing tennis, her supplementary motor area (SMA) was activated, and when the patient was asked to imagine walking through her house, the parahippocampal gyrus, the posterior parietal cortex (PPC), and the lateral premotor cortex (PMC) were activated. Researchers found that activation of these brain regions persisted for 30 seconds, until the patient was instructed to rest. Owen concluded from this experiment that the patient possessed some awareness of her surroundings; she was able to respond to external stimuli through brain activation that was indistinguishable from how a healthy individual responds.[xi]
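The sustained-versus-transient distinction this interpretation relies on can be sketched in code. The sampling rate, activation threshold, and run-length criterion below are illustrative stand-ins, not parameters from the study:

```python
# Sketch of the sustained-vs-transient distinction: activation that stays
# above baseline until the "rest" cue is treated as evidence of willed
# (conscious) processing, while a brief spike is treated as automatic.
def is_sustained(bold_signal, threshold=1.0, samples_required=15):
    """Return True if activation stays above threshold for a long run.

    bold_signal: activation samples (e.g., one per 2-second fMRI volume),
    so 15 consecutive supra-threshold samples ~ 30 seconds of activity.
    """
    run = best = 0
    for sample in bold_signal:
        run = run + 1 if sample > threshold else 0
        best = max(best, run)
    return best >= samples_required

# A transient response: brief spike after the instruction, then decay.
transient = [0.2, 1.4, 1.1, 0.4, 0.1] + [0.2] * 15
# A sustained response: activation maintained until the rest cue.
sustained = [0.2] + [1.3] * 16 + [0.3, 0.2]
```

On this crude criterion, only the second signal would count as task-related, willed activity; the first would be classified as an automatic response to the instruction.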
Criticisms of Study
Daniel Greenberg claims that Owen’s study did not address viable alternative explanations in its interpretation of the fMRI brain scans.[xii] Greenberg suggests that the fMRI activation could have reflected the patient’s automatic and unconscious reactions to the external stimuli. That is, instead of the patient consciously imagining playing tennis, she might merely have been reacting to the last word said to her (in this case, “tennis”). Greenberg recommends that Owen conduct further experiments with sentences such as “Sharleen was playing tennis?” to see if the same brain areas are activated. If the patient understands that this alternative sentence is not a command to imagine playing tennis, then her supplementary motor area would not necessarily show activity. If it did, then one might conclude that the patient is unconsciously reacting to the word “tennis.” Also, sentences like “Imagine visiting the rooms in your home after playing tennis” should be presented to the patient to determine whether she understands the instructions or if she is just responding to the last word. If, in the previous example, areas of the patient’s brain related to imagining spatial navigation become activated, then, Greenberg maintains, researchers can attribute understanding and intention to the patient.
In addition to Greenberg’s criticism, Parashkev Nachev and Masud Husain claim that in order for Owen to attribute consciousness to the patient, Owen must ensure that the activation observed with the stimuli would not also occur without the stimuli.[xiii] Further, they criticize Owen’s conclusions because they are based on a comparison of brain activation when subjects were resting to brain activation when subjects were responding to mental imagery instructions. Nachev and Husain recommend instead that Owen compare brain activation when opposing verbal instructions are provided. For instance, researchers should compare the difference between patients’ responses to “imagine playing tennis” and “do not imagine playing tennis.” Nachev and Husain also believe that one cannot attribute conscious decisions to brain activation alone because brain activation can also occur when a person is not conscious. This line of reasoning is similar to criticisms of the Chinese Room argument because Nachev and Husain do not believe that indistinguishable brain scans (between a vegetative and a conscious patient) are sufficient to assign understanding to a person. Similarly, Searle believes that indistinguishable responses between a computer and a person do not prove a computer’s understanding in the Turing Test.
Response to Criticisms
Responding to these criticisms, Owen claims that the patient would not have been responding automatically to words such as “tennis” and “house” because the brain activation lasted for 30 seconds and occurred in areas of the brain associated with mental imagery.[xiv] Past studies have found that brain activation is usually transient for unconscious, automatic processing. Further, the areas of the brain that were activated are involved in mental imagery tasks, not word comprehension. Owen also believes that Nachev and Husain’s demand, that activation be proven absent without the stimuli, would be impossible to satisfy because it would require an infinite number of fMRI scans. Owen addresses Greenberg’s criticisms by suggesting that future research should compare brain activation in healthy and brain-damaged individuals when both non-instructive sentences, including words such as “tennis” and “house,” and instructive sentences, such as “imagine playing tennis” or “imagine walking through a house,” are presented.
Conclusion
Although there is much debate over the interpretation of fMRI studies of VS patients, the evidence for a small amount of consciousness in certain VS patients is compelling. Further research will be needed to determine conclusively which interpretations of consciousness in brain-damaged patients are correct. With more research and improved technologies, answers about the nature of the mind and consciousness may yet be found.
REFERENCE NOTES
[i] Timothy Quill, “Terri Schiavo – A Tragedy Compounded,” The New England Journal of Medicine 352, (2005): 1630-1633. Retrieved from http://content.nejm.org/cgi/content/full/352/16/1630
[ii] Ibid.
[iii] Jay Friedenberg and Gordon Silverman, “The Philosophical Approach: Enduring Questions,” Cognitive Science: An Introduction to the Study of the Mind (Newbury Park, CA: Sage Publications, 2006), 29-64.
[iv] Robert French, “The Turing Test: The First Fifty Years,” Trends in Cognitive Sciences 4, (2000): 115-121. Retrieved from http://www.u-bourgogne.fr/LEAD/people/french/TICS_turing.pdf
[v] Ibid.
[vi] John Stins, “Establishing Consciousness in Non-communicative Patients: A Modern-day Version of the Turing test,” Consciousness and Cognition 18, (2009):187-192. doi:10.1016/j.concog.2007.12.005
[vii] Adrian Owen, Martin Coleman, Melanie Boly, Matthew Davis, Steven Laureys, and John Pickard, “Detecting Awareness in the Vegetative State,” Science 313, (2006): 1402. doi:10.1126/science.1130197
[viii] Adrian Owen, Martin Coleman, Melanie Boly, Matthew Davis, Steven Laureys, Dietsje Jolles, and John Pickard, “Response to Comments on ‘Detecting Awareness in the Vegetative State,’” Science 315, (2007): 1221c. doi:10.1126/science.1135583
[ix] Owen et al., Science, 1402.
[x] Melanie Boly, Martin Coleman, Matthew Davis, Adam Hampshire, Daniel Bor, Gustave Moonen, Pierre Maquet, John Pickard, Steven Laureys, and Adrian Owen, “When Thoughts Become Action: An fMRI Paradigm to Study Volitional Brain Activity in Non-communicative Brain Injured Patients,” NeuroImage 36, (2007): 979-992. doi:10.1016/j.neuroimage.2007.02.047
[xi] Owen et al., Science, 1402.
[xii] Daniel Greenberg, “Comment on ‘Detecting Awareness in the Vegetative State,’” Science 315, (2007): 1221b. doi:10.1126/science.1135284
[xiii] Parashkev Nachev and Masud Husain, “Comment on ‘Detecting Awareness in the Vegetative State,’” Science 315, (2007): 1221c. doi:10.1126/science.1135096
[xiv] Owen et al., Science, 1221c.