On the "Neuroscience of Ethics" - Approaching the Neuroethical Literature as a Rational Discourse on Putative Neural Processes of Moral Cognition and Behavior

James Giordano1,2*, Kira Becker1,3, John R. Shook4

1Neuroethics Studies Program, Pellegrino Center for Clinical Bioethics, Georgetown University Medical Center, Washington, DC, USA
2Departments of Neurology and Biochemistry, Georgetown University Medical Center, Washington, DC, USA
3Department of Neuroscience, Amherst College, Amherst, MA, USA
4Graduate Program in Science and the Public, University of Buffalo, Buffalo, NY, USA

Neuroethics is a relatively new, yet ever-expanding discipline that focuses on the “neuroscience of ethics” and the “ethics of neuroscience”. In this essay, we discuss the literature describing the “neuroscience of ethics”. Current approaches to employing neuroscientific techniques and tools to elucidate brain processes serving ethical decision-making have evolved from prior psychological studies of how and why humans believe and act in ways deemed to be moral. While a number of neuroanatomical pathways have been defined as participatory in certain types of decision-making, it appears that none are exclusively dedicated to moral cognition or action. Moreover, attempts at enhancing morality through neurological interventions are plagued by differing constructs of what constitutes moral action in various contexts. Herein, we review developments in neuroscientific studies of morality, and present a rational view of the capabilities, limitations and responsibilities that any genuine neuroethical address and discourse should regard.


 

In response to an invitation to review our recently published work, “A four-part working bibliography of neuroethics: part 2 – neuroscientific studies of morality and ethics”1, we herein provide a synopsis of that literature, with a prefatory look back at the search for possible bases of moral thought and action. As with any scholarly examination, it first becomes important, if not necessary, to define the object of study. “Morality” can be defined as “beliefs about what is right behavior and what is wrong behavior”2. However, the relativity and ambiguities of any such definition of which thoughts and actions (or inactions) are “right” and “wrong” – and, by extension, deemed “good” and/or “bad” – have historically incurred problems when attempting to determine or develop standards for “morality”3.

Deliberations about justifying moral standards are the focus of moral philosophy, not moral psychology or moral neuroscience. Yet one and the same organ, the brain, makes moral judgment and deliberation possible. The iterative rise of the natural sciences, and the growth of empirical and experimental studies during the latter part of the eighteenth and throughout the nineteenth century, respectively, were instrumental in re-situating investigations of moral thought and action within the then-emerging discipline of psychology3.

To be sure, the profound, and seemingly perdurable questions of moral philosophy (e.g.- “what is right and wrong?”; “how do people intuit or establish what is good, bad and/or evil?”) persisted, not as mere fodder for discourse, but rather as the basis for observational and experimental inquiry, to be approached through the application of ever newer and more sophisticated methods and tools. The turn of the nineteenth to twentieth century – and the strong influence of scientific and industrial revolutions – fostered numerous studies into putative biological bases of thought and behavior (e.g.- psychophysiology and physiological psychology)3. Taken together, the concatenation of these fields of study (i.e.- physiology, anatomy, chemistry, psychology, anthropology, sociology, and to a large extent philosophy, in particular the philosophy of mind) ultimately became the discipline of neuroscience (or perhaps more accurately, the neurosciences, a term that explicitly conveys the multi-focal dimensions of brain science, and the breadth of inquiry and applications, covering a span “from the synaptic to the social”)4.

Since the start of the 21st century, studies of neurobiological processes and mechanisms involved with cognitions, emotions and behaviors have proliferated, and many have been directed toward those ascribed to be “moral” in nature3. The discipline of neuroethics quickly arose as broader ethical and philosophical implications of such studies erupted into view. As described by cognitive scientist and philosopher Adina Roskies, neuroethics addresses both “the neuroscience of ethics” (i.e.- somewhat colloquially referring to the aforementioned studies of putative neurological bases of moral thought and actions) and “the ethics of neuroscience” (i.e.- those ethical concerns, questions, problems and solutions fostered by brain research and its varied applications and effects in the social realm)3.

Reviewing our ten-year bibliography of neuroscientific studies of ethics and morality3, we posit that two major “domains” of studies – and findings – can be identified. First are attempts at mapping neural sites and networks that are involved in particular types of moral and ethical cognitions and behaviors (i.e.- mapping the “moral brain”). Second are investigations and discourses about the possibility and implications of attempts to modify (i.e.- direct, enhance or diminish) moral thought and actions via neurological interventions.

As reflected in our bibliography3, numerous studies have attempted to identify the functional neuroanatomy of moral and ethical thought(s) and behavior(s). Employing a variety of neuroimaging techniques, such as quantitative electroencephalography (qEEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and diffusion tensor and kurtosis imaging (DTI/DKI), these studies have elucidated several brain sites and networks as participatory in particular types of thought processes and actions posited to be representative of “morality” or ethics. Such studies have shown the involvement of areas of the hippocampus, specific regions within the amygdala, the ventromedial/dorsolateral prefrontal cortex, the posterior cingulate cortex, the precuneus, and the temporo-parietal junction.

For neuroethics, a core question is whether there is anything special about the engagement of these neuroanatomical substrates in regard to specifically moral cognitions and actions. In the main, the answer seems to be that there is not, per se. It appears that moral decisions are engaged and processed in much the same way(s) as other decisions of high referent value (i.e.- those that the individual regards as important) that involve weighing relative gains and losses to both self and others. As we - and others - have noted, there is not a “nucleus moralis” or “fasciculus ethicus”; there is no brain region, network, or architecture that is dedicated solely to determining what an individual (or a group) considers to be moral, or to processing ethical ideas and planning moral conduct7-11. At most, studies to date have revealed patterns of neurobiological activity in response to, and involved in, certain types of situations, dilemmas, and activities that are construed to be moral and/or ethical (by research subjects and by experimenters, if not the society-at-large in which the studies were conducted).

Neuroethics isn’t trapped by a false dichotomy holding that unless the brain is moral - thanks to dedicated neural processing for morality - the brain must instead be devoid of morality altogether. The underlying fallacy supposes that, because some mental ability for personal morality is expected, the finding that nothing neuronal is dedicated to conducting that ability must expose morality as somehow unreal. Even after non-natural powers are set aside as relics from unscientific ages, plain folk psychology cites all sorts of mental events and ways of thinking generally taken to be signs of authentic moral conduct, or at least indicative of efforts to be moral. The example of voluntary choice, or acting from one’s own free willing, has animated neuroethical debate, and will doubtless continue to do so12-14.

Regardless of the eventual fate of notions about mental affairs, humans continue to try to behave well, more or less, despite the headlines. Given that certain types of cognitions (e.g.- beliefs, ideals and decisions) and actions are held to be of moral value (by individuals and groups), it follows that assessing the neural mechanisms involved is important, not simply for a deepened understanding of why and how certain cognitive and behavioral functions occur, but also to access and affect these processes to elicit some change in the ability to think and/or act “morally”15.

But morality isn’t just in the lone head. Context matters a great deal to becoming moral, and becoming more moral. One’s own social context and place in historical time exert considerable influence over which concepts and constructs of moral rectitude are held in highest regard. Moral abilities are not inborn at full capacity; rather, humans learn to become moral members of society16-18. Moreover, no matter which ethical code is preferred when one is an adult, there remains much situational variability in how moral judgments are formed on a moment-by-moment basis. It appears that, to at least some extent, human moral cognition entails multiple modes of moral discernment and ethical justification19-21. Both utilitarians and deontologists, for example, can find human brains (and often the same brains) that are able to assess crafted moral situations according to theoretical expectations22,23. From neuroscientific, psychological, and sociological standpoints, then, the stance of moral pluralism is acquiring plausibility – “morality” is just a conventional covering term for quite distinct modes of judgment and ways of conduct, each finding a way to contribute to individual and social flourishing24.

Neuroethics takes into account the neuroscience of morality while undertaking its second main task: addressing the ethics of modifying brain structure and function. With ongoing developments in neuroscientific techniques and technologies, there is increasing interest in, and discourse about, using such approaches to alter moral thought and behaviors19,20. The notion of moral “enhancement” typically evokes positive and even utopian hopes about peaceful and just societies.

However, ends don’t always justify means, even in ethics, especially if ends are only vaguely conceived. So, while elements of a “common morality” for adults have been proposed25, and perhaps components of this morality discerned among all the particularities of culture, there is no universal standard for what concretely counts as ethical conduct, beyond a few moral platitudes we expect children to follow. Constructs of a “golden rule” are applicable in part26, but there are abundant examples of intentional violations of the “do unto others” maxim.

So, if the goal is moral enhancement, such a task couldn’t simply amount to adjusting brain function to better conform to some cognitive pattern approved by one or another theory of ethics. As the previous section recounts, no theory positing some essence of morality is surviving scrutiny from moral neuroscience or moral psychology. Only the strictly empirical route remains secure. With a pre-approved set of moral behaviors already in hand, and with neuroscientific investigations discerning correlations between those behaviors and neural function, it is possible to infer that modifications to certain neural processes would affect moral behaviors (though in potentially unexpected and unwanted directions).

Before any techniques are employed for “moral enhancement”, the first steps should entail specifying what counts as the “moral” conduct to be attained, and ascertaining which cognitive processes can be targeted for modification to thereby “improve” that designated moral conduct. Moral enhancement through neural modification could never be species-wide, trans-historical, or aligned with just one moral theory, for reasons already discussed. At most, local and provisional moral improvements could be developed that target specific kinds of behaviors in highly limited ways, not unlike advances in pharmaceuticals.

Still, neuroethical discourse should not dismiss the idea of moral enhancement as entirely unrealistic. Any type of biologically induced treatment that is posited to represent enhancement or optimization entails many considerations and constraints that must be seriously taken into account31. Achieving measurable moral improvement would necessitate meaningful deliberation about what morality is, and which moral precepts and standards are of most value. Of course, each culture already harbors its own views about moral rules and ethical priorities.

Neuroethical address therefore reaches a stage where two paths diverge. The first path allows moral psychology and moral enhancement to vary from culture to culture (and likewise for sub-cultures). What counts as moral enhancement accordingly varies across societies, so that moral relativism corresponds with relativism for enhancing morality. On this path, neuroethics is splintered and divided by cultural preferences. The second path encourages moral psychology to seek what is common to all human morality. What counts as moral enhancement would be the improvement of moral capacities common to our species that support any culture’s cohesion. On either path, there is no destination that enables arriving at some neuroscientific determination of what is “really” moral, or some verdict by moral psychology about which cultures are more moral than others. Brain science can never do the work of ethics. However, if neuroethics is highly multi-disciplinary and sensitive to inter-cultural deliberations, then the future of moral enhancement could contribute to the greater civility and harmony that any society should seek.

Neuroethics will be a successful discipline to the extent that its two main tasks concerning human morality are continually coordinated and adjusted to each other as neuroscience progresses. This calls for neuroethics to be thoroughly neurophilosophical, and less beholden to false dichotomies behind dire headlines about morality’s evaporation. As Wiseman has noted, what may be drawn from attempts to depict neuroanatomical loci and networks that subserve moral thought and action is that such construals of a “moral brain” represent a “myth”, in the most literal sense of mythos (μῦθος) – an explanatory story, typically based upon limited information. We concur, and add that denoting something as mythic need not be pejorative. As a matter of fact, a myth can serve to represent partially understood truths, convey profound meanings, and serve as object lessons.

A key truth is that the human brain evolved to manage an intensely social life with predispositions favoring joint cooperation and group solidarity27,28. The prevalence of norms and rules, and more rules about enforcing important norms, is quite characteristic of our species. However, that truth can be twisted into falsehood by further assuming that human brains are hard-wired to be moral. Predispositions for sensitivities to certain socially relevant cues that are important to individuals’ interactions, survival and flourishing in groups appear to have been developed and preserved as a consequence of hominid evolution. If “morality” is taken to mean anything more specific than that, then one would be talking only about socio-cultural constructs, not human nature. No specific moral code is inherent to humanity, although developing some sort of moral mindset and instilling it in the young, which every culture accomplishes, is as naturally human as anything else.

The human capacity to create and sustain particular cultures is always deeply at work. Let us not forget that individuals constitute cultures, and their psycho-social interactions affect and are affected by the structure and function of their brains. Do brains engage in cognitions and decisions that evoke behaviors that may be considered to be “moral” and/or “ethical”? Absolutely; and therein rests the truth in the myth. Are there nodes and networks that are exclusively functional in moral thoughts and actions? It seems not. But brains are embodied in individuals who are nested in, and interactively responsive to, their environments. Indeed, the merging of experimental neuroscience and cognitive science may provide new methods – and ways – of understanding and predicting the relationship between neural activity and cognitive dynamics, in attempts to afford a bridge between the brain and those of its functions that are categorized as the “mind”28.

And here we encounter the pragmatic temperance of the mythos of a moral brain by the logos (λόγος) – the rational discourse – of neuroethical investigation and deliberations about using the tools and information of the brain sciences in ways that respect the realistic capabilities of the techniques employed, and the facts (and persistent unknowns) of the information obtained. This is the core significance of “an ethics of neuroscience” (i.e.- neuroethics’ “second tradition”), and the key to ensuring the validity and preserving the value of any attempts at, and information gained from, a “neuroscience of ethics”.

This work was supported in part by funding from the Lawrence Livermore National Laboratory (JG), William H. and Ruth Crane Schaefer Endowment (JG), and the Neuroethics Studies Program of the Pellegrino Center for Clinical Bioethics of Georgetown University.

  1. Darragh M, Buniak L, Giordano J. A four-part working bibliography of neuroethics: part 2 – neuroscientific studies of morality and ethics. PEHM. 2015; 10(2): 1-22.
  2. Merriam-Webster [Internet]. Springfield, MA: Merriam-Webster, Incorporated; 2015 [cited 2016 Aug 1].
  3. Darragh M, Buniak L, Giordano J. A four-part working bibliography of neuroethics: part 2 – neuroscientific studies of morality and ethics. PEHM. 2015; 10(2): 1-22.
  4. Giordano J. Neuroethics: Traditions, tasks, and values. Human Prospect. 2011; 1(1): 2-8.
  5. Shook JR, Giordano J. Will brain science understand and modify morality? A neuropragmatic and neuro-ecological approach to neuroethics. Pragmatism Today. 2016; 7(1): 20-31.
  6. Young L, Dungan J. Where in the brain is morality? Everywhere and maybe nowhere. Soc Neurosci. 2012; 7(1): 1-10.
  7. Verplaetse J, et al. The Moral Brain: Essays on the Evolutionary and Neuroscientific Aspects of Morality. New York, NY: Springer; 2009.
  8. Giordano J. Neuroethics: Interacting “traditions” as a viable meta-ethics. AJOB Neuroscience. 2011; 2(2): 17-19.
  9. Avram M, Giordano J. Neuroethics: some things old, some things new, some things borrowed…and to do. AJOB Neuroscience. 2014; 5(4): 1-3.
  10. De Jong BM. Neurology of widely embedded free will. Cortex. 2011; 47(10): 1065-1160.
  11. Mele A. Unconscious decisions and free will. Philos Psychol. 2013; 26(6): 777-789.
  12. Roskies AL. How does the neuroscience of decision making bear on our understanding of moral responsibility and free will? Curr Opin Neurobiol. 2012; 22(6): 1022-1026.
  13. Shook JR, Giordano J. Will brain science understand and modify morality? A neuropragmatic and neuro-ecological approach to neuroethics. Pragmatism Today. 2016; 7(1): 20-31.
  14. Decety J, Michalska KJ, Kinzler KD. The developmental neuroscience of moral sensitivity. Emot Rev. 2011; 3(3): 305-307.
  15. Decety J, Michalska KJ, Kinzler KD. The contribution of emotion and cognition to moral sensitivity: a neurodevelopmental study. Cereb Cortex. 2012; 22(1): 209-220.
  16. Moll J, Schulkin J. Social attachment and aversion in human moral cognition. Neurosci Biobehav Rev. 2009; 33(3): 456-465.
  17. Levy N. Empirically informed moral theory: a sketch of the landscape. Ethical Theory Moral Pract. 2009; 12(1): 3-8.
  18. Cushman F. Action, outcome, and value: a dual system for morality. Pers Soc Psychol Rev. 2013; 17(3): 273-292.
  19. Cikara M, Farnsworth RA, Harris LT, Fiske ST. On the wrong side of the trolley track: neural correlates of relative social valuation. Soc Cogn Affect Neurosci. 2010; 5(4): 404-413.
  20. Berns GS, et al. The price of your soul: neural evidence for the non-utilitarian representation of sacred values. Philos Trans R Soc Lond B Biol Sci. 2012; 367(1589): 754-762.
  21. Kahane G, et al. The neural basis of intuitive and counterintuitive moral judgment. Soc Cogn Affect Neurosci. 2012; 7(4): 393-402.
  22. Sinnott-Armstrong W, Wheatley T. The disunity of morality and why it matters to philosophy. The Monist. 2012; 95(3): 355-377.
  23. Gert B. Common Morality: Deciding What To Do. New York, NY: Oxford University Press; 2004.
  24. Pfaff D. The Neuroscience of Fair Play: Why We (Usually) Follow The Golden Rule. New York, NY: Dana Press; 2007.
  25. Shook JR, Giordano J. Neuroethics beyond normal. Cambridge Quarterly of Healthcare Ethics. 2016; 25(1): 121-140.
  26. de Waal F. Primates and Philosophers: How Morality Evolved. Princeton, NJ: Princeton University Press; 2009.
  27. Churchland PS. Braintrust: What Neuroscience Tells Us about Morality. Princeton, NJ: Princeton University Press; 2011.
  28. Turner BM, Forstmann BU, Love BC, Palmeri TJ, Van Maanen L. Approaches to analysis in model-based cognitive neuroscience. J Math Psychol. 2016. doi: 10.1016/j.jmp.2016.01.001
 

Article Info

Article Notes

  • Published on: September 07, 2016

Keywords

  • Neuroscience
  • Neuroethics
  • Enhancement
  • Morality
  • Ethics
  • Pragmatism

*Correspondence:

Prof. James Giordano, PhD
Neuroethics Studies Program, Pellegrino Center for Clinical Bioethics, Georgetown University Medical Center, Bldg D, Rm 238, 4000 Reservoir Road, Washington, DC 20057, USA
Email: james.giordano@georgetown.edu