
Can pills change our morals?

February 25, 2013
A mixture of tablets spilling out of a bottle.

Can pills change our sense of right and wrong? Molly Crockett is a Sir Henry Wellcome Postdoctoral Fellow. Along with colleagues at the Wellcome Trust Centre for Neuroimaging, she is investigating whether manipulating people’s brain chemistry with antidepressant pills could change the way they respond to moral situations. Here she tells us more about this intriguing area of research.

It’s a classic moral dilemma: a trolley is hurtling out of control down the tracks toward five workers. If you do nothing, they will die. You have access to a switch that will divert the trolley onto a different set of tracks, where there is a single worker. If you flip the switch, this single worker will die, but the other five will be saved. Is it morally permissible to flip the switch?

Now, there’s no objectively right or wrong answer to this question. In fact, there are two schools of moral thought that take opposing views. The utilitarian school, promoted by the philosopher David Hume, judges actions based on their outcomes: morally appropriate actions are those resulting in the greatest good for the greatest number. In contrast, the deontological school, promoted by the philosopher Immanuel Kant, judges the actions themselves: there are right actions and wrong actions, and outcomes are irrelevant. In the case of the trolley problem, utilitarians say it is appropriate to kill one to save many, because more lives are saved. Deontologists, on the other hand, say it is inappropriate to kill one to save many, because killing is wrong.
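
To make the contrast concrete, here is a minimal sketch of the two schools rendered as decision rules. The code is purely illustrative: the Action type and both judgment functions are my own invention, not anything used in the study.

```python
# Illustrative only: the two moral frameworks as decision rules.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    lives_saved: int
    lives_lost: int
    involves_killing: bool

def utilitarian_judgment(action: Action) -> bool:
    """Judge by outcomes: permissible if more lives are saved than lost."""
    return action.lives_saved > action.lives_lost

def deontological_judgment(action: Action) -> bool:
    """Judge the act itself: killing is wrong, whatever the outcome."""
    return not action.involves_killing

flip_switch = Action("divert the trolley onto the side track",
                     lives_saved=5, lives_lost=1, involves_killing=True)

print(utilitarian_judgment(flip_switch))    # True: five saved outweighs one lost
print(deontological_judgment(flip_switch))  # False: the act still kills someone
```

The same action gets opposite verdicts, which is exactly the disagreement the trolley problem is designed to expose.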

My colleagues and I asked 30 volunteers to judge the appropriateness of actions in a series of scenarios like the trolley problem. We wanted to see whether we could change people’s judgments of right and wrong by tinkering with a specific brain chemical called serotonin. We used a selective serotonin reuptake inhibitor (SSRI), a drug similar to the antidepressant Prozac that enhances the effects of serotonin in the brain. In one session, people made moral judgments while under the influence of the SSRI; in another session, they made moral judgments while on a placebo pill.

We were interested in responses to two types of scenarios: ‘impersonal’ and ‘personal’. ‘Impersonal’ scenarios are those like the one described above, in which flipping a switch diverts the trolley to hit one person instead of five. ‘Personal’ scenarios also involve harming one to save many, but the actions required to do so are much more violent. For example, in one personal scenario, instead of flipping a switch, you can stop the trolley by pushing a man wearing a heavy backpack onto the tracks.

What’s interesting is that even though both flipping the switch and pushing the man cause the same outcome – killing one person to save several others – people usually say it’s much worse to push the man than to flip the switch. Scientists suspect this discrepancy is because personal scenarios evoke stronger emotions, so the harms “feel” more wrong.1

We found that the SSRI influenced people’s moral judgments. On placebo, subjects were less likely to endorse personal harms, relative to impersonal ones, just as other studies have shown. When we enhanced serotonin function with an SSRI, this difference became even more pronounced: the SSRI made people significantly less likely to say it is morally acceptable to kill one person to save many others, especially in those emotionally salient personal scenarios. In other words, the drug made people less utilitarian.2
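
For readers who like to see the logic spelled out, here is a hedged sketch of the kind of within-subjects comparison this implies, using invented toy numbers rather than the study’s actual data. The endorsement rates and the test below are illustrative assumptions only; see reference 2 for the real analysis.

```python
# Toy illustration (invented numbers, NOT the study's data): each subject's
# rate of judging harm "acceptable", by drug condition and scenario type.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # subjects, matching the study's sample size

# Assumed endorsement rates: personal harms are endorsed less often than
# impersonal ones, and the SSRI widens that gap.
placebo_impersonal = rng.normal(0.85, 0.10, n).clip(0, 1)
placebo_personal   = rng.normal(0.55, 0.15, n).clip(0, 1)
ssri_impersonal    = rng.normal(0.83, 0.10, n).clip(0, 1)
ssri_personal      = rng.normal(0.40, 0.15, n).clip(0, 1)

# The personal-vs-impersonal gap within each session...
gap_placebo = placebo_impersonal - placebo_personal
gap_ssri = ssri_impersonal - ssri_personal

# ...and a paired test of whether the SSRI enlarges it: a drug-by-scenario
# interaction is what "less utilitarian on the SSRI, especially in personal
# scenarios" cashes out to statistically.
t, p = stats.ttest_rel(gap_ssri, gap_placebo)
print(f"interaction: t = {t:.2f}, p = {p:.4f}")
```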

Pause for a moment and consider that the debate between utilitarians and deontologists has been raging for hundreds of years. Yet we were able to shift people’s judgments, from more utilitarian to more deontological, by manipulating their brain chemistry. Could the difference between Hume and Kant boil down to a few chemicals in their brains? And what implications might this have for other ethical questions?

In this study, changes in serotonin levels were artificially induced, but out in the real world, serotonin levels fluctuate naturally in response to changes in diet and stress. This means our moral values are probably shifting a little bit all the time, without our knowing it.

So why is this important? Well, it turns out that simply acknowledging that beliefs are changeable, as opposed to fixed, can have a dramatic effect on people’s willingness to negotiate with those who disagree with them.

The Israel-Palestine conflict is one of the biggest ideological clashes of our time. Eran Halperin, Carol Dweck, and colleagues recently reported that beliefs about whether groups have a changeable versus a fixed nature influenced Israeli and Palestinian attitudes towards each other, and their willingness to compromise for peace.3 In their experiment, Israelis and Palestinians were randomly assigned one of two articles to read. One article suggested that aggressive groups have a fixed nature, the other that aggressive groups have a changeable nature. Those who read the article about changeable groups were significantly more willing to meet with the opposing side and hear their point of view, and more willing to make compromises on issues like the status of Jerusalem and settlements in the West Bank.

It seems that if we can just wrap our heads around the idea that people’s attachment to their ideals is not fixed, but can change, we’re more likely to listen to each other. It’s unclear whether we will ever be able to create a “morality pill”, in part because we have yet to reach consensus on what is “moral” in the first place.4 And we still have a long way to go before we fully understand how brain chemistry shapes moral judgment and behaviour.5 But preliminary work suggests we ought to cultivate a healthy skepticism towards our own sense of right and wrong – it may well be vulnerable to factors below our awareness and beyond our control.

References

  1. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108. PMID: 11557895.
  2. Crockett, M. J., Clark, L., Hauser, M. D., & Robbins, T. W. (2010). Serotonin selectively influences moral judgment and behavior through effects on harm aversion. Proceedings of the National Academy of Sciences, 107(40), 17433-17438. PMID: 20876101.
  3. Halperin, E., Russell, A. G., Trzesniewski, K. H., Gross, J. J., & Dweck, C. S. (2011). Promoting the Middle East peace process by changing beliefs about group malleability. Science, 333(6050), 1767-1769. PMID: 21868627.
  4. Persson, I., & Savulescu, J. (2012). Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.
  5. Crockett, M. J. (2013). Moral bioenhancement: a neuroscientific perspective. Journal of Medical Ethics. PMID: 23355048.
Comments
  1. Seth P
     February 25, 2013 3:28 pm

    Hume did not advocate Utilitarianism; John Stuart Mill did. Hume, along with Adam Smith, introduced the notion of Moral Sentiment, which probably would have made a more interesting basis for your experiment than abstract Ethical theories that no one actually employs in real life.

    • Alisdair Cameron
      February 25, 2013 8:42 pm

      But Mill’s utilitarianism was itself not devoid of morals, nor of moral sentiment: a strong case can be made that his doctrine of higher pleasures is underpinned by moral sentiment.

  2. Justin
     February 27, 2013 3:36 pm

    @Seth: “Theories that no one employs in real life”? Really? How do you think decisions are made? Whether you realize it or not, you appeal to ethical principles when making ethical decisions; if you don’t, you are likely inconsistent. Leaders of countries seem to appeal to utilitarianism, and I know law enforcement officials usually do.

    • Seth P
      February 28, 2013 3:51 am

      Thanks for the responses. Here are a couple of thoughts.
      @Alisdair Moral Sentiment in Hume and Smith is a ‘technical’ term. It indicates a capacity for empathy with the suffering of another, the ability to use one’s imagination to picture oneself in another’s shoes. This generates a sentiment that is the motive power for moral action. Hume was explicit that while reason can determine a course of action by itself, it can’t motivate one to actually do it.
      Utilitarianism by contrast is a strictly reason-based ethical scheme: one determines which course of action will generate the greatest good (or do the least harm) without regard for one’s sentiment towards the impact on those involved.

      @Justin I do assert that people in general do not appeal to either Utilitarianism or Deontology when making moral decisions in day-to-day life. Only a very unusual person would say to him- or herself before taking moral action, “How can I act so that what I do I could will to be universal law?” or “What action can I take that would maximize the greatest good for the greatest number of people?”
      In practice people make moral judgements based on a whole variety of factors, not the least of which is their sense of fairness and empathy. Thus my claim that the article would have been more interesting if the author had focused on how drugs affect people’s empathetic response to the actors in the hypothetical situations.
