By Lisa M. Saksida, Reader in Cognitive Neuroscience, University of Cambridge.
Nearly everything that we do has an impact on our brains. Changes in our behaviour and in our environment can lead to structural and functional alterations in our brains. These changes can happen at a number of different levels, from molecular and cellular changes that happen as a result of learning, up to the reorganization of entire cortical areas as a result of injury. This process is sometimes called experience-dependent plasticity, and it occurs at all ages, although the degree of plasticity is relatively high in childhood and decreases over the course of our lifetime. Neuroplasticity is what allows us to learn, to remember, to adapt and to modify our actions on the basis of experience.
One specific aspect of neuroplasticity that has received much attention over the past two decades is adult neurogenesis – the notion that new neurons can be produced in an adult brain. Until the mid-1960s it was firmly believed that neurogenesis in mammals ends in the period just after birth. Technological developments in the 1990s led to an ongoing period of intensive research in this area, and it is now well-established that every day thousands of new neurons are produced in the adult mammalian brain (Cameron and McKay, 2001; Spalding et al., 2013). Many of these new neurons are produced within a region of the brain called the hippocampus, which has long been established as being critical for learning and memory processes.
Although the process of neurogenesis has been well-studied, it is only very recently that the specific functional or behavioural consequences of neurogenesis have been considered. Increased neurogenesis generally correlates with better memory, as might be expected when the part of the brain associated with learning and memory is increased in volume. But what is the specific role of these new neurons in learning and memory?
There are several theories, but the largest body of evidence so far (although it is still very preliminary) supports the idea that new neurons in the hippocampus are important for a memory process known as “pattern separation” (Clelland et al., 2009). In contrast to our usual notion of memory as the ability to retain information over time, pattern separation at the behavioural level refers to the ability to keep memories distinct and resistant to confusion. Imagine you are asked to remember where you parked your car this morning, yesterday morning and the day before. This task is difficult not because you need to remember something that happened a long time ago – it is easy to remember much of what happened three days ago – but because the similar memories of parking your car in the same car park over three consecutive mornings are so easily mixed up.
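For readers who like a concrete picture, pattern separation can be illustrated computationally. The toy sketch below is my own illustration, not from the cited studies: two highly similar input patterns are projected onto a much larger population of units with random connections (loosely analogous to the expanded granule cell layer of the dentate gyrus), and only the few most active units are kept. The resulting sparse codes tend to share fewer active units than the inputs share features, which is the essence of pattern separation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(x, weights, k=5):
    """Project an input onto a large population of 'neurons' and keep
    only the k most active ones (k-winners-take-all sparse coding)."""
    activation = weights @ x
    code = np.zeros_like(activation)
    code[np.argsort(activation)[-k:]] = 1.0
    return code

# Two similar "episodes": 100-dimensional patterns sharing 90% of their features
x1 = rng.random(100)
x2 = x1.copy()
changed = rng.choice(100, size=10, replace=False)
x2[changed] = rng.random(10)

# An expanded layer of 1,000 units with random connectivity
weights = rng.standard_normal((1000, 100))

c1, c2 = sparse_code(x1, weights), sparse_code(x2, weights)

input_similarity = float(np.corrcoef(x1, x2)[0, 1])
code_overlap = float(np.sum(c1 * c2) / np.sum(c1))

print(f"input similarity: {input_similarity:.2f}")
print(f"fraction of shared active output units: {code_overlap:.2f}")
```

The expansion and the winner-take-all sparsification are the two ingredients most computational models attribute to the dentate gyrus; the specific dimensions and threshold here are arbitrary choices for the sketch.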
One very interesting aspect of neurogenesis is that it is highly responsive to environmental influences, some of which are described below. A number of simple factors have been shown to enhance neurogenesis. Less research has been done on the specific knock-on effects of increased neurogenesis on cognition, but some promising initial studies have been performed.
By Mairéad MacSweeney, Wellcome Trust Senior Research Fellow at the UCL Institute of Cognitive Neuroscience.
I work with people who are born severely or profoundly deaf in order to inform our understanding of how the brain processes language. People born deaf can provide a unique insight into this question. The vast majority of the population are born hearing, and hear spoken language even before birth, while still in the womb; they are always surrounded by it. But for a child who is born profoundly deaf, the situation is very different: by definition, they have incomplete access to spoken language. In addition, some deaf children may access a purely visual language – in this country, British Sign Language. By looking at language development and brain development in these children, we can gain unique insights into how language is processed in the absence of sound. I study sign language processing and also spoken language processing in the form of lip-reading and reading.
Reading forms a major part of my current research. Deaf children find it incredibly difficult to learn to read, because when we read we read a spoken language written on the page and deaf children have impoverished access to spoken language. Although deafness is not a learning disability – most deaf children have normal non-verbal IQs – a typical deaf child will have a reading age of 10-11 years old when they leave school aged 16. This has major consequences for their educational and vocational attainment.
A major skill in developing reading accuracy is learning how to map the written letters onto the correct sounds. This is what phonics training is all about – mapping sounds to letters or letter combinations. Broader language skills are also very important to reading comprehension, such as semantics (the meaning of words) and syntactic structure (how words are put together to form sentences).
Some deaf children do become very good readers. If we can understand how they achieve this, despite the fact that they can’t hear the language that they are reading, then this can give us insights into how we might be able to better teach all deaf children. More broadly, an understanding of how important hearing spoken language is for learning to read has the potential to inform our understanding of reading in hearing children too, not just deaf children.
For example, one of the core deficits in hearing children with dyslexia is that they have poor phonological awareness skills, such as knowing that “chair” rhymes with “bear,” or that “split” without the “l” is “spit.” One of the main theories of dyslexia in hearing children is that they have some kind of low-level auditory processing deficit. With deaf children we can look at a group who have very little or no auditory input, and look at the impact of that on their reading.
Cognitive Neuroscience of Attention and Motivation
By Masud Husain, Wellcome Trust Principal Fellow, University of Oxford.
The neuroscience of attention and motivation has moved forward at a rapid pace in recent years. We now understand a great deal about the brain systems, networks and neurotransmitters that underpin the deployment of attention and human motivation, two fundamental processes that are widely considered to play key roles in learning and educational outcome. However, application of these findings to education and improved performance is still in its infancy.
Neurofeedback is perhaps the technique which is closest to being used in educational settings. In this relatively new approach students are given real-time feedback on their own neurological state. This is typically achieved by visualising brainwaves using electroencephalography (EEG; Gruzelier, 2013), although neurofeedback has also been attempted with real-time functional magnetic resonance imaging (Weiskopf, 2012) in adults. It is proposed that by monitoring their brain activity through the use of neurofeedback, students are able to train their brains to produce specific patterns of activity that are optimal for learning (Enriquez-Geppert et al., 2013).
Several studies and, more recently, randomized trials have been conducted using EEG in children with attention deficit hyperactivity disorder (ADHD) (Loo et al., 2012; Lofthouse et al., 2012; Moriyama et al., 2012; Gevensleben et al., 2012). One recent six-month study has even reported that neurofeedback outperformed standard drug treatment (methylphenidate) for ADHD in terms of academic outcome (Meisel et al., 2013).
Other potential interventions include transcranial magnetic stimulation (TMS) (Demirtas-Tatlidede et al., 2013), transcranial direct current stimulation (tDCS) (Kuo et al., 2013) and cognitive enhancement using drugs (Husain & Mehta, 2011). The use of these techniques to enhance cognitive function has been explored, to varying extents, in adults. For example, tDCS has been reported to improve numerical abilities in adults (Cohen Kadosh et al., 2010), but the use of such techniques in children, in particular, raises both safety and ethical issues (Cohen Kadosh et al., 2012). Similarly, studies of cognitive enhancement using drugs in adults have suggested that the individuals most likely to benefit are those with the lowest baseline performance (Husain & Mehta, 2011), but these effects have not been systematically tested. For most drugs, the long-term safety profile and effects on cognition have not been established in children.
For these techniques to become useful and safe in the classroom, they need much further testing, with careful consideration of both ethical and safety issues. Obtaining regulatory and ethics committee permissions for such studies would not be straightforward, but a case could be made, particularly where there is an unmet need – for instance, for students with different types of learning disability.
By Roi Cohen Kadosh, Wellcome Research Career Development Fellow, University of Oxford.
Harnessing neuroplasticity for education using neuromodulation
I like movies. One of my favourite scenes is in “The Matrix” where the hero, Neo (no relation to the cortex), takes a short nap while a plug is inserted into the socket at the back of his head. A few seconds later, Neo wakes up and says, “I know Kung Fu!” What a lovely idea! It would be much easier to learn maths and languages, and simply to upload all the articles and books that I have not yet been able to read. Unfortunately, this remains science fiction, and depending on the subject or type of learning, learning can take days, months, or even years of intensive labour. In the case of those with learning difficulties, such efforts do not improve their performance as they may in other people. My aim is not to invent a machine like the one Neo used, but to make a smaller step by examining the possibility of modulating neuronal activity in the brain whilst people are learning a skill and subsequently using it. Still, to the non-expert reader, it does sound like science fiction, and one of the questions that I was asked four years ago during my interview at the Wellcome Trust was whether this idea is indeed feasible. In this blog I introduce the neuroscientific principles of the technique that I use in my research and describe the progress so far. I also discuss the work that still needs to be done in our efforts to improve the speed and quality of learning, and thus, educational achievements.
Stimulating the Brain using Electricity: A shocking idea?
The first association that comes to mind when electricity and brain stimulation are mentioned is electroconvulsive therapy (ECT), in which a strong electrical current is induced in anaesthetised psychiatric patients for therapeutic effect. This is not what we are doing. Rather, my lab is using a technique called transcranial electrical stimulation (tES), in which we apply a small electrical current (for example, 1 milliamp, or one thousandth of an ampere) to the scalp to modulate neuronal activity during training in order to enhance learning and high-level cognitive functions. The current is generated from a low power source, such as two AA batteries, and is delivered to the scalp using one or more electrodes.
Our research is supported by more basic research in neuroscience, in which animal studies have shown that low electrical currents can affect neuronal excitability and make it either easier or harder for neurons to fire (Bindmann et al., 1964). More recently, research has shown that this is safe and effective in humans (Paulus, 2013; Cohen Kadosh, 2013). By employing mostly basic tasks that have little to do with education, this early research has shown that it is possible, for instance, to aid finger movements (Nitsche & Paulus, 2000) or help the brain to detect motion more accurately (Antal et al., 2004).
Animal studies have shown that this low level of electricity can enhance the secretion of a growth factor (brain-derived neurotrophic factor) which is crucial for synaptic learning (Fritsch et al., 2010). Furthermore, in humans, the modulatory effect of tES affects regional levels of neurochemicals (gamma-aminobutyric acid and glutamate) that are involved in learning, memory, and neuroplasticity (the brain’s ability to change in response to new experiences or learning; Stagg et al., 2009). In line with these findings, some studies have shown that tES in combination with training can improve motor skills acquisition (Reis et al., 2009).
This research led to a great deal of hope for rehabilitative applications of tES, mainly in rehabilitation of those with acquired neurological damage, such as stroke patients, or degenerative illnesses (Cohen Kadosh, 2013). However, I envisage that this tool can be used to improve educational outcomes, such as learning of maths, or other functions that are critical for optimum educational outcome, such as literacy, working memory or attention (Kraus & Cohen Kadosh, 2013; Cohen Kadosh et al., 2013).
Such an approach—to modulate neuronal excitability whilst people are learning, inducing physiological changes and harnessing neuroplasticity—is, in my view, one of the most exciting synergies between neuroscience and education. We are not only examining the neural correlates of learning or cognitive skills, but rather we are affecting the brain to increase learning outcomes in a given cognitive area. In the next section, I will describe some results from my lab, which focus mainly on one of the most sophisticated human abilities: mathematical cognition.
By Sarah-Jayne Blakemore, Royal Society Research Fellow and Professor of Cognitive Neuroscience at UCL.
1. Adolescent brain development: What have we learned in the past 15 years?
Until about 15 years ago it was assumed that the vast majority of brain development takes place in the first few years of life. Up until that point, scientists did not have the technology to look inside the living, developing human brain. In the past decade, mainly due to advances in brain imaging technologies, in particular magnetic resonance imaging (MRI), neuroscientists have started to scan the living human brain at all ages, in order to track developmental changes in the brain’s structure – its organisation, including how much grey matter it contains – and also how it functions, across the lifespan. Many groups around the world are working in this area, and we now have a rich and detailed picture of how the living human brain develops. This picture has significantly changed the way we think about human brain development, by revealing that development does not stop in childhood, but continues throughout adolescence and well into adulthood.
Adolescence is defined as the period of life that starts with the biological changes of puberty and ends at the point at which an individual attains a stable, independent role in society. There are clearly large cultural differences in the age range associated with adolescence, and yet there are reports of adolescent-typical behaviour, such as heightened risk-taking and peer influence, in many very different cultures. There are also similarities in descriptions of adolescents throughout history. For example, in The Winter’s Tale Shakespeare portrayed adolescents as follows:
I would there were no age between sixteen and three-and-twenty, or that youth would sleep out the rest; for there is nothing in the between but getting wenches with child, wronging the ancientry, stealing, fighting.
Thus, almost 400 years ago, Shakespeare painted a similar picture of adolescents as we do now, and we are trying to understand this kind of adolescent-typical behaviour in terms of the underlying changes in the brain that characterise this period of life. One of the brain regions that undergoes the most striking and prolonged changes during adolescence is the prefrontal cortex. This is the part of the brain at the very front, and is involved in a wide variety of high level cognitive functions, including decision-making and planning, inhibiting inappropriate behaviour, stopping you taking risks, social interaction and self-awareness.
One of the main findings is that grey matter, which contains brain cell bodies and connections between cells in the prefrontal cortex, increases in volume during childhood, peaks in early adolescence and then starts to decrease in adolescence, and this decline continues throughout the twenties. So, the prefrontal cortex loses grey matter during adolescence. It has been proposed that this decline in grey matter volume partly reflects an important neurodevelopmental process: the loss of connections between brain cells (synapses) during development. This process, which is known as synaptic pruning, partly depends on the environment in that connections that are used are strengthened; connections that aren’t used are lost – they are pruned away. Synaptic pruning fine tunes brain tissue according partly to the environment. You can think of it as a bit like pruning a rose bush. You prune the weaker branches in order for the remaining branches to grow stronger. This is happening throughout adolescence in several cortical regions, including the prefrontal cortex.
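The use-it-or-lose-it logic of synaptic pruning can be sketched in a few lines of code. This is a toy illustration of my own, not a model from the research described here: an overabundance of synapses is created, the ones that carry activity are strengthened a little on each pass, the rest decay, and any connection whose strength reaches zero is pruned away.

```python
# Toy sketch of use-dependent synaptic pruning (illustrative only):
# used connections strengthen, unused connections decay and are removed.
import random

random.seed(1)

# Start with an overabundance of synapses, each with a small random strength
synapses = {i: random.uniform(0.4, 0.6) for i in range(20)}

# Only some synapses are regularly "used" by experience
used = set(random.sample(range(20), 8))

for step in range(50):
    for s in list(synapses):
        if s in used:
            synapses[s] = min(1.0, synapses[s] + 0.02)  # strengthen with use
        else:
            synapses[s] -= 0.02                         # decay without use
        if synapses[s] <= 0:
            del synapses[s]                             # pruned away

print(f"{len(synapses)} of 20 synapses survive")
```

After enough passes, exactly the used connections survive, at full strength, while the rest are gone – the rose-bush pruning of the analogy above, with the environment deciding which branches are "used".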
A second line of inquiry involves scanning the brain using functional MRI (fMRI) to track changes in brain activity with age. Many fMRI studies have shown that brain activity associated with tasks such as decision-making, planning, inhibiting a response and reasoning, changes across adolescence. For example, in my research group, we are particularly interested in the social brain – that is, the network of brain regions that is used to understand other people. We bring adolescents into the lab to have a brain scan, and while they are being scanned we give them tasks that involve thinking about other people’s emotions, thoughts and feelings. Studies from our lab and from other labs show consistently higher levels of activity in a social brain region called the medial prefrontal cortex in adolescents when they carry out social tasks that require understanding irony, thinking about social emotions such as guilt or embarrassment, or thinking about someone else’s intentions, for example. The different levels of activity within regions of the social brain might be because adolescents and adults use a different cognitive strategy (mental approach) to make social decisions. This is a hypothesis currently under investigation.
In order to look at cognitive approaches to social cognition, we carry out behavioural studies with adolescents, and we and other labs are finding that the ability to understand other people, for example to take another person’s perspective to guide decisions, is still developing in adolescence. At the same time, many studies have shown that the ability to plan and delay gratification is still developing during this period of life. Another area of adolescent research is risk-taking. It is well documented that teenagers tend to take risks, especially when they are with their peers. There appears to be a drive towards seeking the approval of peers, and becoming independent from one’s parents, in adolescence. Even adolescent rats and mice take more risks immediately after puberty than before puberty or in adulthood. One proposal that attempts to explain why risk-taking peaks in adolescence is to do with the brain’s limbic system – this is the brain system that gives us a rewarding feeling when we take a risk. There is some evidence that in adolescence the limbic system is particularly sensitive to this rewarding feeling. And at the same time, the prefrontal cortex – which stops us taking risks and acting on impulse – is still developing.
By Moheb Costandi
Our understanding of how the brain works has advanced rapidly in the past few decades, and there is now more public interest in neuroscience than at any time in the past. We have reached the point at which neuroscience has the potential to inform classroom practice and improve children’s educational outcomes – consequently there has been a significant increase in so-called “brain-based” classroom interventions which purport to do so.
At the same time, research suggests that myths and misconceptions about the brain are prevalent among schoolteachers, and that those who are enthusiastic about the potential applications of neuroscience to teaching practice find it difficult to distinguish between pseudoscientific claims and scientific facts. Indeed, a recent survey by the Wellcome Trust found that many teachers use or have used educational activities which they believe to be based upon neuroscience but which rarely have a sound basis in the science – nor have they been systematically proven to improve performance. Teachers are clearly eager to improve their practice, but are trying to do so before the educational applications of neuroscience have been fully developed and tested.
As well as providing approximately £90 million per year in funding for neuroscience research, the Wellcome Trust is committed to improving science education and has funded a number of education and engagement projects about neuroscience in the past. One focus of the Trust’s 2010-2020 Education Strategy is to examine the ways in which brain research is being used to inform teaching and learning and, where possible, to develop further investigations into the strength of the evidence and how it can support and improve the quality of education.
To this end, the Trust’s education team has commissioned an expert review that examines the interface between neuroscience and education. Researchers, each with expertise in an area of brain research that has the potential to be applied to educational practice, were asked to: examine the readiness of their field to shape education; make judgements about whether or not their field is likely to yield testable and fruitful educational interventions; and provide recommendations about how it would be best to approach funding the testing of such interventions.
The review consists of a series of articles written by the contributing researchers. Selected summaries of these will be published on this ThInk blog over the coming weeks, covering a wide range of topics, from brain stimulation to neurogenesis and learning. The review was not intended to be exhaustive, but to help the Wellcome Trust decide on its next action in this area. For instance, research into circadian rhythms – that is, your body’s biological clock – is not covered in the review, but has already been tested in some schools and provides an interesting example of how neuroscience might improve educational outcomes. Circadian rhythms change dramatically during puberty and adolescents experience a delay of approximately two hours in their sleep/wake cycle as a result. In practical terms, this means that the school day starts too early for most adolescents, at a time when pupils are reaching the end of the sleep phase of their cycle. It follows that starting the school day an hour or two later would be beneficial, because timing the start of school with the onset of the wake phase of the cycle would optimise pupils’ potential for learning.
Circadian neuroscience researcher Russell Foster is one of the biggest advocates of this approach, which has recently been implemented in a UK school, although not in a way which allowed controlled assessment of its impact. It has, however, been tested in the US. In 1997, seven high schools in and around the Minneapolis area shifted their start times from 7.15 to 8.40am on the basis of these findings, and the first longitudinal study has shown that later start times had significant benefits, including improved attendance rates and less fatigue and sleeping in class.
To complement the Wellcome Trust’s work, the Education Endowment Foundation, with which the Trust has been partnering in this initiative, commissioned a review of the educational literature to identify other areas of research that could potentially be applied in the classroom. Conducted by Paul Howard-Jones, the review examines the available evidence about education initiatives that are, or purport to be, informed by neuroscience, and was guided by questions considering the validity of the alleged scientific basis of the educational concepts and approaches and the quality of evidence for impact.
Together the views of expert neuroscientists and the understanding of current practice from teachers themselves, as well as the educational literature review, have convinced the Wellcome Trust and the Education Endowment Foundation to embark on a new funding initiative – Education and Neuroscience. This £6 million one-off scheme aims to develop, evaluate and communicate the impact of education interventions grounded in neuroscience research. Do check back or sign up for updates to read more of the expert opinions that helped to shape this work over the coming weeks.
In the past five years or so, there has been a huge increase in lifestyle use of prescription drugs that can enhance cognitive function in various ways. These so-called “smart drugs” include the stimulants methylphenidate (better known by its trade name, Ritalin), which is used to treat attention deficit hyperactivity disorder, and modafinil (also known as Provigil), used as a treatment for narcolepsy.
Off-label use of smart drugs is particularly prevalent among students, who face increasing pressure to improve their academic performance. They therefore take these drugs in an effort to focus their attention for longer periods of time and boost their overall productivity.
According to a 2008 survey conducted by the journal Nature, the use of smart drugs is increasing among academics, too. One in five of the approximately 1,600 researchers who responded to the survey said that they had used smart drugs – with Ritalin being the most popular – to improve their attention, memory or concentration.
Is it okay to boost brain function in this way? The question has divided the scientific community. Some researchers say ‘no’ for safety reasons: we still don’t know the consequences of taking smart drugs for long periods of time, and youngsters are particularly at risk because their brains continue to develop well into early adulthood. And the ease with which anyone can buy smart drugs online also raises concern.
Some object to cognitive enhancement on ethical grounds: it may increase the inequalities already present in society, because not everyone can afford to buy the drugs. And what about those who object because they think it would give an unfair advantage? Would they feel pressured into popping brain-boosting pills just to keep up with the others?
Others say that enhancement is not a dirty word, that more research should be done, and that the public should work together with scientists and policy makers to regulate the use of smart drugs. They emphasize the potential benefits that cognitive enhancement could bring to society. Recent research shows, for example, that smart drugs can improve the performance of sleep-deprived surgeons and nightshift workers. The U.S., British, French and Chinese military forces now use modafinil routinely to combat fatigue in troops, and the drug has also been shown to improve some aspects of cognitive function in psychiatric patients.
Last year, the Wellcome Trust commissioned the second wave of its Monitor Survey, which was designed to assess the UK general public’s level of awareness and attitude toward this controversial issue. This is the most representative such survey to date, and included responses from nearly 1,400 adults and 400 young people aged 14-18.
The results show that opinion is similarly divided: About one-third of adults and young people said that long-term use of smart drugs to improve focus, memory or attention, or occasional use to improve exam performance or something similar, was acceptable, while about one-third said that it was unacceptable.
The results also suggest that the use of smart drugs is less widespread among the general public than within universities, with only 29 adults (or 2% of the total sample) and 9 young people (or 1%) saying that they had ever taken prescription medications for that purpose.
What’s your opinion? Join the debate using the Wellcome Trust’s Big Picture app.
Stress hormones released by a pregnant mother can cause the placenta to shrink and can directly affect the developing brain of the foetus. Now, researchers have identified the mechanism through which stress may damage an unborn child. An enzyme in the placenta of the mother and in the brain of the foetus acts as a barrier to protect the unborn baby from chemicals released in times of stress. But during periods of prolonged stress – such as anxiety and depression, or a traumatic event such as abuse – levels of the hormones can soar and are believed to overwhelm the protective barrier, resulting in a host of problems. The damage may make the child more likely to develop mood disorders such as depression and anxiety, and even schizophrenia.
Professor Megan Holmes of the University of Edinburgh has been looking into the mechanisms involved. She identified that an enzyme in the mother and baby, called 11β-HSD2, works by mopping up stress hormones called glucocorticoids (GCCs) and converting them to their inactive form. Using pregnant mice genetically engineered to lack the enzyme, her team showed that the increased exposure to GCCs (like cortisol) resulted in smaller pups, which went on to exhibit the signs of mood disorders. The mothers also had smaller placentas, which meant a reduced flow of nutrients to pups in the womb – which could directly contribute to their mental condition.
When the team blocked the enzyme in the brains of the developing pups, but left the enzyme barrier in the placenta, the baby mice still showed some signs of damage. This indicates that both sites, the placenta and the foetal brain, play a role. The team are looking to see if one of the two sites has an overriding effect, although it’s thought to be a combination of the two.
This enzyme barrier is crucial during pregnancy as it maintains the difference between the relatively high levels of stress hormones in the mother and the low levels in the foetus. If too much GCC reaches the foetus it can affect the development of growing tissues. For instance, if the developing brain is exposed to cortisol it can cause the young cells to stop dividing and to start maturing instead. Although this is a key step in the normal developmental process, if it happens too early things can go wrong and it can result in faulty wiring of the brain. “The neurons may not be in the right place yet and may be differentiating too soon,” says Holmes.
But Holmes’ work suggests that stress exposure doesn’t just impact the brain in the womb, it can have an effect in adolescence too. Puberty is another key point in the timeline of the brain’s development, as it’s when existing connections and networks are strengthened or weakened. It’s a time when the brain is particularly sensitive to environmental factors, including stress.
In experiments, adolescent rats were conditioned to associate a flashing light with an electric shock and then had their brains scanned using functional MRI (fMRI). When they were shown the cue of a flashing light their emotional fear pathways were activated. In rats that had been stressed, the amygdala – the part of the brain which deals with fear and emotion – was overactive compared with rats that hadn’t been stressed. This indicated that the way in which the brain processed emotional stimuli had been changed.
The results suggest that the early teenage years are another critical period in the brain’s development in which stress could have an impact on the network of connections. The rewiring of emotional response pathways in the brain could result in long-term problems with mood disorders and emotional behaviour.
Presenting these findings at the British Neuroscience Association’s Festival of Neuroscience conference in London last month, Professor Holmes said that she hopes to use the animal models to uncover more about the pathways involved and to find more accessible targets for treatment. “We think this is a really good translational model, so we can do the same tests, or comparative tests, to what are done in patient populations.”
It’s not all just mice and rats either: the damaging effect of stress hormones on the developing brain has been demonstrated in human studies. Trials showed that the children of women who suffered from anxiety or depression during pregnancy were more likely to develop mood disorders themselves. In a telephone interview, Professor Vivette Glover, of Imperial College London, explained that in pregnant mothers with anxiety, production of the enzyme 11-β HSD2 decreases, and this could expose the unborn baby to more cortisol. “The first thing is to look after pregnant women better,” said Glover. Whether or not it’s a case for drug treatment isn’t clear at this stage, but “it’s an interesting idea”, she added.
Although genetic predisposition and environmental factors play a strong role in influencing the risk of developing mood disorders, this research hints at the potential for early therapeutic intervention. Currently, targeting 11-β HSD2 directly for drug treatment is difficult, so clinical trials may not be on the horizon just yet. “At the moment our intention is to use our models to see exactly which pathways are changing through development,” said Holmes, “and to try and find an alternative target that’s more easily targetable therapeutically.”
- Holmes M (2013). Perinatal programming of stress-related behaviour by glucocorticoids. Abstract presented at BNA 2013, London.
- O’Donnell, K., Bugge Jensen, A., Freeman, L., Khalife, N., O’Connor, T., & Glover, V. (2012). Maternal prenatal anxiety and downregulation of placental 11β-HSD2. Psychoneuroendocrinology, 37 (6), 818-826. DOI: 10.1016/j.psyneuen.2011.09.014
- Giedd, J., Blumenthal, J., Jeffries, N., Castellanos, F., Liu, H., Zijdenbos, A., Paus, T., Evans, A., & Rapoport, J. (1999). Brain development during childhood and adolescence: a longitudinal MRI study. Nature Neuroscience, 2 (10), 861-863. DOI: 10.1038/13158
It starts with a bit of chaos. Makeup artists slap thick, pale foundation onto women’s faces as the stylists produce a cloud of hairspray. Everybody is wriggled out of their ordinary jeans and t-shirts and into finery: long ball gowns or tight-fitting suit jackets.
This is how The Salon Project begins. It’s an immersive theatre piece that has formed part of the Wonder Season at the Barbican this month. It’s intended to re-enact the salons of Paris at the turn of the 20th century, where people gathered to hear influential speakers, share ideas and rub shoulders with the intellectuals of the time. To create this open environment and sense of drama, the audience doesn’t sit in chairs, but is dressed up in period costume and encouraged to interact and discuss throughout the night.
Inside, the Salon is a perfect white room, on which to paint your own imaginings. In the corner of the room is a pianist playing period music on a grand piano. A gramophone DJ creates a landscape of ambient noise. Two actors strike up the first debate on animals, and how we should treat them.
A grandfather clock chimes and the audience is told to close their eyes. The atmosphere is as if everyone is saying a silent prayer, and when we open them again the room is full of nude models engrossed in smartphones, laptops and other bits of technology. It’s a bizarre and surreal experience, and the audience is offered no explanation.
“Who is the real you out of costume?” asks one of the actors. The Salon is a breathing space in which to free your mind and explore the ideas you push aside in daily life.
The Salon is, after all, about more than pretty faces. John Bowers, Professorial Research Fellow at the Interaction Research Studio, Goldsmiths, University of London, spoke about the future from the perspective of the 19th century. The lie detector, colour photography, teabags, ecstasy and translucent concrete all fall within a long list of inventions of the past 100 years. He said new inventions don’t just create an object; they reinvent us by fulfilling a need or want we didn’t know we had. “Let us play about with history and invention and our imagined fundamentals of desire,” he says when contemplating the future.
Enter the neuroscientist. Dr Molly Crockett, resplendent in period costume, regales the audience gathered around the piano with the science of moral enhancement. Could manipulating levels of chemicals in the brain have an impact on moral decision making? Putting it to the audience, she explains a number of classic quandaries in moral decision making. Do you walk past the drowning child in the lake? Do you flip the switch so the out-of-control train kills one person instead of five? And if so, would you instead push a portly bystander onto the track if it meant saving the lives of five others? Crockett explains how she uses these very quandaries to test subjects in the laboratory, and how their answers can be swayed depending on whether they have been given a drug to boost a certain brain chemical. Incidentally, the bystander is safe: those who took the pill were less likely to let the portly man come to any harm.
So it would seem that one little chemical, serotonin, has the potential to influence our judgement in moral situations. The same is true of balance and fairness, where lowering the level of serotonin made people less likely to accept relatively fair offers.
“We care deeply about fairness,” she says, “and we care so much about it that we would rather have nothing than see an unfair proposer take the lion’s share”.
If our brain chemistry is variable, might that also mean that our mindset has a degree of flexibility, and that the views we hold today could be subject to change? Dr Crockett stresses the importance of understanding that other people’s views may not always stay the same, and neither will ours.
“We can’t yet turn sinners into saints,” she explains, “but I’m optimistic that the more research we do into this area, the better we’ll understand what makes us decide what’s right and wrong and what actually makes us better to do right”.
And then, with the sound of the clock, the illusion ends. We are ushered back to the dressing rooms, return our hairpieces and jewellery, suits and ball gowns, and are sent back into the present.
Theresa Taylor and Ryan O’Hare
Theresa and Ryan are interns at the Wellcome Trust.
The Salon Project was part of the Wonder Season, supported by the Wellcome Trust. Read more about the Wonder season on our sister blog ThInk.
You get people playing Pong with their brainwaves, children decorating cardboard neurons, people giggling as they try (and fail) to touch their own noses, primal art, knitted neurons and live “brain surgery”. These were just some of the intriguing and amazingly popular activities on offer at the Wonder Street Fair, which left lots of excited children (and adults) talking about brains.
It was back in December that we first introduced Wonder Season, a collaboration between the British Neuroscience Association (BNA), the Wellcome Trust and the Barbican Centre. The BNA had approached us with the idea of doing some public outreach around their scientific conference, and we jumped at the chance to find creative ways to bring scientists, artists and the public together.
Wonder Season was an experiment to see if we could combine a scientific conference with a series of public events, all taking place at the Barbican, one of Europe’s largest arts venues. The season attracted over 15,000 people to the various events, including music, theatre, film, talks and drop-in activities.
We’ve had some really wonderful feedback from neuroscientists and visitors, and are conducting some more in-depth evaluation and research over the next few months.
In the meantime, Wonder Season organiser Amy Sanders has this advice on how to plan and deliver a public engagement event alongside a conference:
Get your venue involved. Don’t be afraid to use their skills and expertise.
Wonder Season was a success because the Barbican programmers know their audiences and were able to help us gauge what was likely to appeal to the different types of people who come through their doors and how to market the events to them. They were also up for taking a few calculated risks and trying creative new things.
Tailor your activities. There is a huge public appetite for engaging activities that meld science and art – activities like this appeal to people who want to learn, create, question and enjoy themselves. Activities do need to be tailored to a non-expert audience – you can’t just open a poster session to the public and expect to get an audience, but that doesn’t mean that activities have to be superficial or ‘dumbed down’.
Don’t underestimate scientists’ willingness to get involved. There is huge appetite among researchers to open up their work to the public, but they have different amounts of time and energy to give to public engagement. Offer opportunities for different levels of commitment – some researchers spent weeks making costumes and props and refining their street fair activities, while others were able to take part in short informal discussions like Packed Lunch without needing to prepare too much.
Have the researchers present the activities. Having practising scientists and researchers running the activities has enormous benefits: it gives visitors a chance to find out what research, and researchers, are really like, to ask tricky questions and have them answered by the people who really know (or don’t, as is sometimes the case), and it gets researchers in contact with the people who will be on the receiving end of the products of their research. There were lots of surprises on both sides.
Hands-on activities are not just for children. Creative, hands-on activities don’t need to be just for children; adults like to get involved too. Providing things that people of any age can do gets different generations interacting, and gives grown-ups licence to behave like inquisitive kids again.
Variety is important. In the Street Fair we found that it really worked to have a variety of activities. We had a mix of some that were quick and easy to do (testing your reactions) and some that took a bit longer (knitting a neuron). There were some that were really active (the brain treasure hunt) and others that were more passive (watching a short film). We mixed up social activities (competing in a game of EEG Pong) with more personal contemplative ones (Sonic Tour of the Brain). Think about activities you can place in chill out areas where you can escape the noise and buzz as well as roaming activities that can go to where people are. Visually engaging activities were very popular.
Recruit interested volunteers. Volunteer explainers and guides can be a vital element of a successful event – they meet and greet visitors, help them find activities that match their interests, and generally help to make things run smoothly. It’s a massive bonus if the volunteers are also knowledgeable about the subject, as a conversation that starts off with ‘what’s going on here?’ can develop into ‘so how can I get involved in your research?’
Hopefully these tips will give you a good starting point to think about your own events. The Wellcome Trust is keen to support researchers interested in public engagement and there is funding available through our public engagement grants.
For Wellcome coverage of Wonder Season and the BNA Festival of Neuroscience conference visit ThInk, our blog about art, neuroscience and the brain or see the Barbican site for more on the Consciousness event with Marcus Du Sautoy and James Holden.
Amy Sanders is Programme Manager in Engaging Science at the Wellcome Trust.