Reading Without Seeing

Melissa Wright | 13 NOV 2018

When the seeing brain goes blind

In the late 1990s, a blind 63-year-old woman was admitted to a university hospital emergency room. After complaining to co-workers of light-headedness that morning, she had collapsed and become unresponsive. She was found to have suffered a bilateral occipital stroke, yet within 48 hours of her admission she had recovered with no apparent motor or neurological problems. It was only when she tried to read that an extraordinary impairment became apparent: despite the damage being confined to the occipital lobe, which is typically devoted to vision, she had completely and specifically lost the ability to read Braille. Braille is a tactile substitute for written letters, consisting of raised dots that can be felt with the fingertips. Before the stroke, she had been a proficient Braille reader with both hands, a skill she had used extensively during her university degree and career in radio (Hamilton, Keenan, Catala, & Pascual-Leone, 2000). So what happened?

The Visual Brain

It is estimated that around 50% of the primate cortex is devoted to visual functions (Van Essen, Anderson, & Felleman, 1992), with the primary visual areas located right at the back of the brain within the occipital lobe (also known as the visual cortex). Visual information from the retina first enters the cortex here, in an area named V1. Within V1, this information is organised to reflect the outside world, with neighbouring neurons responding to neighbouring parts of the visual field. This map (called a retinotopic map) is biased towards the central visual field (the most important part!) and is so precise that researchers have even managed to work out which letters a participant is looking at, simply by examining their brain activity (Polimeni, Fischl, Greve, & Wald, 2010). These retinotopic maps are found in most visual areas in some form. As information is passed forward in the brain, the role of these visual areas becomes more complex, from motion processing, to face recognition, to visual attention. Even a seemingly simple visual task, like finding a friend in a crowd, requires a hugely complex chain of processes. With so much of the cortex devoted to processing visual information, what happens when visual input from the retina never occurs? Cases such as the one above, where a person is blind, suggest that the visual cortex is put to use in a whole new way.

Cortical Changes

In sighted individuals, lexical and phonological reading processes activate frontal and parietal-temporal areas (e.g. Rumsey et al., 1997), while touch involves the somatosensory cortex. It was thought that Braille reading activated these same areas, causing some reorganisation of the somatosensory cortex. However, as the case above suggests, this does not seem to be the whole story (Burton et al., 2002). Remember, in this instance, the unfortunate lady had damage to the occipital lobe, which is normally involved in vision, but as she was born blind it had never received any visual information. Although you might expect that damage to this area would not be a problem for someone who is blind, it turned out instead to impair abilities associated with language and touch! This seriously went against what scientists had understood about brains and their specialised areas, and had to be investigated.

Neuroimaging, such as functional Magnetic Resonance Imaging (fMRI), allows us to look inside the brain and see which areas are activated when a person performs a certain task. Using this technique, researchers have found that in early blind individuals, large portions of the visual cortex are recruited when reading Braille (Burton et al., 2002). This activity was less apparent, though still present, in those who became blind later in life, and it wasn't there at all in sighted subjects. That late-blind individuals show less activity in this region suggests that as we get older, and as brain regions become more experienced, they become less adaptable to change. A point to note, however: fMRI works by correlating increases in blood oxygen (which suggest an increase in energy demand and therefore neural activity) with a task, such as Braille reading. As any good scientist will tell you, correlation doesn't equal causation! Perhaps those who cannot see are still somehow 'visualising' the characters?
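
To make the idea concrete, here is a minimal, hand-rolled sketch in Python of what "correlating blood oxygen with a task" means. The numbers are simulated and the block timings are invented for illustration; this is not the analysis pipeline used by Burton et al. (2002), just the underlying logic: a voxel's signal over time is compared with an on/off record of when the person was reading.

import numpy as np

np.random.seed(0)
n_scans = 120                           # 120 brain volumes, collected one after another

# Task record: alternating blocks of rest (0) and Braille reading (1), 10 scans each.
task = (np.arange(n_scans) // 10) % 2

# A fake voxel time series: a task-related signal change buried in noise.
voxel = 0.8 * task + np.random.randn(n_scans)

# Correlate the voxel's signal with the task. A strong correlation is what gets a
# voxel marked as "active" on the maps these studies report.
r = np.corrcoef(voxel, task)[0, 1]
print(f"Correlation with the Braille-reading blocks: r = {r:.2f}")

Real analyses also account for the sluggish shape of the blood-oxygen response and test many thousands of voxels at once, but the logic is the same, and so is the caveat above: a high correlation tells you the activity tracks the task, not why.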

So is there any other evidence that the visual areas can change their primary function? Researchers have found that temporarily disrupting the neural activity at the back of the brain (using a nifty technique called Transcranial Magnetic Stimulation) can impair Braille reading, or even induce tactile sensations on the reading fingertips (e.g. Kupers et al., 2007; Ptito et al., 2008)!

Other fMRI studies have investigated the recruitment of the occipital lobe in non-visual tasks and found it also occurs in a variety of other domains, such as hearing (e.g. Burton, 2003) and working memory (Burton, Sinclair, & Dixit, 2010). This reorganisation seems to have a functional benefit, as researchers have found that the amount of reorganisation during a verbal working memory task is correlated with performance (Amedi, Raz, Pianka, Malach, & Zohary, 2003). It has also been reported that blind individuals can perform better on tasks such as sound localisation (though not quite as well as Marvel's Daredevil!) (Nilsson & Schenkman, 2016).

But Is It Reorganisation?

This is an awesome example of the brain's ability to change and adapt, even in areas that seem so devoted to a single modality. How exactly this happens is still unknown, and could fill several reviews on its own! One possibility is that neuronal inputs from other areas grow and invade the occipital lobe, although this is difficult to test non-invasively in humans because we can't look at individual neurons with an MRI scan. The fact that much more occipital lobe activity is seen in early-blind than late-blind individuals (e.g. Burton et al., 2002) suggests that whatever is changing is much more accessible to a developing brain. However, findings show that some reorganisation can still occur in late-blind individuals, and even in sighted people who undergo prolonged blindfolding or sensory training (Merabet et al., 2008). This rapid adaptation suggests that the mechanism involved may make use of pre-existing multi-sensory connections that are strengthened and reinforced following sensory deprivation.

Cases of vision restoration in later life are rare, but one such example came from a humanitarian project in India, which found and helped a person called SK (Mandavilli, 2006). SK was born with aphakia, a rare condition in which the eye develops without a lens. He grew up near-blind, until the age of 29 when project workers gave him corrective lenses. 29 years with nearly no vision! Conventional wisdom said there was no way his visual cortex could have developed properly, having missed the often-cited critical period that occurs during early development. Indeed, his acuity (the ability to see detail, tested with those letter charts at the optometrist's) showed an initial improvement after correction, but did not improve further over time, suggesting his visual cortex was not adapting to the new input. However, the researchers also looked at other forms of vision, and there they found exciting improvements. For example, when shown a cow, he was unable to integrate the patches of black and white into a whole until it moved. After 18 months, he was able to recognise such objects even without movement. While SK had not been completely without visual input (he had still been able to detect light and movement), this suggests that perhaps some parts of the visual cortex are more susceptible to vision restoration. Or perhaps multi-sensory areas, which seem able to reorganise after visual deprivation, are more flexible in regaining vision?

So Much Left to Find Out!

From this whistle-stop tour, the most obvious conclusion is that the brain is amazing and can show huge amounts of plasticity in the face of input deprivation (see the recent report of a boy missing the majority of his visual cortex who can still see well enough to play football and video games; https://tinyurl.com/yboqjzlx). The question of what exactly happens in the brain when it's deprived of visual input is incredibly broad. Why do those who become blind in later life have visual hallucinations (see Charles Bonnet Syndrome)? Can we influence this plasticity? What of deaf or deaf-blind individuals? Within my PhD, I am currently investigating how the cortex reacts to another eye-related disease, glaucoma. If you want to read more on this fascinating and broad topic, check out the reviews by Merabet and Pascual-Leone (2010), Ricciardi et al. (2014) or Proulx (2013).

Edited by Chiara Casella & Sam Berry

References:

  • Amedi, A., Raz, N., Pianka, P., Malach, R., & Zohary, E. (2003). Early ‘visual’ cortex activation correlates with superior verbal memory performance in the blind. Nature Neuroscience, 6(7), 758–766. https://doi.org/10.1038/nn1072
  • Burton, H. (2003). Visual cortex activity in early and late blind people. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 23(10), 4005–4011.
  • Burton, H., Sinclair, R. J., & Dixit, S. (2010). Working memory for vibrotactile frequencies: Comparison of cortical activity in blind and sighted individuals. Human Brain Mapping. https://doi.org/10.1002/hbm.20966
  • Burton, H., Snyder, A. Z., Conturo, T. E., Akbudak, E., Ollinger, J. M., & Raichle, M. E. (2002). Adaptive Changes in Early and Late Blind: A fMRI Study of Braille Reading. Journal of Neurophysiology, 87(1), 589–607. https://doi.org/10.1152/jn.00285.2001
  • Fine, I., Wade, A. R., Brewer, A. A., May, M. G., Goodman, D. F., Boynton, G. M., … MacLeod, D. I. A. (2003). Long-term deprivation affects visual perception and cortex. Nature Neuroscience, 6(9), 915–916. https://doi.org/10.1038/nn1102
  • Hamilton, R., Keenan, J. P., Catala, M., & Pascual-Leone, A. (2000). Alexia for Braille following bilateral occipital stroke in an early blind woman. Neuroreport, 11(2), 237–240.
  • Kupers, R., Pappens, M., de Noordhout, A. M., Schoenen, J., Ptito, M., & Fumal, A. (2007). rTMS of the occipital cortex abolishes Braille reading and repetition priming in blind subjects. Neurology, 68(9), 691–693. https://doi.org/10.1212/01.wnl.0000255958.60530.11
  • Mandavilli, A. (2006). Look and learn: Visual neuroscience. Nature, 441(7091), 271–272. https://doi.org/10.1038/441271a
  • Merabet, L. B., Hamilton, R., Schlaug, G., Swisher, J. D., Kiriakopoulos, E. T., Pitskel, N. B., … Pascual-Leone, A. (2008). Rapid and Reversible Recruitment of Early Visual Cortex for Touch. PLoS ONE, 3(8), e3046. https://doi.org/10.1371/journal.pone.0003046
  • Merabet, L. B., & Pascual-Leone, A. (2010). Neural reorganization following sensory loss: the opportunity of change. Nature Reviews Neuroscience, 11(1), 44–52. https://doi.org/10.1038/nrn2758
  • Nilsson, M. E., & Schenkman, B. N. (2016). Blind people are more sensitive than sighted people to binaural sound-location cues, particularly inter-aural level differences. Hearing Research, 332, 223–232. https://doi.org/10.1016/j.heares.2015.09.012
  • Park, H.-J., Lee, J. D., Kim, E. Y., Park, B., Oh, M.-K., Lee, S., & Kim, J.-J. (2009). Morphological alterations in the congenital blind based on the analysis of cortical thickness and surface area. NeuroImage, 47(1), 98–106. https://doi.org/10.1016/j.neuroimage.2009.03.076
  • Polimeni, J. R., Fischl, B., Greve, D. N., & Wald, L. L. (2010). Laminar analysis of 7T BOLD using an imposed spatial activation pattern in human V1. NeuroImage, 52(4), 1334–1346. https://doi.org/10.1016/j.neuroimage.2010.05.005
  • Proulx, M. (2013, February). Blindness: remapping the brain and the restoration of vision. Retrieved 28 March 2018, from http://www.apa.org/science/about/psa/2013/02/blindness.aspx
  • Ptito, M., Fumal, A., de Noordhout, A. M., Schoenen, J., Gjedde, A., & Kupers, R. (2008). TMS of the occipital cortex induces tactile sensations in the fingers of blind Braille readers. Experimental Brain Research, 184(2), 193–200. https://doi.org/10.1007/s00221-007-1091-0
  • Ricciardi, E., Bonino, D., Pellegrini, S., & Pietrini, P. (2014). Mind the blind brain to understand the sighted one! Is there a supramodal cortical functional architecture? Neuroscience & Biobehavioral Reviews, 41, 64–77. https://doi.org/10.1016/j.neubiorev.2013.10.006
  • Rumsey, J. M., Horwitz, B., Donohue, B. C., Nace, K., Maisog, J. M., & Andreason, P. (1997). Phonological and orthographic components of word recognition. A PET-rCBF study. Brain: A Journal of Neurology, 120(Pt 5), 739–759.
  • Van Essen, D. C., Anderson, C. H., & Felleman, D. J. (1992). Information processing in the primate visual system: an integrated systems perspective. Science (New York, N.Y.), 255(5043), 419–423.

Can we solve problems in our sleep?

Sam Berry | 19 MAR 2018

Have you heard the song “Scrambled Eggs”? You know:

“Scrambled eggs. Oh my baby how I love your legs.”

No? Perhaps you would recognize the tune.

A young Paul McCartney woke up one morning with an amazing melody in his head. He sat at the piano by his bed and played it out, and he liked it so much he couldn’t quite believe it had come to him in a dream. The tune was there, but he just couldn’t find the right words to fit. For several months he tried, but he couldn’t get past “Scrambled Eggs” as a working title.

So how did the famous Beatle complete his masterpiece? He did some more sleeping. Another fateful day, he woke up and the song was there, fully formed with lyrics and the now famous title “Yesterday.”

“Yesterday, all my troubles seemed so far away.”

Recognise it now? A critically acclaimed worldwide hit had formed itself in his sleep. Boom. A chart-smashing phenomenon.

* * *

It may seem obvious, but not sleeping is extremely bad for you. Symptoms of sleep deprivation include a marked decline in the ability to concentrate, learn, and retain new information. It can affect your emotions and self-control, and can cause visual and auditory hallucinations.

Whether not sleeping at all would actually kill you has not yet been established. The record time for someone staying awake is 11 days and 25 minutes, set during a science project in 1964. The subject was kept awake by two 'friends' as they observed him become a drooling, delusional mess. Yet there are plenty of studies that demonstrate serious detrimental health effects of both short- and long-term sleep deprivation.

Being mentally and physically alert will certainly help you to solve problems, but many scientists think something much more interesting is going on during sleep. Your brain is still learning whilst you are snoring.  

You are only coming through in waves…

Okay, so do we know how sleep can help us to learn? We’re getting there. Using brain imaging technology like fMRI scanners (giant magnets that use blood flow changes to see how different parts of the brain react to things) and EEG (funky hats with electrodes that measure how our neurons are firing in real time), we can have a look at what the brain is doing while we’re dozing off.

Our brains remain active while we sleep. Sleep can be split into different stages, and what happens during these stages is important for memory and learning. Broadly speaking, your sleep is split into non-REM (Stage 1, Stage 2, and Slow Wave) and REM (Rapid Eye Movement) stages. These stages are traditionally distinguished by the pattern of electrical activity the EEG picks up. I'll briefly take you through what these different stages are and how our neural activity changes as we go through them:

Stage One sleep is when we start to doze off and have our eyes closed. Have you ever noticed a grandparent falling asleep in their chair, but when you ask them to stop snoring they wake up insisting they were never asleep in the first place? That’s stage one sleep; you can be in it without even knowing.

Stage Two is still a light sleep, but when brain activity is viewed using EEG you can see brief bursts of activity known as sleep spindles.

Slow Wave Sleep is so called because in this stage neurons across the brain activate together in unison, creating a slow, large, coordinated electrical pattern that makes the EEG output look like a wave. Slow wave sleep also contains some of Stage Two's sleep spindles, as well as something called sharp wave ripples. This is where a brain area called the Hippocampus (involved in memory and navigation) sends bursts of information to the Neocortex (involved in our senses, movement, language, and planning, to name a few).

REM sleep is when our bodies are paralysed but our eyes dart around. Our blood pressure fluctuates and blood flow to the brain increases. While we dream throughout sleep, our dreams during REM become vivid and our brain activity looks similar to when we’re awake.

We cycle through these stages in 90-120 minute intervals throughout the night, with each cycle moving from light sleep into deeper sleep and ending in REM, and with REM periods growing longer as the night goes on. Disruptions to the sleep cycle are associated with decreases in problem-solving ability, as well as with psychiatric and neurodegenerative disorders like Alzheimer's.

Spikey learning

Problem solving requires memory: you need to use information you already have and apply it to the problem at hand. You also need to remember what you tried before so that you don’t keep making the same mistakes (like singing “Scrambled Eggs” over the same tune forever). The stages of sleep most relevant to helping us keep hold of our memories are the non-REM ones, and in particular Slow Wave Sleep.

Recent research reveals that sleep spindles, slow waves, and sharp wave ripples work together: when a slow wave is at its peak, the brain cells are all excited, creating the perfect environment for the sleep spindles to activate. When the wave is crashing down, the sharp wave ripples from the Hippocampus are more likely to fire to the Neocortex. This coupling of spindles and slow waves is associated with how well you retain memories overnight. Interestingly, in older adults spindles can fire prematurely, before the wave reaches its peak, suggesting a possible reason why memory gets worse with age.
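
To give a flavour of how this coupling can be measured, here is a short Python sketch on simulated data. The signal and the frequency bands are illustrative choices of mine rather than details taken from the studies above: filter the EEG into a slow-oscillation band and a spindle band, then ask at which point of the slow wave the spindle activity tends to peak.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 200                                     # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                 # one minute of simulated sleep EEG

# Build a fake signal: a slow oscillation plus spindles that burst near its peak.
slow = np.sin(2 * np.pi * 0.8 * t)
spindles = 0.3 * (slow > 0.7) * np.sin(2 * np.pi * 13 * t)
eeg = slow + spindles + 0.2 * np.random.randn(t.size)

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

so_phase = np.angle(hilbert(bandpass(eeg, 0.5, 1.5, fs)))   # slow-oscillation phase
sp_amp = np.abs(hilbert(bandpass(eeg, 12, 16, fs)))         # spindle envelope

# Preferred coupling phase: the circular mean of the slow-wave phase, weighted by
# spindle power. A phase near 0 means spindles cluster around the slow-wave peak.
coupling = np.sum(sp_amp * np.exp(1j * so_phase))
print(f"Preferred phase: {np.angle(coupling):.2f} rad (0 = slow-wave peak)")
print(f"Coupling strength (0 to 1): {abs(coupling) / np.sum(sp_amp):.2f}")

Real analyses are more careful (they detect individual slow waves and spindles in overnight recordings), but this kind of phase-based measure is what links the timing of spindles to overnight memory retention.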

Researchers say this pattern of brain activity is a sign of the brain consolidating, or crystallising, what was learned or experienced whilst awake. This process strengthens the neural connections of the brain. Studies show that the patterns of neurons that are excited when we learn something are reactivated during sleep. This could mean that during sleep our brains replay experiences and strengthen newly formed connections.

Getting freaky

So what do our dreams mean? We’ve all had bizarre ones—how about that common dream where all your teeth fall out?

During REM sleep, our brain activity looks similar to when we're awake. Scientist Deirdre Barrett suggested we think of REM sleep as simply a different kind of thinking. This type of thinking uses less input from the outside world or from the frontal parts of our brain in charge of logical thinking. REM is thought to be involved in consolidating our emotional memories, but it is also when we tend to have the vivid visual dreams that may defy logic. This combination enables REM "thinking" to be creative or even weird. REM sleep may allow us to form connections between ideas that are only distantly related.

Recently, a team in Germany suggested that Non-REM sleep helps put together what we know while REM breaks it up and puts it back together in new ways.

Thoughts before bed

So "sleeping on it" really can help solve problems. It strengthens the memories you make during the day and helps you learn and see things more clearly when you wake up. REM sleep may also free your thinking from the constraints of logic, breaking ideas apart and reshaping them in new ways. If reading this article made you sleepy, go ahead and take a nap. You might learn something.

Edited by Becca Loux. Becca is a guest editor for Brain Domain and an avid fan of science, technology, literature, art and sunshine–something she appreciates more than ever now living in Wales. She is studying data journalism and digital visualisation techniques and building a career in unbiased, direct journalism.

References:

  • Barrett, D. (2017). Dreams and creative problem-solving. Annals of the New York Academy of Sciences, 1406(1), 64–67. https://doi.org/10.1111/nyas.13412
  • Carskadon, M. A., & Dement, W. C. (2005). Normal human sleep: an overview. Principles and Practice of Sleep Medicine, 4, 13–23.
  • Chambers, A. M. (2017). The role of sleep in cognitive processing: focusing on memory consolidation. Wiley Interdisciplinary Reviews: Cognitive Science, 8(3), e1433. https://doi.org/10.1002/wcs.1433
  • Haus, E. L., & Smolensky, M. H. (2013). Shift work and cancer risk: Potential mechanistic roles of circadian disruption, light at night, and sleep deprivation. Sleep Medicine Reviews, 17(4), 273–284. https://doi.org/10.1016/j.smrv.2012.08.003
  • Helfrich, R. F., Mander, B. A., Jagust, W. J., Knight, R. T., & Walker, M. P. (2018). Old Brains Come Uncoupled in Sleep: Slow Wave-Spindle Synchrony, Brain Atrophy, and Forgetting. Neuron, 97(1), 221–230.e4. https://doi.org/10.1016/j.neuron.2017.11.020
  • Klinzing, J. G., Mölle, M., Weber, F., Supp, G., Hipp, J. F., Engel, A. K., & Born, J. (2016). Spindle activity phase-locked to sleep slow oscillations. NeuroImage, 134, 607–616. https://doi.org/10.1016/j.neuroimage.2016.04.031
  • Landmann, N., Kuhn, M., Maier, J.-G., Spiegelhalder, K., Baglioni, C., Frase, L., … Nissen, C. (2015). REM sleep and memory reorganization: Potential relevance for psychiatry and psychotherapy. Neurobiology of Learning and Memory, 122, 28–40. https://doi.org/10.1016/j.nlm.2015.01.004
  • Lewis, P. A., & Durrant, S. J. (2011). Overlapping memory replay during sleep builds cognitive schemata. Trends in Cognitive Sciences, 15(8), 343–351. https://doi.org/10.1016/j.tics.2011.06.004
  • Ólafsdóttir, H. F., Bush, D., & Barry, C. (2018). The Role of Hippocampal Replay in Memory and Planning. Current Biology, 28(1), R37–R50. https://doi.org/10.1016/j.cub.2017.10.073
  • Sio, U. N., Monaghan, P., & Ormerod, T. (2013). Sleep on it, but only if it is difficult: Effects of sleep on problem solving. Memory & Cognition, 41(2), 159–166. https://doi.org/10.3758/s13421-012-0256-7
  • Staresina, B. P., Bergmann, T. O., Bonnefond, M., van der Meij, R., Jensen, O., Deuker, L., … Fell, J. (2015). Hierarchical nesting of slow oscillations, spindles and ripples in the human hippocampus during sleep. Nature Neuroscience, 18(11), 1679–1686. https://doi.org/10.1038/nn.4119

How to read a baby’s mind

Priya Silverstein | 3 OCT 2017

Priya, a guest writer for The Brain Domain, is a second-year PhD student at Lancaster University. She spends half her time playing with babies and the other half banging her head against her computer screen.

Okay, I’ll admit that was a bit of a clickbait-y title. But would you have started reading if I’d called it ‘Functional Near Infrared Spectroscopy and its use in studies on infant cognition’? I thought not. So, now that I’ve got your attention…

Before I tell you how to read a baby's mind, first I have some explaining to do. There's this cool method for studying brain activity but, as one of the lesser-used technologies, it's a bit underground. It's called fNIRS (functional Near Infrared Spectroscopy). Think of fNIRS as fMRI's cooler, edgier sister. Visually, the two couldn't look more different: an MRI scanner is a human-sized tube housing a massive magnet, the kind you might have seen on popular hospital dramas, while NIRS simply looks like a strange hat.

Left: MRI scanner, Right: NIRS cap
Picture credit left: Aston Brain Centre, right: Lancaster Babylab

What these two methods do have in common is that they both measure the BOLD (Blood Oxygen Level Dependent) response from the brain. Neurons can't store excess oxygen, so when they are active, they need more of it to be delivered. Blood does this by ferrying oxygen to the active neurons faster than to their lazy friends. When this happens, you get a higher ratio of oxygenated to deoxygenated blood in the more active areas of the brain.

Now, to the difference between fMRI and fNIRS. fMRI infers brain activity from the fact that oxygenated and deoxygenated blood have different magnetic properties. When the head is put inside a strong magnetic field (the MRI scanner), changes in blood oxygenation, due to changes in brain activity, alter the magnetic field in that area of the brain. fNIRS, on the other hand, uses the fact that oxygenated and deoxygenated blood absorb different amounts of light, as deoxygenated blood is darker than oxygenated blood. Conveniently, near-infrared light goes straight through the skin and skull of a human head (don't worry, this is not at all dangerous and a participant would not feel a thing). So, shining near-infrared light into the head at a source location, and measuring how much light you get back at a nearby detector, tells you how much light has been absorbed by the blood in that area of the brain. From this, you get a measure of the relative change in oxygenated and deoxygenated blood in that area. All of this without the need for a person to lie motionless in a massive, cacophonous magnet, with greater portability, and for about a hundredth of the price of an MRI scanner (about £25,000 compared to £2,500,000).
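
For the curious, here is a toy Python sketch of the sums behind that last step, known as the modified Beer-Lambert law. The wavelengths, extinction coefficients, pathlength factors and light intensities below are placeholder numbers for illustration (real analyses take them from published tables and from the actual recordings), but the logic is as described above: less light returning at two wavelengths is converted into changes in oxygenated (HbO) and deoxygenated (HbR) haemoglobin.

import numpy as np

wavelengths = [760, 850]           # nm; a typical short/long wavelength pair
distance = 3.0                     # cm between source and detector on the scalp
dpf = np.array([6.0, 5.0])         # differential pathlength factors (placeholders)

# Extinction coefficients [HbO, HbR] at each wavelength (illustrative values):
# deoxygenated blood absorbs more at 760 nm, oxygenated more at 850 nm.
ext = np.array([[1.5, 3.8],        # 760 nm
                [2.5, 1.8]])       # 850 nm

# Light reaching the detector at baseline and during the task (arbitrary units).
I_baseline = np.array([1.00, 1.00])
I_task = np.array([0.97, 0.95])    # slightly less light gets back during activity

# Change in optical density at each wavelength.
delta_od = -np.log10(I_task / I_baseline)

# Solve delta_od = (extinction x concentration change) * distance * dpf.
A = ext * (distance * dpf)[:, np.newaxis]
delta_hbo, delta_hbr = np.linalg.solve(A, delta_od)

print(f"Change in HbO: {delta_hbo:+.5f} (relative units)")
print(f"Change in HbR: {delta_hbr:+.5f}")

The values that come out are relative changes rather than absolute concentrations, which is why fNIRS results are usually reported as increases or decreases in HbO and HbR rather than exact amounts.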

The source and detector are placed on the scalp, so that the light received at the detector is reflected light following banana-shaped pathways

Picture credit: Tessari et al., 2015

"That sounds amazing! Sign me up!" I hear you say. However, I must add a little disclaimer. There are reasons why fMRI is still the gold standard for functional brain imaging. As fNIRS relies on measuring the light that gets back to the surface of the scalp after being in the brain, it can't be used to measure activity from brain areas more than about 3 cm deep. This is being worked on using clever ways of arranging sources and detectors on the scalp, but it is not thought that fNIRS will ever be able to produce a whole-brain map of activity. Also, as fNIRS looks at the centimetre level rather than the millimetre level, its spatial resolution and accuracy of localisation are limited in comparison to fMRI. Despite this, if the brain areas you're interested in investigating are closer to the surface of the head, and not too teensy tiny, then fNIRS is a great technology to use.

So, what has this all got to do with babies? Well, fNIRS has one vice, one Achilles heel. Hair. Yes, this amazingly intelligent technology has such a primitive enemy. If your participants are blonde or bald, you’ll probably be fine. But anything deviating from this can block light from entering the head, and therefore weaken the light reaching the brain and eventually getting back to the detectors. However, do you know who has little to no hair? Babies. Plus, babies aren’t very good at lying still, particularly in a cacophonous magnet. This is why fNIRS is especially good for measuring brain activity in infants.

fNIRS is used to study a variety of topics related to infant development. One of the most studied areas of infant psychology is language development. Minagawa-Kawai et al. (2007) investigated how infants learn phonemes (the sound chunks that words are made up of). They used fNIRS to measure brain activation in Japanese 3- to 28-month-olds while they listened to different sounds. Infants listened to blocks of sounds that alternated between two phonemes (e.g. da and ba), and then other blocks that alternated between two different versions of the same phoneme (e.g. da and dha). In 3- to 11-month-olds, they found higher activation in a brain area responsible for handling language for both of these contrasts. This means that the younger infants were treating 'da', 'ba' and 'dha' as three different phonemes. However, 13- to 28-month-olds only showed this higher activation when listening to the block of alternating 'ba' and 'da'. This means that the older infants were treating 'da' and 'dha' as the same phoneme. This is consistent with behavioural studies showing that infants undergo 'perceptual narrowing', whereby over time they stop being able to discriminate between perceptual differences that are irrelevant to them. This has been related to why it's much easier to be bilingual from birth, with input from both languages, than it is to learn a second language later in life.

Another popular area of infant psychology is how infants perceive and understand objects. Wilcox et al. (2012) used fNIRS to study the age at which infants begin to understand the shapes and colours of objects. They measured brain activation while infants watched objects move behind a screen and emerge at the other side. This study used a live presentation, made possible by the fact that fNIRS has no prerequisites for the testing environment except turning the lights down a bit.


The shape change (left), colour change (middle), and no change (right) conditions of Wilcox et al. (2012). Each trial lasted 20 seconds, consisting of two 10 second cycles of the object moving from one side to the other (behind the occluder) and back again.

These objects were either the same when they appeared from behind the screen, or they had changed in shape or colour. They found heightened activation in the same area found in adult fMRI studies for only the shape change in 3- to 9-month-olds, but for both shape and colour changes in the 11- to 12-month-olds. This confirms behavioural evidence that infants are surprised when the features of objects have changed, and that babies understand shape as an unchanging feature of an object before they understand colour in this way. This study shows how you can use findings from adult fMRI and infant behavioural studies to inform an infant fNIRS study, helping us learn how the brain's complex visual and perceptual systems develop from infancy to adulthood.

There’s a lot more to learn if you wish to venture into the world of infant fNIRS research; it’s a fascinating area filled with untapped potential. fNIRS can help us to measure the brain activity of a hard-to-reach population (those pesky babies), enabling us to ask and answer questions about the development of language, vision, social understanding, and more! Questions being investigated in the Lancaster Babylab (where I am doing my PhD) include:

  • Do babies understand what pointing means?
  • Are bilingual babies better at discriminating between sounds?
  • Why do babies look at their parents when they are surprised?

And beyond this, the possibilities are endless!

If you are intrigued by fNIRS and want to learn more, I’d recommend review papers such as the one by Wilcox and Biondi (2015), and workshops such as the 3-day Birkbeck-UCL NIRS training course.

Edited by Jonathan Fagg and Rachael Stickland

References:

  • Minagawa-Kawai, Y., Mori, K., Naoi, N., & Kojima, S. (2007). Neural attunement processes in infants during the acquisition of a language-specific phonemic contrast. Journal of Neuroscience, 27(2), 315-321.
  • Otsuka, Y., Nakato, E., Kanazawa, S., Yamaguchi, M., Watanabe, S., & Kakigi, R. (2007). Neural activation to upright and inverted faces in infants measured by near infrared spectroscopy. Neuroimage, 34(1), 399-406.
  • Tessari, M., Malagoni, A., Vannini, M., & Zamboni, P. (2015). A novel device for non-invasive cerebral perfusion assessment. Veins And Lymphatics, 4(1).
  • Wilcox, T., Stubbs, J., Hirshkowitz, A., & Boas, D. (2012). Functional activation of the infant cortex during object processing. Neuroimage, 62(3), 1833-1840.

♥ Achy-breaky heart? Try touchy-feely brain! ♥

Laura Smith | 14 FEB 2017

As today is Valentine’s day, let’s get a bit touchy-feely. Whether you’re looking forward to a date with your significant other; planning to profess your feelings to a special someone; or hoping your soulmate will sweep you off your feet, you’d probably like to share a romantic caress. But what happens in the brain when we anticipate touching the one we desire? Using functional magnetic resonance imaging (fMRI), scientists in Italy set out to answer just that question. Isn’t that convenient!

fMRI uses the same principle as standard MRI: a large, very powerful electromagnet detects differences in the magnetic properties of different bodily tissues, and some fancy maths turns these signals into pictures. In fMRI, people in the scanner perform tasks, and scientists can locate brain areas where activity levels change in response to this.

In their fMRI study, published in the journal Frontiers in Behavioural Neuroscience, Ebisch, Ferri & Gallese (2014) wanted to find out whether the intensity of someone's love for their partner was reflected in their brain activity when they anticipated caressing them. Participants in the MRI scanner were instructed to affectionately touch either a ball or their partner's hand, both of which were placed close to them. They received a "touch" or "do not touch" instruction 3 seconds after being told which item to touch (the hand or the ball), so they would anticipate performing the touch each time, which would involve a change in brain activity. The task was performed many times, but on 67% of trials participants were asked not to perform the touch.

Participants also completed the Passionate Love Scale (PLS) (Hatfield & Sprecher, 1986): a 15-item questionnaire measuring the intensity of their desire for their partner, from "extremely cool" to "extremely passionate", so that the researchers could see whether it was related to the changes in brain activity. There was such a relationship in the right posterior insula: an area of cortex believed to act as a processing hub for information about the body's current physiological state (Augustine, 1996). Insula activity decreased during the anticipation of touching, but the more passionate the love, the less this deactivation occurred for anticipation of romantic (but not non-romantic) touching. So, when participants' desire for their partners was higher, there was more neural response to anticipating touching the partner versus the ball. Additionally, insula activity increased when touches were actually performed, and significantly more so for romantic versus non-romantic touches.


Location of the right posterior insula. Retrieved from Ebisch et al. (2014).

The insula interacts with brain areas involved in bodily sensation (Zweynert et al., 2011), in particular the somatosensory cortex. This area's activity in response to touch has previously been shown to be influenced by the anticipation of a reward (Pleger et al., 2008). Taking this into account, the researchers suggest that the posterior insula, via its connection with the somatosensory cortex, may influence how we actually experience touches. As such, because desire for the partner was associated with less insula deactivation during anticipation of touching them, it may be that wanting to touch someone actually makes the experience of doing so all the more pleasant.

So spare a thought for your clever insula today, and have a happy Valentine’s day.

References:

  • Augustine, JR. (1996). Circuitry and functional aspects of the insular lobe in primates including humans. Brain Res. Rev., 22, 229-244.
  • Ebisch SJ, Ferri F & Gallese V. (2014). Touching moments: desire modulates the neural anticipation of active romantic caress. Frontiers in Behavioural Neuroscience.
  • Hatfield E. & Sprecher S. (1986). Measuring passionate love in intimate relations. Adolescence, 9, 383-410.
  • Pleger B, Blankenburg F, Ruff C, Driver J & Dolan R. (2008). Reward facilitates tactile judgments and modulates hemodynamic responses in human primary somatosensory cortex. Neurophysiol., 39, A9.
  • Zweynert S, Pade JP, Wustenberg T et al. (2011). Motivational salience modulates hippocampal repetition suppression and functional connectivity in humans. Hum. Neurosci, 5, 144.

Featured image by Alex Van

Who the hell is MEG, and how can she help us understand the brain?

Rachael Stickland | 8 NOV 2016

Let me tell you about a MEG who doesn’t get her fair share of the limelight. MEG uses her SQUIDs to catch your brain activity, after it has left your head. She’s quite a fast mover, and can do this at a millisecond rate! Strangely though, she’s kept locked in a room with really thick walls. Poor MEG.

Still confused? Of course you are.  I guess it is time for me to admit MEG isn’t a woman. Similar to an MRI scanner, MEG is a technique researchers use to learn about the brain.  MEG is short for Magnetoencephalography (magneto refers to magnetic fields, encephalon means the brain, and –graphy indicates the process of recording information). Nothing to do with the guy in the purple cape.

This is a MEG scanner:


Source of this image: Magnetoencephalography Wikipedia

I’d like to say this brain imaging technique was inspired by a woman getting a perm in the 80s, as that’s what it has always reminded me of.  I’m afraid that’s not the case.

So where do these magnetic fields come from? Any electrical current will produce a magnetic field, even the electrical currents in your brain. If a big group of neurons (brain cells) facing the same direction send electrical impulses to each other, they produce a weak magnetic field with a particular direction and strength. These magnetic fields leave the brain and can still be measured outside the skull.
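
To get a feel for just how weak these fields are, here is a rough back-of-envelope estimate in Python. This is my own illustration rather than a calculation from the article: a patch of tens of thousands of synchronously active neurons behaves roughly like a tiny "current dipole", and ignoring the geometry of the head entirely, the size of the field it produces a few centimetres away can be ball-parked with the standard dipole formula.

import math

MU0 = 4 * math.pi * 1e-7     # magnetic constant, in tesla metres per ampere
Q = 10e-9                    # current dipole moment of a small active patch (A*m)
r = 0.04                     # distance from the source to the sensor (m), ~4 cm

# Order-of-magnitude field strength at the sensor, ignoring the skull, scalp and the
# currents flowing back through the head (which real MEG source models account for).
B = MU0 * Q / (4 * math.pi * r ** 2)
print(f"Estimated field at the sensor: {B * 1e15:.0f} femtotesla")

That works out at some hundreds of femtotesla: many orders of magnitude weaker than everyday magnetic fields, which is why MEG needs the exotic hardware described next.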


Source of this image: Magnetoencephalography Wikipedia

Since the magnetic fields that leave the head are so weak (around a billion times weaker than the magnetic field of a typical fridge magnet!), a MEG scanner measures them using really sensitive instruments called SQUIDs (Superconducting Quantum Interference Devices). SQUIDs are quite high maintenance though; they only work at temperatures below about -269°C, so they are kept this cold by bathing them in liquid helium. As SQUIDs are so sensitive, they also pick up stronger magnetic fields from the environment, which can mask the ones we want to measure from the brain. Because of this, a MEG scanner has to be kept in a magnetically shielded room, with a door like this:


Source of this image: Magnetoencephalography Wikipedia

Why is it useful to measure these magnetic fields anyway? MEG allows us to measure brain activity in a non-invasive way; there is no discomfort for the person being scanned, and no side effects. MEG helps us to learn about how and where the brain responds to certain tasks, improving knowledge of the link between brain function and human behaviour. Brain function measured with MEG has been shown to be different in many neurological and psychiatric diseases.  MEG has a key role to play in helping localise regions of the brain that are faulty, and that might need to be surgically removed, for example in epilepsy.

I hope you’ve enjoyed being introduced to a new MEG. Watch this space for more articles on what she gets up to.