
Research: How rhythm and tone are key for the early language-learning brain

A recent study concluded that babies first learn language through rhythm and tone rather than phonetic information. Professor Usha Goswami of the Centre for Neuroscience in Education shares more about the BabyRhythm project.

Children’s language skills are fundamental to their ability to benefit from the opportunities offered by education. Recent brain imaging studies with babies are showing us some of the mechanisms that the brain uses to create a language system. Rhythm and rhyme are fundamentally important.

HOW DO BABIES LEARN LANGUAGE?

The new brain research uses the fact that when we speak, we are creating sound waves, moving energy through the air. The brain picks up these energy changes, aligning its own intrinsic rhythms to these energy waves, which are heard as different rhythm patterns in speech.

We can think of the brain waves ‘surfing’ the sound waves. Brain waves occur naturally at a range of different speeds. ‘Speech-brain alignment’ is the neural tracking of speech rhythms at different speeds, achieved by automatically matching them with brain waves of the corresponding speeds. Speech-brain alignment is an automatic aspect of how we listen.

This is the case even before birth. Studies of newborn infants have shown that the slower rhythms of speech, which are transmitted through the amniotic fluid, are already being encoded by the brain. For example, if a mother reads the same story aloud to her ‘bump’ every day during the last three months of pregnancy, at birth her baby can distinguish the familiar story from a new story read aloud by the mother. The baby indicates recognition by changes in sucking behaviour.

The slower rhythms of speech are also exaggerated during BabyTalk (parentese), even though this happens quite unconsciously on the part of the speaker. When we talk to infants, many of us automatically use a sing-song form of speech that enhances key rhythm patterns. We increase the energy changes that correspond to stressed (stronger) syllables.

These key energy changes are also enhanced by singing nursery rhymes, again quite unconsciously on the part of the singer. Nursery rhymes are often perfect metrical poems, and present the rhythm patterns required for the brain to learn language in an optimal format. In BabyTalk and in nursery rhymes, there are sets of acoustic statistics – consistent dependencies between energy patterns at different speeds – which are at the core of speech-brain alignment. These core rhythmic statistics repeat across different sentences when BabyTalk is used, or across different nursery rhymes.

These acoustic statistics based on rhythm have been discovered by computational modelling of infant-directed speech and child-directed speech. Studies in European languages modelling nursery rhymes or stories read aloud by a teacher show that the same sets of acoustic statistics are found in rhythmic speech across languages. Whenever you sing or chant a nursery rhyme with a child, or read a story in a child-directed manner, you are unconsciously emphasising these statistics.

The recurrence of these rhythm patterns across languages helps to explain the long-standing puzzle of how the human brain acquires language. There are more than 6,000 world languages, yet most infants learn to speak and comprehend whichever they hear without difficulty. Automatic speech-brain alignment to rhythm patterns is at the centre of how they achieve this.

PHONEMES OR SPEECH RHYTHM?

This recognition of the key role of rhythm has come from infant brain imaging studies. Prior linguistic analyses had assumed that the building blocks of any language were phonemes, the smallest sound elements in words, which in English are represented by the alphabet.

Phonemes are the units typically taught in phonics programmes. A long-standing view has been that infants learn phonemes and then gradually add them up to make words. By contrast, brain imaging studies are supporting an alternative view, which is that speech rhythm patterns are the key to language acquisition. These rhythm patterns provide similar sets of acoustic statistics across languages, they reflect whole words, and they are heard even by the foetus.

THE RESEARCH

A new study from scientists at the University of Cambridge and Trinity College Dublin helps to explain why language acquisition begins with speech rhythm.

The Cambridge UK BabyRhythm project recorded brain responses while 50 infants listened to nursery rhymes being sung by an early years teacher. New methods from the Dublin group enabled the researchers to recreate the heard speech from the brain responses of the infants. In effect, the researchers could ‘read out’ which linguistic units were being encoded by the infants’ brains.

The data showed that the neural tracking of phonetic features emerged rather slowly across the first year of life, and was still far from complete when measurement stopped at one year of age. By contrast, rhythmic information was encoded with high accuracy from the first measurement point, showing accuracy comparable to the adult brain.

Phonetic information was first reliably encoded at around seven months of age – an age when infants can already recognise familiar whole words like ‘banana’. Phonetic information was still sparse at 11 months, the age when most infants begin to say their first words. Speech rhythm information was encoded robustly from the beginning of the study (age two months).

SINGING TO BABIES

These new findings support the critical importance of rhythmic speech activities with babies and young children. Singing to infants will enhance language acquisition, as will talking to them in BabyTalk. Any activities based on rhythmic language will support language development in toddlers and pre-schoolers.

The infant studies suggest that speech rhythm patterns are the hidden ‘glue’ underpinning the development of a well-functioning language system. Indeed, other Cambridge studies have shown that children with language disorders like dyslexia have difficulties in hearing these acoustic speech rhythm patterns.

DYSLEXIA AND LANGUAGE LEARNING

For most infants and young children, learning the rhythm patterns of their language is automatic and unconscious. The infant brain learns the key acoustic statistical patterns described above via speech-brain alignment. However, babies at family (genetic) risk of dyslexia are poor at discriminating the acoustic cues that trigger automatic speech-brain alignment.

In infant studies carried out with colleagues in Sydney, the Cambridge team found that infants who were at family (genetic) risk of dyslexia were poor at discriminating the acoustic rhythm cues that help the brain to ‘lock on’ to the rhythms in speech. Their findings suggested that the ‘dyslexic brain’ had difficulty in accurately surfing the slower rhythms in speech. The at-risk babies in the Sydney-Cambridge study subsequently showed slower word learning, and had developed smaller vocabularies when they were toddlers.

Other studies measured what happens when the electrical rhythms in dyslexic children’s brains – their brain waves – sync with the sound waves in rhythmic speech. These brain imaging studies showed that the brains of children diagnosed with dyslexia were ‘out of time’ for the slowest speech rhythms. While the dyslexic brain coped well with faster rhythms, the slower rhythm patterns that the infant studies suggest are involved in phonetic learning were encoded less accurately.

When you read a script, alphabetic or non-alphabetic, you are recognising speech written down. The difficulties with encoding rhythm patterns shown by dyslexic children in the brain imaging studies using natural speech could help to explain why they find phonetic learning difficult when they are taught phonics. Speech-sound learning is more difficult for dyslexic children than for typically developing children, in part because the dyslexic brain does not compute acoustic rhythm patterns as accurately as other brains. Some of the electrical rhythms (brain waves) in parts of the dyslexic brain are out of time.

This has been revealed by using the Dublin methods for recreating heard speech from brain responses. Electrical responses were recorded during story listening from the brains of children with dyslexia. Their brains were found to be less accurate in recording slower rhythm patterns during natural speech listening, but not faster rhythm patterns.

We can think of the dyslexic brain always coming in slightly too early (or late) in terms of catching the sound wave, but only for the slower energy patterns in speech. Consequently, the dyslexic brain waves do not surf these slower sound waves as accurately.

These differences are subtle, and do not mean that children with dyslexia cannot learn spoken language. Perceiving some of the energy patterns in the sound wave differently is a bit like being colour blind.

If you are colour blind, you can still see, but your sensitivity to certain wavelengths of light is reduced. You cannot really distinguish reds, greens, browns and oranges; they look very similar. So if you are continually forced to make red/green distinctions, you will struggle.

In dyslexia, the brain research suggests that affected children can still hear. They pass medical hearing screens and they can still learn language, but their sensitivity to syllable stress patterns is reduced. They cannot easily tell whether a word like ‘zebra’ has first-syllable stress.

Although they can still differentiate words from one another, this reduced sensitivity affects phonetic learning. When they continually have to reflect on the exact phonetic constituents of spoken words, they struggle.

For most of us, the spelling system reflects speech written down very efficiently. The dyslexic brain hears speech in a subtly different way. So for those with dyslexia, the spelling system does not reflect what they are hearing very efficiently.

HOW PRACTITIONERS CAN PROVIDE SUPPORT

So how can we help? One way is to devise methods for helping children to recognise rhythm patterns in speech. Oral language games or other routines that help children to pick out stressed syllables (the syllables carrying more acoustic weight, as in ‘ZE-bra’ or ‘DI-no-saur’), or to count the syllables in words, seem to help dyslexic children across languages.

These routines can be supplemented by oral activities based on rhythm and rhyme, such as learning poetry out loud or rapping.

The dyslexic brain appears to be helped by direct teaching of the sound structure of speech at the level of syllables, rhymes and rhythm patterns.

The Cambridge researchers have also developed a learning app, GraphoGame Rime, in collaboration with Finnish researchers. The app teaches English letter-sound correspondences through rhyme, emphasising statistical patterns in the English spelling system (rhyme-based spelling patterns) which reflect some of the acoustic statistical patterns in speech discussed above. GraphoGame Rime is one of a family of over 20 GraphoGames in different languages, all developed by the Finnish team.

The Cambridge researchers are also developing speech processing algorithms that alter the speech signal to provide the dyslexic brain with acoustic amplifications that mimic those in BabyTalk.

Hearing these exaggerated rhythms may help the dyslexic brain to surf the speech signal more accurately. These studies are still ongoing.

Nevertheless, it seems that rhythm and rhyme are important for the literacy-learning brain, as well as for the language-learning brain.