Retracing the steps to the source of language learning is one of the more fascinating pastimes for any language enthusiast. How do children acquire language? How does growing up in a bilingual family affect us? Does learning a language influence our success in adult life? We could go on. Many variables shape the answers to such questions, each highly dependent on your own background. Still, there are researchers out there trying to elucidate the mysteries of language development by employing new methods and leveraging emerging technology. Let’s review three of the more common ideas and debunk some myths along the way, too.
You wouldn’t be surprised to learn that newborns prefer to listen to their mother’s voice over that of a stranger. Perhaps more surprisingly, though, research by Anne Cutler of Western Sydney University has shown that a child recognizes the language it heard in the womb during the last trimester of pregnancy. Furthermore, newborns prefer the sound of languages that are rhythmically similar to the one spoken by their mother, and they can distinguish between two phonetically different languages.
Another interesting piece of evidence comes from the research of Dr. Patricia Kuhl of the University of Washington. In a 2010 TED Talk, she presented findings showing that a six-month-old infant can discriminate the sounds of any language in the world – something adults cannot do. However, a child loses this ability in the second six months of life, which is when it starts to tune in to the sounds and vocabulary of the language its mother uses. This has since been confirmed by more recent research from Anne Cutler, too.
In another research project, Anne Cutler studied adults of Korean descent who were adopted by Dutch parents at an early age. The subjects had no previous knowledge of Korean and were put through an intensive language training program. The results showed that the adoptees had retained phonological knowledge of Korean; essentially, learning the language came more easily to them. This sheds some light on the way bilingual children acquire language: infants exposed to at least two languages in the womb and in the early months of their lives have an easier time learning a second language later in life.
The common perception nowadays is that multicultural parents should each talk to their child in their respective native language, at least as often as they can. This doesn’t mean the child will automatically become bilingual: one language is likely to overtake the other as the “main” one, and that is fine. Still, the child can reap the benefits of the exposure to both languages later. Developing true bilingualism would require massive exposure, and screen time does not really count as a method of acquiring language.
Another misconception we can lay to rest is the notion that children growing up in such families have a weaker command of either language than their monolingual peers. Naturally, the vocabulary of a child using two languages will initially be more limited than that of a monolingual child, but that’s because the child is developing two vocabularies at the same time. What’s more, children from this background are able to tell the difference between, for example, the English word “cat” and its French equivalent, “chat”.
The ability to use words from both languages in one sentence doesn’t mean the child is confused. If anything, this “code-switching” is a sign that they can move freely from one language to another. Code-switching happens to adults too, although we may not be aware of it.
You may not have given much thought to how language learning in the early stages of a child’s life can affect their chances at a better life later on. But it does. University of Kansas child psychologists Betty Hart and Todd R. Risley conducted a research project to answer this question back in 1995, publishing their study under the title “Meaningful Differences in the Everyday Experience of Young American Children.” They observed 42 infants from different socioeconomic backgrounds, from the time the children were just learning to talk until they were three years old. They spent an hour each month with each family, observing the interactions between the children and their parents. Then they waited until the children were nine years old and circled back to see how they were doing in school. In the meantime, the researchers kept busy transcribing and analyzing the collected data. Here’s what they found:
The research found that the social background of a child’s parents influences the way they acquire language.
According to this research, these differences amount to a roughly 30-million-word head start that three-year-olds from higher-income families have over their peers from less fortunate backgrounds. Still, it’s not just about the numbers.
So, what can we make of this data? It points to a gap in language development that affects children from birth up to age three. Twenty years later, the Language Environment Analysis (LENA) organization revived Hart and Risley’s research. It offers what it calls a “talk pedometer” – a speech recognition device that helps parents and teachers increase the amount of talk and conversation a child is exposed to and, by extension, aid the child’s language development.
The various research projects and advice from pediatricians on child language acquisition have one common thread: they all agree that exposure to language early in life lays the groundwork for the ease with which a child can acquire a second (or third) language later on, and it may also pave the way for future success. The more exposure, the better. However, we’d be remiss to think this is an automatic guarantee, and there doesn’t seem to be one universally accepted method telling parents exactly how to go about it.
À bon entendeur ! (A word to the wise!)