Culture | Johnson

In the world of voice-recognition, not all accents are equal

But you can train your gadgets to understand what you’re saying


In a spoof advertisement on a humorous website, a woman asks her Echo, Amazon’s voice-controlled speaker system and assistant, to play “the country music station”. The device, mishearing her southern American accent, instead offers advice on “extreme constipation”. Soon she has acquired a southern model, which understands her accent better. But before long, the machine has gone rogue, chiding her like a southern mother-in-law for putting canned biscuits on the shopping list. (A proper southern lady makes the doughy southern delicacy herself.) On the bright side, it corrects her children’s manners.

The outcome may be far-fetched. But the problem is not. More and more smartphones and computers (including countertop ones such as the Echo) can be operated by voice commands. These systems are getting ever better at knowing what users tell them to do—but not all users equally. They struggle with accents that differ from standard British or American. Jessi Grieser, a linguist at the University of Tennessee, Knoxville, speaks with the “Northern Cities shift”, a set of vowel pronunciations found around America’s Great Lakes that differ from the standard ones. Her smartphone hears her “rest in peace” as “rust in peace”.

To train a machine to recognise what people say requires a large body of recorded speech, and then human-made transcriptions of it. A speech-recognition system looks at the audio and text files, and learns to match one to the other, so that it can make the best guess at a new stream of words it has never heard before.
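To make the idea concrete, here is a deliberately tiny sketch in Python (the feature vectors are invented for illustration; real systems feed millions of spectrogram frames to neural networks rather than comparing three numbers): the “recogniser” memorises transcribed examples and guesses the nearest one, so a shifted vowel is enough to push it to the wrong transcript.

```python
# A toy illustration of training-by-example, not any vendor's real system:
# pair audio-derived features with human transcriptions, then transcribe a
# new utterance by finding the nearest training example.
import numpy as np

# Invented feature vectors standing in for recorded, transcribed speech.
training_data = {
    "rest in peace": np.array([0.9, 0.2, 0.4]),
    "rust in peace": np.array([0.7, 0.6, 0.4]),
    "play the country music station": np.array([0.1, 0.8, 0.9]),
}

def transcribe(features):
    """Return the transcript of the closest-sounding training example."""
    return min(training_data,
               key=lambda text: np.linalg.norm(training_data[text] - features))

# A Northern Cities speaker's shifted vowel lands closer to the "rust"
# example than to "rest", so the guess comes out wrong.
print(transcribe(np.array([0.75, 0.55, 0.4])))  # -> "rust in peace"
```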

America and Britain, to say nothing of the world’s other English-speaking countries, are home to a wide variety of dialects. But the speech-recognisers are largely trained on just one per country: “General American” and Britain’s “Received Pronunciation”. Speakers with other accents can throw them off.

Some might consider that an unlucky but avoidable consequence of “having an accent”. But everyone has an accent, even if some are more common or respected. The rise of voice-activated technologies threatens to split the world further into accents with privileges—in this case, the ability to command the Echo, Apple’s Siri, Google Assistant and other such gadgets—and their poor relations.

As part of her PhD in linguistics at the University of Washington, Rachael Tatman studied automatic speech-recognition of various regional accents. In one study, she looked at the automatic subtitling on YouTube, which uses Google’s speech-recognition system. Ms Tatman focused on speakers of five different accents, reading a list of isolated words chosen for their susceptibility to differing pronunciation. The automatic captioning did worst with the Scottish speakers, transcribing more than half of the words incorrectly, followed closely by American southerners (from Georgia). It also did worse with women: higher-pitched voices are more difficult for speech-recognition systems, one reason they tend to struggle with children. In a follow-up experiment, Ms Tatman used both YouTube and Bing Speech, made by Microsoft, to test only American accents. Both found black and mixed-race speakers harder to comprehend than white ones.

The makers of these systems are aware of the problem. They are trying to offer more options: you can set Apple’s Siri or the Echo to Australian English. But they can still reach only so many accents, with a bias towards standard rather than regional ones. India, with its wide variety of English accents, presents the firms with both a tempting market and a huge technical challenge.

One solution is for people to train their own phones and gadgets to recognise them, a fairly straightforward task that lets users take control rather than wait for the tech companies to deliver a solution. The Echo already allows this. And a new function, called Cleo, works like a game, tempting users into sending Amazon new data, whether on languages the Echo has not yet assimilated or on accents for a language it in theory already knows.

Janet Slifka of Amazon describes the chicken-and-egg nature of such adaptive systems: they get better as customers use them. An app lets users tell the Echo whether they have been understood properly, for example, supplying further training data. But if such systems do not work well immediately, people will not use them and thus will not improve them. Those with non-standard accents may have to persevere if they are not to be left behind.
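In outline, the feedback loop Ms Slifka describes might look like the sketch below; the class and method names are invented for illustration, since Amazon has not published its pipeline. Each confirmation or correction becomes a new labelled example for the next round of training, which is why the loop stalls if users give up early.

```python
# A hypothetical sketch of the adaptive feedback loop; all names are
# invented and do not reflect Amazon's actual system.
class AdaptiveRecogniser:
    def __init__(self):
        self.training_pairs = []  # accumulated (audio, transcript) examples

    def transcribe(self, audio):
        # Placeholder guess; a real system runs acoustic and language models.
        return "extreme constipation"

    def record_feedback(self, audio, guess, correction=None):
        """Store the user's confirmation (or correction) as training data."""
        self.training_pairs.append((audio, correction or guess))

recogniser = AdaptiveRecogniser()
audio = "southern-accented request"  # stand-in for a real recording
guess = recogniser.transcribe(audio)
# The user corrects the mishearing; the correction becomes the label, and
# the model only improves if people keep supplying such corrections.
recogniser.record_feedback(audio, guess,
                           correction="play the country music station")
```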

This article appeared in the Culture section of the print edition under the headline "Alexa’s biscuits"
