AI-assisted computer helps people whose speech has been affected by stroke

A woman unable to speak communicates via an AI neural recording device. Source: Reuters

Artificial intelligence is helping people whose speech has been affected by stroke and other debilitating illnesses produce speech and even subtle facial expressions. The brain-computer interface technology is a world first, developed at the Chang Lab at the University of California, Berkeley.


TRANSCRIPT:


News stories about artificial intelligence are often dominated by accounts of the technology being used to steal human words and ideas.

Recently The Atlantic revealed that thousands of writers' books have been used without permission to train large language models.

The pirated works include those by big-name authors like Margaret Atwood, Stephen King and Elena Ferrante.

Cases like this raise serious ethical questions about what humans lose when AI uses our language to improve itself.

But what if AI could give you back your words?

That's the possibility being explored by scientists at the Chang Lab at the University of California, Berkeley.

They're training an AI-assisted computer to help people whose speech has been affected by stroke, paralysis or other illnesses to communicate.

PhD candidate Kaylo Littlejohn is one of the researchers leading the study.

"It's almost like a digital vocal tract of sorts. It's replacing the function that is lost. And it's bridging the connection from brain to speech, using the neural signals corresponding to muscle track movements or vocal tract movements."

The brain-computer interface technology is a specialty of the lab's Edward Chang.

Not only does it decode signals from the brain and translate them into speech, it can also render subtle facial expressions.

Experts say that's a world first.

It's life-changing for Ann Johnson, the study participant the researchers have been working with.

She suffered a stroke at age 30 that damaged her brain stem and left her unable to talk or move her arms.

Mr Littlejohn says seeing Ms Johnson express herself through AI has been emotional for all involved.

"It was very encouraging and very exciting. For me personally, it was... I found it to be emotional and heartwarming just to see all of this work come to fruition and know that we actually can decode speech from the brain. And this was the very first proof of principle of intelligible speech synthesis and avatar control from someone who can't speak.

"And we knew from Ann's feedback that this was an emotional moment for her. And it's very cool to see that we can restore her personal voice, because that personal aspect, that human aspect, is very important."

Previously, Ms Johnson used a letter board to communicate.

That allowed her to spell out words letter by letter using small head movements, at a speed of about 14 words per minute.

With the Chang Lab's AI-assisted computer, that number has increased to 78.

That's much closer to natural human speech, which runs at around 140 to 160 words per minute.
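As a quick check on those figures, here is the arithmetic in Python, using only the numbers quoted in this story:

```python
# Communication rates quoted in the story, in words per minute.
letter_board_wpm = 14
decoder_wpm = 78
natural_low, natural_high = 140, 160

# The decoder is roughly 5.6 times faster than the letter board...
print(f"Speed-up over the letter board: {decoder_wpm / letter_board_wpm:.1f}x")
# ...and reaches roughly half the rate of natural conversation.
print(f"Share of natural speech rate: {decoder_wpm / natural_low:.0%} "
      f"to {decoder_wpm / natural_high:.0%}")
```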

So how, exactly, does the technology work?

"In a nutshell, a neural recording device is placed on to the surface of Ann's brain. Those signals recorded from her brain are then sent to a pedestal screwed on top of her skull, and then to a computer where an A.I. model converts the neural signals into decoded text, audible speech synthesis and avatar facial animation. And then those outputs are then displayed to both her and anyone who is watching. And this would ideally allow her to communicate using a brain to speech neural prosthesis."

People sitting with Ms Johnson can watch an animated avatar's face move as the speech is heard.

The facial animation came from an avatar chosen and personalised by Ms Johnson. Its voice was derived from her own voice, captured on a video filmed at her wedding.

For now, though, you're unlikely to mistake the AI avatar's voice for a human one.

"Don't be afraid to ask me questions. Wait for the others. Thanks for stopping bye. I think you are wonderful."

Mr Littlejohn says the team wants to keep improving the system's accuracy and believes it can become a clinically viable solution.

They hope to build on the project by shrinking the size of the machine that runs the AI decoding models and by making the connection wireless.

For now, the technology is already putting a smile on Ann Johnson's face - and it's one she can share.
