They created an app that changes accents

They couldn’t understand it. Their friend was fluent in English and Spanish, friendly and an expert in computer engineering. Why couldn’t he keep his call center job?

The friend said his accent made it difficult for many clients to understand him; some treated him abusively because of the way he spoke.

The three students realized the problem was bigger than their friend’s experience, so they founded a startup to solve it.

Now their company, Sanas, is testing artificial intelligence-powered software aimed at eliminating miscommunication by changing people’s accents in real time. A call center employee in the Philippines, for example, could speak normally into a microphone and sound like someone from Kansas to the client on the other end.

Call centers are just the beginning, the startup’s founders say. The company’s website describes its ambition as “speech, reimagined.”

Ultimately, they believe the application they are building will be used by a wide range of businesses and individuals. It could help doctors better understand their patients, they say, or help grandchildren better understand their grandparents.

“We have a great vision for Sanas,” says CEO Maxim Serebryakov.

For Serebryakov and his co-founders, the project is personal.

Helping people whose ‘voices are not heard’ because of their accents

The three founders of Sanas met at Stanford University, but they come from different countries: Serebryakov, now the CEO, is from Russia; Andrés Pérez Soderi, now the chief financial officer, is from Venezuela; and Shawn Zhang, now the chief technology officer, is from China.

They are no longer Stanford students. Serebryakov and Pérez graduated; Zhang left to focus on getting Sanas off the ground.

They started the company last year and gave it a name that is easy to pronounce in a variety of languages, saying, “We want to highlight our global mission and bring people closer together.”

Over the years, all three say, they have experienced how accents can get in the way.

“We all come from international backgrounds. We have seen firsthand how people treat you differently because of the way you speak,” Serebryakov said. “It can be heartbreaking at times.”

Zhang says his mother, who came to the United States from China 20 years ago, is embarrassed by her accent and asks him to talk to the cashier when they go to the grocery store together.

“This is one of the reasons I teamed up with Max and Andrés to create this company, to try to help people who feel their voices are not heard because of their accents,” he says.

Serebryakov describes how his parents are treated in hotels when they visit him in the United States, how people make assumptions as soon as they hear their accents.

“They talk a little louder. They change their behavior,” he says.

Pérez, who attended a British school, says he at first had difficulty understanding American accents when he came to the United States.

Don’t get him started on what happens when his dad tries to use the Amazon Alexa his family gave him for Christmas.

“When Alexa lit lamps in random places around the house and turned them pink, we quickly figured out that Alexa didn’t understand my dad’s accent,” Pérez says.

Call centers are testing the technology

English is the most widely spoken language in the world. About 1.5 billion people speak it, and most of them are not native speakers. In the United States alone, millions of people speak English as a second language.

This has created a growing market for applications that help users practice their English pronunciation. But Sanas uses AI to take a different approach.

The premise: rather than learning to pronounce words differently, the technology can do it for you. No more expensive or time-consuming accent reduction training. And comprehension would be nearly instantaneous.

Serebryakov says he knows people’s accents and identities can be closely linked, and that the company is not trying to erase accents or to suggest that one way of speaking is better than another.

“People should not have to change the way they speak in order to hold a position or a job. Identity and accents are important. They are intertwined,” he says. “You never want someone to change their accent just to please you.”

Sanas’s algorithm can currently convert English speech to and from American, Australian, British, Filipino, Indian and Spanish accents, and the team plans to add more. By training a neural network on audio recordings from professional voice actors and other data, they can add a new accent to the system, a process that can take weeks.
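
The article describes the system only at this high level, so purely as an illustration, here is a minimal, hypothetical sketch of what “changing accents in real time” implies in software terms: audio is processed in short frames so a speech-to-speech model can convert each one with low latency. The frame size, sample rate, and the placeholder convert_frame function are assumptions for the example, not details of Sanas’s actual system.

```python
# Conceptual sketch only: a frame-by-frame streaming loop of the kind a
# real-time accent converter would need. The "model" here is an identity
# placeholder; a real system would load a trained neural network instead.
import numpy as np

FRAME_MS = 20          # hypothetical frame length, tuned for latency in practice
SAMPLE_RATE = 16_000   # 16 kHz mono audio, a common choice for speech models
FRAME_SAMPLES = SAMPLE_RATE * FRAME_MS // 1000


def convert_frame(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a trained accent-conversion network."""
    return frame  # identity: no actual conversion happens in this sketch


def stream_convert(audio: np.ndarray) -> np.ndarray:
    """Run audio through the converter frame by frame, as a live call would."""
    out = []
    for start in range(0, len(audio) - FRAME_SAMPLES + 1, FRAME_SAMPLES):
        out.append(convert_frame(audio[start:start + FRAME_SAMPLES]))
    return np.concatenate(out) if out else np.empty(0, dtype=audio.dtype)


if __name__ == "__main__":
    # One second of silence stands in for microphone input.
    mic_input = np.zeros(SAMPLE_RATE, dtype=np.float32)
    converted = stream_convert(mic_input)
    print(f"Processed {len(converted) // FRAME_SAMPLES} frames of audio")
```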

The Sanas team ran two demonstrations for CNN. In one, a man with an Indian accent reads a series of sentences. Then the same phrases are heard converted into an American accent.

The other example features phrases commonly heard in call centers, such as “If you give me your full name and order number, we can go ahead and make corrections for you.”

The American-accented results sound somewhat artificial and flat, like the voices of virtual assistants such as Siri and Alexa, but Pérez says the team is working to improve the technology.

“The accent changes, but the intonation is maintained,” he says. “We keep working on making the output sound as natural, emotive and lively as possible.”

Initial feedback from the call centers trying out the technology has been positive, Pérez says. And as word spreads about their work, people have been leaving comments on the company’s website.

How the startup’s founders see its future

That response has helped Sanas grow its staff. Most of the Palo Alto, California-based company’s employees come from international backgrounds, and that is no coincidence, says Serebryakov.

“What we create resonates with many people, even the people we hire … very exciting to watch,” he says.

Even as the company grows, it may be a while before Sanas appears in an app store or on a phone near you.

The team says it is currently working with large call center outsourcing companies and is deliberately taking a slower path to a release for individual users, so it can refine the technology and ensure security.

A screen grab shows what users see in the Sanas desktop application.

But ultimately, they hope Sanas will be used by anyone who needs it, in other fields as well.

Pérez believes it could play an important role in helping people communicate with their doctors.

“Any moment lost to misunderstanding, whether through lost time or misinformation, can have a very, very significant impact,” he says. “We want to make sure nothing is lost in translation.”

Someday, he says, the technology could help people learning languages, improve dubbing in movies, and help smart home speakers and in-car voice assistants understand different accents.

The Sanas team also hopes to expand the algorithm beyond English to other languages.

The three co-founders are still working out the details. But it is easy to see, they say, how this technology could make communication better in the future.
