Michael Short | Bloomberg | Getty Images
Sundar Pichai, chief executive officer of Google Inc., discusses the Google Pixel virtual assistant during a Google product launch event in San Francisco, California, U.S., on Tuesday, Oct. 4, 2016.
In the intensifying battle to have the best voice-powered technology, Google is making its virtual assistant sound more human and less robotic.
The speech-activated Google Assistant now relies on software from DeepMind, the artificial intelligence research group under Alphabet. The technology uses a version of DeepMind's WaveNet speech-synthesis system for American English and Japanese, according to a blog post published on Wednesday.
It’s a timely shift. Two weeks ago Apple released an upgraded version of the Siri virtual assistant, which is available on iPhones, iPads, Macs and other devices. The news also comes as Google introduces new versions of its Pixel smartphones, as well as speakers and earbuds that will let users talk to the Google Assistant.
The blog post, from DeepMind research scientists Aäron van den Oord and Tom Walters and Google speech software engineer Trevor Strohman, said that the Assistant running WaveNet "is the first product to launch" using Google's second-generation AI chip, the tensor processing unit, or TPU. Google also uses graphics cards from Nvidia to train certain AI systems.