Last October, when Google Home was announced, Google CEO Sundar Pichai christened AI the next platform. Yesterday, AI became a Google product that could prove as transformative, as large and as pervasive as Google Search.
Google’s grand bargain with its users will not change: indispensable free apps in return for users’ data. Easier-to-use conversational interfaces such as Google Home and Google Assistant, built with AI, could be the next free indispensable Google apps paid for with users’ information as the currency. It is a virtuous cycle: user interaction with indispensable AI-powered apps like Google Search, Translate and Assistant creates more data, which in turn is used to build new indispensable AI systems. Pichai confirmed this during the Google I/O keynote when he said:
“The most important product where we are using this [AI] is in Google Search and Google Assistant. We are evolving Google Search to be more assistive for our users.”
The company has invested in AI and machine learning for over a decade to perform tasks such as filtering spam from email. Machine learning programs AI systems with data instead of lines of source code. Generally, the more data, the more accurately the systems operate.
+ Also on Network World: Google’s new TPUs are here to accelerate AI training +
There are two parts to AI systems built with machine learning. A good example is machine translation of English to Japanese. The first part is programming the model by training a neural network on millions of paired English and Japanese sentences. After the model has been trained to a high level of accuracy, the second part, called inference, is deploying the trained model to translate a written or spoken English sentence into Japanese.
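The two phases can be sketched in a few lines of Python. This is a hedged toy stand-in, not a translation model: a single learned weight plays the role of the millions of neural-network parameters, but the workflow is the same — training adjusts parameters against example pairs, then inference applies the frozen model to new input.

```python
import numpy as np

# Training data: example (input, output) pairs, analogous to the
# paired English/Japanese sentences used to train a translation model.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x  # the "correct answers" the model must learn to reproduce

# Phase 1: training -- adjust the weight to reduce prediction error.
w = 0.0
for _ in range(100):
    grad = np.mean(2 * (w * x - y) * x)  # gradient of mean squared error
    w -= 0.05 * grad                     # gradient-descent step

# Phase 2: inference -- the trained weight is frozen and simply applied.
def infer(new_input):
    return w * new_input

print(infer(5.0))  # ≈ 10.0 after training
```

The split matters for hardware: training is the expensive, repeated-gradient-update loop, while inference is the cheap forward pass that must run at low latency for each user request.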
The workload of machine learning to categorize photos and to power consumer-facing applications such as Google Translate has grown significantly. Google’s enormous data centers are being redesigned into what Pichai called AI-first data centers. Merchant silicon fully optimized for machine learning workloads has not emerged from traditional processor vendors, though Intel CPUs have been adapted for inference tasks and Nvidia GPUs for training tasks. Yesterday, Google announced its own chip, the Cloud Tensor Processing Unit (CTPU), used to speed the training of machine learning models. This comes on the heels of its announcement in April of its Tensor Processing Unit (TPU), which speeds the performance of inference models that do things such as translate language or search for photos.
To get an early advantage from emerging AI technology and apply it to its growing advertising business model, Google designed and built the specialized processors it needs: the CTPU to train AI systems and the TPU to run them efficiently at low latency, keeping consumer-facing applications responsive. Pichai’s keynote marks the tipping point from data centers optimized for text search to data centers built for multimodal search with machine learning.
Text systems, search and Gmail gave Google the information it needed to optimize the relevance of ads personalized for a given user. But search and email provide a narrow aperture into only part of people’s lives. Android widened that aperture with its two killer mobile apps: Google Maps and Google Photos. Google’s AI learns about the user from the history of searches, emails, locations and directions, and from the narrative in the photos the user uploads. Each bit of multimodal information (text, images and voice) used to train models helps Google predict better: how to rank search results, how to recognize the words and context of spoken queries, and which ads the user is most likely to click.
The predictions are not an average of every user, but personalized for each specific user. Each user has their own personalized Google AI agent in Google’s cloud that makes these predictions. Each new user product adds to the accuracy of the prediction: The more data, the better the accuracy.
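The claim that more data yields better accuracy is ordinary statistics, and a toy simulation makes it concrete. This hedged sketch has nothing to do with Google’s actual models: it estimates a hypothetical user-preference score from noisy observations and shows that estimates built from a richer interaction history land closer to the truth on average.

```python
import numpy as np

rng = np.random.default_rng(42)
true_preference = 0.7  # hypothetical quantity the agent tries to predict

def mean_abs_error(n_observations, n_trials=200):
    """Average estimation error when each estimate uses n observations."""
    errors = []
    for _ in range(n_trials):
        # Each observation is the true preference plus noise.
        samples = true_preference + rng.normal(0.0, 1.0, n_observations)
        errors.append(abs(samples.mean() - true_preference))
    return float(np.mean(errors))

err_small = mean_abs_error(10)    # sparse interaction history
err_large = mean_abs_error(1000)  # rich interaction history
print(err_small, err_large)       # the second error is much smaller
```

The error shrinks roughly with the square root of the sample size, which is why every new product that widens the data aperture tightens the predictions.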
Conversational interface for Google Home and Google Assistant
The TPU and CTPU, which make this agent possible, are just the latest in Google’s long line of investments in AI. The most notable was the creation of its Google Brain research group in 2011, responsible for researching, proving and applying AI systems. The increased accuracy of the natural language understanding methods developed by Google Brain precipitated Google’s introduction last year of Google Home, a consumer personal assistant device, and an Android counterpart, Google Assistant, whose conversational interfaces put this agent in users’ homes and hands.
Google Home and Google Assistant perform tasks in response to users’ requests, such as providing the weather, playing music, turning lights on or off and providing search results. The tasks that Home and Assistant perform, called Actions, had been built by Google or by a small number of third parties working closely with Google. Google has now opened the platforms to independent developers.
To show what independent developers might build, Google demonstrated an Action that used voice to interact with the Panera restaurant menu, made recommendations, took the user’s spoken order and billed the user’s credit card. In another demonstration, the Assistant warned the user to leave home a little earlier to arrive on time because of increased traffic. Google has also made the Actions software development kit (SDK) available to manufacturers that want to build their own Assistant devices.
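At its core, an Action like the Panera demo follows a simple fulfillment pattern: the assistant platform parses the spoken request into a structured intent and sends it to the developer’s handler, which returns the text to speak back. The sketch below illustrates that request/response shape only; the names (`handle_request`, the `"menu.suggest"` and `"order.add"` intents, the menu items and prices) are hypothetical and not the real Actions SDK API.

```python
import json

# Hypothetical menu for an illustrative restaurant-ordering Action.
MENU = {"broccoli cheddar soup": 5.99, "green goddess salad": 9.79}

def handle_request(request_json):
    """Toy fulfillment handler: structured intent in, spoken reply out."""
    request = json.loads(request_json)
    intent = request["intent"]
    if intent == "menu.suggest":
        # Stand-in for a real recommender: suggest the first menu item.
        item = next(iter(MENU))
        return {"speech": f"How about the {item}?"}
    if intent == "order.add":
        item = request["item"]
        if item in MENU:
            return {"speech": f"Added {item}. Your total is ${MENU[item]:.2f}."}
        return {"speech": f"Sorry, we don't have {item}."}
    return {"speech": "Sorry, I didn't understand that."}

reply = handle_request(json.dumps({"intent": "order.add",
                                   "item": "green goddess salad"}))
print(reply["speech"])  # Added green goddess salad. Your total is $9.79.
```

The platform handles the hard part — speech recognition and intent parsing — so third-party developers only write the business logic on the far side of the JSON boundary.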
+ Also on Network World: Here’s how Google is preparing Android for the AI-laden future +
Google also demonstrated that Assistant can co-opt phone and computer screens, Android TVs and Chromecast-connected TVs to display complex or graphical information in its responses. Assistant for the iPhone was also announced.
The ultimate goal is an agent with general intelligence — the sentient artificial intelligence of sci-fi movies. Home and Assistant are not capable of that type of intelligence, but with enough Actions built by Google and third parties, they mark a tipping point: a shift to a conversational interface for requesting information from Google’s Knowledge Graph of over 70 billion facts, searching the internet, making ecommerce purchases or calling an Uber. According to Scott Huffman, vice president of Google Assistant, 70 percent of the interactions with Google Home and Assistant are conversational, and 100 million devices have Google Assistant installed.
AI could perpetuate Google’s grand bargain in new ways. Machine learning has let Google understand more about users and improve the user experience. If the conversational interfaces of Home and Assistant become indispensable, the aperture into users’ lives will widen, providing more data and more accurate predictions, feeding the virtuous cycle.