Google’s machine-learning cloud pipeline explained


When Google first told the world about its Tensor Processing Unit, the strategy behind it seemed clear enough: Speed machine learning at scale by throwing custom hardware at the problem. Use commodity GPUs to train machine-learning models; use custom TPUs to deploy those trained models.

The new generation of Google’s TPUs is designed to handle both of those duties, training and deploying, on the same chip. That new generation is also faster, both on its own and when scaled out with others in what’s called a “TPU pod.”

But faster machine learning isn’t the only benefit of such a design. The TPU, especially in this new form, is another piece of the end-to-end machine-learning pipeline Google is building, covering everything from data intake to deployment of the trained model.

Machine learning: A pipeline runs through it

One of the largest obstacles to using machine learning right now is how tough it can be to put together a full pipeline for the data—intake, normalization, model training, and model deployment. The pieces are still highly disparate and uncoordinated. Companies like Baidu have hinted at wanting to create a single, unified, unpack-and-go solution, but so far that remains only a notion.
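To make those stages concrete, here is a minimal sketch of such a pipeline in TensorFlow, Google’s own machine-learning framework: intake, normalization, training, and export for deployment. The synthetic data, model, and file path are purely illustrative assumptions, not a description of Google’s internal pipeline.

```python
import numpy as np
import tensorflow as tf

# Intake: a real pipeline would read from files or a data feed;
# here we generate a small synthetic dataset purely for illustration.
features = np.random.rand(1000, 10).astype("float32")
labels = (features.sum(axis=1) > 5.0).astype("float32")

# Normalization: scale features to zero mean and unit variance.
mean, std = features.mean(axis=0), features.std(axis=0)
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .map(lambda x, y: ((x - mean) / std, y))
    .shuffle(1000)
    .batch(32)
)

# Model training: a small binary classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(dataset, epochs=3)

# Deployment: export in SavedModel format, which a serving system
# (for example, TensorFlow Serving) can load. The path is hypothetical.
tf.saved_model.save(model, "exported_model")
```

Each stage here is a separate, hand-wired step; the point of an end-to-end pipeline is to make those hand-offs automatic rather than something every team stitches together itself.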
