Author: Janani Ravi
“Compound interest,” Albert Einstein is thought to have said, “is the eighth wonder of the world.” Of course, he was referring to small accumulations of financial improvements that build on each other and lead to staggering gains in the long run.
But technologists can effectively reap compound interest on technology as well. The major cloud platforms offer an excellent demonstration of this. The primary attraction of using them is that you, as an end user, benefit from an unending stream of new product releases and updates. Of course, there’s a flip side to this utility: the risk of platform lock-in. Even so, for enterprises and individuals that can intelligently balance their use of the different cloud platforms and their offerings, the endless conveyor belt of new product launches and radical overhauls can prove a great source of internal efficiencies and productivity improvements.
Let’s sample some of the latest offerings to come down that conveyor belt in 2021, specifically from Google Cloud Platform.
What’s new on Google Cloud Platform
Vertex AI, a new managed ML platform, went into general availability on May 18; Apigee X, a major new release of the API management system, went live on February 4; and a new set of Virtual Agents capabilities was added to Dialogflow CX on January 27. Each of these could be just the tool you need for your latest use case, so it is worth taking the time to understand what they offer. The launch of Vertex AI stands out, so that’s what I’ll focus on in this blog post.
The Google Cloud Platform has always had a plethora of powerful AI/ML offerings, and it was not always clear how these fit together, or could be used together. Consider, for instance, AutoML, the AI Platform and the individual AI APIs for Vision, Speech and Text.
- AutoML was a great product focused on the democratization of ML, allowing developers with limited ML expertise to build and use models.
- The AI Platform, in contrast, was aimed at more ML-savvy practitioners looking to build serious models using TensorFlow, PyTorch or scikit-learn.
- The individual AI APIs, such as Speech, Text and Vision, were effectively pre-built, entirely ready-to-use models aimed at the most common AI use cases, such as speech-to-text conversion.
Users of GCP loved these three groups of offerings but mostly ended up using them separately rather than in a unified, coherent manner. Vertex AI aims to address that: it brings AutoML and the AI Platform together under a unified API, client library and user interface. As an end user, you now have a unified workflow that integrates the entire development lifecycle, from experimentation to deployment.
Capabilities and features of Vertex AI
In addition, Vertex AI has some new and innovative capabilities. One area of concern for many users of ML models is their “black box” nature. Let’s say your ML model has produced a prediction that does not agree with your own professional intuition. Before you can decide who is more likely to be right, you’d probably want to know what caused the model to predict as it did. This is particularly relevant as concerns around bias, both conscious and unconscious, in the training of ML models become more salient. Vertex Explainable AI is meant to address exactly this: it integrates feature attributions to help you understand your model’s outputs for classification and regression.
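To make the idea of feature attributions concrete, here is a minimal, self-contained sketch. The `predict` function, the feature names and the baseline values are all hypothetical stand-ins; this is not the Vertex Explainable AI API, which computes attributions with methods such as sampled Shapley and integrated gradients.

```python
def predict(features):
    # Hypothetical model: a simple linear scorer standing in for a trained model.
    weights = {"income": 0.5, "age": 0.1, "debt": -0.4}
    return sum(weights[name] * value for name, value in features.items())

def feature_attributions(features, baseline):
    """Attribute the prediction's deviation from a baseline to each feature
    by reverting one feature at a time to its baseline value."""
    score = predict(features)
    attributions = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] = baseline[name]
        # How much does the score move when this feature reverts to baseline?
        attributions[name] = score - predict(perturbed)
    return attributions

instance = {"income": 80.0, "age": 35.0, "debt": 20.0}
baseline = {"income": 50.0, "age": 40.0, "debt": 30.0}
print(feature_attributions(instance, baseline))
# {'income': 15.0, 'age': -0.5, 'debt': 4.0}
```

Each attribution answers the question the paragraph above poses: how much did this particular feature push the prediction away from what a “typical” (baseline) input would have produced?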
Another emerging trend in how ML models are built and used is the separation between feature extraction/engineering and actual model building. Feature extraction/engineering refers to the process of building clean, curated, normalized datasets that can then be used across a wide range of models. It makes little sense to repeat this process every time you build a model. To avoid reinventing the wheel, Vertex AI incorporates Vertex Feature Store, a centralized repository for organizing, storing and serving ML features.
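As a rough illustration of the concept, rather than of the Vertex Feature Store API itself, here is a toy in-memory store; the class, entity IDs and feature names are invented for the example.

```python
from collections import defaultdict

class FeatureStore:
    """Toy feature store: features are engineered once, keyed by entity ID,
    and then served consistently to any model that needs them."""

    def __init__(self):
        self._store = defaultdict(dict)

    def ingest(self, entity_id, features):
        # Store curated, pre-engineered features for an entity.
        self._store[entity_id].update(features)

    def serve(self, entity_id, feature_names):
        # Serve a feature vector for training or online prediction,
        # so every model sees the same values in the same order.
        return [self._store[entity_id][name] for name in feature_names]

store = FeatureStore()
store.ingest("user_42", {"avg_order_value": 37.5, "days_since_signup": 120})
print(store.serve("user_42", ["avg_order_value", "days_since_signup"]))
# [37.5, 120]
```

The point of the sketch is the workflow, not the storage: the expensive curation happens once at ingest time, and every downstream model simply asks for the features it needs.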
In addition, Vertex AI incorporates the bread-and-butter capabilities that you would expect from an offering of this nature. For instance, hyperparameter tuning is handled by Vertex Vizier, a black-box optimization service that helps you tune hyperparameters in complex machine learning models. When choosing between different model architectures and configurations, hyperparameter tuning can become extremely cumbersome, and tools such as Vertex Vizier aim to address this pain point.
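Vizier’s actual algorithms are more sophisticated, but the black-box framing can be illustrated with a simple random search; the objective function and search space below are hypothetical stand-ins for a real training-and-validation run.

```python
import random

def objective(params):
    # Hypothetical black-box metric, e.g. validation loss as a function of
    # learning rate and layer width (a stand-in for a real training run).
    lr, width = params["lr"], params["width"]
    return (lr - 0.01) ** 2 * 1e4 + (width - 64) ** 2 / 1e3

def random_search(space, trials, seed=0):
    """Sample hyperparameters at random and keep the best-scoring trial.
    The optimizer never looks inside `objective`; it only sees scores."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(trials):
        params = {
            "lr": rng.uniform(*space["lr"]),
            "width": rng.choice(space["width"]),
        }
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

space = {"lr": (1e-4, 1e-1), "width": [16, 32, 64, 128]}
best, score = random_search(space, trials=200)
```

Because the tuner treats the model purely as a function from hyperparameters to a score, the same loop works for any architecture or configuration, which is exactly what makes a black-box service like Vizier broadly applicable.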
Unifying your AI and ML
The common theme running through these features, and indeed the explicit raison d’être of Vertex AI, is building a coherent, unified AI/ML offering. The idea of unifying disparate AI/ML offerings has been gaining currency for some time now.
From 2015 through 2018, the emphasis was on rushing out cool new products without focusing on interconnecting the models built using those products. In more recent years, as these technologies have matured, interconnection and coherent usage have assumed greater importance.