
Vertex AI – One AI platform, every ML tool you need

 

This year at Google I/O (Google’s developer conference), Google presented a new platform that unites all of its ML tools. Vertex AI brings together the Google Cloud services for building ML under one unified UI and API.

There are many benefits to using Vertex AI. You can train models without writing code and with minimal ML expertise, and take advantage of AutoML to build models in less time. Vertex AI’s custom model tooling also supports advanced ML development, requiring nearly 80% fewer lines of code to train a model with custom libraries than competing platforms.



[Image: Google Vertex AI logo]

You can use Vertex AI to manage the following stages in the ML workflow (a minimal code sketch follows the list):

  • Define and upload a dataset.
  • Train an ML model on your data:
    • Train model
    • Evaluate model accuracy
    • Tune hyperparameters (custom training only)
  • Upload and store your model in Vertex AI.
  • Deploy your trained model and get an endpoint for serving predictions.
  • Send prediction requests to your endpoint.
  • Specify a prediction traffic split in your endpoint.
  • Manage your models and endpoints.
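As an illustration, here is a minimal sketch of that workflow using the google-cloud-aiplatform Python SDK. It shows an AutoML tabular example; the project ID, bucket path, column and display names, and training budget are placeholder assumptions, not values from this post.

```python
# Minimal sketch of the Vertex AI workflow with the google-cloud-aiplatform SDK.
# Project ID, region, GCS path, column names, and budget below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# 1. Define and upload a dataset (tabular data stored as CSV in Cloud Storage).
dataset = aiplatform.TabularDataset.create(
    display_name="my-dataset",
    gcs_source=["gs://my-bucket/data.csv"],
)

# 2. Train a model with AutoML (evaluation metrics appear in the Cloud Console).
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="my-training-job",
    optimization_prediction_type="classification",
)
model = job.run(
    dataset=dataset,
    target_column="label",
    budget_milli_node_hours=1000,  # 1 node hour
)

# 3. Deploy the trained model and get an endpoint for serving predictions.
endpoint = model.deploy(machine_type="n1-standard-4")

# 4. Send a prediction request to the endpoint.
prediction = endpoint.predict(instances=[{"feature_a": "1.0", "feature_b": "2.0"}])
print(prediction)
```

A traffic split between model versions can also be set at deploy time, for example by deploying a second model to the same endpoint with traffic_percentage=50, and existing models and endpoints can be listed with aiplatform.Model.list() and aiplatform.Endpoint.list().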


“We had two guiding lights while building Vertex AI: get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production,” said Andrew Moore, vice president and general manager of Cloud AI and Industry Solutions at Google Cloud. “We are very proud of what we came up with in this platform, as it enables serious deployments for a new generation of AI that will empower data scientists and engineers to do fulfilling and creative work.”

“Data science practitioners hoping to put AI to work across the enterprise aren’t looking to wrangle tooling. Rather, they want tooling that can tame the ML lifecycle. Unfortunately, that is no small order,” said Bradley Shimmin, chief analyst for AI Platforms, Analytics, and Data Management at Omdia. “It takes a supportive infrastructure capable of unifying the user experience, plying AI itself as a supportive guide, and putting data at the very heart of the process — all while encouraging the flexible adoption of diverse technologies.”

To learn more about how to get started on the platform, check out the ML on GCP best practices and the practitioner’s guide to MLOps whitepaper, and sign up to attend the Applied ML Summit for data scientists and ML engineers on June 10th.

If you want to try it and get to know it better, you can work through a Codelab (running it on Google Cloud will cost you about $2, but you can sign up for a $300 free trial on GCP 😁).
