
# Training and Serving CARET models using AI Platform Custom Containers and Cloud Run

## Overview

This notebook illustrates how to use the CARET R package to build an ML model that estimates a baby's weight given a number of factors about the pregnancy and the baby's mother, using the BigQuery natality dataset. We use AI Platform Training with custom containers to train the CARET model at scale, then use Cloud Run to serve the trained model as a Web API for online predictions.

R is one of the most widely used programming languages for statistical modeling, and it has a large and active community of data scientists and ML professionals. With over 10,000 packages in the open-source repository of CRAN, R caters to all statistical data analysis applications, ML, and visualisation.

## Dataset

The dataset used in this tutorial is the natality data, available as a BigQuery public dataset. It describes all United States births registered in the 50 States, the District of Columbia, and New York City from 1969 to 2008, with more than 137 million records. We use the data extracted from BigQuery and stored as CSV in Cloud Storage (GCS) in the Exploratory Data Analysis notebook.

## Objective

The goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. The tutorial covers the following steps:

1. Train the CARET model on AI Platform Training with a custom R container.
2. Implement a Web API wrapper for the trained model using the Plumber R package.
3. Build a Docker container image for the prediction Web API.
4. Deploy the prediction Web API container image on Cloud Run.
5. Invoke the deployed Web API for predictions.

AI Platform Notebooks is used to drive the workflow.

## Costs

This tutorial uses billable components of Google Cloud Platform (GCP).
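The CARET training step might look like the following sketch. The column names (`weight_pounds`, `is_male`, `mother_age`, `plurality`, `gestation_weeks`), the file names, and the choice of a linear model with 5-fold cross-validation are illustrative assumptions, not the tutorial's exact configuration:

```r
# Hedged sketch of the training step; column and file names are assumptions.
library(caret)

# CSV previously extracted from BigQuery and staged in GCS
natality <- read.csv("natality.csv")

# Train a simple linear model with 5-fold cross-validation; the tutorial's
# actual algorithm and tuning grid may differ.
model <- train(
  weight_pounds ~ is_male + mother_age + plurality + gestation_weeks,
  data = natality,
  method = "lm",
  trControl = trainControl(method = "cv", number = 5)
)

# Serialize the trained model so the serving container can load it
saveRDS(model, "trained_model.rds")
```

In the custom-container setup, a script like this runs inside the training container and uploads the resulting `trained_model.rds` artifact to GCS.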
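The Plumber wrapper from the steps above can be sketched as follows. The route names (`/ping`, `/predict`) and the request format are assumptions for illustration:

```r
# plumber.R -- hedged sketch of a Plumber Web API around the saved model.
library(plumber)

model <- readRDS("trained_model.rds")

#* Health check endpoint
#* @get /ping
function() {
  "OK"
}

#* Predict baby weight from a JSON object of instance features
#* @post /predict
function(req) {
  instances <- jsonlite::fromJSON(req$postBody)
  predict(model, newdata = as.data.frame(instances))
}
```

Locally, the API can be started with `plumber::plumb("plumber.R")$run(host = "0.0.0.0", port = 8080)`; Cloud Run expects the server to listen on the port given by the `PORT` environment variable, 8080 by default.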
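The container image for the prediction Web API might be built from a Dockerfile along these lines, assuming the wrapper lives in `plumber.R` and the serialized model in `trained_model.rds` (illustrative names, and the `rocker/r-ver` base image is one common choice rather than the tutorial's prescribed one):

```dockerfile
# Hedged sketch of the serving image; base image and file names are assumptions.
FROM rocker/r-ver:4.2.0

RUN R -e "install.packages(c('plumber', 'caret', 'jsonlite'))"

WORKDIR /app
COPY plumber.R trained_model.rds /app/

EXPOSE 8080
CMD ["R", "-e", "plumber::plumb('plumber.R')$run(host = '0.0.0.0', port = 8080)"]
```

Once built and pushed to a container registry, this image is what gets deployed to Cloud Run.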
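Invoking the deployed Web API for predictions can be sketched in R with `httr`. The service URL is a placeholder, and the instance fields mirror the assumed training features:

```r
# Hedged sketch of calling the deployed prediction API; URL is a placeholder.
library(httr)
library(jsonlite)

url <- "https://<your-cloud-run-service>.run.app/predict"

# One prediction instance with the assumed feature names
body <- toJSON(
  list(is_male = TRUE, mother_age = 28, plurality = 1, gestation_weeks = 39),
  auto_unbox = TRUE
)

response <- POST(
  url,
  body = body,
  add_headers("Content-Type" = "application/json")
)
content(response)  # predicted weight returned by the model
```

If the Cloud Run service requires authentication, an identity token must be attached as a Bearer token in the `Authorization` header.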