The below documentation provides an example configuration and model adapter for a custom connection to a model hosted in Google's Vertex AI. Review the benefits of external model integration to make sure this is the right fit for your use case.
For a step-by-step guide, refer to the documentation on how to create a model adapter and how to create a connection to an externally hosted model.
First, publish and tag a model adapter using the model adapter library in the Code Repositories application. The model adapter below configures a connection to a model hosted in Vertex AI using the Vertex AI SDK for Python ↗. The code was tested with Python 3.8.17, pandas 1.5.3, google-cloud-aiplatform 1.32.0, google-auth 2.23.0, and google-auth-oauthlib 1.1.0.
Note that this model adapter makes the following assumptions:

- `region_name` - provided as connection configuration
- `project_id` - provided as connection configuration
- `endpoint_id` - provided as connection configuration
- `google_application_credentials` - provided as credentials

```python
import palantir_models as pm
import models_api.models_api_executable as executable_api

from google.oauth2 import service_account
from google.cloud import aiplatform
from google.api_core import exceptions

import json
import pandas as pd
import logging
from typing import Optional

logger = logging.getLogger(__name__)


class VertexAITabularAdapter(pm.ExternalModelAdapter):
    """
    :display-name: Vertex AI Tabular Model Adapter
    :description: Default model adapter for Vertex AI models that expect tabular input and output data.
    """

    def __init__(self, project_id, region_name, endpoint_id, google_application_credentials):
        self.endpoint_id = endpoint_id
        # google_application_credentials is expected to be a valid string representation
        # of the Google-provided secret key file
        credentials = service_account.Credentials.from_service_account_info(
            json.loads(google_application_credentials),
            scopes=["https://www.googleapis.com/auth/cloud-platform"]
        )
        aiplatform.init(project=project_id, location=region_name, credentials=credentials)

    @classmethod
    def init_external(cls, external_context) -> "pm.ExternalModelAdapter":
        project_id = external_context.connection_config["project_id"]
        region_name = external_context.connection_config["region_name"]
        endpoint_id = external_context.connection_config["endpoint_id"]
        google_application_credentials = external_context.resolved_credentials["google_application_credentials"]
        return cls(
            project_id,
            region_name,
            endpoint_id,
            google_application_credentials
        )

    @classmethod
    def api(cls):
        inputs = {"df_in": pm.Pandas()}
        outputs = {"df_out": pm.Pandas()}
        return inputs, outputs

    def predict(self, df_in):
        instances = df_in.to_dict(orient='records')
        try:
            endpoint = aiplatform.Endpoint(endpoint_name=self.endpoint_id)
        except ValueError as error:
            logger.error("Error initializing endpoint object; double-check the provided "
                         "endpoint_id, project_id, and region_name.")
            raise error
        try:
            # Output from the model is assumed to be JSON serializable;
            # if the result is too large for the executor, this may cause an OOM
            prediction_result = endpoint.predict(instances=instances)
        except exceptions.Forbidden as error:
            logger.error("Error performing inference: the provided google_application_credentials "
                         "do not have sufficient permissions.")
            raise error
        except exceptions.BadRequest as error:
            logger.error("Error performing inference; double-check your input dataframe.")
            raise error
        return pd.json_normalize(prediction_result.predictions)
```
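To make the adapter's data flow concrete, the sketch below reproduces (with pandas only, no network calls) the two conversions `predict` performs: each DataFrame row becomes one instance dictionary sent to the endpoint, and the endpoint's JSON predictions are flattened back into a DataFrame. The column names and prediction values are illustrative assumptions, not part of the adapter.

```python
import pandas as pd

# Hypothetical input frame; feature names are illustrative only.
df_in = pd.DataFrame({"feature_a": [1.0, 2.0], "feature_b": [3.0, 4.0]})

# Step 1: the adapter sends each row as one instance dictionary.
instances = df_in.to_dict(orient="records")
# instances == [{"feature_a": 1.0, "feature_b": 3.0},
#               {"feature_a": 2.0, "feature_b": 4.0}]

# Step 2: the endpoint's predictions (stand-in values here, in place of
# prediction_result.predictions) are flattened into the output DataFrame.
predictions = [{"value": 0.7}, {"value": 0.2}]
df_out = pd.json_normalize(predictions)
```

Because both conversions are plain pandas operations, they can be verified locally before publishing the adapter.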
Next, configure an externally hosted model to use this model adapter, providing the required configuration and credentials as expected by the model adapter. In this example, the model is assumed to be hosted in `us-central1`, but this is configurable.

Note that the URL is not required by the `VertexAITabularAdapter` above and so is left blank; however, the configuration and credentials maps are completed using the same keys as defined in the model adapter.

The example below uses an egress policy that has been configured for `us-central1-aiplatform.googleapis.com` (port 443).
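The `google_application_credentials` secret is the full JSON key file of a Google service account, stored as a single string; the adapter parses it with `json.loads` before building credentials. The sketch below shows the general shape of such a key. All field values here are placeholders (a real key is downloaded from the Google Cloud console and contains live secrets that must never be committed to a repository).

```python
import json

# Illustrative, non-functional service account key; every value is a placeholder.
google_application_credentials = json.dumps({
    "type": "service_account",
    "project_id": "my-project",  # assumption: example project ID
    "private_key_id": "abc123",
    "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
    "client_email": "model-invoker@my-project.iam.gserviceaccount.com",
    "token_uri": "https://oauth2.googleapis.com/token",
})

# The adapter applies json.loads to the stored string, as in __init__ above.
key_info = json.loads(google_application_credentials)
```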
Choose the published model adapter in the Connect an externally hosted model dialog.

Define connection configurations as required by the example Vertex AI tabular model adapter. This adapter requires the following connection configuration:

- `region_name`
- `project_id`
- `endpoint_id`

Define credential configurations as required by the example Vertex AI tabular model adapter. This adapter requires the following credential configuration:

- `google_application_credentials`
Now that the Vertex AI model has been configured, it can be used in a live deployment or a Python transform. The image below shows an example query made to the Vertex AI model in a live deployment.