Functions on models

You can operationalize models in the context of the Ontology by using Functions that invoke models at runtime. A model can be made available through a Modeling Objective live deployment or a direct model deployment, then imported into a Functions repository for use in code.

Prerequisites

To use a model as a function, you must first publish a function from a Modeling Objective live deployment or directly from a model's direct deployment.

For more information on creating Functions, refer to the getting started guide.

While Models are only fully supported in TypeScript functions, advanced users can also call them from Python Functions using the Platform SDK.

Import a live deployment in a TypeScript repository

Once a function for a live deployment has been created, it must be imported for use in a specific repository. Open the Resource Imports sidebar to view the model deployments that have already been imported.

Model import sidebar

To import additional models, select Add in the Resource Imports sidebar, then select Registered under Model Sources. Models are searchable by the function name chosen during publishing. If the model does not appear, make sure you published the function for the direct model deployment or the Modeling Objective.

Legacy Functions on models that use the API Name card in Modeling Objectives can still be imported under the Modeling Objectives section, but this flow is being deprecated in favor of direct function publishing.

Model import example

Confirm the model import by selecting Confirm selection. The task runner will then execute the localDev task, generating code bindings for interacting with these models.

You can now import the model into your code using the import statement provided in the sidebar under Usage. Each model is available as a constant named after its defined API Name.

Model import usage example

Write a model-backed TypeScript Function

Let's write a Function that connects a Flight Delay Model to the Ontology. Once Code Assist has completed, add an import statement for your model and enter the API Name you defined for your model function between the curly braces; the namespace to import from is shown in the repository's Resource Imports sidebar. Alternatively, you can copy the full import statement from the sidebar under Usage.

```typescript
// View the Model in the repository's Resource Imports sidebar
// to know which namespace to import it from.
import { FlightModelDeployment } from "@{YOUR_NAMESPACE_HERE}/models";
```

Then, write a Function that takes a flight, prepares data for the model, and interprets the result of the model execution. The model is imported as an asynchronous function that respects the model's input and output specification (its API). This lets TypeScript verify at compile time that the correct data structures are sent to and received from the model deployment.

Note that if your Model's API expects a single tabular input and output, and the Enable row-wise processing option is enabled (as it is by default), the associated function accepts a single TypeScript object whose properties correspond to the columns specified for the input. Alternatively, consider using an Object or ObjectSet directly in the Model API to facilitate using your Model with Objects in Functions.

```typescript
import { Function, Double, FunctionsMap } from "@foundry/functions-api";
import { Flight } from "@foundry/ontology-api";
import { FlightModelDeployment, FlightModelDeploymentRowWise } from "@{YOUR_NAMESPACE_HERE}/models";

@Function()
public async predictFlightDelaysRowWise(flight: Flight): Promise<Double> {
    // Prepare the input to match the model function's API.
    // This model function expects a single flight.
    // If you'd like to process multiple flights at a time,
    // edit your model function and uncheck "Enable row-wise processing".
    // Note you can also use an Object directly in the model API
    // to avoid tedious mapping between a model API and an object type's properties.
    const modelInput = {
        "lastArrivalTime": flight.lastArrivalTime,
        "lastExpectedArrivalTime": flight.lastExpectedArrivalTime,
    };

    // Call the Live deployment.
    const modelOutput = await FlightModelDeploymentRowWise(modelInput);
    return modelOutput.prediction;
}

@Function()
public async predictFlightDelays(flights: Flight[]): Promise<FunctionsMap<Flight, Double>> {
    const functionsMap = new FunctionsMap<Flight, Double>();

    // Prepare the input to match the model function's API,
    // for the case where row-wise processing is not enabled.
    // Note you can also use an ObjectSet directly in the model API
    // to avoid tedious mapping between a model API and an object type's properties.
    const dfIn = flights.map(flight => ({
        "lastArrivalTime": flight.lastArrivalTime,
        "lastExpectedArrivalTime": flight.lastExpectedArrivalTime,
    }));

    // Call the Live deployment.
    const modelOutput = await FlightModelDeployment({ "df_in": dfIn });

    for (let i = 0; i < flights.length; i++) {
        functionsMap.set(flights[i], modelOutput.df_out[i].prediction);
    }
    return functionsMap;
}
```

Note the above example assumes the following Model API:

```python
import datetime

import palantir_models as pm

class ExampleModelAdapter(pm.ModelAdapter):
    ...

    @classmethod
    def api(cls):
        inputs = {
            "df_in": pm.Pandas(columns=[
                ("lastArrivalTime", datetime.datetime),
                ("lastExpectedArrivalTime", datetime.datetime),
            ])
        }
        outputs = {
            "df_out": pm.Pandas(columns=[("prediction", float)])
        }
        return inputs, outputs

    ...
```

Call a model function from Python Functions

Advanced users can use the Foundry Platform SDK, which is currently in Beta, to execute a model function from Python Functions through the Ontology query endpoint, passing the model function's full API name as the query_api_name parameter.
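For illustration, the sketch below calls the underlying Ontology query REST endpoint directly, which is the same endpoint the Platform SDK wraps. The hostname, token, ontology identifier, query API name, and parameter name are all illustrative placeholders, not values from this guide.

```python
# A minimal sketch, assuming direct REST access to the Ontology query endpoint
# (the Platform SDK wraps this same endpoint). All identifiers below are
# illustrative placeholders.
import requests

HOSTNAME = "https://YOUR_STACK.palantirfoundry.com"
ONTOLOGY = "YOUR_ONTOLOGY_API_NAME"
QUERY_API_NAME = "predictFlightDelays"  # full API name of the model function

response = requests.post(
    f"{HOSTNAME}/api/v2/ontologies/{ONTOLOGY}/queries/{QUERY_API_NAME}/execute",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    json={"parameters": {"flight": "FLIGHT_PRIMARY_KEY"}},
)
response.raise_for_status()
print(response.json()["value"])
```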

Functions backed by model datasets

Functions on models are optimized for deployments that serve Model Assets. Dataset-backed models will be supported until their deprecation in October 2025; however, their transform method expects and returns a list<Row<str, any>> and is therefore effectively untyped. You may want to check the validity of your data at runtime.
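As an example of such a runtime check, here is a minimal sketch assuming a dataset-backed deployment imported as LegacyFlightModel; its call shape and column names are illustrative assumptions, not part of this guide's example.

```typescript
// A minimal sketch, assuming a dataset-backed deployment imported as
// "LegacyFlightModel"; its call shape and column names are illustrative.
@Function()
public async predictFlightDelayLegacy(flight: Flight): Promise<Double> {
    const inputRows = [{
        "lastArrivalTime": flight.lastArrivalTime,
        "lastExpectedArrivalTime": flight.lastExpectedArrivalTime,
    }];

    // The result rows are effectively untyped, so the compiler cannot
    // verify their shape.
    const outputRows: Array<Record<string, any>> = await LegacyFlightModel(inputRows);

    // Validate the expected column and type at runtime before using the value.
    const prediction = outputRows[0]?.["prediction"];
    if (typeof prediction !== "number") {
        throw new Error(`Expected a numeric "prediction" column, got ${typeof prediction}`);
    }
    return prediction;
}
```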

Performance considerations

Models are executed as part of the Function's runtime, so all standard limits apply. If your Function backs an Action, further limits apply to the number of resulting edits. When calling live deployments, model input and output data is sent over the network, with an upper limit of 50 MB. Including this network transfer, the total execution time of the Function cannot exceed 30 seconds. To increase this timeout limit for a specific Function, contact your Palantir representative.
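If your inputs approach the payload limit, one mitigation is to split them into smaller batches. Below is a minimal sketch reusing the FlightModelDeployment example from above; the chunk size is an illustrative assumption, not a platform constant, and every chunked call still counts against the Function's overall time budget.

```typescript
// A minimal sketch of chunking inputs to keep each request's payload small.
// CHUNK_SIZE is an illustrative assumption, not a platform constant.
const CHUNK_SIZE = 500;

@Function()
public async predictFlightDelaysChunked(flights: Flight[]): Promise<FunctionsMap<Flight, Double>> {
    const functionsMap = new FunctionsMap<Flight, Double>();
    for (let start = 0; start < flights.length; start += CHUNK_SIZE) {
        const chunk = flights.slice(start, start + CHUNK_SIZE);
        const dfIn = chunk.map(flight => ({
            "lastArrivalTime": flight.lastArrivalTime,
            "lastExpectedArrivalTime": flight.lastExpectedArrivalTime,
        }));
        // Each call stays under the payload limit, but all calls together
        // still count against the Function's total execution time.
        const modelOutput = await FlightModelDeployment({ "df_in": dfIn });
        chunk.forEach((flight, i) => functionsMap.set(flight, modelOutput.df_out[i].prediction));
    }
    return functionsMap;
}
```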