You can operationalize models in the context of the Ontology by using Functions that invoke models during their runtime. A model can be made available through Modeling Objective or model live deployments and imported into a Functions repository for use in code.
Before creating Functions on models, be sure to first create and set up either a Modeling Objective live deployment with an API name or a model live deployment.
For more information on creating Functions, refer to the getting started guide.
Once a live deployment has been created, it must be imported for use in a specific repository. Select the Resource Imports sidebar to view the model deployments that have already been imported.
To import additional models, select Add in the Resource Imports sidebar to open a search window for Modeling Objectives. Because this example repository has no Ontology imports, you will only be able to import Objectives that live in the same space as the repository. If your Function repository has already imported object types from a given Ontology, you will only be able to import Objectives that live in the same space as that Ontology. From here, you can select a deployment representing either a PRODUCTION or STAGING release, or select a sandbox deployment from a specific model submission. In this example, we will import the Flight Delay model.
Confirm the model import by selecting Confirm selection. The task runner will then execute the localDev task, generating code bindings to interact with these models.
In your code, you may now import model types from the @foundry/models-api/deployments package. Each model is available as a constant named after its defined API Name.
Let's write a Function that connects the Flight Delay model to the Ontology. Once Code Assist has completed, add an import statement from "@foundry/models-api/deployments" and enter the API Name you defined for your model between the brackets. Alternatively, you can copy the API Name from the Model Imports sidebar.
```typescript
import { FlightModelDeployment } from "@foundry/models-api/deployments";
```
Then, write a Function that takes a list of flights, prepares the data for the model, and interprets the result of the model execution. Each imported model comes with an asynchronous transform method that represents its input and output specification. From this, TypeScript can ensure at compile time that the correct data structure is sent to and received from the model deployment. Unless specified otherwise, Modeling Objective live deployments operate on lists of rows and return one output row per input row, which is what allows the Function below to match each result back to its flight by index.
```typescript
import { Double, Function, FunctionsMap } from "@foundry/functions-api";
import { Flight } from "@foundry/ontology-api";
import { FlightModelDeployment } from "@foundry/models-api/deployments";

@Function()
public async predictFlightDelays(flights: Flight[]): Promise<FunctionsMap<Flight, Double>> {
    const functionsMap = new FunctionsMap<Flight, Double>();
    // Prepare a list of rows as expected by the model
    const modelInput = flights.map(flight => ({
        "lastArrivalTime": flight.lastArrivalTime,
        "lastExpectedArrivalTime": flight.lastExpectedArrivalTime,
    }));
    // Call the Foundry ML live deployment
    const modelOutput = await FlightModelDeployment.transform(modelInput);
    // Map each flight to its designated model output
    for (let i = 0; i < flights.length; i++) {
        functionsMap.set(flights[i], modelOutput[i].prediction);
    }
    return functionsMap;
}
```
Functions on models are optimized for deployments that serve Model Assets. Models from datasets are supported as well; however, their transform method expects and returns a list<Row<str, any>> and is therefore effectively untyped. You may want to check the validity of your data at runtime.
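For example, here is a minimal sketch of such a runtime check. It assumes a dataset-backed model imported under the hypothetical API Name DatasetFlightModel, whose output rows are expected to carry a numeric prediction field; adapt the field names and checks to your own model's schema.

```typescript
import { Double, Function, FunctionsMap } from "@foundry/functions-api";
import { Flight } from "@foundry/ontology-api";
// "DatasetFlightModel" is a hypothetical API Name for a dataset-backed model
import { DatasetFlightModel } from "@foundry/models-api/deployments";

@Function()
public async predictWithValidation(flights: Flight[]): Promise<FunctionsMap<Flight, Double>> {
    const functionsMap = new FunctionsMap<Flight, Double>();
    const modelInput = flights.map(flight => ({
        "lastArrivalTime": flight.lastArrivalTime,
        "lastExpectedArrivalTime": flight.lastExpectedArrivalTime,
    }));
    const modelOutput = await DatasetFlightModel.transform(modelInput);

    // The rows are effectively untyped, so validate their shape at runtime
    if (modelOutput.length !== flights.length) {
        throw new Error("Model returned an unexpected number of rows");
    }
    for (let i = 0; i < flights.length; i++) {
        const prediction = modelOutput[i]["prediction"];
        if (typeof prediction !== "number") {
            throw new Error(`Row ${i} does not contain a numeric "prediction" field`);
        }
        functionsMap.set(flights[i], prediction);
    }
    return functionsMap;
}
```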
Models are executed as part of the runtime of the Function, so all standard Function limits apply. If your Function backs an Action, there are further limits on the number of resulting edits. When calling live deployments, model input and output data is sent over the network, with an upper limit of 50 MB. Including that network transfer, the total execution time of the Function cannot exceed 30 seconds. If you wish to increase this timeout limit for a specific Function, contact your Palantir representative.
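If a Function must score a large number of rows, one way to keep each request under the payload limit is to split the input into batches and call the deployment once per batch. The sketch below is illustrative rather than an official pattern: the batch size is a hypothetical value that must be tuned to your row size, and every batch still counts against the same overall execution time budget.

```typescript
import { FlightModelDeployment } from "@foundry/models-api/deployments";

// Hypothetical batch size; tune it to keep each request comfortably under 50 MB
const BATCH_SIZE = 500;

// Generic helper that scores rows in smaller batches and concatenates the results
async function transformInBatches<I, O>(
    rows: I[],
    transform: (batch: I[]) => Promise<O[]>
): Promise<O[]> {
    const results: O[] = [];
    for (let start = 0; start < rows.length; start += BATCH_SIZE) {
        // Each call sends at most BATCH_SIZE rows over the network
        const batch = rows.slice(start, start + BATCH_SIZE);
        results.push(...(await transform(batch)));
    }
    return results;
}

// Usage inside a Function body, reusing modelInput from the earlier example:
// const modelOutput = await transformInBatches(modelInput, batch => FlightModelDeployment.transform(batch));
```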