Model adapter API

The model adapter's api() method declares the inputs and outputs required to execute the adapter's inference logic. Inputs and outputs are specified separately.

At runtime, the model adapter's predict() method is called with the specified inputs.

Example api() implementation

The following example shows an API specifying one input, named input_dataframe, and one output, named output_dataframe. Both the input and output objects are specified as Pandas dataframes, where the input dataframe has one column of float type named input_feature, and the output dataframe has two columns: (1) a column named input_feature of float type and (2) a column named output_feature of int type.

```python
import palantir_models as pm

class ExampleModelAdapter(pm.ModelAdapter):
    ...

    @classmethod
    def api(cls):
        inputs = {
            "input_dataframe": pm.Pandas(columns=[("input_feature", float)])
        }
        outputs = {
            "output_dataframe": pm.Pandas(columns=[
                ("input_feature", float),
                ("output_feature", int)
            ])
        }
        return inputs, outputs

    ...
```

The API definition can also be extended to support multiple inputs or outputs of arbitrary types:

```python
import palantir_models as pm

class ExampleModelAdapter(pm.ModelAdapter):
    ...

    @classmethod
    def api(cls):
        inputs = {
            "input_dataframe": pm.Pandas(columns=[("input_feature", float)]),
            "input_parameter": pm.Parameter(float, default=0.0)
        }
        outputs = {
            "output_dataframe": pm.Pandas(columns=[
                ("input_feature", float),
                ("output_feature", int)
            ])
        }
        return inputs, outputs

    ...
```

Direct setup of batch deployment and automatic model evaluation in the Modeling Objectives application is only compatible with models that have a single tabular dataset input. If your model adapter requires several inputs, you can set up batch inference in a Python transform.

API types

The types of inputs and outputs for the model adapter API can be specified with the following classes, defined in detail below:

  • pm.Pandas, for Pandas Dataframes
  • pm.Spark, for Spark Dataframes
  • pm.Parameter, for constant, single-valued parameters
  • pm.FileSystem, for Foundry Dataset filesystem access
  • pm.MediaReference, for use with Media References
```python
# The following classes are accessible via `palantir_models` or `pm`

class Pandas:
    def __init__(self, columns: List[Union[str, Tuple[str, type]]]):
        """
        Defines a Pandas Dataframe input or output.
        Column name and type definitions can be specified as a parameter of this type.
        """

class Spark:
    def __init__(self, columns: List[Union[str, Tuple[str, type]]] = []):
        """
        Defines a Spark Dataframe (pyspark.sql.Dataframe) input or output.
        Column name and type definitions can be specified as a parameter of this type.
        """

class Parameter:
    def __init__(self, type: type = Any, default = None):
        """
        Defines a constant single-valued parameter input or output.
        The type of this parameter (default Any) and a default value
        can be specified as parameters of this type.
        """

class FileSystem:
    def __init__(self):
        """
        Defines a FileSystem access input or output object.
        This type is only usable if the model adapter's `transform()` or
        `transform_write()` method is called with Foundry Dataset objects.
        If used as an input, the FileSystem representation of the dataset is returned.
        If used as an output, an object containing an `open()` method is used
        to write files to the output dataset.
        Note that FileSystem outputs are only usable via calling `.transform_write()`.
        """

class MediaReference:
    def __init__(self):
        """
        Defines an input object to be of MediaReference type.
        This input expects either a stringified JSON representation or a
        dictionary representation of a media reference object.
        This type is not supported as an API output.
        """
```

Specifying tabular columns

For Pandas or Spark inputs and outputs, columns can be specified as either a list of strings specifying the column name, or a list of two-object tuples in the format (<name>, <type>) where <name> is a string representing the column name and <type> is a Python type representing the type of the data in the column. If a string is provided for a column definition, its type will default to Any.
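The two column spec forms normalize to (name, type) pairs, with bare strings defaulting to Any. A simplified sketch of that behavior (normalize_columns is an illustrative helper, not part of the library):

```python
from typing import Any, List, Tuple, Union

def normalize_columns(
    columns: List[Union[str, Tuple[str, type]]]
) -> List[Tuple[str, type]]:
    # A bare string becomes (name, Any); (name, type) tuples pass through.
    return [(c, Any) if isinstance(c, str) else c for c in columns]

normalize_columns(["id", ("input_feature", float)])
# [("id", Any), ("input_feature", float)]
```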

Column types

The following types are supported for tabular columns:

  • str
  • int
  • float
  • bool
  • list
  • dict
  • set
  • tuple
  • typing.Any
  • MediaReference

Column types are not enforced and act as a way to signal to consumers of this model adapter what the expected column types are. The only exception is the MediaReference type, which expects each element in the column to be a media reference string and will convert each element to a MediaReference object before being passed to this model adapter's inference logic.
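Conceptually, the conversion applied to a MediaReference-typed column resembles the following sketch. Here _MediaRef is a hypothetical stand-in for the real MediaReference object and the JSON shape is illustrative only:

```python
import json

class _MediaRef:
    """Hypothetical stand-in for palantir_models' MediaReference object."""
    def __init__(self, raw):
        # Accept either a stringified JSON media reference or a dict.
        self.reference = json.loads(raw) if isinstance(raw, str) else raw

def convert_media_column(column):
    # Each element is converted to an object before reaching predict().
    return [_MediaRef(value) for value in column]
```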

Parameter types

For Parameter inputs and outputs, the following types are supported:

  • str
  • int
  • float
  • bool
  • list
  • dict
  • set
  • tuple
  • typing.Any

Parameter types are enforced: any parameter passed to model.transform() that does not match the declared type will raise a runtime error.
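Conceptually, the enforcement behaves like an isinstance check against the declared type. The following is a simplified sketch, not the library's actual implementation:

```python
from typing import Any

def check_parameter(name: str, value, expected_type: type) -> None:
    """Raise if a parameter value does not match its declared API type."""
    if expected_type is Any:
        return  # Any accepts every value
    if not isinstance(value, expected_type):
        raise TypeError(
            f"Parameter '{name}' expected {expected_type.__name__}, "
            f"got {type(value).__name__}"
        )

check_parameter("threshold", 0.5, float)  # passes silently
```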

Example predict() implementation

This example is compatible with the example definition of api() above. This uses Pandas dataframes as inputs and outputs.

```python
import pandas as pd

class ExampleModelAdapter(pm.ModelAdapter):
    ...

    def predict(self, inputs):
        columns = ["input_feature"]
        outputs = pd.DataFrame(inputs)  # copy the input dataframe
        model_input = inputs[columns]
        outputs["prediction"] = self.model.predict(model_input)
        return outputs

    ...
```
  • The column list passed to a Pandas or Spark API type's constructor contains only the required columns; at runtime, extra columns can be included in the input and output dataframes. The example implementation copies the input dataframe and adds a prediction column, which effectively preserves all extra columns.
  • During evaluation, the output should always contain the model prediction and the label. In our example, ensure that the label column is not dropped inside the method.
  • Some models require that only columns used during training are passed to their prediction method. Therefore, we recommend only extracting the feature columns to pass to the model.
  • Some models require the ordering of columns to be preserved. When inputs are passed via REST API requests as JSON objects, the ordering of columns is not necessarily preserved. Therefore, we recommend reordering the columns before passing them to the model's inference method.
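The reordering recommended above can be done with plain pandas column selection, which always returns columns in the requested order. A minimal sketch, where FEATURE_COLUMNS and its contents are illustrative names:

```python
import pandas as pd

FEATURE_COLUMNS = ["input_feature_a", "input_feature_b"]  # hypothetical training-time order

def prepare_model_input(df: pd.DataFrame) -> pd.DataFrame:
    # Select only the feature columns, in the exact training-time order,
    # regardless of how the incoming JSON payload ordered them.
    return df[FEATURE_COLUMNS]

df = pd.DataFrame({"input_feature_b": [2.0], "extra": ["x"], "input_feature_a": [1.0]})
prepare_model_input(df).columns.tolist()  # ["input_feature_a", "input_feature_b"]
```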