The model adapter's `api()` method specifies the inputs and outputs required to execute the model adapter's inference logic. Inputs and outputs are specified separately. At runtime, the model adapter's `predict()` method is called with the specified inputs.
### `api()` implementation

The following example shows an API specifying one input, named `input_dataframe`, and one output, named `output_dataframe`. Both the input and output objects are specified as Pandas dataframes. The input dataframe has one column named `input_feature` of `float` type, and the output dataframe has two columns: (1) a column named `input_feature` of `float` type and (2) a column named `prediction` of `float` type.
```python
import palantir_models as pm

class ExampleModelAdapter(pm.ModelAdapter):
    ...

    @classmethod
    def api(cls):
        inputs = {
            "input_dataframe": pm.Pandas(columns=[("input_feature", float)])
        }
        outputs = {
            "output_dataframe": pm.Pandas(columns=[("input_feature", float), ("prediction", float)])
        }
        return inputs, outputs

    ...
```
The API definition can also be extended to support multiple inputs or outputs of arbitrary types:
```python
import palantir_models as pm

class ExampleModelAdapter(pm.ModelAdapter):
    ...

    @classmethod
    def api(cls):
        inputs = {
            "input_dataframe": pm.Pandas(columns=[("input_feature", float)]),
            "input_parameter": pm.Parameter(float, default=1.0)
        }
        outputs = {
            "output_dataframe": pm.Pandas(columns=[("input_feature", float), ("prediction", float)])
        }
        return inputs, outputs

    ...
```
Direct setup of batch deployment and automatic model evaluation in the Modeling Objectives application is only compatible with models that have a single tabular dataset input. If your model adapter requires several inputs, you can set up batch inference in a Python transform.
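As a rough sketch of that pattern, the transform below runs the two-input adapter defined above against a Foundry dataset. The dataset and model paths are placeholders, `ModelInput` is assumed to be importable from `palantir_models.transforms`, and passing API inputs positionally and reading results by output name are assumptions rather than details documented on this page.

```python
from transforms.api import transform, Input, Output
from palantir_models.transforms import ModelInput  # assumed import path


@transform(
    model=ModelInput("/path/to/model_asset"),       # placeholder model path
    features=Input("/path/to/input_dataset"),       # placeholder input dataset
    predictions=Output("/path/to/output_dataset"),  # placeholder output dataset
)
def compute(model, features, predictions):
    # Run inference: the tabular input and the constant parameter are assumed
    # to be passed in the order declared in api().
    results = model.transform(features, 2.0)
    # Write the output named "output_dataframe" in api() back to a Foundry dataset.
    predictions.write_pandas(results.output_dataframe)
```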
The types of inputs and outputs for the model adapter API can be specified with the following classes, defined in detail below:
- `pm.Pandas`, for Pandas dataframes
- `pm.Spark`, for Spark dataframes
- `pm.Parameter`, for constant, single-valued parameters
- `pm.FileSystem`, for Foundry Dataset filesystem access
- `pm.MediaReference`, for use with Media References
```python
# The following classes are accessible via `palantir_models` or `pm`

class Pandas:
    def __init__(self, columns: List[Union[str, Tuple[str, type]]]):
        """
        Defines a Pandas dataframe input or output.
        Column name and type definitions can be specified as a parameter of this type.
        """

class Spark:
    def __init__(self, columns: List[Union[str, Tuple[str, type]]] = []):
        """
        Defines a Spark dataframe (pyspark.sql.DataFrame) input or output.
        Column name and type definitions can be specified as a parameter of this type.
        """

class Parameter:
    def __init__(self, type: type = Any, default = None):
        """
        Defines a constant, single-valued parameter input or output.
        The type of this parameter (default Any) and its default value can be
        specified as parameters of this type.
        """

class FileSystem:
    def __init__(self):
        """
        Defines a FileSystem access input or output object.
        This type is only usable if the model adapter's `transform()` or
        `transform_write()` method is called with Foundry Dataset objects.
        If used as an input, the FileSystem representation of the dataset is returned.
        If used as an output, an object containing an `open()` method is used to
        write files to the output dataset.
        Note that FileSystem outputs are only usable via calling `.transform_write()`.
        """

class MediaReference:
    def __init__(self):
        """
        Defines an input object to be of MediaReference type.
        This input expects either a stringified JSON representation or a dictionary
        representation of a media reference object.
        This type is not supported as an API output.
        """
```
For `Pandas` or `Spark` inputs and outputs, columns can be specified as either a list of strings specifying the column names, or a list of two-object tuples in the format `(<name>, <type>)`, where `<name>` is a string representing the column name and `<type>` is a Python type representing the type of the data in the column. If a string is provided for a column definition, its type will default to `Any`.
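For instance, a single column specification can mix both forms; the column names below are illustrative:

```python
import palantir_models as pm

# Columns given only by name default to the Any type; tuples pin an expected type.
tabular_input = pm.Pandas(columns=[
    "record_id",             # name only: type defaults to Any
    ("temperature", float),  # (<name>, <type>) tuple
    ("is_valid", bool),
])
```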
The following types are supported for tabular columns:
- `str`
- `int`
- `float`
- `bool`
- `list`
- `dict`
- `set`
- `tuple`
- `typing.Any`
- `MediaReference`
Column types are not enforced; they act as a way to signal to consumers of this model adapter what the expected column types are. The only exception is the `MediaReference` type, which expects each element in the column to be a media reference string and will convert each element to a `MediaReference` object before it is passed to this model adapter's inference logic.
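As a minimal sketch of that behavior (assuming the `pm.MediaReference` class can be passed directly as a column type, and with illustrative adapter, input, and column names), an adapter accepting a column of media references might declare its API as follows:

```python
import palantir_models as pm

class MediaModelAdapter(pm.ModelAdapter):
    ...

    @classmethod
    def api(cls):
        # Assumption: pm.MediaReference as the column type means each media
        # reference string in the "image" column is converted to a MediaReference
        # object before being passed to predict().
        inputs = {
            "input_dataframe": pm.Pandas(columns=[("image", pm.MediaReference)])
        }
        outputs = {
            "output_dataframe": pm.Pandas(columns=[("label", str)])
        }
        return inputs, outputs
```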
For `Parameter` inputs and outputs, the following types are supported:
- `str`
- `int`
- `float`
- `bool`
- `list`
- `dict`
- `set`
- `tuple`
- `typing.Any`
Parameter types are enforced, and any parameter input to `model.transform()` that does not correspond to the designated type will throw a runtime error.
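As a hedged illustration, assuming inputs are passed to `model.transform()` positionally in the order they are declared in `api()`, the parameter type check behaves along these lines:

```python
# With "input_parameter": pm.Parameter(float, default=1.0) declared in api():
model.transform(input_dataframe, 2.0)    # accepted: 2.0 matches the declared float type
model.transform(input_dataframe, "2.0")  # raises a runtime error: str does not match float
```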
### `predict()` implementation

The following example is compatible with the example definition of `api()` above. It uses a Pandas dataframe as the input and output, alongside a parameter.
```python
import palantir_models as pm

class ExampleModelAdapter(pm.ModelAdapter):
    ...

    @classmethod
    def api(cls):
        inputs = {
            "input_dataframe": pm.Pandas(columns=[("input_feature", float)]),
            "input_parameter": pm.Parameter(float, default=1.0)
        }
        outputs = {
            "output_dataframe": pm.Pandas(columns=[("input_feature", float), ("prediction", float)])
        }
        return inputs, outputs

    def predict(self, input_dataframe, input_parameter):
        # Build the output dataframe declared in api(): the original feature column
        # plus a prediction column scaled by the parameter.
        output_dataframe = input_dataframe.copy()
        output_dataframe["prediction"] = self.model.predict(input_dataframe) * input_parameter
        return output_dataframe

    ...
```