In Foundry, a model is an artifact for inference that contains a machine learning, forecasting, optimization, or physical model, or a set of business rules. Within a use case, models encode knowledge about your data to create predictions and empower decisions.
Models developed inside or integrated into Palantir provide:
- Full version history
- Granular model permissioning
- Automatic dependency management
- Model lineage
- API management
A Model resource in Palantir comprises two related but distinct components:
- Model artifacts: The model weights or container in which the trained model is saved.
- Model adapter: The logic that describes how the platform interacts with the model artifacts to load, initialize, and perform inference with the model.
An adapter is published as part of a Python library that enables communication with the stored model artifacts, allowing the platform to load, initialize, and run inference on any kind of model. Adapters are designed to be flexible: they can wrap manually uploaded model files or checkpoints, models trained in Foundry, or arbitrary functional logic.
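To make this division of responsibilities concrete, the sketch below shows the general shape of an adapter in plain Python: one method that reconstructs the model from saved artifacts, and one that serves inference. The class and method names (`ModelAdapterSketch`, `load`, `predict`, `artifacts_dir`) are illustrative assumptions and do not reflect Foundry's actual adapter library, where the adapter would instead subclass the platform's provided base class.

```python
import pickle
from abc import ABC, abstractmethod

import pandas as pd


class ModelAdapterSketch(ABC):
    """Illustrative adapter interface: load artifacts, then serve inference."""

    @classmethod
    @abstractmethod
    def load(cls, artifacts_dir: str) -> "ModelAdapterSketch":
        """Reconstruct the adapter from saved model artifacts."""

    @abstractmethod
    def predict(self, df_in: pd.DataFrame) -> pd.DataFrame:
        """Run inference on tabular input and return predictions."""


class PickledRegressorAdapter(ModelAdapterSketch):
    """Hypothetical adapter wrapping a pickled regressor saved as an artifact."""

    def __init__(self, model):
        self.model = model

    @classmethod
    def load(cls, artifacts_dir: str) -> "PickledRegressorAdapter":
        # Deserialize the trained estimator from the artifacts directory
        # (file name is an assumption for this sketch).
        with open(f"{artifacts_dir}/model.pkl", "rb") as f:
            return cls(pickle.load(f))

    def predict(self, df_in: pd.DataFrame) -> pd.DataFrame:
        # Append a prediction column so consumers receive a tabular output.
        df_out = df_in.copy()
        df_out["prediction"] = self.model.predict(df_in)
        return df_out
```

The key design point the sketch illustrates is that the adapter, not the platform, owns the knowledge of how the artifacts were serialized and what inputs and outputs the model expects; the platform only needs the adapter's load and inference entry points.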