Palantir's suite of modeling products enables users to develop, manage, and operationalize models. This page compares the available products to help you choose the right tool for each task.
For guided assistance with modeling tasks, AI FDE offers a dedicated machine learning mode that helps you train, evaluate, deploy, and tune models. The agent can guide you through the full workflow, from feature engineering through model training and deployment, using either Model Studio or pro-code repositories. To get started, select the machine learning mode from the AI FDE mode selector, or describe your modeling task and let the agent select the mode automatically.
| Product | Details |
|---|---|
| Pipeline Builder | Large-scale point-and-click data transformation |
| Code Workspaces | Interactive, pro-code data analysis and transformation in familiar environments such as JupyterLab® |
| Python Transforms | PySpark data pipeline development in Foundry's web-based IDE, Code Repositories |
No-code model training tools are available in Model Studio, providing a simple point-and-click interface for creating production-grade machine learning models.
The palantir_models library provides flexible tooling to publish and consume models within the Palantir platform, using the concept of model adapters. The foundry_ml library, its predecessor, has been formally deprecated as of October 2025.
| Product | Library support | Details |
|---|---|---|
| Code Workspaces | palantir_models | Interactive model development in Jupyter® notebooks |
| Code Repositories | palantir_models | Powerful web-based IDE with native CI/CD features and support for modeling workflows; less interactive than notebooks |
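The model adapter concept above can be illustrated with a minimal plain-Python sketch. `SketchModelAdapter`, its pickle-based serialization, and the row-dict interface are all illustrative stand-ins, not the actual `palantir_models` `ModelAdapter` API, which defines its own save/load and typed API hooks:

```python
import pickle
from typing import Any

class SketchModelAdapter:
    """Illustrative adapter: wraps a trained model with the save/load/predict
    hooks an adapter is expected to provide. Hypothetical, for intuition only."""

    def __init__(self, model: Any):
        self.model = model

    def save(self, path: str) -> None:
        # Serialize the wrapped model so it can be stored and republished.
        with open(path, "wb") as f:
            pickle.dump(self.model, f)

    @classmethod
    def load(cls, path: str) -> "SketchModelAdapter":
        # Rehydrate the wrapped model from storage.
        with open(path, "rb") as f:
            return cls(pickle.load(f))

    def predict(self, rows: list) -> list:
        # Single tabular input in, single tabular output out:
        # each row gains a "prediction" column.
        return [{**row, "prediction": self.model(row)} for row in rows]

# Wrap a trivial "model" and score two rows.
adapter = SketchModelAdapter(lambda row: 2 * row["x"])
scored = adapter.predict([{"x": 1.5}, {"x": 3.0}])
```

The point of the pattern is that consumers (pipelines, deployments, functions) only talk to the adapter's declared interface, never to the underlying model object directly.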
Tooling is also available for tracking model training jobs.
| Product | Details |
|---|---|
| Experiments | Framework for logging metrics and hyperparameters during a model training job |
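As a rough illustration of what logging metrics and hyperparameters during a training job involves, here is a self-contained sketch; the `Experiment` class and its method names are hypothetical stand-ins, not the Foundry Experiments API:

```python
from collections import defaultdict

class Experiment:
    """Hypothetical experiment tracker: hyperparameters are logged once,
    metrics are logged as (step, value) pairs over the training run."""

    def __init__(self, name: str):
        self.name = name
        self.params: dict = {}
        self.metrics: dict = defaultdict(list)

    def log_param(self, key: str, value) -> None:
        # Hyperparameters are fixed for the run.
        self.params[key] = value

    def log_metric(self, key: str, value: float, step: int) -> None:
        # Metrics form a time series keyed by training step.
        self.metrics[key].append((step, value))

# Log a hyperparameter and a loss curve over three steps.
exp = Experiment("demo-run")
exp.log_param("learning_rate", 0.01)
for step, loss in enumerate([0.9, 0.5, 0.3]):
    exp.log_metric("loss", loss, step)
```

Structuring metrics as per-step series is what lets a tracking UI render training curves and compare runs side by side.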
Models can be used to run large-scale batch inference pipelines across datasets.
| Product | Details | Caveats |
|---|---|---|
| Pipeline Builder | No-code model inference directly on the pipeline canvas using the trained model node. Models run as isolated sidecars alongside Spark executors and automatically use the latest model version. | Only supports models with a single tabular input and output. Streaming and Lightweight execution modes are not yet supported. |
| Python Transforms | Batch inference can be run directly in Python transforms. Supports pinning a specific model version. | Using the @lightweight decorator and model sidecars is recommended. |
| Modeling objective batch deployments | Modeling Objectives offers broader model management features such as model release management and evaluation. | Does not support multi-output models, external models, models as sidecars, or deployment via Marketplace, as detailed here. |
| Jupyter® Notebook | Users can create scheduled training and/or inference jobs directly from Code Workspaces. | Only supports running inference models created from the same notebook; use Python Transforms to orchestrate models created elsewhere. |
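The core pattern behind the options above, loading a model once and scoring a dataset chunk by chunk, can be sketched in plain Python. `batch_inference`, the batch size, and the row-dict model interface are illustrative assumptions, not the transforms API:

```python
from typing import Callable, Iterable, Iterator

def batch_inference(
    rows: Iterable[dict],
    model: Callable[[dict], float],
    batch_size: int = 1000,
) -> Iterator[dict]:
    """Score rows in fixed-size batches, mirroring how a batch pipeline
    applies a single loaded model version across dataset partitions."""
    batch: list = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            # Flush a full batch of scored rows.
            yield from ({**r, "prediction": model(r)} for r in batch)
            batch = []
    # Flush the final, possibly partial, batch.
    yield from ({**r, "prediction": model(r)} for r in batch)

# Score five rows in batches of two with a trivial model.
scored = list(batch_inference(
    ({"x": i} for i in range(5)),
    model=lambda r: r["x"] + 1,
    batch_size=2,
))
```

Batching keeps memory bounded regardless of dataset size, which is why batch inference scales to large datasets where row-at-a-time scoring would not.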
Models can be deployed in Foundry behind a REST API; deploying a model operationalizes the model for use both inside and outside of Foundry.
| Product | Details |
|---|---|
| Model direct deployments | Auto-upgrading model deployments; best for quick iteration and deployment. |
| Modeling objective live deployments | Production-grade modeling project management; modeling objectives provide tooling for model release management and evaluation. Does not support deployment via Marketplace as detailed here. |
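Calling a model deployed behind a REST API typically amounts to an authenticated JSON POST. The endpoint URL, token handling, and `"rows"` payload key below are placeholders; consult your deployment's documented contract for the real request and response shape:

```python
import json
import urllib.request

def build_request(url: str, token: str, rows: list) -> urllib.request.Request:
    """Assemble an authenticated JSON POST. The "rows" payload key is a
    placeholder for whatever input schema the deployment actually expects."""
    return urllib.request.Request(
        url,
        data=json.dumps({"rows": rows}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def call_deployment(url: str, token: str, rows: list) -> dict:
    # Send the request and parse the JSON response body.
    with urllib.request.urlopen(build_request(url, token, rows)) as resp:
        return json.load(resp)

# Build (but do not send) a request against a placeholder endpoint.
req = build_request("https://example.com/deployment/predict", "token", [{"x": 1.0}])
```

Because the model sits behind a plain HTTP endpoint, any client that can hold a valid token, inside or outside the platform, can consume it the same way.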
Publishing models as functions makes it easy to use models for live inference in downstream Foundry applications, including Workshop, Slate, actions, and more.
| Product | Best for |
|---|---|
| Direct function publication | No-code function creation on models with live deployments, allowing integration with the Ontology. The same functionality is available in the Model and Modeling Objectives applications. |
| Importing model functions into Functions repositories | Import model functions into TypeScript v1, v2, or Python functions to further process predictions (for example, to make Ontology edits) with support for Model API type checking. |