Palantir's modeling suite of products enables users to develop, manage, and operationalize models. This page compares different products to help you choose the right tool for your needs.
Product | Details |
---|---|
Pipeline Builder | Large-scale point-and-click data transformation |
Code Workspaces | Interactive, pro-code data analysis and transformation in familiar environments such as JupyterLab® |
Python Transforms | PySpark data pipeline development in Foundry's web-based IDE, Code Repositories |
The foundry_ml library and dataset-backed models have entered their sunset period and are scheduled to be deprecated and removed alongside the Python 3.9 deprecation in October 2025.
The palantir_models library provides flexible tooling to publish and consume models within the Palantir platform, using the concept of model adapters.
Library | Details |
---|---|
palantir_models | Flexible library to publish and consume models in Foundry through Model Adapters; supports models produced in platform, external models, and containerized models |
foundry_ml | Legacy model development library scheduled to be removed in October 2025; use palantir_models instead of foundry_ml unless absolutely necessary |
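The model adapter concept can be illustrated with a plain-Python sketch. All names below (`SklearnStyleAdapter`, `load`, `run_inference`) are hypothetical and chosen for illustration; this shows the pattern palantir_models formalizes, not its actual API:

```python
# Illustrative sketch of the model-adapter pattern (hypothetical names,
# not the real palantir_models API): an adapter wraps a trained model,
# declaring how to load it and how to run inference against it.

class SklearnStyleAdapter:
    """Adapter wrapping any object that exposes a .predict(rows) method."""

    def __init__(self, model):
        self.model = model

    @classmethod
    def load(cls, saved_state):
        # In a real adapter, saved_state would be deserialized model files.
        return cls(saved_state["model"])

    def run_inference(self, rows):
        # Normalize the wrapped model's output into a common shape.
        return {"predictions": self.model.predict(rows)}


class ThresholdModel:
    """Toy stand-in for a trained model."""

    def predict(self, rows):
        return [1 if x > 0.5 else 0 for x in rows]


adapter = SklearnStyleAdapter.load({"model": ThresholdModel()})
result = adapter.run_inference([0.2, 0.9, 0.7])
# result == {"predictions": [0, 1, 1]}
```

Because the adapter, not the consumer, owns loading and inference, the same model can be reused across batch pipelines, deployments, and functions without callers knowing how it was trained or serialized.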
Product | Library support | Details |
---|---|---|
Code Workspaces | palantir_models | Interactive model development in Jupyter® notebooks |
Code Repositories | palantir_models, foundry_ml (until deprecation) | Powerful web-based IDE with native CI/CD features and support for modeling workflows; less interactive than notebooks |
Code Workbooks | foundry_ml | Modeling support in Code Workbooks is limited to foundry_ml models, which are scheduled to be deprecated in October 2025 |
Models can be used to run large-scale batch inference pipelines across datasets.
Product | Details | Caveats |
---|---|---|
Modeling objective batch deployments | Batch inference can be set up from Modeling Objectives, which offers broader model management features such as model release management, upgrades, evaluation, and more | Does not support multi-output models |
Python Transforms | Batch inference can be run directly in Python transforms | N/A |
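The batch inference pattern above can be sketched in plain Python. The helper and model names here are hypothetical; a real Foundry pipeline would read inputs and write outputs through the transforms API rather than in-memory lists:

```python
# Sketch of batch inference over a dataset (hypothetical names; a real
# pipeline would stream dataset rows through the transforms API instead
# of holding them in a list).

def batch_inference(model, records, batch_size=2):
    """Run the model over records in fixed-size batches and collect outputs."""
    outputs = []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        outputs.extend(model.predict(batch))
    return outputs


class DoublingModel:
    """Toy model standing in for a trained model."""

    def predict(self, batch):
        return [x * 2 for x in batch]


print(batch_inference(DoublingModel(), [1, 2, 3, 4, 5]))
# → [2, 4, 6, 8, 10]
```

Batching keeps memory bounded when the dataset is much larger than what the model can score in one call, which is the typical situation in pipeline-scale inference.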
Models can be deployed in Foundry behind a REST API; deploying a model operationalizes the model for use both inside and outside of Foundry.
Product | Details |
---|---|
Model direct deployments | Auto-upgrading model deployments; best for quick iteration and deployment |
Modeling objective live deployments | Production-grade modeling project management; modeling objectives provide tooling for model release management, upgrades, evaluation, and more |
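A deployed model's REST API can be called from any HTTP client. The sketch below is a minimal illustration with stdlib Python only; the endpoint URL, `<deployment-id>` placeholder, and `{"df_in": ...}` payload shape are assumptions for illustration, since the real request contract depends on the input and output names declared in the model's API:

```python
import json
import urllib.request

# Hypothetical endpoint -- the actual URL and payload shape depend on the
# deployment's declared model API; check the deployment page in Foundry.
ENDPOINT = "https://stack.palantirfoundry.com/api/<deployment-id>/transform"


def build_inference_request(endpoint, rows, token):
    """Build an authenticated JSON POST request for a deployed model.

    The {"df_in": rows} payload is an assumed shape for illustration;
    real deployments expect the input names declared in the model's API.
    """
    body = json.dumps({"df_in": rows}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )


def call_deployment(rows, token):
    req = build_inference_request(ENDPOINT, rows, token)
    with urllib.request.urlopen(req) as resp:  # performs the HTTP call
        return json.load(resp)
```

Separating request construction from the network call keeps the payload logic testable and makes it easy to swap in a different HTTP client.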
Publishing models as functions enables using models in downstream applications, including Workshop, Slate, Actions, and more.
Product | Best for |
---|---|
Direct publish functions | No-code function creation |
Functions on Models | Operationalize models using TypeScript functions, allowing a deeper integration with the Ontology |