Palantir's modeling suite of products enables users to develop, manage, and operationalize models. This page compares different products to help you choose the right tool for your needs.
Product | Details |
---|---|
Pipeline Builder | Large-scale, point-and-click data transformation |
Code Workspaces | Interactive, pro-code data analysis and transformation in familiar environments such as JupyterLab® |
Python Transforms | PySpark data pipeline development in Foundry's web-based IDE, Code Repositories |
The foundry_ml library and dataset-backed models have entered the planned deprecation phase and will become unavailable alongside the Python 3.9 deprecation in October 2025. Support remains available until the deprecation date. You should migrate your workflows to the palantir_models library now; contact Palantir Support if you require additional help migrating your workflows.
The palantir_models library provides flexible tooling to publish and consume models within the Palantir platform, using the concept of model adapters.
Library | Details |
---|---|
palantir_models | Flexible library to publish and consume models in Foundry through Model Adapters; supports models produced in platform, external models, and containerized models |
foundry_ml | Legacy model development library scheduled to be removed in October 2025; use palantir_models instead of foundry_ml unless absolutely necessary |
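The adapter concept can be illustrated with a plain-Python sketch. This is a hypothetical stand-in, not the palantir_models API: the class names, `api()` schema shape, and `predict` signature here are all assumptions made for illustration. The idea is that an adapter wraps a trained model and declares its input/output contract so the platform can invoke any model uniformly.

```python
# Conceptual sketch of the model adapter pattern (hypothetical;
# not the real palantir_models API). An adapter wraps a trained
# model and exposes a uniform schema + predict interface.

class MeanThresholdModel:
    """Toy 'trained model': predicts 1 if a value exceeds a learned mean."""
    def __init__(self, mean: float):
        self.mean = mean

    def score(self, value: float) -> int:
        return 1 if value > self.mean else 0


class ModelAdapter:
    """Uniform wrapper: declares the I/O contract and delegates prediction."""
    def __init__(self, model: MeanThresholdModel):
        self.model = model

    def api(self) -> dict:
        # Declares the expected input and output schema for consumers.
        return {"inputs": {"value": "float"}, "outputs": {"prediction": "int"}}

    def predict(self, rows: list) -> list:
        return [{"prediction": self.model.score(r["value"])} for r in rows]


adapter = ModelAdapter(MeanThresholdModel(mean=0.5))
print(adapter.predict([{"value": 0.9}, {"value": 0.1}]))
```

Because consumers only depend on the adapter's declared schema and predict method, the underlying model (in-platform, external, or containerized) can change without breaking downstream callers.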
Product | Library support | Details |
---|---|---|
Code Workspaces | palantir_models | Interactive model development in Jupyter® notebooks |
Code Repositories | palantir_models, foundry_ml (until deprecation) | Powerful web-based IDE with native CI/CD features and support for modeling workflows; less interactive than notebooks |
Code Workbooks | foundry_ml | Modeling support in Code Workbooks is limited to foundry_ml models, which are scheduled to be deprecated in October 2025 |
Product | Details |
---|---|
Experiments | Framework for logging metrics and hyperparameters during a model training job |
Models can be used to run large-scale batch inference pipelines across datasets.
Product | Details | Caveats |
---|---|---|
Modeling objective batch deployments | Batch inference can be set up from Modeling Objectives, which offers broader model management features such as model release management, upgrades, evaluation, and more | Does not support multi-output models |
Python transforms | Batch inference can be run directly in Python transforms | N/A |
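As a rough illustration of what a batch inference step does, the plain-Python sketch below applies a model's predict function to every row of an input dataset and emits an output dataset with a prediction column. The function name, row shape, and toy model here are hypothetical; this is not the Foundry transforms API.

```python
# Hypothetical sketch of a batch inference step (not the Foundry
# transforms API): apply a predict function to every row of an
# input dataset, attaching the result as a new column.

def batch_inference(dataset: list, predict) -> list:
    """Run predict over each row and add the result as 'prediction'."""
    return [{**row, "prediction": predict(row)} for row in dataset]


# Toy model: flag rows whose amount exceeds a threshold.
model = lambda row: int(row["amount"] > 100)

rows = [{"id": 1, "amount": 250}, {"id": 2, "amount": 40}]
print(batch_inference(rows, model))
```

In a real pipeline the input and output would be Foundry datasets and the predict call would go through a model adapter, but the row-wise shape of the computation is the same.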
Models can be deployed in Foundry behind a REST API; deploying a model operationalizes the model for use both inside and outside of Foundry.
Product | Details |
---|---|
Model direct deployments | Auto-upgrading model deployments; best for quick iteration and deployment |
Modeling objective live deployments | Production-grade modeling project management; modeling objectives provide tooling for model release management, upgrades, evaluation, and more |
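A deployed model is typically invoked by POSTing JSON to its endpoint. The sketch below builds such a request with the standard library; the endpoint path (`/predict`) and payload shape (`{"rows": [...]}`) are assumptions for illustration, not the documented Foundry deployment API, and the URL is a placeholder.

```python
import json
from urllib.request import Request

# Hypothetical sketch of calling a model deployed behind a REST API.
# The endpoint path and payload field names are assumptions, not the
# documented Foundry deployment contract.

def build_inference_request(base_url: str, rows: list) -> Request:
    """Construct a POST request carrying rows to score as JSON."""
    payload = json.dumps({"rows": rows}).encode("utf-8")
    return Request(
        url=f"{base_url}/predict",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_inference_request("https://example.com/model", [{"value": 0.9}])
print(req.full_url)  # https://example.com/model/predict
```

Sending the request (and attaching platform authentication headers) is left out here, since those details depend on the deployment's actual contract.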
Publishing models as functions enables using models in downstream applications, including Workshop, Slate, Actions, and more.
Product | Best for |
---|---|
Direct function publication | No-code function creation, allowing integration with the Ontology |
ModelFunction TypeScript annotation | This functionality produces dataset-backed models, which will be deprecated in October 2025; it should no longer be used |