Selecting the right modeling tool

Palantir's modeling suite of products enables users to develop, manage, and operationalize models. This page compares different products to help you choose the right tool for your needs.

Feature engineering

| Product | Details |
| --- | --- |
| Pipeline Builder | Large-scale point-and-click data transformation |
| Code Workspaces | Interactive, pro-code data analysis and transformation in familiar environments such as JupyterLab® |
| Python Transforms | PySpark data pipeline development in Code Repositories, Foundry's web-based IDE |
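For example, a minimal Python transform in Code Repositories looks roughly like the sketch below; the dataset paths and column names are hypothetical.

```python
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


# Hypothetical dataset paths; replace with resources from your project.
@transform_df(
    Output("/Project/pipelines/transactions_features"),
    source=Input("/Project/raw/transactions"),
)
def compute(source):
    # Derive a simple feature with PySpark column expressions.
    return source.filter(F.col("amount") > 0).withColumn(
        "amount_log", F.log1p(F.col("amount"))
    )
```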

Model training

Available libraries

The foundry_ml library and dataset-backed models have entered their sunset period and are scheduled to be deprecated and removed alongside the Python 3.9 deprecation in October 2025.

The palantir_models library provides flexible tooling to publish and consume models within the Palantir platform, using the concept of model adapters.

| Library | Details |
| --- | --- |
| palantir_models | Flexible library to publish and consume models in Foundry through model adapters; supports models produced in platform, external models, and containerized models |
| foundry_ml | Legacy model development library scheduled to be removed in October 2025; use palantir_models instead of foundry_ml unless absolutely necessary |
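As a rough illustration of the model adapter concept, the sketch below wraps a model with a scikit-learn-style `predict()` using palantir_models. The class name, column names, and the choice of `DillSerializer` are assumptions; adjust them to your model.

```python
import palantir_models as pm
from palantir_models_serializers import DillSerializer


class ExampleModelAdapter(pm.ModelAdapter):
    """Hypothetical adapter around a model with a scikit-learn-style predict()."""

    @pm.auto_serialize(model=DillSerializer())
    def __init__(self, model):
        self.model = model

    @classmethod
    def api(cls):
        # Declare the tabular input and output this model exposes.
        inputs = {"df_in": pm.Pandas(columns=[("feature", float)])}
        outputs = {"df_out": pm.Pandas(columns=[("feature", float), ("prediction", float)])}
        return inputs, outputs

    def predict(self, df_in):
        df_in["prediction"] = self.model.predict(df_in[["feature"]])
        return df_in
```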

Code authoring environments

| Product | Library support | Details |
| --- | --- | --- |
| Code Workspaces | palantir_models | Interactive model development in Jupyter® notebooks |
| Code Repositories | palantir_models, foundry_ml (until deprecation) | Powerful web-based IDE with native CI/CD features and support for modeling workflows; less interactive than notebooks |
| Code Workbooks | foundry_ml | Modeling support is limited to foundry_ml models, which are scheduled to be deprecated in October 2025 |

Batch inference

Models can be used to run large-scale batch inference pipelines across datasets.

| Product | Details | Caveats |
| --- | --- | --- |
| Modeling objective batch deployments | Batch inference can be set up from Modeling Objectives, which offer broader model management features such as model release management, upgrades, and evaluation | Does not support multi-output models |
| Python transforms | Batch inference can be run directly in Python transforms | N/A |
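For the Python transforms route, a batch inference job can look roughly like the sketch below. The model and dataset paths are hypothetical, and the results attribute (`df_out`) must match the output name declared in the model adapter's API.

```python
from transforms.api import transform, Input, Output
from palantir_models.transforms import ModelInput


# Hypothetical resource paths; replace with your own model and datasets.
@transform(
    predictions=Output("/Project/datasets/predictions"),
    model=ModelInput("/Project/models/example_model"),
    features=Input("/Project/datasets/features"),
)
def compute(predictions, model, features):
    # Run the model adapter's inference over the whole input dataset.
    results = model.transform(features)
    predictions.write_pandas(results.df_out)
```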

Model deployment

Models can be deployed in Foundry behind a REST API; deploying a model operationalizes the model for use both inside and outside of Foundry.

| Product | Details |
| --- | --- |
| Model direct deployments | Auto-upgrading model deployments; best for quick iteration and deployment |
| Modeling objective live deployments | Production-grade modeling project management; modeling objectives provide tooling for model release management, upgrades, evaluation, and more |
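Once a deployment is live, it can be queried like any REST API. The sketch below is illustrative only: the endpoint URL and payload shape are assumptions, so copy the exact request format from the deployment's query page in Foundry and keep tokens in a secure store.

```python
import requests

# Hypothetical values; take the real URL and payload schema from the
# deployment page in Foundry rather than hard-coding them like this.
DEPLOYMENT_URL = "https://<your-stack>.palantirfoundry.com/<path-from-deployment-page>"
TOKEN = "<bearer-token>"

response = requests.post(
    DEPLOYMENT_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"df_in": [{"feature": 1.0}]},  # must match the model's declared API
)
response.raise_for_status()
print(response.json())
```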

Learn more about the difference between direct deployments and deployments through modeling objectives.

Functions integration

Publishing a model as a function makes it available to downstream applications, including Workshop, Slate, Actions, and more.

| Product | Best for |
| --- | --- |
| Direct publish functions | No-code function creation |
| Functions on Models | Operationalizing models with TypeScript functions, allowing deeper integration with the Ontology |