Selecting the right modeling tool

Palantir's modeling suite of products enables users to develop, manage, and operationalize models. This page compares different products to help you choose the right tool for your needs.

Feature engineering

| Product | Details |
| --- | --- |
| Pipeline Builder | Large-scale point-and-click data transformation |
| Code Workspaces | Interactive, pro-code data analysis and transformation in familiar environments such as JupyterLab® |
| Python Transforms | PySpark data pipeline development in Code Repositories, Foundry's web-based IDE |

No-code model training

No-code model training tools are available in Model Studio, providing a simple point-and-click interface for creating production-grade machine learning models.

Pro-code model training

Available libraries

The palantir_models library provides flexible tooling to publish and consume models within the Palantir platform, using the concept of model adapters. The foundry_ml library, its predecessor, has been formally deprecated as of October 2025.
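The adapter concept can be pictured as a thin wrapper that gives any trained model a uniform interface for the platform to call. The sketch below is purely illustrative of that pattern; the class and method names are hypothetical and do not reproduce the actual palantir_models API.

```python
class ModelAdapter:
    """Illustrative adapter: wraps an arbitrary model behind a
    uniform predict() interface. Hypothetical, not the real
    palantir_models class of the same name."""

    def __init__(self, model):
        self.model = model

    def predict(self, rows):
        # Translate between tabular platform input (a list of dicts)
        # and the underlying model's native call signature.
        return [self.model(row) for row in rows]


# The wrapped "model" can be any callable; here, a trivial scoring rule.
adapter = ModelAdapter(lambda row: row["feature"] * 2.0)
predictions = adapter.predict([{"feature": 1.0}, {"feature": 2.5}])
```

The platform only ever talks to the adapter, so the same publishing and consumption machinery works regardless of the framework the underlying model was trained with.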

Code authoring environments

| Product | Library support | Details |
| --- | --- | --- |
| Code Workspaces | palantir_models | Interactive model development in Jupyter® notebooks |
| Code Repositories | palantir_models | Powerful web-based IDE with native CI/CD features and support for modeling workflows; less interactive than notebooks |

Training metrics tracking

| Product | Details |
| --- | --- |
| Experiments | Framework for logging metrics and hyperparameters during a model training job |

Batch inference

Models can be used to run large-scale batch inference pipelines across datasets.

| Product | Details | Caveats |
| --- | --- | --- |
| Python transforms | Batch inference can be run directly in Python transforms. Supports pinning a specific model version. | Using the @lightweight decorator and models as sidecars is recommended. |
| Modeling objective batch deployments | Modeling Objectives offers broader model management features such as model release management and evaluation. | Does not support multi-output models, external models, models as sidecars, or deployment via Marketplace. |
| Jupyter® Notebook | Scheduled training and/or inference jobs can be created directly from Code Workspaces. | Only supports running inference on models created in the same notebook; use Python transforms to orchestrate models created elsewhere. |

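Whichever product runs it, batch inference boils down to the same pattern: load the model once, then score the dataset in chunks rather than row by row. A minimal stdlib-only sketch of that pattern (the names here are illustrative, not a Foundry API):

```python
def batch_predict(model, records, chunk_size=1000):
    """Score records in fixed-size chunks, loading the model only once.

    Assumes model.predict accepts a list of records and returns a
    list of predictions of the same length.
    """
    for start in range(0, len(records), chunk_size):
        chunk = records[start:start + chunk_size]
        yield from model.predict(chunk)


class Doubler:
    """Stand-in model for demonstration purposes."""
    def predict(self, chunk):
        return [x * 2 for x in chunk]


results = list(batch_predict(Doubler(), [1, 2, 3], chunk_size=2))
```

In a real pipeline the chunking is handled for you (for example, by Spark partitions in Python transforms); the point is that model load cost is paid once, not per record.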
Model deployment

Models can be deployed in Foundry behind a REST API; deploying a model operationalizes it for use both inside and outside of Foundry.
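From a consumer's point of view, a deployed model is just an authenticated HTTP endpoint. The sketch below builds such a request using only the standard library; the URL path and payload shape are assumptions for illustration, not the real deployment contract, which is defined by the deployment's own API documentation.

```python
import json
from urllib.request import Request


def build_inference_request(host, deployment_id, token, rows):
    """Build a POST request for a deployed model.

    The path ("/model-deployments/.../inference") and body shape
    ({"rows": [...]}) are hypothetical placeholders.
    """
    url = f"https://{host}/model-deployments/{deployment_id}/inference"
    body = json.dumps({"rows": rows}).encode("utf-8")
    return Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

The returned `Request` can be sent with `urllib.request.urlopen` (or any HTTP client) once pointed at a real deployment.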

| Product | Details |
| --- | --- |
| Model direct deployments | Auto-upgrading model deployments; best for quick iteration and deployment. |
| Modeling objective live deployments | Production-grade modeling project management; modeling objectives provide tooling for model release management and evaluation. Does not support deployment via Marketplace. |

Learn more about the difference between direct deployments and deployments through modeling objectives.

Functions integration

Publishing models as functions makes it easy to use models for live inference in downstream Foundry applications, including Workshop, Slate, actions, and more.

| Product | Best for |
| --- | --- |
| Direct function publication | No-code function creation on models with live deployments, allowing integration with the Ontology. The same functionality is available in the Model and Modeling Objectives applications. |
| Importing model functions into Functions repositories | Import model functions into TypeScript v1, v2, or Python functions to further process predictions (for example, to make Ontology edits) with support for Model API type checking. |