REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.
Share your thoughts about these announcements in our Developer Community Forum ↗.
Date published: 2025-07-17
Palantir Model Context Protocol (MCP) is now available in beta across all enrollments as of the week of July 14. Palantir MCP enables AI IDEs and AI agents to autonomously design, build, edit, and review end-to-end applications within the Palantir platform. An implementation of Model Context Protocol ↗, Palantir MCP supports everything from data integration to ontology configuration and application development, all performed within the platform.
Vibe code production applications: Enables developers to use AI to produce production-grade applications on top of the ontology while following Palantir's security best practices.
Data integration: Powers Python transforms generation by enabling AI IDEs to get context from Compass and dataset schemas and to execute SQL commands, all entirely locally.
Ontology configuration: Allows developers to configure their ontology locally without leaving the IDE.
Application development: Integrates with your OSDK to enable the development of TypeScript applications on top of your ontology.
To get started, follow the installation steps and read the user guide for examples and best practices. We strongly encourage all local developers to install and regularly update the Palantir MCP to take advantage of the latest changes and tool releases.
Date published: 2025-07-17
Updated language models are now available in TypeScript functions repositories. These updates provide better consistency between model APIs, making it easier to interchange underlying models. Model capabilities have also been enhanced, with improved support for vision and streaming.
We highly recommend updating your functions repositories with the new models to ensure you stay up to date with the latest AIP features. Review the updated documentation for language models in functions to learn how to update your repository.
Viewing model capabilities when importing updated language models.
Share your feedback about functions by contacting our Palantir Support teams, or let us know in our Developer Community ↗ using the functions tag ↗.
Date published: 2025-07-09
You can now protect the main branch of your Workshop modules and define custom approval policies. While this only applies to Workshop for now, all types of resources will eventually be supported, with support for ontology and Pipeline Builder resources coming next.
To safeguard critical workflows and maintain development best practices, you can protect the main branch of your resources. This means that any change to a protected resource must be made on a branch and will require approval to take effect.
Approval Flow in a protected Workshop application.
Once a resource is protected, any change to that resource will have to be made on a branch and go through an approval process. The approval policy is set at the project level, and defines whose approval is required in order to merge changes to protected resources.
Project with default approval policy.
Project with custom approval policy.
Approval policies have three customizable parameters:
Note that branch protection currently only applies to Workshop resources, but support for protecting ontology resources is coming soon.
Date published: 2025-07-08
AIP Evals now supports combining multiple object sets and manual test cases within a single evaluation suite. The test case creation experience has been simplified, allowing you to add, delete, and duplicate object sets as needed. This flexibility enables you to leverage object sets while also adding specific manual test cases for comprehensive function testing.
You can now combine multiple object sets and manual test cases in a single evaluation suite.
Learn more about adding test cases in AIP Evals.
As we continue to build upon AIP Evals, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗.
Date published: 2025-07-03
The Markdown widget in Workshop now supports text tagging with the new Annotation feature. With this feature, builders can seamlessly display, create, and interact with annotation objects on text directly in the Markdown widget.
An example of a Markdown widget with a configured "create annotation" action.
Key highlights of this feature include:
An example of a Markdown widget with configured annotation interactions.
To learn more about configuring Annotations, refer to the Markdown widget documentation.
We want to hear about your experience with Workshop and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the workshop tag ↗.
Date published: 2025-07-01
When running an incremental transform, you may encounter a job that runs as a SNAPSHOT because the entire input needs to be read from the beginning (for example, when the semantic version of the incremental transform was increased).
Typically, when an output dataset is built incrementally, all unprocessed transactions of each input dataset are processed in the same job. This job can take days to finish, often with no incremental progress. If the job fails halfway through, all progress is lost, and the output would need to be rebuilt. This process often results in undesirable costs and errors and does not address pipelines where large amounts of data need to be frequently processed.
Limiting the maximum number of transactions processed per job offers a solution to this time-consuming problem. For example, with a limit of three transactions on an input, a backlog of twelve unprocessed transactions is worked through over four consecutive jobs, and a failed job only loses the progress of that single job.
An animation of incremental transform builds. On the left, the transform without transaction limits is constantly working on one job without noticeable progress. On the right, the transform has set a transaction limit of 3 for the input and is progressing through jobs consistently.
If a transform and its inputs satisfy all requirements, you can configure each incremental input using the transaction_limit setting. Each input can be configured with a different limit. The example below configures an incremental transform to use the following:
from transforms.api import transform, Input, Output, incremental

@incremental(
    v2_semantics=True,
    strict_append=True,
    snapshot_inputs=["snapshot_input"]
)
@transform(
    # Incremental input configured to read a maximum of 3 transactions
    input_1=Input("/examples/input_1", transaction_limit=3),
    # Incremental input configured to read a maximum of 2 transactions
    input_2=Input("/examples/input_2", transaction_limit=2),
    # Incremental input without a transaction limit
    input_3=Input("/examples/input_3"),
    # Snapshot input whose entire view is read each time
    snapshot_input=Input("/examples/input_4"),
    output=Output("/examples/output")
)
def compute(input_1, input_2, input_3, snapshot_input, output):
    ...
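The compute body above is left as an ellipsis. As a minimal sketch of what it might contain (the join key, union logic, and processing steps are illustrative assumptions, not part of this release), it could read only the newly added rows of each incremental input, enrich them with the snapshot input, and append the result to the output:

def compute(input_1, input_2, input_3, snapshot_input, output):
    # Read only the unprocessed rows of each incremental input. With the
    # transaction limits configured above, "added" spans at most 3 transactions
    # for input_1 and at most 2 for input_2 in any single job.
    added_1 = input_1.dataframe("added")
    added_2 = input_2.dataframe("added")
    added_3 = input_3.dataframe("added")

    # The snapshot input's entire view is read each time.
    reference = snapshot_input.dataframe()

    # Illustrative processing step; "id" is a hypothetical join key.
    combined = added_1.unionByName(added_2).unionByName(added_3)
    enriched = combined.join(reference, on="id", how="left")

    # With strict_append=True, the rows written here are committed as an
    # APPEND transaction on the output.
    output.write_dataframe(enriched)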
After configuring your incremental transform with transaction limits, you can continue to configure and monitor your builds with the following features and tools:
Ensure your data is always up to date by configuring a build schedule.
Verify job ranges: Review Spark details for your build jobs to verify the transaction limits read per input.
Learn about read ranges when transaction limits are set: Review how the added, current, and previous read ranges are used differently when incremental transforms are configured with and without transaction limits; a brief sketch follows this list.
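As a rough illustration of those read ranges (a sketch under assumptions, not taken verbatim from the documentation; the output path is hypothetical), the following shows how the three modes can be requested inside an incremental compute function and what they cover when a transaction limit is set:

from transforms.api import transform, Input, Output, incremental

@incremental(v2_semantics=True)
@transform(
    source=Input("/examples/input_1", transaction_limit=3),
    output=Output("/examples/read_range_demo"),  # hypothetical output path
)
def compute(source, output):
    previous_df = source.dataframe("previous")  # rows already processed by earlier jobs
    added_df = source.dataframe("added")        # unprocessed rows read in this job; with the
                                                # limit above, at most 3 transactions' worth
    current_df = source.dataframe("current")    # previous plus added; with a transaction limit,
                                                # this can still trail the latest view of the input
    output.write_dataframe(added_df)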
To use transaction limits in an incremental transform, ensure you have access to the necessary tools and services and that the transforms and datasets meet the requirements below.
The transform must meet the following conditions:
The v2_semantics argument is set to True.
The Python transforms version is 3.25.0 or higher. Configure a job with module pinning to use a specific version of Python transforms.
Input datasets must meet the following conditions to be configured with a transaction limit:
The dataset contains only APPEND transactions; however, the starting transaction can be a SNAPSHOT.
We want to hear about your experiences when configuring incremental transforms with transaction limits, and we welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗.
Date published: 2025-07-01
When building your pipeline, you may need to roll back a dataset and all of its downstream dependents to an earlier version. There can be many reasons for this, including the following:
The pipeline rollback feature allows you to revert to an earlier transaction of an upstream dataset. When performing a rollback, the data provenance of the upstream dataset transaction is used to identify its downstream datasets and their corresponding transactions, producing a final pipeline rollback state. Typically, this process would require several steps to properly roll back each affected dataset; with pipeline rollback, it is reduced to the few simple steps discussed below, and you can preview the final pipeline state before confirming and proceeding with the rollback. Pipeline rollback also ensures that the incrementality of your pipeline is preserved.
As you set up your rollback, you can choose to exclude any downstream datasets; these datasets will remain unchanged as the pipeline is rolled back to the selected transaction.
This feature is currently in the beta stage of development, and functionality may change before it is generally available.
The right editor panel in Data Lineage, with the option to View node properties.
Select Actions, then Rollback.
Under Selected transaction, choose the transaction to which you would like to roll back.
An example of a selected transaction.
After choosing the transaction, downstream datasets are automatically identified, and the states they will revert to if the rollback is actioned are displayed.
Resource types that cannot be rolled back, including streaming datasets, media sets, and restricted views, will be displayed under the unsupported resources section. Transactional datasets on which you do not have Edit access will also be included in this list.
A list of datasets with timestamps of the builds.
A list of datasets selected for rollback that you can exclude.
A dataset excluded from rollback that you can choose to add back.
A confirmation dialog confirming the rollback of five dataset transactions and incremental state resets of two datasets.
A confirmation of seven successful dataset rollbacks.
An example of a dataset that was rolled back, with the rolled back transaction crossed out.
To learn more about pipeline rollbacks, review our public documentation. We also invite you to share your feedback and any questions you have with Palantir Support or our Developer Community ↗.