REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.
Share your thoughts about these announcements in our Developer Community Forum ↗.
Date published: 2025-11-25
In addition to non-georestricted enrollments, US georestricted enrollments can now enable Direct OpenAI as a model family in AIP. This model family is separate from the Microsoft Azure model family, although the model offerings will often look the same. Turning on Direct OpenAI for an enrollment offers the following key benefits:
To enable these models, enrollment administrators must enable Direct OpenAI in Control Panel.
OpenAI models provided directly from OpenAI (not via Azure), including GPT-4o. Use of these models must be consistent with the OpenAI Usage Policies ↗ and the OpenAI Sharing & Publication Policy ↗.
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.
Date published: 2025-11-20
Three new training courses are now available on learn.palantir.com ↗ to help you improve your skills in Foundry and AIP. These courses, focusing on building workflows and scoping use cases in Foundry with our AIP applications and features, are designed to help you feel more familiar with real-world solutions and industry-specific processes.
The new Foundry & AIP Aware course ↗ ties together various courses on the Learn site to create an immersive and hands-on experience for users interested in the following:
Expect to start with theory and the basics, and progress to building functional, real-world solutions by the end of each module. Upon completing this program, all participants will be eligible to sit for the (free) Foundry Aware certification exam ↗. Successfully passing this exam marks an important step before gaining further real-world experience or pursuing an apprenticeship.
This comprehensive course is intended to take approximately eight hours and is accessible to both technical and non-technical users.
The following two new speedrun courses are also available:
Speedrun: Your First Agentic AIP Workflow ↗: Learn how to leverage AIP and the Ontology for an agentic workflow and AIP-human teaming.
Speedrun: Data Science Fundamentals ↗: Learn how to use Foundry and AIP for Data Science.
Date published: 2025-11-20
You can now import Snowflake tables as virtual tables into JupyterLab® code workspaces, enabling you to work with large-scale data stored externally to Foundry without moving it.
Code Workspaces now supports read and write operations on Snowflake tables, including Iceberg tables cataloged in Polaris. Iceberg tables are open-source table formats that enable reliable, scalable, and efficient management of large datasets, including tables stored externally in Snowflake.

A highlighted code snippet in the Data panel.
This capability allows you to run interactive Python notebooks directly against data cataloged in Snowflake, supporting data science, analytics, and machine learning workflows without requiring data replication into Foundry. By working with data where it lives, you can leverage Snowflake's storage and cataloging while using Foundry's development environment.
Learn more about virtual tables and Code Workspaces.
Jupyter®, JupyterLab®, and the Jupyter® logos are trademarks or registered trademarks of NumFOCUS.
All third-party trademarks including logos and icons referenced remain the property of their respective owners. No affiliation or endorsement is implied.
Date published: 2025-11-18
Organization administrators can now export Foundry logs directly into streaming datasets, allowing you to track application logs, function execution logs, and more all in one place as they occur. Once your logs are flowing into a dataset, you can then analyze them using Foundry's transformation tools and build custom dashboards to derive real-time insights.

The Log observability settings page in Control Panel, where you can create log exports to streaming datasets.
To export logs, navigate to Control Panel > Log observability settings > Create log export, then select the projects containing the desired logs. Specify the export location, then select either the Internal Format or OpenTelemetry Protocol (OTLP) format as the schema for the streaming dataset.

The log configuration wizard, displaying the columns that will be present in the streaming dataset when the Internal Format schema is selected.

The log configuration wizard, displaying the columns that will be present in the streaming dataset when the OpenTelemetry Protocol (OTLP) schema is selected.
Learn more about exporting logs to streaming datasets in our documentation.
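The OTLP option follows the OpenTelemetry Logs data model, which nests log records under resources and scopes. As a rough illustration, the sketch below parses one OTLP/JSON-encoded record using only Python's standard library; the field names come from the OTLP JSON encoding, the sample values are made up, and the exact columns Foundry writes to the streaming dataset may differ.

```python
import json

# One OTLP/JSON log record, following the OpenTelemetry Logs data model.
raw = """
{
  "resourceLogs": [{
    "scopeLogs": [{
      "logRecords": [{
        "timeUnixNano": "1732000000000000000",
        "severityText": "ERROR",
        "body": {"stringValue": "connection refused"}
      }]
    }]
  }]
}
"""
record = json.loads(raw)

# Flatten the nested resource -> scope -> record structure and keep
# only ERROR-level messages.
errors = [
    lr["body"]["stringValue"]
    for rl in record["resourceLogs"]
    for sl in rl["scopeLogs"]
    for lr in sl["logRecords"]
    if lr["severityText"] == "ERROR"
]
print(errors)  # ['connection refused']
```

The same flattening logic applies whether you analyze the exported rows with Foundry's transformation tools or in a notebook.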
Let us know what you think about exporting logs through Control Panel. Share your thoughts with Palantir Support channels, or leave a post on our Developer Community ↗ using the control-panel tag ↗.
Date published: 2025-11-18
AI FDE, the AI forward deployed engineer, will be available in beta starting the week of November 17 for enrollments with AIP enabled. AI FDE allows you to operate Foundry with natural language, using conversations to unlock the power of the Palantir platform. AI FDE makes platform interactions more intuitive and accessible for all users, regardless of technical expertise, while maintaining complete control and visibility into tool use and data access.

The AI FDE prompt input field, with the option to add ontology resources as context.
With AI FDE, you can perform data transformations, manage code repositories, build and maintain your ontology, and more. AI FDE can accelerate your efforts with the following features:
To get started with AI FDE, ensure that AIP is enabled on your enrollment. For the best experience, Foundry Branching should also be enabled to support ontology edits. Once enabled, you can begin interacting with AI FDE by providing natural language requests. You can expand the agent's context by sharing resources or documentation, and enable relevant tools depending on the given task.

The AI FDE tool selection menu, which allows users to select the tools AI FDE has access to.
After configuring context and tools, you can use AI FDE to help you perform the following actions in Foundry:
By enabling natural language commands for data integration, ontology development, and function creation, AI FDE can transform how you work with Foundry while maintaining security and providing full visibility into every action.
As we continue to develop AI FDE, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗.
Note: AIP feature availability is subject to change and may differ between customers.
Date published: 2025-11-18
AIP Analyst is a new data analysis application launching in beta the week of November 17 for users with AIP enabled. AIP Analyst uses the data in your ontology to answer questions in a chat-based interface, offering an ontology-first experience that enables both technical and non-technical users to traverse and generate insights from their ontology data. With AIP Analyst, you can ask questions, visualize results, and understand every step of your analysis with complete transparency.
AIP Analyst has access to a growing list of tools, including:
AIP Analyst emphasizes transparency and control; you can review each step of an analysis, validate logic, make manual adjustments when needed, and view the analysis lineage in an interactive graph view.
AIP Analyst shows its work. Every analysis creates an interactive dependency graph showing the flow from question to answer. Users can see exactly how the agent reasoned through their request, inspect intermediate results, and manually adjust steps.

AIP Analyst in the process of answering a question, with the graph tab expanded.
With the ability to fork a chat, users can branch explorations from a common starting point. AIP Analyst can either access every object type that the user has access to, or it can be locked down to specific ontologies and object type groups for more targeted exploration.

The AIP Analyst Settings menu, with ontology and object type group options.
Beyond text chat, AIP Analyst supports the following features:
A keyboard shortcut (Shift + Tab) allows the user to guide the agent at every step of the analysis.
Options in the AIP Analyst input field.
AIP Analyst redefines ontology exploration by combining the power of conversational AI with robust user controls and transparent workflows. With its growing suite of tools and commitment to transparency, AIP Analyst is an essential tool for anyone seeking deeper insights from their ontology data.
We want to hear about your experience with AIP Analyst and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗.
Note: AIP feature availability is subject to change and may differ between customers.
Date published: 2025-11-18
AIP Evals now has a results analyzer that enables you to quickly understand why tests failed and how to fix them. Previously, when iterating on an AI function, moving from failing test cases in the results table to clear, actionable next steps was a lengthy manual process. The results analyzer addresses this by automatically clustering failures into root-cause categories and proposing targeted prompt changes where they help.
The AIP Evals results analyzer is a built‑in AI copilot on the Results view that:

A view of the results analyzer in the AIP Evals application.
The results analyzer has been used to discover significant AI failure modes and optimization opportunities, and can be used with AIP Logic functions. Support for agents published as functions and functions on objects will be included in a future release.
Some example use cases include the following:
To start using the AIP Evals results analyzer, refer to the documentation to view prerequisite setup steps. After setup, you can select a single evaluation suite run with failing test cases from either the AIP Logic sidebar or the AIP Evals application, then generate an analysis. From there, you can:
As we continue to develop new AIP Evals features and improvements, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the aip-evals tag ↗.
Date published: 2025-11-18
GPT-4.1 is now available from Azure OpenAI for IL2, IL4, and IL5 enrollments.
GPT-4.1 is an improvement over GPT-4o and an alternative to Claude 3.7 Sonnet for certain use cases, showing excellent performance in coding, instruction following, and long-context conversations. Comparisons between GPT-4.1 and other models in the OpenAI family can be found in the OpenAI documentation ↗.
To use these models:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.
Date published: 2025-11-18
The Palantir platform now provides networking logs and new metrics to help you debug networking issues that were previously difficult to resolve. This feature is currently available in beta for enrollments running on Rubix, Palantir's Kubernetes-based infrastructure.
Build logs now include network egress logs. Access network egress logs from a Build page by selecting Logs and then applying the suggested Network egress logs filter. To understand what these logs contain and how to interpret them, review the documentation.

Select the Network egress logs filter to review logs.
Network egress policies in Control Panel now aggregate those same logs per source, along with usage metrics for each policy. To access them, go to Control Panel > Network egress, then select a policy and open the Observability tab.

You can review egress logs and metrics for a network policy in Control Panel.
As we continue to add Data Connection features, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the data-connection tag ↗.
Date published: 2025-11-18
Monitoring views can now be included in Marketplace products, ensuring that key rules and checks are packaged together with your product. Monitoring views are a collection of data monitoring rules and health checks that make it easier to monitor resources at scale. With this update, when you add a monitoring view to a Marketplace product, all of the checks defined within that view will be automatically incorporated, and all targets specified in the monitoring rules will be included as product inputs.
This feature streamlines the process of delivering comprehensive monitoring alongside your product, reducing the need for manual configuration and setup. When a customer installs your Marketplace product, they will be able to reconfigure each target scope associated with the monitoring view so that monitoring logic can be adapted to the enrollment where the product is deployed. Monitoring view subscriptions can also be configured post-installation, even in locked installations.

Packaging a monitoring view in a Marketplace product.
By making it easier to bundle and deploy monitoring configurations, this update helps ensure consistent observability across different environments. Users can benefit from a more seamless experience, as monitoring is integrated from the start and can be tailored to their unique context.
We want to hear about your experiences with monitoring views and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗.
Learn more about monitoring views.
Date published: 2025-11-18
Multimodal media sets are now available on all Foundry enrollments, allowing you to upload and store files of any format within a single media set. This capability simplifies workflows that require handling multiple media types together, such as processing mixed document types or combining different file formats in analysis pipelines.
Multimodal media sets allow you to work with multiple types of unstructured data in a single media set. They are ideal for working with:

A preview of a media item not supported for preview.

A preview of multimodal media in Workshop.
Multimodal media sets are also supported in Python transforms through the transforms-media package. In upcoming releases, we plan to enhance multimodal media sets with further Pipeline Builder integrations.
We want to hear about your experience with multimodal media sets and welcome your feedback. Share your thoughts with Palantir Support channels, or on our Developer Community ↗ using the media-sets tag ↗.
Date published: 2025-11-18
Transforms has added Foundry-native Python bindings for DuckDB ↗, a modern, high-performance single-node SQL execution engine.
This integration allows users to write highly performant SQL pipelines within the Python ecosystem, with features such as incremental processing and partitioned outputs. For many use cases, DuckDB can offer significant advantages over other single-node runtimes, including faster execution and improved memory efficiency. DuckDB’s familiar SQL interface makes it an excellent choice for Foundry users with existing SQL experience.
DuckDB is particularly well-suited for medium-to-large scale data processing tasks that require low latency and efficient resource usage. Unlike many other single-node compute engines, DuckDB supports resource configuration to control memory usage and parallelism, which allows fine-grained optimization for different workloads. This is especially important for memory-constrained contexts, where DuckDB can self-limit its memory consumption to avoid out-of-memory errors.

Transforms DuckDB code example.
To learn more, see the DuckDB API documentation and an overview of Foundry’s Python compute engine options.
Date published: 2025-11-11
JupyterLab® and RStudio® Code Workspaces now provide an AIP agent accessible from a workspace's sidebar, enabling access to any of AIP's supported large language models (LLMs) to help you develop and deploy code in Foundry based on your specific use case. This experimental feature is available for all Foundry enrollments with AIP enabled.

An AIP agent helps you write code and generate visualizations in JupyterLab® and RStudio® Code Workspaces.
To get started, open your workspace, select the </> icon at the bottom of the left sidebar, and enter a prompt in the Ask a question... text box to initiate the agent. The agent will provide coding guidance or generate complete files for you based on its available tools. To configure the tools the agent can use to perform essential operations in your workspace, select the wrench icon to display all available Tools and opt the agent out of any that are not relevant for your use case. The agent can perform a wide range of tasks through its tools, such as authoring files, writing and running code snippets, searching for and installing libraries, or executing terminal commands.

Configure the tools available to the AIP agent in your workspace.
Use the agent's Settings menu to rename conversation threads and view the system prompt. The agent will not persist your chat history after you shut it down or restart the workspace, so make sure to sync any code or model outputs you want to save before ending your session.

You can rename chat threads using the agent's Settings menu.
You can change the LLM your agent uses by selecting the name of the current model at the bottom of the prompt text box. Model behavior may vary across providers, so experiment with different models to find the approach that works best for your specific use case. Learn more about prompt engineering best practices.
You can use a workspace's AIP agent to:
We will continue to refine the agent's capabilities and expand its toolkit as we gather feedback during its initial experimental release. Additionally, support for writing Foundry models will be available in the coming weeks.
Jupyter®, JupyterLab®, and the Jupyter® logos are trademarks or registered trademarks of NumFOCUS. RStudio® is a trademark of Posit™. All third-party trademarks (including logos and icons) referenced remain the property of their respective owners. No affiliation or endorsement is implied.
Date published: 2025-11-11
The new Machinery widget, an analysis and real-time monitoring tool that provides operational insights for your configured Machinery processes, is available on all enrollments the week of November 10. This new capability enables teams to visualize process flows, track key metrics, and identify performance issues without requiring additional configuration beyond your existing Machinery setup.

The new Machinery widget at a glance.
The new Machinery widget natively supports multi-process graphs, allowing you to track metrics across multi-object-type process implementations. The widget is available in Workshop modules, or as a stand-alone view (with limited features) in the Machinery application.
Configuration is streamlined through automatic derivation of subprocess object sets using search arounds from parent processes. This means you only need to configure one object input for each root process. For example, if you have an application process with many linked review subprocesses, you can provide 100 application objects, and all related child objects will be automatically identified through configured link types.
Four metric views are preconfigured and can be customized by application builders: historical count, current count, historical duration, and current duration. Application builders can also add custom metric views to suit specific analytical needs. Users can switch between these views, hover over nodes to reveal all available metrics, and pin specific nodes for continuous monitoring across the graph visualization.
The new Machinery widget optimizes space usage using contextual zoom. When zoomed out, it will show many graph elements, but only a single metric. When zoomed in, nodes reveal additional information and metric cards show up to three available metrics.

Contextual zoom reveals additional information and metrics.
Two analysis modes enable process investigation beyond visualization. Path explorer analyzes individual process paths and their frequency distribution, allowing you to select specific paths to filter outputs and understand exactly how objects flow through your workflow.

Analyze individual process paths and their frequency distribution with path explorer mode.
Duration distribution identifies performance outliers through the visualization of time spent in selected states across all objects. This allows the isolation of individual buckets or ranges of objects with undesirable behavior, such as spending excessive time in particular transitions or states. Both analysis modes update output object sets dynamically, enabling iterative investigation of process performance issues.

Use the duration distribution mode to identify outliers through visualization of time spent in states across selected objects.
Multiple graph features adapt visualizations to different use cases where bottleneck identification is critical. Transition nodes simplify complex graphs by replacing actions and automations with implicit state transitions, providing a cleaner state-transition perspective. Additionally, subprocesses can be replaced with their implicit state transitions for visibility into transition metrics on the currently focused process.
The new Machinery widget automatically detects and removes objects that are deviating from the process definition, helping to remove noise from the performance analysis. Non-conforming objects can be made visible and explicitly included or isolated in the output. When visible, deviating states and transitions are visually highlighted, with metrics computed across all input objects rather than just conforming ones. This is valuable when investigating why certain processes deviate from expected patterns.
We want to hear about your experiences using Machinery and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the machinery tag ↗.
Learn more about the Machinery widget.
Date published: 2025-11-11
Starting the week of November 10, Workflow Builder will be rebranded as Workflow Lineage, better reflecting its role as an interactive workspace for visualizing, understanding, and managing application dependencies and their underlying processes.

The newly renamed Workflow Lineage home page.
All existing features and functionalities remain unchanged, and you can continue to use Workflow Lineage as usual. You should see the new name reflected across Foundry and platform communications. If you have any questions about this change, share them with Palantir Support channels or on our Developer Community ↗.
Learn more about Workflow Lineage.
Date published: 2025-11-06
Tracing, logging, and run history views for functions, actions, automations, and language models are now available in Workflow Lineage for all users. Additionally, starting the week of November 10, all in-platform logs (including those from the Ontology and AIP workflows) can be exported to a real-time streaming dataset, allowing for powerful custom analysis of your telemetry.
Ontology and AIP workflows now come out of the box with first-class tracing, logging, and run history views for all functions, actions, automations, and language models:
Telemetry highlights include the following:
To start observing your Ontology and AIP workflows, follow the steps below:

The trace view for a function workflow execution.
As stated in the log permissions and configure logging documentation, users with the Information security officer or Enrollment administrator role can manage the Log observability settings for an organization in Control Panel.
Let us know what you think about our new observability capabilities for Ontology and AIP workflows. Contact our Palantir Support channels, or leave your feedback in our Developer Community ↗.
Date published: 2025-11-06
Peer Manager enables you to view and monitor jobs associated with an established peering connection, which synchronizes objects and links between Foundry enrollments in real time and mediates changes made across ontologies. The application will be generally available across all enrollments in the third week of November.
Peering enables organizations to establish secure, real-time Ontology data synchronization across distinct Foundry enrollments. Peer Manager is the central home for administering peering in Foundry. From Peer Manager, space administrators can create peer connections, monitor peering jobs, and configure data to peer.
After you create a peer connection, you can use Peer Manager's home page to view information about your new connection and all other connections configured between your enrollment and other enrollments. Peer connections support the import and export of Foundry objects and their links, as well as object sets configured in Object Explorer.

The Peer Manager home page provides an overview of all configured Peer Connections.
Select a connection to launch its Overview window, where you can track the health of each peer connection by viewing the status of individual peering jobs.

Peer Manager's Overview window offers a unified view of the status and health of peering jobs within a connection.
Select Ontology from the top ribbon to peer objects across an established connection; Peer Manager enables you to peer all of an object's properties or only a selection of them.
Learn more about object peering in Peer Manager.

Peer Manager's Ontology window enables you to peer object types and their links across a peer connection.
The ability to configure Artifact peering will be available in Peer Manager by the end of 2025. Contact Palantir Support with questions about peering or Peer Manager on your enrollment.
Date published: 2025-11-06
Pipeline Builder now offers the ability to create external pipelines using third-party compute engines, with Databricks as the first supported provider. This capability is in beta.
External pipelines require virtual table inputs and outputs from the same source as your compute. When using external pipelines, compute is orchestrated by Foundry and pushed down to the source system for execution.
Foundry’s external compute orchestration provides you with the flexibility to choose the most appropriate technology for your workload, use case, and architecture requirements. Pipelines built with external compute can also be composed with Foundry-native compute pipelines using Foundry’s scheduling tools, allowing you to easily orchestrate complex multi-technology pipelines using the right compute at every step along the way.
With this improvement, you can now push down compute to Databricks using either code-based Python transforms or point-and-click Pipeline Builder boards. Learn more about creating external pipelines in Pipeline Builder.

Enabling push down compute in Pipeline Builder.

External pipeline with pushdown compute in Pipeline Builder.
As we continue to add features to Pipeline Builder, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the pipeline-builder tag ↗.
Date published: 2025-11-06
Iceberg ↗ and Delta ↗ tables can now be imported as virtual tables into JupyterLab® code workspaces, providing more flexibility when working with externally stored data at large scales. Delta and Iceberg tables are open source table formats that enable reliable, scalable, and efficient management of large datasets, including tables stored in Databricks.
JupyterLab® code workspaces now support read and write capabilities for Iceberg and Delta tables, and provide table-specific code snippets in the Data panel to facilitate development.

A highlighted code snippet in the Data panel.
This feature enables running interactive Python notebooks against data stored and cataloged externally to Foundry in Iceberg and Delta tables, supporting a wide range of data science, analytics, and machine learning workflows.
Learn more about virtual tables and Code Workspaces.
Jupyter®, JupyterLab®, and the Jupyter® logos are trademarks or registered trademarks of NumFOCUS. All third-party trademarks (including logos and icons) referenced remain the property of their respective owners. No affiliation or endorsement is implied.
Date published: 2025-11-04
Widget sets created in Custom Widgets can now be included as content in Marketplace products.
When you add a Workshop module that uses a widget set to a Marketplace product, the widget set is automatically packaged. Widget sets can also be manually packaged independently, allowing you to build Workshop modules on top of them.
If a widget set had Ontology API access enabled in the source environment, it will be installed with access disabled by default. After installation, you must manually enable Ontology API access on the widget set if needed.

Published Marketplace product containing a Workshop module that uses a widget set.
As we continue to develop new features for custom widgets, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ and use the custom-widgets ↗ tag.
Date published: 2025-11-04
Dataset rollback is now available in Data Lineage, giving you greater control over your data pipelines. Whether you encounter an outage, errors in your pipeline logic, or unexpected upstream data, dataset rollback provides a fast, reliable way to revert your datasets to a stable state. In addition, you can now queue snapshots, allowing datasets to snapshot automatically on their next build.
Dataset rollbacks provide several key benefits:
To get started with dataset rollback, open your dataset in Data Lineage and select a previous successful transaction in the History tab. You can roll back your dataset to that transaction by selecting Roll back to transaction.

The Roll back to transaction option, listed in a selected transaction's Overview tab.
To queue a snapshot on your dataset's next build, open a dataset in Data Lineage and select Force snapshot in the History tab in the bottom panel.

The Force snapshot option in the History tab.
Note that you will need to acknowledge that this action cannot be undone before proceeding.
Only users with the Editor role can perform rollbacks, ensuring secure operations. Dataset rollback allows you to build, experiment, and iterate on your pipelines with confidence; the ability to revert to a stable state is available whenever you need it.
We want to hear about your experience and welcome your feedback as we develop more features in Data Lineage. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the data-lineage tag ↗.
Learn more about dataset rollback.
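Conceptually, a rollback re-points the dataset at an earlier successful transaction and discards what came after. The toy Python sketch below illustrates the idea; the class and method names are purely illustrative, not the Foundry API.

```python
# Toy model of transaction-based rollback: a dataset is an ordered list of
# committed transactions, and rolling back re-points the head at an earlier
# successful transaction. Illustrative only, not the Foundry API.
class ToyDataset:
    def __init__(self):
        self.transactions = []  # ordered (txn_id, rows) pairs

    def commit(self, rows):
        txn_id = len(self.transactions)
        self.transactions.append((txn_id, rows))
        return txn_id

    def head(self):
        # The current view of the dataset is the latest transaction.
        return self.transactions[-1][1]

    def roll_back_to(self, txn_id):
        # Discard all transactions after txn_id; like the UI action,
        # this cannot be undone.
        self.transactions = self.transactions[: txn_id + 1]

ds = ToyDataset()
ds.commit(["good", "rows"])             # txn 0: a known-stable state
ds.commit(["bad", "upstream", "data"])  # txn 1: unexpected upstream data
ds.roll_back_to(0)
print(ds.head())  # ['good', 'rows']
```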
Date published: 2025-11-04
Ontology Manager now offers an improved rebasing and conflict resolution experience that gives you greater flexibility and control when managing branch changes. You can now rebase at any point without creating a proposal, view changes from both Main and your branch simultaneously, and resolve merge conflicts using multiple approaches: through the Conflicts tab in the Save dialog, or directly in the Ontology Manager interface. This enhanced workflow prevents situations where unresolvable errors block your progress. This feature is available the week of November 3 across all enrollments.
Visit the documentation on testing changes in the ontology.
While you introduce changes on your branch, Main can also update with new changes made by others. Rebasing incorporates the latest changes from Main into your current branch to keep it up to date.

Resolve merge conflicts by choosing between changes from Main or your current branch directly in Ontology Manager.
During a rebase, Ontology Manager enters a new state where you can view and access changes from both Main and your branch. You may resolve merge conflicts by choosing between changes from Main or your current branch from the Conflicts tab in the Save dialog. Alternatively, you can resolve conflicts by editing the ontology resource directly. This flexibility prevents situations where users become stuck due to unresolvable errors after conflict resolution.
Complex cases of schema migrations or datasource replacements are not yet handled by this rebasing experience. Refer to the known limitations section of the documentation for an alternative solution. We are actively working to resolve these limitations.
As we continue to develop new features for Foundry Branching, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ and use the foundry-branching ↗ tag.