REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.
Share your thoughts about these announcements in our Developer Community Forum ↗.
Date published: 2026-03-12
AI FDE, the AI forward deployed engineer, is now generally available for enrollments with AIP enabled. AI FDE allows you to operate Foundry with natural language, using conversations to unlock the power of the Palantir platform. AI FDE makes platform interactions more intuitive and accessible for all users, regardless of technical expertise, while maintaining complete control and visibility into tool use and data access.
With AI FDE, you can perform data transformations, manage code repositories, build and maintain your ontology, and more, with a range of features designed to accelerate your efforts.
To use AI FDE, ensure that AIP is enabled on your enrollment. For the best experience, Foundry Branching should also be enabled to support ontology edits. Once enabled, you can begin interacting with AI FDE by providing natural language requests.
AI FDE uses modes and skills to accomplish tasks and provide an easy way to manage the agent's context. Modes are the broad task at hand, such as data integration or ontology editing, while skills are granular capabilities that can be used across different modes. To get started, describe your task in the input field and allow the agent to pick a mode based on your task, or select a mode manually. For some modes, you can configure additional settings, such as function language or whether to use Python transforms instead of Pipeline Builder.

The AI FDE Modes menu, which allows users to select a mode with additional configuration for certain modes.
Modes limit the documentation and tools available to the agent to only those relevant for the current task. You can open the Skills menu to see the skills currently available to the agent, and expand the agent's context by sharing resources or documentation. If needed for your task, additional tools can be enabled using the tool icon below the input field.

The AI FDE prompt input field. The open Skills menu displays the skills that are available to an agent in a given session.
After configuring context and tools manually or by selecting a mode, you can use AI FDE to help you perform a variety of powerful actions in Foundry.
Unlock natural-language commands with AI FDE, and transform how you work in Foundry while maintaining security and complete visibility into every action.
As we continue to develop AI FDE, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the ai-fde tag ↗.
Date published: 2026-03-12
GPT-5.4 is now available directly from OpenAI and Azure for non-georestricted enrollments.
GPT-5.4 ↗ is OpenAI's most capable and efficient frontier model. It combines the industry-leading coding capabilities of GPT-5.3-Codex with major improvements in knowledge work, native computer use, and tool calling. GPT-5.4 is also OpenAI's most token-efficient reasoning model yet: it uses significantly fewer tokens than GPT-5.2 to solve comparable problems, which translates to reduced usage costs and faster response times.
To use this model:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service ↗ tag.
Date published: 2026-03-12
Gemini 3.1 Flash-Lite is now available directly from Google VertexAI for non-georestricted enrollments.
Gemini 3.1 Flash-Lite ↗ is Google's fastest and most cost-efficient Gemini 3 series model, built for high-volume developer workloads at scale. Gemini 3.1 Flash-Lite has adjustable thinking levels, giving builders control over how much the model reasons for a given task, which is useful for managing cost and latency across high-frequency workloads.
To use this model:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service ↗ tag.
Date published: 2026-03-05
Pilot is an AI-powered application builder that lets you create full-stack applications on top of your ontology using natural language prompts. Pilot will be available in beta for enrollments with AIP enabled starting the week of March 9. To use Pilot, describe the application you want to build, and Pilot will generate the ontology, design, and front-end code in an isolated workspace with no manual data wiring or UI coding required.
With Pilot, building an ontology-backed application starts with a single prompt. Rather than separately defining object types, writing action types, designing a UI, and wiring OSDK hooks, Pilot handles the development lifecycle from description to deployable application, allowing you to focus on what you want to build rather than how to build it.

The Pilot landing page, where you can describe the application you want to build.
When you describe your application, Pilot will spin up an isolated container and break up the work into structured tasks. First, the Ontology builder agent creates the data model for your application, including object types, action types, and relationships. You can review the generated ontology in the Ontology tab and refine it through conversational follow-ups in the chat panel.

Pilot generates object types, action types, and relationships based on your application description.
Next, the Designer agent reads the ontology and your requirements to produce a detailed design specification covering color palette, typography, layout, interaction patterns, and forms. This specification ensures that the generated frontend is polished and production-ready from the start.
The App builder agent implements the user interface using the ontology and design specification. It builds a React application with real-time data loading using OSDK hooks, functional forms, status management, and filtering, all wired directly to your ontology actions. When generation is complete, a live preview of your application will be displayed in the Pilot workspace, giving you an immediate view of the result.
You can continue to iterate on any aspect of the application by chatting with Pilot. For example, you can ask Pilot to add new fields to the ontology, change the layout, or introduce additional functionality. Pilot tracks each change as a structured task, making it straightforward to follow the evolution of your application.

An application generated by Pilot, with live preview and iterative chat refinement.
Pilot can generate realistic seed data within the container to let you test your application without exposing real datasets. Because seed data lives in the container's local datastore, you can safely iterate on your application without impacting production data. If any import issues arise, Pilot surfaces and resolves them automatically.
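Pilot's own seed-data generation is internal to the container, but the general idea of notional seed rows can be sketched with the Python standard library. The field names and value ranges below are invented for this sketch; real seed data would mirror the object types in your generated ontology.

```python
import random

random.seed(7)  # fixed seed so the notional data is reproducible

STATUSES = ["Open", "In Progress", "Closed"]


def make_seed_rows(n: int) -> list[dict]:
    """Generate notional ticket rows for local testing.

    Hypothetical schema: ticket_id, status, and priority stand in for
    whatever object types the generated ontology actually defines.
    """
    return [
        {
            "ticket_id": f"TKT-{i:04d}",
            "status": random.choice(STATUSES),
            "priority": random.randint(1, 5),
        }
        for i in range(1, n + 1)
    ]


rows = make_seed_rows(25)
print(len(rows), rows[0]["ticket_id"])
```

Because rows like these live only in a local datastore, you can iterate on the application freely without ever touching production data, which is the property the container-scoped approach is designed to guarantee.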
When your application is ready, Pilot will provide a guided deployment workflow that walks you through promoting ontology changes using Foundry Branching, configuring a Developer Console application, running CI checks, and tagging a release. The result is a production-hosted application served at a custom subdomain, with OSDK-powered ontology operations and no manual API wiring required.
We want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗.
Date published: 2026-03-05
You can now use LLMs to generate richer, more flexible datasets in manually entered tables in Pipeline Builder. Describe the data you want, reference other columns in your prompt for dynamic generation, and preview up to 10 rows of LLM-generated data before applying changes to your full table. You can also lock and unlock columns to control which data gets regenerated and which stays the same. These two new features are now available on all enrollments.

An example of notional, LLM-generated student feedback in a manually entered table, with a column prompt that references the score column to produce dynamic, context-aware feedback data.
For manually entered tables in Pipeline Builder, you can now use LLMs to generate richer, more flexible datasets. In a column prompt, describe the data you want, and reference other columns for dynamic, context-aware generation by typing /[name of column]. You can also lock and unlock columns, giving you more control over which data will be regenerated and which should remain unchanged.

The score and class columns are locked, ensuring their current values remain unchanged when other columns get regenerated.
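How Pipeline Builder resolves a column reference internally is not documented here, but the substitution idea can be sketched: for each row, any /[column name] reference in the prompt is replaced with that row's value before the prompt is sent to the model. The resolver below is hypothetical and only illustrates that per-row substitution.

```python
import re

# Hypothetical resolver: replaces /[column name] references in a column
# prompt with the corresponding value from a single row.
REF = re.compile(r"/\[(?P<col>[^\]]+)\]")


def resolve_prompt(template: str, row: dict) -> str:
    """Substitute each /[column] reference with that row's value."""
    return REF.sub(lambda m: str(row[m.group("col")]), template)


template = "Write one sentence of student feedback for a score of /[score]."
row = {"score": 87, "class": "Biology"}
print(resolve_prompt(template, row))
# → Write one sentence of student feedback for a score of 87.
```

Under this reading, locking a column simply excludes it from regeneration while its values remain available for other columns' prompts to reference.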
Learn more about generating notional data using LLMs.
We want to hear about your experiences using Pipeline Builder and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the pipeline-builder tag ↗.
Date published: 2026-03-05
Workshop now includes a built-in Metrics tab in the editor sidebar, giving module builders direct visibility into how their applications are being used. Usage metrics track two categories of data—action submissions and layout views—so builders can understand which parts of their module are most active and identify trends over time. All metrics are aggregate counts and are not attributable to any specific user.

The Metrics tab in the editor sidebar showing action submission counts over the selected time period.
The Metrics panel displays the total number of successful action submissions across the module, along with the percentage change compared to the previous equivalent period. Individual actions are listed with their submission count and a proportional bar showing relative usage. Selecting an action reveals which widgets in the module reference it, making it straightforward to trace how actions are connected to the module's interface.
Action metrics are available by default for all modules and require no additional setup.
Builders can also track how many times each page, tab, and overlay in their module has been viewed. The layout views overview shows the total view count with a per-layout breakdown listing individual pages, overlays, and tabs. Select a layout item to navigate directly to it in the editor.
To start collecting layout view data, open Module settings, navigate to the Metrics tab and toggle on Enable granular metrics. After enabling, it may take up to 24 hours before view data begins to appear. Views are only recorded when users interact with the module in view mode on the main branch.

Enable granular layout metrics by toggling on usage metrics tracking.
Both action and view metrics support a configurable time window of 7, 30, or 90 days, selectable from the period picker at the top of the panel. Each overview card compares the current period against the previous equivalent period, displaying the percentage change so you can spot usage trends at a glance.
Learn more about tracking your Workshop applications with usage metrics in the documentation.
We want to hear about your experiences using Workshop in the Palantir platform and welcome your feedback. Share your thoughts through Palantir Support channels or on our Developer Community ↗ using the workshop tag ↗.
Date published: 2026-03-03
GPT-5.3 Codex is now available directly from OpenAI for non-georestricted enrollments.
GPT-5.3 Codex ↗ is OpenAI's best coding model, optimized for agentic coding tasks with careful attention to detail and without sacrificing speed. GPT-5.3 Codex supports low, medium, high, and xhigh reasoning effort values for all types of agentic tasks.
To use this model:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service ↗ tag.
Date published: 2026-03-03
Quiver now includes a redesigned analysis creation experience, making it easier to choose the right workspace for your task. When creating a new analysis, you can now select from three analysis types: Quiver analysis, time series analysis, and object set path analysis.

The updated Create new analysis page in Quiver, allowing you to choose from three available analysis workspaces.
As part of this update, we are introducing the time series analysis workspace, a dedicated interface purpose-built for ad-hoc time series analysis. It provides a streamlined environment for visualizing and comparing time series data without the full complexity of a Quiver analysis, making it accessible to a wider range of users. When a more advanced workspace is needed, a time series analysis can be opened directly in Quiver.
The Time Series Analysis widget brings the same interface and tooling to Workshop, allowing application builders to embed time series analysis directly in their applications. The widget includes fine-grained configuration options to tailor the experience for operational users:
Users can also open their analysis in Quiver directly from the widget for more advanced workflows. Note that changes made in Quiver are not reflected back in the Workshop widget.
To create a new time series analysis, select the New Analysis button on the Quiver splash page or in the Foundry side panel. Choose a name and location for your file and select Time Series Analysis before saving.

The new time series analysis view in Quiver.
For more information, review the Quiver analysis types documentation and the Time Series Analysis widget documentation.
We want to hear about your experiences creating time series analyses in Quiver and Workshop. Share your thoughts with our Palantir Support channels or Developer Community ↗ using the quiver ↗ or workshop ↗ tags.
Date published: 2026-03-03
You can now open Workflow Lineage graphs from more locations across the platform using the Cmd+i (macOS) or Ctrl+i (Windows) shortcut, as well as a dedicated navigation option available on various resource types.

Examples of the Open in Workflow Lineage option in Agent Studio and Notepad, often found under File or Actions in the top navigation bar.
Use the Cmd+i (macOS) or Ctrl+i (Windows) keyboard shortcut to open Workflow Lineage, or select the Open in Workflow Lineage option on a resource where available. The following applications support these navigation features:

The dedicated navigation option in Pipeline Builder.
Leverage this feature to better explore and understand your workflows from different applications across the Palantir ecosystem.
As we continue to develop Workflow Lineage, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the workflow-lineage ↗ tag.