REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.
Share your thoughts about these announcements in our Developer Community Forum ↗.
Date published: 2025-01-28
We are excited to introduce model experiments, a Python API designed to track, visualize, and compare the results of model training attempts. The model development process is inherently iterative, and managing numerous training attempts can be challenging. To address this, model experiments offers an API for logging and visualizing model training metrics and hyperparameters, enabling users to gain a deeper understanding of the training process and make informed, data-driven decisions.
Model experiments are available in Code Repositories and Jupyter® Code Workspaces without additional libraries or dependencies. Versions of the palantir_models API beyond 0.1482.0 introduce bindings for experiment creation in Jupyter® Code Workspaces and Code Repositories, allowing users to leverage this feature in existing compatible workflows.
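As a minimal sketch of what experiment tracking can look like in a Code Repositories model training transform: the dataset and model paths below are placeholders, and the experiment method names (create_experiment, log_param, log_metric) are assumptions based on the pattern described above, so consult the model experiments documentation for the exact API surface in your palantir_models version.

```python
# Minimal sketch: logging hyperparameters and metrics for a training attempt.
# The experiment methods used here (create_experiment, log_param, log_metric)
# are assumed names; refer to the documentation for the exact API surface.
from transforms.api import transform, Input
from palantir_models.transforms import ModelOutput


@transform(
    training_data=Input("/path/to/training_data"),  # placeholder dataset path
    model_output=ModelOutput("/path/to/my_model"),  # placeholder model path
)
def train_model(training_data, model_output):
    # Create an experiment to capture this training attempt (assumed API).
    experiment = model_output.create_experiment(name="baseline-run")

    # Log the hyperparameters that define this attempt.
    experiment.log_param("learning_rate", 0.01)
    experiment.log_param("n_estimators", 100)

    # ... fit the model here ...

    # Log metrics as training progresses so they render as line charts
    # under the model's Experiments tab once the experiment is published.
    for loss in [0.42, 0.31, 0.27]:
        experiment.log_metric("training_loss", loss)
```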
Once published, model experiments can be found under the Experiments tab of a model page. Multiple experiments can be selected and compared, providing insight into training iterations and fostering a data-driven approach to model development.
Three experiments are compared, with a Parameters table at the top, followed by line charts for each metric.
We will continue to make additional enhancements to model experiments over the next few months, including:
Leverage this feature to seamlessly integrate tracking and visualization into your model training workflow. With model experiments, you can streamline experimentation and accelerate the journey from model conception to deployment, ultimately driving more effective and efficient machine learning development.
Explore the documentation to get started with model experiments.
Date published: 2025-01-23
We are happy to announce that application builders can now access a PR preview when working on React applications with the Ontology SDK (OSDK). A PR (pull request) preview provides a working version of your React application based on the code committed in your pull request. You can preview any proposed changes to your application before they are merged into your main branch and production state, making it easier for you to check for any undesirable outcomes and verify user-facing workflows before changes make their way to production. Previously, builders, designers, and other members of an application team would need to create a development environment to view changes. The PR preview feature removes this requirement, allowing for quicker verification of design and functionality changes and a more efficient, collaborative process.
The PR preview feature is available for any OSDK React application that is hosted in a code repository using the Developer Console web hosting capability.
The website hosting configuration page in Developer Console.
A pull request in Code Repositories with the option to view a preview of the changes.
You can also find a PR preview link for every commit on your branch by navigating to the Commits tab of the pull request. Note that PR previews are only available within seven days of the creation of the pull request. After seven days, the preview will expire.
To access the PR preview feature for React applications, users must have proper permissions for the application from Developer Console. To manage these permissions, open your application in Developer Console, then navigate to Sharing in the left sidebar.
To make changes, create pull requests for the application code, and share PR previews with other users, you must have either Owner or Editor permissions. To share a PR preview with a user without granting permissions to edit your application code or configurations, add them as a Viewer under Share hosted website.
The permissions configuration page in Developer Console, with the option to grant users access to view PR previews.
We look forward to building a version control interface directly in VS Code workspaces, removing the need to navigate to Code Repositories to generate PRs and obtain preview links.
As we continue to work and improve the Palantir developer experience, we encourage you to leave feedback and comments in our community forum ↗.
Date published: 2025-01-23
Starting the week of January 20, the Send to AIP Assist event in Workshop will allow users to select a default AIP Assist Agent to receive the event's prompt. This Workshop event is triggered on button selection and opens the AIP Assist sidebar to send a configured prompt. To further customize the user experience, builders can now provide a default agent, an LLM-powered assistant that is equipped with enterprise-specific knowledge to answer queries about custom operational topics. AIP Assist Agents use custom content as their search context, and can be configured in Agent Studio [Beta].
A default AIP Assist Agent can be selected during event configuration, allowing builders to choose the appropriate agent for the given context. Agents can be given access to custom content, such as documentation about the application being used or other operational processes to provide targeted assistance at relevant times. Selecting an agent is optional, and users can also select a dedicated agent in the AIP Assist sidebar. By selecting a default agent, builders can be sure that the correct agent for the task has been chosen without relying on end-users for manual selection.
The agent selector in the Workshop event configuration panel.
Prompts sent in the event can be static text or dynamic variables, allowing builders to tailor the user experience as they see fit based on the prompt and the receiving agent. As an example, if a builder expects that users will have trouble with a specific workflow, they can add a button that will trigger the Send to AIP Assist event, with a tooltip that explains the button's usage.
An example of a button that triggers the Send to AIP Assist event with an explanatory tooltip.
With this feature, builders can ensure that users have access to immediate interactive support in Workshop application workflows. Note that an AIP Assist Agent must first be configured and given access to custom content in Agent Studio.
Learn more about configuring the Send to AIP Assist event in Workshop.
Note: AIP feature availability is subject to change and may differ between customers.
Date published: 2025-01-21
We are thrilled to announce that you can now bulk upgrade function versions used in Workshop applications in Workflow Builder. This new feature streamlines the often tedious process of manually updating individual function versions. Performing bulk upgrades on a single page ensures that all your functions are up-to-date across multiple Workshop applications with significantly less manual work.
Getting started with upgrading your functions is simple:
Select the relevant Workshop application nodes and review upgradable functions in one view within Workflow Builder.
We support two types of upgrades:
You can select the specific version you want the functions to upgrade to for both function repositories and sources like AIP Logic or Compute Modules. If nothing is specified, Workflow Builder will default to the latest version.
Specify the version to upgrade your functions to or have Workflow Builder default to the latest version when upgrading.
After you upgrade your functions, you will see the list of Workshop applications that were successfully upgraded.
Workflow Builder showing functions that have been successfully upgraded.
Learn more about how to bulk upgrade functions in Workshop applications.
Date published: 2025-01-21
Both Gemini 1.5 Flash and Gemini 1.5 Pro, served through Google Vertex AI, are now generally available on all enrollments. To use the new Gemini models, the Gemini model family must be enabled through the AIP Settings Control Panel extension.
Gemini 1.5 delivers significant performance improvements, efficient training with Mixture-of-Experts (MoE) architecture, and a groundbreaking long context window, processing up to one million tokens for Gemini 1.5 Flash and two million tokens for Gemini 1.5 Pro. These models enable new possibilities in multi-modal processing and in-context learning, making AI applications more powerful and useful.
Google Gemini 1.5 Pro responding to a sample prompt in Model Catalog's Playground feature.
Review a list of LLMs supported in Palantir.
Date published: 2025-01-21
Starting the week of January 20, Code Repositories will offer inline code assistance and new ways to add attachments to AIP Assist as a beta feature. This update enables users to access AIP Assist features directly from the code editor, enhancing support for engineering efforts and minimizing the need for context switching. This improvement is due to the updated integration between AIP Assist and Code Repositories, which previously introduced context-aware attachments [Beta]. Context-aware attachments and other AIP Assist features are now more accessible, providing developers with flexible options that effortlessly integrate with existing workflows.
Code Repositories now features inline code assistance [Beta] when a snippet of code is highlighted. The Explain, Find bugs, and Ask a question options are displayed above the selected code snippet, enabling users to access targeted help from AIP Assist directly from their code.
A highlighted code snippet and the available AIP Assist inline code assistance options.
This feature can be disabled by opening the Ask AIP Assist dropdown menu in the top right corner of the editor, then selecting Configure AIP Settings. In the AIP Features section, users can enable or disable the Show AIP Assist actions above selected code option and other AIP features to suit their needs.
The Show AIP Assist actions above selected code option in Code Repositories AIP settings.
Users can also access these AIP Assist features from the Ask AIP Assist dropdown menu, or by right-clicking a highlighted code snippet and selecting one of the available AIP Assist options from the context menu.
The AIP Assist options in the context menu of highlighted code.
Context-aware attachments allow users to attach code snippets, files, and repositories to conversations with AIP Assist. These attachments provide context that enriches AIP Assist knowledge and enables more accurate responses to code-specific questions. Previously, this feature was only accessible from the AIP Assist sidebar, but it can now be accessed from the redesigned Ask AIP Assist dropdown menu. This update allows developers to attach a highlighted snippet, the current file, or the entire repository to AIP Assist conversations directly from the code editor.
The Ask AIP Assist dropdown menu providing attachment options for AIP Assist conversations.
Code snippets can also be attached by highlighting the desired code, right-clicking, and selecting Attach to AIP Assist from the context menu. These options allow developers to choose the best fit for their workflows and integrate powerful AIP Assist features seamlessly.
Learn more about context-aware attachments and other AIP Assist application integrations.
Note: AIP feature availability is subject to change and may differ between customers.
Date published: 2025-01-16
We are excited to expand the availability of website hosting to Developer Tier enrollments. This powerful addition enables you to seamlessly build and deploy custom frontend applications, leveraging Foundry as your backend with the help of the Ontology SDK (OSDK).
To get started, navigate to Developer Console and create a client-facing application. See the documentation for an in-depth walkthrough:
Use Developer Console to provision a custom subdomain. Foundry will automatically set up the required infrastructure, including DNS records and TLS certificates. Your application will be served from one of the following domains depending on your enrollment type:
{APPLICATION-NAME}.[YOUR-ENROLLMENT].palantirfoundry.com
{APPLICATION-NAME}-[HASH].apps.[ZONE].palantirfoundry.com
The website hosting menu in Developer Console.
With this feature, Developer Tier enrollments can now take advantage of Foundry's development tooling and website management tools.
Go from an empty slate to a deployed website in 10 minutes. Start with a custom repository template, develop inside a fully set up VS Code workspace, and manage your changes in Code Repositories.
The development environment in Foundry, with the VS Code IDE on the left and live preview of the application in development on the right.
Leverage robust version control tools provided by Code Repositories to review and safely deploy your website changes.
The version control page in Code Repositories.
Configure your website's content security policy, manage rollbacks, and monitor usage metrics.
Developer Console provides tools to monitor your application's Foundry usage.
Date published: 2025-01-16
Generally available across enrollments starting the week of January 13, manual entry transform tables can now be added as cards to your Quiver canvas, enabling you to create a transform table from scratch that contains up to 5,000 rows of data. Manual entry transform tables have an intuitive spreadsheet-like user interface and support five data types: string, number, Boolean, time, and time series.
As with Quiver's other transform tables, you can apply any of the transform operations available in the transform table search window to manual entry transform tables.
As an example use case, you can create a manual entry transform table to dynamically parameterize an analysis in conjunction with row and column selectors. The values in the table's selected rows can be used as dynamic parameters downstream, such as the figures for metric_b in the Time Series Chart on the Quiver canvas in the image below.
Use manual entry transform tables to parameterize an analysis in Quiver.
You can also use manual entry transform tables to ingest small sets of data from external sources to supplement an analysis and integrate with the Ontology. Additionally, manual entry transform tables enable the full range of Quiver's time series analysis operations for time series datasets containing up to 5,000 rows without the need to establish a time series sync.
To learn more about manual entry transform tables and the workflows they support, see Quiver's transform table documentation.
Date published: 2025-01-16
Available today on all enrollments, the Resource Management application now offers new options that give you more granular control over a Project's resource queue assignments. In the past, GPU-enabled Projects that were assigned to vGPU resource queues used their enrollment's default resource queue for vCPU workloads. Now, you can configure these types of Projects to use a specified vCPU resource queue. This means that Projects can be assigned to up to two different resource queues: a vCPU queue and, optionally, a vGPU resource queue, giving you more flexibility over your resources.
Additionally, you can now support critical workflows that require dedicated compute resources by designating one existing branch per Project as the "priority" branch for that Project. Workloads on a priority branch use the resource queues assigned to that branch, while workloads on other branches continue to use the resource queues assigned to the Project. Like Projects, each priority branch is assigned to a vCPU resource queue and can optionally be assigned to a vGPU resource queue.
Enable priority branch and manage which vCPU and vGPU resource queues the priority branch uses.
You can now also view a Project's resource queue assignments in the platform filesystem sidebar. For example, the following screenshot demonstrates a Project with a priority branch named production, with distinct vCPU and vGPU resource queue assignments between the priority branch and the Project itself.
View a Project's resource queue information at a glance from the platform filesystem sidebar.
For more information, visit the priority branch documentation.
Date published: 2025-01-14
We are excited to announce that you can now develop Python transforms in VS Code workspaces, allowing you to use the Visual Studio Code IDE to seamlessly develop your Python transforms with Palantir workflows. This beta feature is available to all users who have access to VS Code workspaces (also in beta).
The VS Code workspace interface, featuring improved developer tools.
Python transforms in VS Code workspaces unlock significant improvements to tooling and capabilities that meaningfully enhance developer workflows. You can expect the following updates to your developer experience:
To start using Python transforms in VS Code workspaces, open your transforms repository in the Code Repositories application. From here, select Open in VS Code from the top right corner of the screen, which will take you to a VS Code workspace:
Open your Python transforms code repository in a VS Code workspace by selecting Open in VS Code.
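Inside the workspace, you edit the same transforms code you would author in Code Repositories. Below is a minimal, illustrative Python transform of the kind you might develop there; the dataset paths and column names are placeholders rather than part of any specific workflow.

```python
# Illustrative only: a minimal Python transform edited in a VS Code workspace.
# Dataset paths and column names are placeholders.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Project/example/cleaned_flights"),     # placeholder output dataset
    flights=Input("/Project/example/raw_flights"),  # placeholder input dataset
)
def clean_flights(flights):
    # Keep completed flights and flag arrivals delayed by more than 15 minutes.
    return (
        flights
        .filter(F.col("status") == "completed")
        .withColumn("is_delayed", F.col("arrival_delay_minutes") > 15)
    )
```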
VS Code workspaces are in a beta state and available by default in all Organizations where Code Workspaces is enabled. If you do not see the option to open your code repository in a VS Code workspace, contact Palantir Support to learn how to enable access.
Python transforms in VS Code workspaces is a new feature; some transforms preview components are not yet supported. We are actively working to add support for all missing features. For more information, review our documentation.