Announcements

REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.

Share your thoughts about these announcements in our Developer Community Forum ↗.


Leverage LLM-powered assistants in custom applications

Date published: 2024-12-19

AIP Agents can now be deployed to Ontology SDK (beta) and third-party applications using Palantir platform APIs. AIP Agents are interactive assistants built in Agent Studio that can be equipped with enterprise-specific information and tools. This feature simplifies integration and enhances the capabilities available to developers, underscoring our commitment to building a robust development ecosystem.

AIP Agents in third-party applications

AIP Agents can now be deployed to third-party applications using Palantir platform APIs, allowing developers to seamlessly integrate LLM-powered assistants and provide improved support to users. These platform APIs enable users to programmatically create, update, and list conversation sessions with AIP Agents, allowing for embedding within custom application contexts. For more information on using platform APIs and a full list of supported capabilities, refer to the API documentation.
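As a rough sketch of how this might look from a third-party application (the host, route, and payload shapes below are illustrative assumptions, not the documented API; refer to the API documentation for the real endpoints):

```typescript
// Hypothetical sketch of calling the AIP Agent session APIs from a
// third-party app. The host, route, and payload shapes are assumptions;
// consult the platform API documentation for the actual endpoints.

const FOUNDRY_HOST = "https://your-stack.palantirfoundry.com"; // assumed host

// Build the URL for an agent's conversation sessions (illustrative route).
function sessionUrl(agentRid: string, sessionId?: string): string {
  const base = `${FOUNDRY_HOST}/api/v2/aipAgents/agents/${agentRid}/sessions`;
  return sessionId ? `${base}/${sessionId}` : base;
}

// Create a new conversation session with an agent (illustrative call).
async function createSession(agentRid: string, token: string): Promise<unknown> {
  const res = await fetch(sessionUrl(agentRid), {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({}),
  });
  if (!res.ok) throw new Error(`Failed to create session: ${res.status}`);
  return res.json();
}
```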

AIP Agents in Ontology SDK applications [Beta]

AIP Agents can be used in Ontology SDK applications through Developer Console support for platform API and SDK resources. To use an AIP Agent in an Ontology SDK application, add the necessary Ontology resources and the Project containing your agent to the application's Platform SDK resources.

The option to add a Project for an AIP Agent in the Developer Console Platform SDK tab under Resources.



Then, enable AIP Agents API operations for your application, which grants permission to read, create, and update AIP Agent interactions.

The option to enable the AIP Agents API operations in the Client allowed operations table in your Developer Console application.


Leverage these new platform APIs in Ontology SDK or third-party applications to provide targeted, real-time support with LLM-powered assistants. To get started with AIP Agents in custom applications, refer to the AIP Agent Studio and platform API documentation.


Semantic search KNN Join now available in Pipeline Builder

Date published: 2024-12-19

KNN Join is now available in Pipeline Builder across all enrollments. This powerful feature allows you to find the K-nearest rows from the right dataset for each row in the left dataset, making your data-merging tasks more efficient and accurate.

What is KNN Join?

KNN (K-Nearest Neighbors) is a method that helps you match and combine rows from two datasets based on their similarity. Imagine you have two lists of items, and you want to find the items in one list that are most similar to the items in the other list. KNN Join does this by comparing each item from one list with all the items in the other list and finding the closest matches.
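The matching described above can be sketched in a few lines (an illustrative sketch of what a KNN join computes; Pipeline Builder's actual implementation and similarity metric may differ):

```typescript
// Illustrative KNN join: for each left row, find the k most similar right
// rows by cosine similarity over embedding vectors.

type Row = { id: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// For each left row, return the ids of the k right rows with the highest similarity.
function knnJoin(left: Row[], right: Row[], k: number): Array<{ left: string; matches: string[] }> {
  return left.map((l) => ({
    left: l.id,
    matches: right
      .map((r) => ({ id: r.id, score: cosine(l.embedding, r.embedding) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, k)
      .map((m) => m.id),
  }));
}
```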

To use the feature, select the Join board, then select KNN join as the Join type. The following screenshot shows an example of a KNN join type selection:

A screenshot of the KNN join selection, with an example.


Then, specify the parameters for the KNN join. In the following example, for every movie in the left dataset, we want to find the three nearest movies based on the similarity of their keywords.


The KNN board configured to get the three closest movies based on the similarities of the keywords for each movie.

The KNN join feature was implemented to enhance your data processing capabilities.

For more information, visit our KNN join documentation.


Debug TypeScript Functions in Code Repositories during Live Preview [GA]

Date published: 2024-12-17

Code Repositories now enables you to debug and iterate on TypeScript Functions during Live Preview. This feature is generally available across Foundry enrollments the week of December 16.

To get started, create a TypeScript Function in Code Repositories and place a breakpoint by hovering your cursor over the line number where you want to begin the debugging process. Next, select the red circle that appears to Set a breakpoint.

Set a breakpoint in your TypeScript Function to inform Code Repositories where the debugging process will begin.


Select Run and Debug within the Functions panel to start the debugging session.

Select Run and debug to begin the debugging process and launch the Debugger interface.


Code Repositories will launch the Debugger interface, where you can step over, into, or out of individual Frames. To end the debugging session and return to your Function's Live Preview, select the red square icon in the left panel to Stop execution.

An in-progress debugging session within Code Repositories.


Review the Ontology building documentation to learn more about debugging TypeScript Functions.


Introducing time series alert automations [GA]

Date published: 2024-12-10

We are excited to share that you can now configure automations to send alerts, or "events", when time series data meets specific criteria. With time series alerting, you can maintain awareness of critical workflows in your organization, such as learning when machine pressure exceeds a set limit or when production outputs fall below an expected result.

To get started with time series alerts, you will first identify periods of interest within time series data using the Time series search card in Quiver. Then, the logic behind the time series search is saved and replicated across objects of the same type using Automate. When your configured alerting automation runs, any newly identified time intervals are output as objects in a pre-configured alert object type.

The time series alerting feature will be generally available the week of December 16, 2024.

The time series condition logic in the Automate application.


Why should I use Quiver and Automate for time series alerting?

The first time your automation runs, it will check the entire time series for alerts. From then on, it will only check for new data. In other words, this product runs incrementally by default. This is different from both Foundry Rules and FoundryTS, which both check the entire time series for alerts every time they run. For this reason, using Quiver with Automate for time series alerting will be more performant and cheaper than those previous solutions.
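The incremental behavior described above can be pictured with a simple watermark pattern (a simplified illustration of the idea, not Automate's internal implementation):

```typescript
// Simplified illustration of incremental alert evaluation: only points newer
// than the last processed timestamp (the "watermark") are checked on each run.

type Point = { timestamp: number; value: number };

function findAlerts(
  series: Point[],
  threshold: number,
  watermark: number
): { alerts: Point[]; newWatermark: number } {
  // Only consider data that arrived after the previous run's watermark.
  const fresh = series.filter((p) => p.timestamp > watermark);
  const alerts = fresh.filter((p) => p.value > threshold);
  const newWatermark = fresh.length
    ? Math.max(...fresh.map((p) => p.timestamp))
    : watermark;
  return { alerts, newWatermark };
}
```

A full-rescan approach would instead filter the entire series on every run, which is why the incremental approach is cheaper on long histories.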

What's next for time series alerting?

Time series alerting automations are not a real-time solution. Although search logic can be automated on time series backed by streams, the automations will not run directly on top of the streaming data but rather on top of the archive dataset. This incurs at least 10 minutes of additional latency since archive jobs run every 10 minutes. Our team will be focused on building a real-time alerting solution in the coming months.

Learn more about time series alerting automations in our documentation.


Access request workflow improvements are now available

Date published: 2024-12-10

Initiating access requests is easier than ever, with group attributes and descriptions now visible in the request workflow.

Additionally, access reviewers will benefit from enhanced search capabilities to identify tasks, more information at a glance without needing to open a request, and the ability to easily revert accidentally approved subtasks.

Key request workflow improvements

When Projects in the platform have many groups, it can be difficult to know which group to select when requesting access. The following new features allow you to configure a better request experience and gain more transparency into group details:

  • Group descriptions, if set, now show up in the request flow below the name of the group.
  • Attributes can now be configured for groups, allowing you to filter groups available for selection. For example, you can add a Role attribute to a group so you can filter down to groups with certain permissions. Read more about how to set group attributes in our documentation.
  • Email notifications are now automatically sent to request creators. These notifications are optional, and you can choose to opt out of them if desired.

The improved Request access dialogue, featuring group descriptions and filtering support.


Key review workflow improvements

Access request reviewers now benefit from the following legibility and usability improvements that help to efficiently process requests:

  • It is now much easier to reverse mistakenly approved access requests. If you are a request approver, you can use the new Revert option on each subtask.

    The Revert option available on an approved Marking access request.


  • The inbox view of the Approvals application now allows reviewers to view a request preview and understand key details without having to open the request overview page.

    Two request previews for a previous and new access request. Details include the request creator, request date/time, justification, and related Project.


  • Request reviewers no longer need to navigate to platform settings to view a user's groups. Instead, reviewers can now view the groups referenced in the request along with the groups to which the requesting user belongs.

    An open access request, with a pop-up window showing the user's current group membership.


  • Previously, reviewers could not work on requests that had already been actioned by another reviewer. Now, all eligible reviewers can override previous reviews, make edits, and approve a request.

    The action buttons at the bottom of an access request, with options to Approve eligible tasks and Override and approve tasks.


Finally, several additional small improvements were made to the search and filter functionality:

  • You can now search by request title and requested groups.

  • Selected filters now persist within and between sessions.

  • Redundant statuses and quick filters were removed.

    The updated view of the Approvals application overview page, with updated search and filter options and visibility.


Learn more about approvals and access requests.

Configure any driver property to a JDBC-backed source using Data Connection [GA]

Date published: 2024-12-12

You can connect Foundry to a variety of relational databases and data warehouses, such as Snowflake and Salesforce, through custom JDBC sources configured in the Data Connection application. Now generally available across Foundry enrollments, Data Connection's JDBC properties panel allows you to configure any property on an underlying Palantir-provided JDBC driver when you create a source, giving you fine-grained control over the driver's behavior.

You can add driver properties to Palantir-provided JDBC sources when making connection configurations in Data Connection.


When creating a new source in Data Connection, you can create additional JDBC properties in the Connection details window by selecting More options to open the JDBC properties panel. Additionally, you can choose to encrypt select JDBC properties you provide to the driver by creating a New encrypted property.

Data Connection enables you to create additional JDBC properties beyond those automatically fed to the driver.


You can reference Data Connection's documentation for a complete list of the available Palantir-provided drivers for JDBC sources.


Standardize Projects using Project templates [GA]

Date published: 2024-12-12

Project templates standardize the creation and configuration of Projects within a space.

Governance frameworks can be supported through the configuration of platform security primitives like roles, groups, markings, and project constraints. These configurations can be encoded and mandated for all new Projects through Project templates, allowing organizations to set governance guardrails on created Projects.

Space owners can create, edit, and delete Project templates. Project templates can be administered in the Control Panel Spaces extension on a per-space level.

Set up a new project template using the Create project template wizard.


Learn more about managing Project templates on our documentation page.


Accelerate your use case building with Examples, now with the ability to bring your own data and build walkthroughs

Date published: 2024-12-12

Examples, a new in-platform component of Build with AIP, is a curated library of reference examples, tutorials, and building blocks designed to turbocharge your workflow building.

To access Examples, type "Examples" into the application search on the workspace navigation bar, or navigate to Support > Explore Examples. Additionally, you can explore application-specific examples directly from application splash pages.

Two new groundbreaking features for Examples are now available on all enrollments:

  • Bring your own data: You can now easily install complex workflows using your own data; just drag and drop a file from your desktop (or upload an existing resource in Foundry) on several of our most promising examples. Look for examples with the “bring your own data” tag to try this feature and install production-ready workflows.

Look out for the "Bring your own data" tag on the example.


Now you can install an example with your own data to drive a production-ready workflow in minutes.


  • Walkthroughs: Get a step-by-step walkthrough of exactly how a workflow was made. AIP Assist walkthroughs automatically guide you from resource to resource, making it easier to understand your workflow.

Get step-by-step explanations of the resources in your example with the walkthrough feature in Examples.


Additionally, the Examples application covers installable examples for almost every part of Foundry, not just AIP. Explore specific reference examples for each Foundry application on the respective homepages of the application.

For more on Examples, visit our documentation.


Create annotations on documents using Workshop's PDF Viewer widget

Date published: 2024-12-10

The PDF Viewer widget in Workshop now supports more complex document tagging workflows by enabling users to create, display, and interact with text and area annotations overlaid on PDF files.

Use PDF Viewer to display ontology objects as annotations with customizable colors and interactions.


You can configure document annotation display and interaction settings through the Annotation options menu of the PDF Viewer widget's Widget setup panel. If you want to display and customize existing annotations with features like custom color highlighting and events on selection, then you can add an annotation layer through the Display existing annotations menu.

You can configure annotation display and creation settings within Annotation options.


Additionally, you can use Create annotations (via Actions) to configure actions which run on new text or area annotations, enabling you to add a document annotation through the Add annotation: pop-up that appears when you highlight text or an area.

You can configure annotation actions which appear when you select a document's text or a specific area.


For more information on how to configure existing or create new annotations in Workshop, review PDF Viewer's documentation.


Announcing AIP rate limits control for Enrollment Admins at the Project level

Date published: 2024-12-10

Enrollment administrators can navigate to the AIP rate limits page in the Resource Management application to configure, per model, the maximum percentage of TPM (tokens per minute) and RPM (requests per minute) that all resources within a given Project, combined, can use in any given minute.

This means you have the flexibility to maximize LLM utilization for ambitious production use cases in AIP, while limiting or preventing experimental Projects from saturating the entire enrollment capacity.

By default, all Projects are given a specific limit to operate within. An admin can create additional Project limits, define which Projects are included in each one, and set what percentage of enrollment capacity each can use.
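As a worked illustration with made-up numbers: if a model's enrollment capacity is 1,000,000 TPM and a Project limit is set to 20%, all resources in that Project can together consume at most 200,000 tokens per minute:

```typescript
// Illustrative arithmetic for Project-level AIP rate limits.
// The capacity figure is hypothetical; actual enrollment capacity varies.

function projectLimit(enrollmentCapacity: number, percent: number): number {
  return Math.floor(enrollmentCapacity * (percent / 100));
}

const tpmCap = projectLimit(1_000_000, 20); // 200,000 tokens per minute
```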

Read more in our new LLM Capacity Management documentation.

Navigate to Resource Management to the AIP rate limits page to add and manage Project limits.



Pseudocode rendering now available in Pipeline Builder

Date published: 2024-12-10

Starting today, you can use pseudocode transform rendering to improve your pipeline's readability.

What is pseudocode?

Pseudocode is a simplified way of writing down the steps of an algorithm or process in plain language. It looks a bit like code but does not adhere to a specific programming language's syntax, making it easier to read and understand.

Pseudocode comes with the following benefits:

  • Improved readability: You now have the option to display your transforms in a cleaner pseudocode format, making the logic easier to follow for those who are familiar with or prefer this representation.
  • Automatic screen fitting: The pseudocode will automatically adjust to fit your screen, so you will not have to worry about endless scrolling.
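For example, a transform path that filters and renames columns might render as pseudocode along these lines (a hypothetical illustration, not exact Pipeline Builder output):

```
movies_filtered = filter(movies, release_year >= 2000)
movies_renamed  = rename_column(movies_filtered, title -> movie_title)
output          = select(movies_renamed, movie_title, release_year, keywords)
```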

New pseudocode view option on the Pipeline Builder graph showcasing the automatic layout adjustment when the screen width is adjusted.


You can turn this preference on in Settings under User preferences or in any transform path using the </> icon.

Select the User preferences option to access your view preference. Toggle pseudocode style on for collapsed boards using the </> icon.

The two images below show the two rendering options. The original format displays the current view of transforms in a collapsed format, as follows:


This image shows the current view of transforms in a collapsed board rendering format. This format may be preferred by users who are accustomed to this layout and find it straightforward.

Conversely, the new pseudocode format will render like in the following image:

This image showcases the new pseudocode transform rendering. The transforms are now displayed in a clean, easy-to-read format, which can be particularly beneficial for users familiar with coding.


Note that enabling pseudocode does not provide a way to write code in Pipeline Builder and will not affect diff views, joins, unions, or LLM nodes. The aim of this functionality is to provide a more complete summary of what each transform is doing, allowing for easier skimming and comprehension.

Review the documentation on Transform views.


LLM Capacity is now increased if both Direct OpenAI and Azure OpenAI are enabled

Date published: 2024-12-05

Starting today, if you have both the Azure OpenAI and Direct OpenAI model families enabled, your enrollment will have 2x the amount of TPM (tokens per minute) and RPM (requests per minute) capacity for GPT4o and GPT4o-Mini. Enabling Direct OpenAI also guarantees increased stability and early access to new models.

Be aware that Direct OpenAI currently lacks support for geo-restriction. As a result, only non-geo-restricted enrollments can take advantage of the enhanced capacity.

Enrollment Administrators can enable Direct OpenAI or other model families under the Model enablement tab within the AIP settings extension of Control Panel.

You can enable specific Model families from within Control Panel's AIP settings.



Create custom widgets that can interact and bidirectionally communicate with Workshop

Date published: 2024-12-05

Support for custom widgets in Workshop, powered by iframing OSDK-built or external web applications, is now available via use of a plugin ↗ that facilitates bidirectional communication with Workshop. Features of bidirectional communication include:

  • Defining the configuration of fields and events for your custom widget to access and interact with
  • Reading Workshop variables from within your custom widget
  • Writing to Workshop variables from within your custom widget
  • Executing Workshop events from within your custom widget

Define the configuration fields in terms of variables and events

You can define the shape of your custom widget's own configuration panel in terms of variables and events. Taking the example of an OSDK app with an image carousel of rental house objects, the variables and events are defined in a configuration consisting of a corresponding list of fields, which are then rendered in Workshop with the associated labels.


You can load your application URL into Workshop, which will then load the list of fields corresponding to those in the application as the configuration options for the widget. In the animation above, the configuration options for a custom carousel application include a string field Carousel Title, an object set field Carousel Objects, an object set field Selected Carousel Object, and event field Carousel OnClick Event.

Read from Workshop variables

A custom app displayed in an iframe will continue to receive updates from Workshop variables the moment they change.

Changing the values of the configured variables in Workshop immediately alerts the sample OSDK app of the change to allow it to re-render, as shown in this animation where the title for the carousel app is being dynamically modified.


Write to Workshop variables

Custom apps can write to Workshop variables. Using the API provided in the plugin package ↗, use the IDs of the config variable fields that you defined to set variable values in Workshop. For example, for the config field selectedCarouselObject, you can use the following methods in your iframed application to write to the Workshop variable populating selectedCarouselObject:

workshopContext.selectedCarouselObject.setLoadedValue(val: OntologyObject[] | undefined);
workshopContext.selectedCarouselObject.setLoading();
workshopContext.selectedCarouselObject.setReloading(val: OntologyObject[] | undefined);
workshopContext.selectedCarouselObject.setFailed(errorMessage: string);
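For example, a click handler in the iframed app might move the variable through its loading states as sketched below. In a real app, the workshopContext object is provided by the Workshop iframe plugin; the minimal mock here only makes the sketch self-contained:

```typescript
// Sketch of the loading-state pattern when writing a Workshop variable.
// In a real app, `workshopContext` is provided by the Workshop iframe
// plugin; this minimal mock only makes the sketch self-contained.

type OntologyObject = { rid: string };

interface VariableField {
  state: string;
  setLoading(): void;
  setLoadedValue(val: OntologyObject[] | undefined): void;
  setFailed(errorMessage: string): void;
}

const workshopContext: { selectedCarouselObject: VariableField } = {
  selectedCarouselObject: {
    state: "idle",
    setLoading() { this.state = "loading"; },
    setLoadedValue(val) { this.state = `loaded:${val?.length ?? 0}`; },
    setFailed(msg) { this.state = `failed:${msg}`; },
  },
};

// When the user clicks a carousel slide, publish the object to Workshop.
function onSlideClick(obj: OntologyObject): void {
  workshopContext.selectedCarouselObject.setLoading();
  try {
    workshopContext.selectedCarouselObject.setLoadedValue([obj]);
  } catch (e) {
    workshopContext.selectedCarouselObject.setFailed(String(e));
  }
}
```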

When a user interacts with the OSDK or custom app, Workshop will update accordingly and render the appropriate components that use the variable value to reflect the new value. In the gif above, when the carousel is clicked through, the selected carousel object field in the configuration is updated with the object corresponding to the current slide of the carousel.


Execute Workshop event

Custom apps can execute Workshop events. Using the API provided in the plugin package ↗, use the IDs of the config event fields that you defined to execute an event in Workshop. In the example below, where the config field is carouselOnClickEvent, you can use the following method in your iframed application to execute the Workshop event(s) configured under that field:

workshopContext.carouselOnClickEvent.executeEvent(mouseEvent?: React.MouseEvent);

You can configure the events that run when the trigger fires in the OSDK or custom app. In the animation above, a user clicking an image in the application's carousel opens an overlay containing more details on the object corresponding to the clicked carousel slide.


How to set up your custom widget

To get started, review the Custom widget via iframe documentation for details on implementation. Then, we recommend you visit the NPM plugin documentation ↗ to configure the plugin with your React or OSDK app.

Once you have the plugin configured, continue to configure your widget via the steps provided on the Palantir documentation.


Improved access to and usability of AIP in Quiver [GA]

Date published: 2024-12-03

Access to AIP in Quiver is generally available the week of December 2.

Since July 2023, users have been able to leverage the power of AIP from their Quiver canvases to explore their data with ease by asking AIP to generate new or configure existing cards via natural language prompts. Quiver's two primary large language model-driven capabilities - AIP Generate and AIP Configure - are now accessible beneath a card on most AIP-enabled Foundry enrollments, in addition to their existing availability in the top ribbon of Quiver's workspace and on a selected card.

AIP Generate creates analysis from a user prompt to provide rapid insight on data available in the Ontology, offering another method for object set analysis in addition to capabilities native to Object Explorer and Contour. AIP Configure applies user prompts to update card configurations and tailor analytical outputs to a user's needs regardless of their familiarity with Quiver's visualization settings.

Quiver cards that can leverage AIP display an AIP logo to their right in the card search bar.

Quiver cards backed by AIP's analytical capabilities display an AIP logo to their right.


Access AIP Generate from a Quiver card

After you select an object set to analyze in Quiver, you will see an input field with an AIP logo beneath the card selection that instructs you to Enter a query to continue the exploration. Write an analytical action you would like to perform in the text input box, and AIP Generate will suggest options using a Palantir-provided large language model (LLM).

Users can access AIP Generate beneath a Quiver card to create a new analysis.


Select an option or press the Enter key to instruct AIP to add the card to the canvas and make all configurations on your behalf. You can also reconfigure your prompt in the text input box to produce alternative options.

If AIP believes your query requires multiple steps, it will highlight this as a + Follow Up. Hovering your cursor over the + Follow Up tag shows the next prompt which AIP will automatically apply.

AIP generates follow up actions if it believes your query requires multiple steps.


Once you select + Follow Up, AIP Generate enters Chain Of Thought mode, showing its previous steps along with suggested next actions. You can select Reset to enter your own prompt as a reconfiguration of AIP Generate's initial follow up.

AIP Generate enters Chain Of Thought mode for multi-step queries.


Access AIP Configure from a newly-created analysis

To configure an existing card, hover your cursor over the card and select the Modify button before entering your configuration prompt into the text field to the right of the AIP icon. Verify that AIP's suggestion answers your prompt before selecting the proposed modification.

Users can access AIP Configure beneath an analysis card.


AIP must be enabled on your Foundry enrollment to access these features in Quiver.

You can reference additional AIP Generate and AIP Configure details within Quiver's existing documentation.


Leverage custom LLM-powered assistants in Carbon workspaces

Date published: 2024-12-03

Starting the week of December 9, AIP Assist Agents will be generally available in Carbon workspaces. AIP Assist Agents are LLM-powered assistants that use custom sources as their only search context, allowing users to access immediate, interactive support tailored to their specific needs. Carbon workspace managers can now configure and deploy AIP Assist Agents in Agent Studio and make them available in Carbon workspaces, providing access to support in targeted, context relevant areas.

Carbon workspace managers can use this feature by navigating to workspace settings, enabling AIP Assist, and selecting one or more agents for end-users to access.

The Carbon workspace settings, now with the option to enable AIP Assist and select agents.

Each agent can provide expertise in different areas, allowing you the flexibility to customize support and offer expert interactive assistance when and where it is needed most.

A sample Carbon workspace with suggested questions for an AIP Assist Agent.

Note that users require permission on the selected agents to access this feature.

Learn more about AIP Assist Agents and Carbon workspaces.


New Slack and webhook integrations for monitoring views

Date published: 2024-12-03

New integrations that allow users to receive monitoring view alerts through Slack and webhooks are now available, in addition to existing support for PagerDuty. This allows users to integrate monitoring views with Slack to receive alerts in specified channels, or with other external systems through webhooks. Users can now monitor their resources more effectively, ensuring prompt alerting through preferred channels and enabling seamless integration with existing workflows and tools.

Our new Slack integration allows users to select a Slack source from Data Connection, eliminating the need to manually enter authentication credentials or egress policies to configure Slack alerts for monitoring views.

The Slack integration configuration dialog, showing the selected Slack source, channels, and severity.

Users can create webhook integrations for monitoring views by selecting a webhook, choosing the message parameter on that webhook, and selecting a severity level.

The webhook integration configuration dialog, showing the webhook, message parameter, and severity.

With these new integrations, users can streamline their incident response processes and improve overall system reliability, with the flexibility to choose the notification channel that is most convenient and effective.

Note that all integrations are configured against a given severity level. Only alerts matching that severity will trigger associated integrations.
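
The severity-matching rule above can be sketched as a small routing function. The `Integration` shape and severity names here are illustrative assumptions for the sketch, not the platform's actual data model:

```python
from dataclasses import dataclass


@dataclass
class Integration:
    kind: str      # e.g. "slack", "webhook", or "pagerduty"
    severity: str  # the severity level this integration was configured against


def matching_integrations(alert_severity: str, integrations: list[Integration]) -> list[Integration]:
    """Return only the integrations configured for the alert's severity.

    An integration fires only when the alert's severity matches the severity
    it was configured against (hypothetical severity labels shown in tests).
    """
    return [i for i in integrations if i.severity == alert_severity]
```

In other words, a Slack integration configured for a "HIGH" severity would not be triggered by a "LOW" alert; you would configure a second integration for that level.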

Learn more about configuring Slack, webhook, and PagerDuty integrations.


Significant performance and accuracy boost for AIP Assist introduced

Date published: 2024-12-03

Following feedback from our first Developer Conference for technical leaders and builders held November 13-14, we are thrilled to share a series of significant improvements to AIP Assist that will enhance your experience and productivity.

Key updates implemented to boost AIP Assist's performance

  • Reduced hallucinations and improved tool usage: AIP Assist now provides more accurate answers with fewer instances of irrelevant or incorrect information. You can expect clearer responses and better tool utilization when interacting with the assistant.
  • More detailed and thorough responses: Enjoy more comprehensive answers to your queries. AIP Assist delivers in-depth information and explanations, ensuring you have the details you need.
  • Improved documentation search functionality for Platform Assist and Developer Assist: Search capabilities have been enhanced for both Platform and Developer Assist modes. Finding the information you need is now faster and more accurate than ever before.

Improved knowledge of Platform API Documentation

AIP Assist will now provide in-depth explanations and examples directly from platform API reference documentation to help developers make the most of our features and APIs. You can expect more relevant and precise assistance in your development tasks.

Anthropic Claude 3.5 Sonnet enabled for Developer Assist

We are excited to announce that Claude 3.5 Sonnet is now the preferred model for Developer Assist. This upgrade brings a significant increase in accuracy for code-related assistance. Whether you are coding, debugging, or exploring new programming concepts, Developer Assist is now more precise and helpful than ever.

Contact your Palantir administrator to enable Claude 3.5 where regionally available.

New information option with model and query understanding details

You can now see which model AIP Assist used and how it understood your query by using the information icon under each answer provided.

Open the context menu using the information icon to learn how AIP Assist provided your response.


Use Python Functions in Pipeline Builder [Beta]

Date published: 2024-12-03

Pipeline Builder now supports using Python Functions in your pipelines. This new feature allows you to seamlessly integrate custom Python logic, including powerful Python libraries, into your batch and streaming pipelines.

To get started, open the Python Functions template in Code Repositories and write your Python Function.
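
As a sketch, the core of a function written in the template could look like the following. The commented-out decorator and import path are assumptions about the Functions framework; check the template's generated code for the exact API. The logic itself is plain Python:

```python
# In the Code Repositories template, this would be registered with the
# Functions framework decorator (import path shown is an assumption):
#
#   from functions.api import function
#
#   @function
def clean_product_name(name: str) -> str:
    """Normalize a product name: collapse whitespace and title-case it."""
    return " ".join(name.split()).title()
```

Once tagged and released, a function like this could then be applied to a string column in a Pipeline Builder transform.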

A Python Function created in Code Repositories.

Once you have tagged a release of your function, you can import and use it in Pipeline Builder.

The output of applying a Python Function to a Manually Entered Table in Pipeline Builder preview.

For more information, review the Use a Python function in Pipeline Builder documentation.


Use the new Send feedback option on our documentation website to help us improve our content

Date published: 2024-12-03

You can now share your thoughts on our public documentation website ↗ through a new Send feedback option at the bottom of all pages of the documentation. We are eager to hear your suggestions, from topics you would like us to expand upon or clarify, to resources we can add to make our documentation (and, by extension, AIP Assist) more effective in helping you leverage the Palantir platform.

Your insights are extremely valuable to us as we endeavor to make your experience with the Palantir platform as smooth and efficient as possible. We look forward to incorporating your feedback and updating our documentation to better support your success.

Note: You do not need to register to submit your feedback, and your participation is voluntary. For more information on how Palantir processes personal information, review our privacy statement.

You can find the Send feedback option at the bottom of all pages of the documentation.


Introducing serverless Python function support [Beta]

Date published: 2024-12-03

Serverless support for Python functions in Foundry is now entering beta. As with TypeScript, you can now run Python functions with no additional setup once you have tagged and released a function repository. Python functions using the serverless backend will be available in Workshop, Actions, and any other place where TypeScript functions can currently be used. This new high-performance backend allows functions to be run with no setup or user-managed resources.

Serverless Python is now available in a beta state and is being enabled on a rolling basis. You can verify whether the serverless option is enabled by navigating to your function in Ontology Manager as in the screenshot below.

Select between serverless and deployed function support for your existing function in Ontology Manager.

If the serverless option is not available on your enrollment, contact Palantir Support for enablement.

Once enabled, all new function repositories will use the serverless backend by default after you have tagged and released. No other configuration or setup is necessary. You can learn more about Python functions in Palantir's Python documentation. You can also switch existing Python functions using the "deployed" backend to serverless within Ontology Manager.

We generally recommend serverless functions, and they are the default where enabled. To help you decide which backend to use, review the guidance below.

Deployed functions have the following capabilities not available in serverless functions:

  • Support for external sources.
  • Local caching is possible if the function is tolerant to restarts, given the long-lived nature of deployed functions.

Deployed functions have some limitations that do not apply to serverless execution:

  • Deployed functions can only run a single version of a function at a time; serverless functions can execute different versions of a single function on demand, making upgrades safer.
  • Deployed functions incur costs as long as the deployment is running, whereas serverless functions only incur costs when executed.
  • Deployed functions require more upfront setup and long-term management than serverless functions, whose infrastructure is managed automatically.
  • Deployed functions are long-lived, so state can be inadvertently shared between executions; serverless execution environments are created fresh for each execution, so no such state can leak.
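
The trade-offs above can be condensed into a simple decision sketch. The criteria mirror the bullets; the function itself is purely illustrative:

```python
def choose_backend(needs_external_sources: bool, needs_long_lived_cache: bool) -> str:
    """Pick a function backend from the capability differences listed above.

    Deployed functions are required for external sources, and only their
    long-lived processes make local caching viable; otherwise serverless
    avoids always-on costs and manual infrastructure management.
    """
    if needs_external_sources or needs_long_lived_cache:
        return "deployed"
    return "serverless"
```

For most functions with neither requirement, serverless is the simpler and cheaper choice, matching the general recommendation above.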

To understand the difference in capabilities in detail, review the following relevant documentation pages: