REMINDER: You can now sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.
Date published: 2024-04-30
We are excited to announce that Build with AIP will become available in a beta state in early May. The Build with AIP application provides a suite of pre-built products that can be used as reference examples, tutorials, or builder starter packs to adapt to your needs.
From the application, simply search for keywords, use cases, or capabilities to find a product that can accelerate your workflow building. Some of the workflows covered by products include:
And more! New AIP products are under active development and will be released as soon as they become available.
Explore the Build with AIP application.
To learn more about a product, select the product to see a detailed explanation of the value, implementation design, and a break-down of individual resources contained within. Users can install any product to a destination Project of their choice, and monitor the progress of the installation. Previously-installed products can always be accessed from within Build with AIP.
Inside the AIP product showcase.
Additional products will be added continuously based on recommended and requested implementation patterns.
The Build with AIP application is on an accelerated release track and will soon be available for all existing customers. Platform administrators opting to restrict access to this feature can do so via Control Panel through Application Access settings.
All products are backed by Foundry Marketplace. For more details, review the documentation.
Date published: 2024-04-30
HyperAuto is Palantir Foundry’s automation suite for the integration of ERP and CRM systems, providing a point-and-click workflow to dynamically generate valuable, usable outputs (datasets or Ontologies) from the source’s raw data. This feature is now available across all enrollments as of this week.
This new major version introduces a significantly updated user experience as well as increased performance and stability for automated SAP-related ingestion and data transformation workflows.
To create your first HyperAuto pipeline, see Getting Started.
Overview of a live HyperAuto pipeline.
Since the previous beta announcement, the following capabilities are now generally available:
For more information on HyperAuto V2, review the documentation.
Date published: 2024-04-30
Developer Console now provides the ability to host static websites. This addition allows developers to employ Foundry for hosting and serving their websites without the need for external web hosting infrastructure. Static websites consist of a fixed set of files (HTML, CSS, JavaScript) that are downloaded to the user's browser and displayed on the client side. Numerous web application development frameworks, such as React, can be used to create static websites.
Static website support is available for enrollments on domains managed by Palantir (example: enrollment.palantirfoundry.com or geographical equivalent). Feature support for enrollments on customer-managed domains is currently in active development.
Website hosting menu.
To set up and configure website hosting, you will need to:
You can also configure access to the website and set advanced Content Security Policy (CSP) settings for your site.
Web hosting can be configured by any user with the Developer Console edit or owner role. Setting up web hosting includes creating a subdomain, which needs to be approved by the enrollment information officer.
Automate the static website deployment process using a command line tool or by integrating with existing CI/CD tools. Review the API documentation within your OSDK application for more information.
Deploying applications tutorial within the OSDK application documentation.
We are actively developing support for website hosting on customer-owned domains, as well as adding support for hosting your code on Foundry and running CI/CD directly from within Palantir Foundry.
To learn more about website hosting configuration, review the documentation.
Date published: 2024-04-30
We are excited to announce that you can now remove Markings on outputs directly in Pipeline Builder on all stacks. From within Pipeline Builder, it is now possible to:
To use the remove Markings functionality, ensure you meet the following prerequisites:
Remove marking permission for the specific Marking(s)

To remove a marking, navigate to the output you want to remove the marking from and select Configure Markings.
Markings option panel with cursor hovering over the remove from icon.
Once you have selected the marking(s) you wish to remove, these markings will be listed under the Markings removed section.
After the removals are applied, the marking will no longer appear on the output once your branch is merged and built on master.
Review the documentation to learn more about removing Markings directly in Pipeline Builder.
Date published: 2024-04-30
Beginning today, users can take full advantage of the existing Code Workspaces JupyterLab® integration to train, test, and publish models for consumption across the platform. Code Workspaces provides an alternative to the existing model training flow in Code Repositories by offering a more familiar, interactive environment for data scientists.
Models published through Code Workspaces are immediately available for consumption in downstream applications like Workshop, Slate, Python transforms, and more.
Models can now be published directly from within Code Workspaces. Just navigate to the Models tab, and select Create a model. Once you have selected your model's location, you will be asked to define a model alias, which is a human-readable identifier that maps to the resource identifier used by Palantir Foundry to identify your model.
Create new model option from within Code Workspaces.
After defining your model's alias, follow the step-by-step guide located on the left of the view to publish your model to Foundry. The guide covers installing the required packages, defining your model adapter, and finally publishing your model.
When a model is first created, a skeleton adapter file will be generated with the name [MODEL_ALIAS]_adapter.py. Review the Model adapter serialization documentation and Model adapter API documentation for more information on how to define a model adapter.
Once you have authored your adapter, you can use the provided model publishing snippet to publish your model to Foundry. Select the snippet to copy it, then paste it into a notebook cell, and execute it.
Models that publish successfully will display a preview output indicating that publishing was a success, and will display information about the model like the model's API and the saved model files.
Model version successful publish notice.
To learn more about developing and publishing models from Code Workspaces, review the Code Workspaces model training documentation. You can also review our full tutorial that walks you through setting up your project, building your model, and deploying your model for consumption in downstream applications.
Jupyter®, JupyterLab®, and the Jupyter® logos are trademarks or registered trademarks of NumFOCUS. All third-party trademarks (including logos and icons) referenced remain the property of their respective owners. No affiliation or endorsement is implied.
Date published: 2024-04-30
Preparation (Blacksmith) is being sunset and replaced by Pipeline Builder, the de facto tool for no-code pipelining and data cleaning. Preparation has been in maintenance mode for several years and will not support Marketplace integrations. While new enrollments will no longer be granted access, this change will not affect any existing installations of Preparation for now.
Date published: 2024-04-24
Model Catalog will be generally available at the end of April 2024, enabling users to view all Palantir-provided models and discover new models in AIP.
Model Catalog enables builders to:
Model Catalog has two main components: the Model Catalog home page and the Model entity page.
The Model Catalog homepage.
The Model Catalog homepage is a discovery and navigation interface, displaying all large language models available on your Foundry enrollment.
There are a few ways to filter models in the home page:
Each model has an entity page with three main sections:
The Model entity page.
For more information on Model Catalog, review the documentation.
Date published: 2024-04-24
In preparation for a major version upgrade (TypeScript 2.0), we are releasing minor version 1.1. Migrating your code to this new version will help you get ready for TypeScript 2.0.
To benefit from new Ontology primitives such as Ontology interfaces, real-time subscription, and for a more efficient way to work with data, we refactored the OSDK TypeScript semantics. This change will be labeled as major release v2.0, to be released before the end of 2024.
Prior to the release of version 2.0, we are aiming to minimize the impact for customers by first providing version 1.1 and 1.2 as a bridge to the upcoming upgrade.
To clarify, the next two upcoming versions are detailed below:
Version 1.1: This version maintains the current language syntax but will deprecate a few methods and properties that will no longer be supported.
Version 1.2: Released later this year, version 1.2 will mainly provide the same functionality as the 1.x version but will introduce the new language syntax.
The following table is a summary of deprecated features and their replacements, with more detailed info on the changes below:
| Deprecated | Replacement |
| --- | --- |
| `page` | `fetchPage`, `fetchPageWithErrors` |
| `__apiName` | `$apiName` |
| `__primaryKey` | `$primaryKey` |
| `__rid` | `$rid` |
| `all()` | `asyncIter()` |
| `bulkActions` | `batchActions` |
| `searchAround{linkApiName}` | `pivotTo(linkApiName)` |
We are deprecating the `page` function and adding the following two methods for convenience:

`fetchPageWithErrors`

This method behaves exactly as `page` did, and was renamed for consistency with the API naming conventions in the V2.0 to come.

Before:

```typescript
const result: Result<Employee[], ListObjectsError> = await client.ontology
  .objects.Employee.page();
if (isOk(result)) {
  const employees = result.value.data;
}
```

After:

```typescript
const result: Result<Employee[], ListObjectsError> = await client.ontology
  .objects.Employee.fetchPageWithErrors();
if (isOk(result)) {
  const employees = result.value.data;
}
```

`fetchPage`

This method returns the same data without wrapping it in a `Result<Value, Error>` object. Use it with a `try/catch` pattern, as it will throw errors where relevant.

Before:

```typescript
const result: Result<Employee[], ListObjectsError> = await client.ontology
  .objects.Employee.page();
if (isOk(result)) {
  const employees = result.value.data;
}
```

After:

```typescript
try {
  const result: Page<Employee> = await client.ontology
    .objects.Employee.fetchPage();
  const employees = result.data;
} catch (e) {
  console.error(e);
}
```
We are deprecating the `__apiName`, `__primaryKey`, and `__rid` properties and replacing them with the `$apiName`, `$primaryKey`, and `$rid` properties.
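The rename is purely mechanical: the same values move to dollar-prefixed names. As an illustration only (the interfaces below are hypothetical stand-ins for a loaded OSDK object, not the real generated types):

```typescript
// Hypothetical shapes for illustration: an object before and after the 1.1 rename.
interface EmployeeV1 {
  __apiName: string;
  __primaryKey: number;
}
interface EmployeeV11 {
  $apiName: string;
  $primaryKey: number;
}

// Migration is mechanical: identical values under the $-prefixed names.
function migrate(obj: EmployeeV1): EmployeeV11 {
  return { $apiName: obj.__apiName, $primaryKey: obj.__primaryKey };
}

const migrated = migrate({ __apiName: "Employee", __primaryKey: 42 });
console.log(migrated.$apiName, migrated.$primaryKey); // Employee 42
```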
We are deprecating the `all()` method and adding the following method instead for convenience:

`asyncIter()`

This returns an async iterator that allows you to iterate through all results of your object set.

Before:

```typescript
const result: Result<Employee[], ListObjectsError> = await client.ontology
  .objects.Employee.all();
if (isOk(result)) {
  const employees = result.value.data;
}
```

After:

```typescript
const employees: Employee[] = await Array.fromAsync(
  await client.ontology.objects.Employee.asyncIter());
```
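Note that `Array.fromAsync` is not yet available in all runtimes. Where it is missing, the same collection step can be written with a `for await` loop. A minimal sketch of the pattern, using a stand-in async generator in place of the OSDK object set:

```typescript
// Stand-in for an object set's asyncIter(); any AsyncIterable works the same way.
async function* employeeIter(): AsyncGenerator<{ name: string }> {
  yield { name: "Alice" };
  yield { name: "Bob" };
}

// Equivalent of Array.fromAsync(iterable) for runtimes that lack it.
async function collect<T>(iterable: AsyncIterable<T>): Promise<T[]> {
  const out: T[] = [];
  for await (const item of iterable) {
    out.push(item);
  }
  return out;
}

collect(employeeIter()).then((employees) => {
  console.log(employees.map((e) => e.name).join(", ")); // Alice, Bob
});
```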
We are deprecating `bulkActions` and renaming it to `batchActions`. This was renamed to match the REST API name.

Before:

```typescript
await client.ontology.bulkActions.doAction([...])
```

After:

```typescript
await client.ontology.batchActions.doAction([...])
```
We are deprecating `searchAround{linkApiName}` and replacing it with `pivotTo(linkApiName)`.

Before:

```typescript
await client.ontology.objects.testObject.where(...).searchAroundLinkedObject().page()
```

After:

```typescript
await client.ontology.objects.testObject.where(...).pivotTo("linkedObject").fetchPage()
```
After the updated generator version is released, when you create a new SDK version for your application using Developer Console, the produced version will contain these features. Once you update the dependencies in your application, you will see the deprecated methods and properties rendered with a strikethrough (for example, `searchAroundLinkedObject()`), but the code will still function as before. Version 2.0 will drop support for the deprecated methods.
You can also decide to use an older version of the generator by using the dropdown list using the Generate new version option.
As a summary of the above, the OSDK TypeScript versions to come are as follows:
Version 1.1: This version maintains the current language syntax but will deprecate a few methods and properties which will no longer be supported.
Version 1.2: Scheduled for release later this year, version 1.2 will introduce the new language syntax.
Version 2.0: Marks the new language syntax as generally available (GA), drops support for the deprecated methods and properties, and introduces new capabilities that are not yet available.
Date published: 2024-04-11
Time-based group memberships are now available as a security feature in the platform. This feature can support data protection principles such as use limitation and purpose specification by limiting user access to data.
Users with Manage membership permissions can now grant group memberships for a given amount of time, allowing temporary access to data based on the set configuration. You can choose to configure access based on latest expiration or maximum duration:
The group settings page with time constraint settings.
Any membership access requests to groups with time constraints applied to them will be bounded by those constraints.
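The two configuration modes bound a requested membership differently: a latest-expiration constraint caps every membership at a fixed date, while a maximum-duration constraint caps it relative to when access is granted. An illustrative sketch of that clamping logic (not platform code; the names are invented for the example):

```typescript
// Illustrative only: clamp a requested expiry under the two constraint modes.
function effectiveExpiry(
  requestedMs: number,
  constraint: { latestExpirationMs?: number; maxDurationMs?: number },
  grantedAtMs: number,
): number {
  let expiry = requestedMs;
  // Latest expiration: no membership may outlive the fixed date.
  if (constraint.latestExpirationMs !== undefined) {
    expiry = Math.min(expiry, constraint.latestExpirationMs);
  }
  // Maximum duration: no membership may outlive grant time plus the duration.
  if (constraint.maxDurationMs !== undefined) {
    expiry = Math.min(expiry, grantedAtMs + constraint.maxDurationMs);
  }
  return expiry;
}

const capped = effectiveExpiry(
  Date.parse("2025-06-01"),
  { latestExpirationMs: Date.parse("2024-12-31") },
  Date.parse("2024-05-01"),
);
console.log(new Date(capped).toISOString().slice(0, 10)); // 2024-12-31
```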
A group membership request with the latest expiration constraint of 31/12/2024.
For more information, review our documentation on how to manage groups.
Date published: 2024-04-11
We are happy to announce the launch of our newly redesigned resource sidebar, targeted at streamlining administrative workflows and making them more easily discoverable. You can experience the new visual refresh and organization on the Overview, Access, Activity, Usage, and Settings right-side panels of the resource sidebar in Palantir Foundry.
An updated interface for the Project overview pane.
An updated appearance of the Project access panel.
The updated resource sidebar retains all existing functionality and continues to support all your current workflows without any disruption.
Date published: 2024-04-11
We are excited to introduce Natively Accelerated Spark in Pipeline Builder and Python code repositories, a feature designed to enhance performance and reduce the cost of batch jobs. This feature allows you to leverage low-level hardware optimizations to increase the efficiency of your data pipelines.
Use native acceleration for the following benefits:
Accelerated execution of common Spark operations such as select, filter, and partition, significantly improving the speed of your batch jobs.

You can enable native acceleration within Pipeline Builder and Python transforms. Note that repositories that use user-defined functions (UDFs) and resilient distributed dataset (RDD) operations may marginally benefit from this feature as well.
Native acceleration is particularly beneficial for data scientists with big data or latency-critical workflows. If you are running frequent builds (every 5 minutes) or operating in a cost-sensitive environment, native acceleration offers a powerful solution.
To test the benefits of native acceleration, simply run your pipeline once with the feature enabled. If you notice an improvement in performance, keep the feature enabled for the pipeline.
To leverage the full potential of native acceleration, review the documentation for Natively Accelerated Spark in Pipeline Builder and Python code repositories.
Date published: 2024-04-09
We created Code Workspaces so users can conduct effective exploratory analyses using widely-known tools while seamlessly leveraging the power of Palantir Foundry. Now generally available, Code Workspaces brings the JupyterLab® and RStudio® third-party IDEs to Foundry, enabling users to boost productivity and accelerate data science and statistics workflows. Code Workspaces serves as the definitive hub within Foundry for conducting comprehensive exploratory analysis by delivering unparalleled speed and flexibility.
Jupyter® Code Workspace used for data science.
Code Workspaces integrates widely-used tools such as JupyterLab® Notebooks and RStudio® Workbench with the power of the Palantir ecosystem, establishing itself as the go-to place in Foundry for efficient exploratory analysis.
You can soon benefit from the following capabilities:
RStudio® Code Workspace used for machine learning training.
Code Workspaces provides a comprehensive set of tools for performing quick and efficient iterative analysis on your data, seamlessly integrating the best of third-party tools with Palantir's extensive suite of built-in data analytics capabilities. Use it for data visualization, prototyping, report building, and iterative data transformations. Code Repositories and Pipeline Builder can be employed to convert these data transformations into robust production pipelines.
For most workflows, Code Workspaces and other Foundry tools complement each other, with the outputs of any pipeline being accepted as inputs to Code Workspaces analyses, and vice versa. Both Code Repositories and Code Workspaces rely on the same Git version control system. Large R pipelines, model training, or extensive visualization workflows can also be exported to Code Workbook to leverage its distributed Spark architecture.
The following examples are some of the many real-world use cases that leverage Code Workspaces:
To learn more about what you can do in Code Workspaces, review the relevant documentation:
The following features are currently under active development:
Jupyter®, JupyterLab®, and the Jupyter® logos are trademarks or registered trademarks of NumFOCUS. RStudio® and Shiny® are trademarks of Posit™. Streamlit® is a trademark of Snowflake Inc.
All third-party referenced trademarks (including logos and icons) remain the property of their respective owners. No affiliation or endorsement is implied.
Date published: 2024-04-04
We're excited to launch a new cutting-edge feature in Pipeline Builder: the Use LLM node. This feature allows you to effortlessly combine the power of Large Language Models (LLMs) with your production pipelines in a few simple clicks.
Integrating LLM capabilities with production pipelines has traditionally been a complex challenge, often leaving users in search of seamless, scalable solutions for their data processing requirements. Our solution addresses these challenges by providing the following features:
Pipeline Builder interface showing the addition of five guided prompts.
Pipeline Builder interface showing a prompt to classify restaurant reviews into three different categories.
Pipeline Builder interface showing a prompt to rate positive or negative sentiment of restaurant reviews.
Pipeline Builder interface showing a prompt to summarize restaurant review by length.
Pipeline Builder interface showing a prompt to extract any food or drink elements from each restaurant review.
Pipeline Builder interface showing a prompt to translate restaurant reviews to Spanish.
Tailor the model to your specific requirements by creating your own LLM prompt.
Pipeline Builder interface showing a custom prompt.
See our documentation for more on using the new LLM node in Pipeline Builder.
Date published: 2024-04-02
Workshop has rebranded "promoted variables" as "module interface." The module interface is the set of variables that are able to be mapped to variables from a parent module when embedded, and initialized from the URL. You can think of the module interface as the API for a Workshop module.
To add a variable to the module interface, navigate to the Settings panel for a variable, add an external ID, and make sure the toggle for module interface is enabled. Optionally, you can give a module interface variable a display name and description, which will be shown when the module is embedded or used in an Open Workshop module event.
The transition from promoted variables to the module interface entails minimal functionality changes, which are detailed below.
The Workshop module Overview tab now provides a comprehensive overview of the module interface, along with a new ability to set a static override preview value for interface variables.
New module interface overview section in the overview panel of the Workshop editor.
Variables can be added to the module interface using the newly added variable settings panel. The settings panel provides the ability to set a display name and description for module interface variables, offering an improved experience when configuring variables to pass through the module interface from another module, such as when embedding a module or configuring an open Workshop module event. Moreover, the state saving and routing features can now be independently enabled, offering greater flexibility and control over the module interface.
The variable settings panel, featuring an object set variable configured for use with the module interface.
Date published: 2024-04-02
Workshop is now the sole application builder for content in new tabs in object views. While existing Object View tabs built using the legacy application builder are still supported, you can no longer create new tabs of this type.
Building with Workshop means more flexible layout options, more powerful widgets and visualizations, and configurable data edits via actions. Creating a new tab will now automatically generate a new Workshop module to back the tab. When a new object type is created, Workshop also backs a cleaner default object view which contains a single tab displaying the object's properties and links.
Additionally, you can also embed existing modules to be used as object views by using the ▼ dropdown menu available next to Add tab.
Recent improvements to the Workshop object view building experience include the following:
Inline editing and one-click value copying.
Time dependent properties.
Object edits history at a glance.
Editable module name, and editable object view tab name from within Workshop.
Review the Configure tabs documentation to learn more about editing object views.
Listing Maintenance Operator contact information in Upgrade Assistant and related resources | We are working on providing maintenance operators and end users with a seamless, efficient experience when performing required upgrades on Palantir Foundry. As a first step, the maintenance operator role will be more visible across the platform. Users will be able to find their Organization's maintenance operator listed in Upgrade Assistant, in notifications related to upgrades, and in any other relevant areas. Users can now easily identify the maintenance operator when action is required for upgrades, and communicate with the operator in a timely manner when clarification or guidance is required.
Branch pinning for dataset aliases | Dataset alias mappings in Code Workspaces can now be pinned to a specific branch, allowing you to use data from the exact branch of a dataset. As part of this change, it is now also possible to import the same dataset more than once so that you can use data from multiple branches of the same dataset. To use this feature, open the context menu for an imported dataset and select Pin dataset branch. Select a specific branch to pin, or select Recommended to restore the default behavior.
Seamless Media Set Link Transformation in Notepad | Embedding a media set link into Notepad now intuitively transforms it into a resource link, enhancing the ease of connecting to media sets.
Autocomplete match conditions for object-derived datasets in the Materialization join editor | The Materialization join editor now provides autocomplete functionality for datasets derived from object sets. If both the left and right sides of the join are converted object sets, the editor will attempt to auto-complete the join condition with the primary / foreign key columns. The autocomplete logic will run whenever either of the two sides is changed; if either of the two is not an object set, the configuration will be left unchanged. If both sides are object types but have no link, the match condition will be undefined. Note that the inputs must be directly-converted object sets; an object set that has been previously converted to a materialization will not be detected.
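The rules described above can be summarized in a short sketch (illustrative TypeScript only, not the editor's actual implementation; the link table and type names are invented for the example):

```typescript
// A join side is either a dataset converted from an object set, or a plain dataset.
type Side = { kind: "objectSet"; objectType: string } | { kind: "dataset" };
type MatchCondition = { leftCol: string; rightCol: string } | undefined;

// Hypothetical link lookup: primary/foreign key columns between two object types.
const links: Record<string, { leftCol: string; rightCol: string } | undefined> = {
  "Employee->Department": { leftCol: "departmentId", rightCol: "id" },
};

function autocompleteJoin(left: Side, right: Side, current: MatchCondition): MatchCondition {
  // If either side is not a converted object set, leave the configuration unchanged.
  if (left.kind !== "objectSet" || right.kind !== "objectSet") return current;
  // Both sides are object sets: use the link's key columns, or undefined if no link exists.
  return links[`${left.objectType}->${right.objectType}`];
}

const emp: Side = { kind: "objectSet", objectType: "Employee" };
const dept: Side = { kind: "objectSet", objectType: "Department" };
console.log(autocompleteJoin(emp, dept, undefined)); // matched via the Employee->Department link
console.log(autocompleteJoin(emp, { kind: "dataset" }, undefined)); // undefined: left unchanged
```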
Refined Ontology SDK Resource Handling | The Ontology SDK now offers improved mechanisms for managing application resources, ensuring dependencies are automatically accounted for when resources are added or removed. A new alert feature cautions developers about potential issues when excluding resources, providing an option to preserve them and prevent application disruptions. This enhancement promotes a seamless transition across SDK updates and simplifies the removal of outdated resources through the OAuth & Scopes interface.
Gantt Chart widgets now support more timing options | The Gantt Chart widget has been upgraded to support custom timings backed by functions on objects (FoO) to provide more flexibility in visualizing Gantt charts. Previously, Gantt Chart widgets only supported properties for determining event timing.
Expanded Workshop Conversion Capabilities | Workshop now supports additional data type conversions in Variable Transforms, including Date to Timestamp, Timestamp to Date, and String to Number, enabling more flexibility in data manipulation.
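For intuition, the new conversions correspond to familiar operations; the sketch below is a generic TypeScript analogy (not Workshop's internal implementation, and the UTC-midnight convention is an assumption of the example):

```typescript
// Date -> Timestamp: interpret a calendar date as midnight UTC (example convention).
function dateToTimestamp(isoDate: string): number {
  return Date.parse(`${isoDate}T00:00:00Z`);
}

// Timestamp -> Date: truncate to the calendar day (UTC).
function timestampToDate(ms: number): string {
  return new Date(ms).toISOString().slice(0, 10);
}

// String -> Number: parse, yielding undefined for non-numeric input.
function stringToNumber(s: string): number | undefined {
  const n = Number(s);
  return Number.isNaN(n) ? undefined : n;
}

console.log(timestampToDate(dateToTimestamp("2024-04-30"))); // 2024-04-30
console.log(stringToNumber("3.14")); // 3.14
```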
Enable column renames on export button events | Workshop now offers the ability to personalize property column names when setting up exports to CSV or clipboard formats. This update introduces a new level of customization for organizing and labeling exported data.
Accelerated Batch Processing in Pipeline Builder | Pipeline Builder now supports natively accelerated pipelines, offering significant performance improvements for batch processing. Refer to the updated documentation for details on managing native acceleration.
Improved Dataset and Time Series Labeling in Pipeline Builder | Within Pipeline Builder, users now have the enhanced capability to directly modify the names of dataset inputs and constructed dataset outputs via the graph's context menu. This enhancement extends to the renaming of constructed time series synchronizations, which is also accessible through the same context menu interaction.
Customizable Thumbnails for Marketplace Listings | The Marketplace has been enhanced to support customizable thumbnails for each product version within the DevOps application, ensuring a more visually appealing presentation on the Marketplace Discovery page. Should a thumbnail not be provided, a default image corresponding to the product category will be displayed instead.
Introducing Branching in AIP Logic | AIP Logic now supports branching, which is a more efficient way to collaboratively manage edits. Users can now create a new branch, save changes, and merge those changes into the main version when they are done iterating.
Precise Parameter Control with New Apply Action Block | The new Apply Action block in AIP Logic enables users to invoke actions with exact parameter configurations without having to go via an LLM block. This block gives you precise control over how parameters are filled out and speeds up the execution.
Streamlined Function Execution in AIP Logic | Users can now deterministically use functions within AIP Logic through the Execute block. This enhancement allows for precise data loading, which can be passed to an LLM block or used alongside the Transforms block. For example, users can leverage the output from a semantic search function through the Execute block to facilitate the retrieval of resolution text from similar incidents.
Streamlined Evaluation Setup with Direct Media Set Integration | Users can now directly integrate media sets into the evaluation configuration workflow in Modeling. The new media set integration feature automatically presents available options when selecting a dataset, simplifying the evaluation setup process and ensuring the correct media sets are easily included.
Enhanced Indexing Prioritization in Object Storage V2 | Enhanced queuing logic in Object Storage V2 now ensures more efficient handling of indexing tasks, particularly for large shards, leading to reduced wait times for metadata updates and an overall boost in system performance.
Update your action types to the latest function version in Ontology Manager | You can now update your function-backed actions in bulk to use the latest version of the backing function. This feature is accessible from the Action Types tab in Ontology Manager. Simply filter down the set of action types to those with the Run function rule, select the action types to update, and then select Update function version.
Enhanced Bulk Function Version Management in Ontology Manager | Ontology Manager introduces enhanced capabilities for managing multiple function versions simultaneously. Users can now efficiently update function version references in bulk for a variety of action types, ensuring up-to-date functionality across the board. To optimize server performance, updates are capped at 50 action types per operation. Note, action types linked to high-scale functions are not eligible for bulk updates.
Improved Alias Creation in Ontology Manager | Ontology Manager has been upgraded to facilitate the establishment of alternative designations, or aliases, for object types, enhancing the ability to locate objects when precise names are not known.
Streamlined Node Operations in Vertex Diagrams | Vertex diagrams now offer enhanced node management, enabling users to directly modify the backing object of a node within the diagram interface. The functionality for adjusting the backing object of edges has also been improved, now accessible via the component root editor for more intuitive use.
Streamlined Justification Retrieval for Checkpoints | Users can now swiftly access their top five most recent justifications for text-based prompts within Checkpoints, enhancing efficiency in adhering to organizational standards. This convenience is extended to both Response and Dropdown Checkpoints justification categories and remains available for 30 days post-justification creation. To activate this feature, administrators may select the corresponding setting in the Checkpoints configuration. For further details, refer to documentation on configuring recent justifications.
Introduction of New Checkpoint Varieties: Schedule, Delete, and Run | Users now have access to additional checkpoint types: Schedule, Delete, and Run. These new options enhance the flexibility and efficiency of managing checkpoint timetables, contributing to more streamlined workflows.
General Availability of Project Constraints | The introduction of Project constraints allows for the support of advanced governance guardrails on how data can be used. With project constraints, owners can configure which markings may or may not be applied on files within a Project. Project constraints can allow for the effective technical implementation of data protection principles like purpose limitation.
This is typically used to prevent users from accidentally joining data that should not be joined, and is relevant in situations where users might need access to multiple markings even though specific combinations of marked data should not be allowed. For example, a bank might have a requirement that sensitive investment data can never be joined with research data for compliance reasons. However, compliance officers may need access to both investment and research data separately to do their work.
For more information, see the project constraints documentation and details on how to configure them.