Palantir AIP (Artificial Intelligence Platform) is enabled by default for new enrollments. Enrollments that began prior to 2024 may need to manually turn on access to AIP features in Control Panel. If you are an enrollment administrator, you can manage your AIP configuration in Control Panel > AIP settings.
Note that enabling AIP may incur additional compute usage.
Learn more about AIP, including a list of supported models and developer capabilities.
AIP usage on the Palantir platform is governed by two levels of permissions:
- AIP and core assistant features: Turns on AIP, AIP Assist, AIP Threads, and associated assistant features in Code Repositories, Pipeline Builder, and Workshop.
- AIP capabilities for custom workflows: With AIP enabled, platform administrators can enable an additional layer of capabilities that allows developers and application builders to create custom AIP workflows, and can grant users the permissions needed to use those workflows. Refer to the following table for the capabilities unlocked when this permission is granted; a brief sketch of one code-based capability follows the table.
Capabilities with LLM support in point-and-click interfaces | Capabilities for development with code-based tools |
---|---|
AIP Logic: Use LLM Board | Transforms using LLMs |
Pipeline Builder: Use LLM node and Text-to-embeddings | Functions using LLMs |
AIP Automate | Jupyter® in Code Workspaces with LLMs |
AIP Model Catalog | |
AIP Agent Studio | |
AIP Workshop widgets: AIP Interactive; AIP Generated Content | |
Quiver | |
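To make the code-based capabilities concrete, below is a minimal sketch of a "Transforms using LLMs" job that summarizes a text column with an enrollment-enabled model. It assumes the `palantir_models` transforms integration; the import paths, dataset paths, model resource identifier, and the `create_chat_completion` call are assumptions that may differ from the exact API on your enrollment, so treat this as an illustration of the shape of the workflow rather than a drop-in example.

```python
# Illustrative sketch only: import paths, class names, dataset paths, and the
# model RID below are assumptions; consult the "Transforms using LLMs"
# documentation for the exact API on your enrollment. The capability must also
# be granted in Control Panel before such a transform can run.
from transforms.api import transform, Input, Output
from palantir_models.transforms import OpenAiGptChatLanguageModelInput  # assumed import


@transform(
    source=Input("/Org/Project/datasets/support_tickets"),    # hypothetical dataset path
    output=Output("/Org/Project/datasets/ticket_summaries"),  # hypothetical dataset path
    model=OpenAiGptChatLanguageModelInput(
        "ri.language-model-service..language-model.gpt-4_azure"  # placeholder model RID
    ),
)
def summarize_tickets(source, output, model):
    """Summarize each ticket with an LLM and write the results back to a dataset."""

    def summarize(text: str) -> str:
        # Assumed chat-completion call; the real request/response objects may differ.
        response = model.create_chat_completion(
            messages=[{"role": "USER", "content": f"Summarize this ticket: {text}"}],
            max_tokens=128,
        )
        return response.choices[0].message.content

    df = source.dataframe().toPandas()
    df["summary"] = df["ticket_text"].apply(summarize)
    output.write_pandas(df)
```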
Platform administrators can enable usage for Everyone or for selected User Groups, or restrict usage entirely by selecting Nobody.
Note that certain applications, such as AIP Logic, may first need to be enabled in Control Panel > Application access before they can be used.
Specific LLMs are enabled separately, after general AIP enablement. Enrollment administrators can manage them under the Model enablement tab in Control Panel > AIP settings.
Enrollment administrators can view and manage the models available for use within AIP; the available models may differ across enrollments. View a list of all supported models. Models are grouped according to their legal requirements and terms of use, and groups can be enabled individually. Disabling a model group will break workflows that rely on a model in that group.
Model groups may currently be disallowed if the legal requirements for their use have not been satisfied. In this case, an enrollment administrator may need to first accept the relevant terms and conditions for the specific model. Note that certain LLM groups may require manual configuration by your Palantir representative before they can be used.
Enrollment administrators can enable and disable the usage of experimental models. For an experimental model to be visible for use in workflows, both the Enable experimental models toggle and the model family to which the experimental model belongs must be enabled.
AIP is model-agnostic and supports a diverse selection of models for LLM-powered use cases. However, LLM selection and availability differ across enrollments, and there are a few prerequisites for a specific model to be available on an enrollment. The criteria are listed below:
Some enrollments are geographically restricted, meaning that any AIP request to an LLM stays within the boundaries of a certain region. For example, if an enrollment is EU geo-restricted, all requests will be processed in the EU.
Model regional availability refers to the enrollment setup, not to the location of a specific user. Review the following table; a short sketch of selecting among enabled models follows it.
Model Provider | Model | Availability | United States | Europe | United Kingdom | Canada | Australia | Japan |
---|---|---|---|---|---|---|---|---|
Azure / OpenAI | GPT-4o | ✅ | ✅ | ✅ | ✅ | ✅ | | |
Azure / OpenAI | GPT-4o-mini | ✅ | ✅ | | | | | |
Azure / OpenAI | GPT4-Turbo | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
Azure / OpenAI | GPT3.5-Turbo | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Azure / OpenAI | ada002 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
Azure / OpenAI | embedding3-large | ✅ | ✅ | ✅ | | | | |
Azure / OpenAI | embedding3-small | ✅ | ✅ | ✅ | | | | |
Azure / OpenAI | GPT4 (8K, 32K) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
AWS Bedrock | Claude3 Sonnet | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
AWS Bedrock | Claude3.5 Sonnet | ✅ | ✅ | ✅ | | | | |
AWS Bedrock | Claude3 Haiku | ✅ | ✅ | ✅ | ✅ | ✅ | | |
Open Source (Palantir-hosted) | Llama3 8B | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
Open Source (Palantir-hosted) | Llama3 70B | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
Open Source (Palantir-hosted) | Llama3.1 8B | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
Open Source (Palantir-hosted) | Llama3.1 70B | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
Open Source (Palantir-hosted) | Mixtral 8x7B | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
Google Gemini | Gemini 1.5 Flash | ✅ | ✅ | ✅ | | | | |
Google Gemini | Gemini 1.5 Pro | ✅ | ✅ | ✅ | | | | |
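Because model availability differs by enrollment and region, code that calls LLMs often benefits from selecting a model from whatever is enabled rather than hard-coding a single one. The helper below is a plain-Python illustration of that idea; the model names are taken from the table above, and the `enabled_models` set is a stand-in for however your workflow determines which models your enrollment administrators have enabled.

```python
# Plain-Python illustration: pick the first preferred model that is actually
# enabled on the current enrollment. The `enabled_models` argument is a
# stand-in for whatever mechanism your workflow uses to determine enablement.
from typing import Iterable, Optional


PREFERRED_ORDER = ["Claude3.5 Sonnet", "GPT-4o", "GPT4 (8K, 32K)", "GPT3.5-Turbo"]


def pick_model(enabled_models: Iterable[str],
               preferred: Optional[list[str]] = None) -> str:
    """Return the first preferred model that is enabled, or raise if none are."""
    enabled = set(enabled_models)
    for name in preferred or PREFERRED_ORDER:
        if name in enabled:
            return name
    raise ValueError("No preferred model is enabled on this enrollment")


# Example: an enrollment limited to the models listed as available in every region above.
print(pick_model({"GPT3.5-Turbo", "GPT4 (8K, 32K)"}))  # -> "GPT4 (8K, 32K)"
```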
For information on LLM rate limits, review the documentation on LLM capacity management.
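When a workflow does hit its LLM capacity limits, the usual client-side remedy is to retry with exponential backoff. The snippet below is a generic illustration of that pattern; `RateLimitError` is a stand-in for whatever throttling error your particular client or function raises, not an AIP-specific exception.

```python
# Generic retry-with-backoff pattern for rate-limited LLM calls.
# RateLimitError is a placeholder; substitute the exception your client raises.
import random
import time


class RateLimitError(Exception):
    """Stand-in for the throttling error surfaced by your LLM client."""


def call_with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Invoke `call()` and retry on throttling, doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # Give up after the final attempt.
            # Exponential backoff with jitter so concurrent callers do not retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```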
Note: AIP feature availability is subject to change and may differ between customers.