Language models within Functions

Palantir provides a set of language models which can be used within Functions. Read more about Palantir-provided LLMs.

Prerequisites

To use Palantir-provided language models, AIP must first be enabled on your enrollment. You also must have permissions to use AIP developer capabilities.

Import a language model

To begin using a language model, you must import the specific model into the Code Repository where you are writing your Functions by following the steps below:

  1. Navigate to and open the Model Imports side panel to see all existing imported models.

     [Screenshot: Model Imports side panel.]

  2. To import a new language model, select Add in the top-right corner of the Resource Imports panel and select Models. This will open a new window showing the Palantir-provided models available to you.

     [Screenshot: Model import dialog showing a few Palantir-provided LLMs.]

     You will also see a tab where you can view custom models previously created through the Modeling Objectives app. More information on using those models can be found in the Functions on models documentation.

  3. Select the models you would like to import, then click Confirm selection to import them into your repository. Task runner will execute the localDev task, generating code bindings to interact with these models.

  4. After importing the language models, you can use them in your repository by adding the following import statement, replacing GPT_4o with the name of the language model you imported:

```typescript
import { GPT_4o } from "@foundry/models-api/language-models";
```

Writing a Function that uses a language model

We can now write a Function that uses the language model we imported. For this example, we assume that GPT_4o has been imported as described above.

We begin by adding the following import statement to our file:

```typescript
import { GPT_4o } from "@foundry/models-api/language-models";
```

Each imported language model provides generated methods with strongly typed inputs and outputs. For example, the GPT_4o model provides a createChatCompletion method that allows you to pass a set of messages along with additional parameters that modify the model's behavior, such as the temperature or the maximum number of tokens.
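As a minimal sketch, a call with additional parameters might look like the following. Note that maxTokens is an assumed parameter name used here for illustration; the exact parameter names are defined by the generated bindings in your repository.

```typescript
// Minimal sketch of a strongly typed chat completion call (inside an async Function).
// "maxTokens" is an assumed parameter name; check the generated bindings in your
// repository for the exact parameters each model supports.
const response = await GPT_4o.createChatCompletion({
    messages: [{ role: "USER", contents: [{ text: "Say hello." }] }],
    params: { temperature: 0.2, maxTokens: 100 },
});
```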

In the following illustrative example, we use the provided GPT_4o model to run a simple sentiment analysis on a piece of text provided by a user. The function will classify the text as "Good", "Bad", or "Uncertain".

```typescript
@Function()
public async sentimentAnalysis(userPrompt: string): Promise<string> {
    const systemPrompt = "Provide an estimation of the sentiment of the text the user has provided. \
You may respond with either Good, Bad, or Uncertain. Only choose Good or Bad if you are overwhelmingly \
sure that the text is either good or bad. If the text is neutral, or you are unable to determine, choose Uncertain.";
    const systemMessage = { role: "SYSTEM", contents: [{ text: systemPrompt }] };
    const userMessage = { role: "USER", contents: [{ text: userPrompt }] };
    const gptResponse = await GPT_4o.createChatCompletion({ messages: [systemMessage, userMessage], params: { temperature: 0.7 } });
    return gptResponse.choices[0].message.content ?? "Uncertain";
}
```

This function can then be used throughout the platform.

Embeddings

Along with generative language models, Palantir also provides models that can be used to generate embeddings, such as TextEmbeddingAda_002. These are imported in the same way as described above. A simple example is as follows:

```typescript
import { TextEmbeddingAda_002 } from "@foundry/models-api/language-models";

@Function()
public async generateEmbeddingsForText(inputs: string[]): Promise<Double[][]> {
    const response = await TextEmbeddingAda_002.createEmbeddings({ inputs });
    return response.embeddings;
}
```

Embeddings are most commonly used to power semantic search workflows.
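To illustrate how such a workflow might look, the sketch below embeds a query alongside a set of candidate documents and returns the closest match by cosine similarity. This is only a minimal, in-memory illustration: the findClosestDocument function and the cosineSimilarity helper are assumptions made for the sketch, and real semantic search workflows typically rely on a vector index rather than pairwise comparison.

```typescript
import { Function, Double } from "@foundry/functions-api";
import { TextEmbeddingAda_002 } from "@foundry/models-api/language-models";

// Hypothetical helper: cosine similarity between two embedding vectors.
function cosineSimilarity(a: Double[], b: Double[]): number {
    let dot = 0, normA = 0, normB = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Method of your repository's Functions class, as in the examples above.
@Function()
public async findClosestDocument(query: string, documents: string[]): Promise<string> {
    // Embed the query and the candidate documents in a single request.
    const response = await TextEmbeddingAda_002.createEmbeddings({ inputs: [query, ...documents] });
    const [queryEmbedding, ...documentEmbeddings] = response.embeddings;

    // Return the document whose embedding is most similar to the query.
    let bestIndex = 0;
    let bestScore = -Infinity;
    documentEmbeddings.forEach((embedding, index) => {
        const score = cosineSimilarity(queryEmbedding, embedding);
        if (score > bestScore) {
            bestScore = score;
            bestIndex = index;
        }
    });
    return documents[bestIndex];
}
```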

Performance considerations

Note that certain models may be rate limited, restricting the number of tokens that can be processed within a given time period. These limits are enforced in addition to any standard limits that apply to Functions.
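If you are passing large volumes of text, one way to reduce the chance of hitting these limits is to split your inputs into smaller batches, as in the hedged sketch below. The batch size of 100 and the generateEmbeddingsInBatches name are arbitrary assumptions for illustration; tune both to the limits that apply to your models and your Functions.

```typescript
// Illustrative sketch: embed a large list of texts in smaller batches to
// reduce the chance of exceeding per-request or rate limits.
@Function()
public async generateEmbeddingsInBatches(inputs: string[]): Promise<Double[][]> {
    const batchSize = 100; // arbitrary; tune to your model's limits
    const embeddings: Double[][] = [];
    for (let start = 0; start < inputs.length; start += batchSize) {
        const batch = inputs.slice(start, start + batchSize);
        const response = await TextEmbeddingAda_002.createEmbeddings({ inputs: batch });
        embeddings.push(...response.embeddings);
    }
    return embeddings;
}
```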


Note: AIP feature availability is subject to change and may differ between customers.