Register an LLM using function interfaces

You can use your own LLMs in the Palantir platform using function interfaces. For example, you can bring your own fine-tuned model to use with AIP Logic, enabling more flexibility and choice for users. Function interfaces enable you to register and use LLMs whether they are hosted on-premise, hosted on your own cloud, or fine-tuned on another platform.

There are currently two ways to build a custom connection to OpenAI:

  • Webhook: You can create a webhook and call it directly from your TypeScript repository. This tutorial follows this method.
  • Direct source (beta): You can create a REST API source in Data Connection and make a fetch call from your TypeScript function.

Direct source connections are currently in beta and may not be available on your enrollment.

This tutorial explains how to create a source to define your LLM’s API endpoint, call the model from a TypeScript function using a webhook, and publish the function for use in the Palantir platform (for instance, with AIP Logic or Pipeline Builder).

Example of "bring your own model" usage in Logic with GPTo1.

Prerequisites

Tutorial overview

In this tutorial, you will write a TypeScript function that calls an external OpenAI model via a webhook, implements the ChatCompletion function interface, and registers the model in Foundry. Completing the tutorial will allow you to use the custom LLM API natively in AIP Logic.

  1. Set up REST source and webhook.
  2. Implement the ChatCompletion interface with a TypeScript function.
    • Create a TypeScript code repository in which you will author a function that implements the function interface.
    • Import the following resources into your repository:
      • The OpenAI source and associated webhook
      • The ChatCompletion function interface
    • Author a TypeScript function that is decorated with the @ChatCompletion function interface and calls out to your source.
    • Save and publish your function.
  3. Use your registered LLM in AIP Logic through your published function.

Set up REST source and webhook

To maintain platform security, you need to register the call to OpenAI as a webhook using the Data Connection application. The steps below describe how to set up a REST API source and webhook with Data Connection.

Learn more about how to create a webhook and use it in a TypeScript function.

Set up source

  1. Open the Data Connection application.

  2. Select New Source.

  3. Search for REST API.

  4. Under Protocol sources, select REST API.

  5. On the Connect to your data source page, select Direct connection.

  6. Name your source and save the source in a folder. This example uses the source name MyOpenAI.

  7. Under Connection details, perform the following steps:

    • Set the domain base URL to https://api.openai.com and set Authentication to Bearer token. Learn more about OpenAI APIs ↗.
    • Follow OpenAI's instructions ↗ to create an API key ↗.
    • Copy-paste the newly-created API key into the Bearer token field in Data Connection.
    • Set the port to 443.
    • Create an additional secret called APIKey and paste the same API key used for the bearer token field.
    • Add https://api.openai.com to the allowlist for network egress between Palantir's managed SaaS platform and external domains. You can do this by navigating to Network connectivity and choosing Request and self-approve new policy.
      • If you do not have permissions for this step, contact your Palantir representative.
  8. You must enable Export configurations to use this API endpoint in platform applications like AIP Logic and Pipeline Builder. To enable Export configurations, toggle these options:

    • AIP Logic: Toggle on Enable exports to this source without markings validations; this will enable you to use your LLM in AIP Logic.
    • Pipeline Builder: Toggle on Enable exports to this source; this will enable you to use your LLM in Pipeline Builder. Note that this feature is currently in beta and not available on all enrollments.
  9. You must Enable code imports to use this endpoint in your function.

    • Toggle on Allow this source to be imported into code repositories.
    • Toggle on Allow this source to be imported into pipelines.
  10. Select Continue and Get started to complete your API endpoint and egress setup.

Add webhooks to source

  1. On the Source overview page, select Create webhook.

  2. Save your webhook with the name Create Chat Completion and API name CreateChatCompletion.

  3. Import the example curl from the OpenAI Create chat completion documentation ↗.

  4. Configure the messages and model input parameters as in the example below.

    Webhook input parameter configuration.
  5. Configure the choices and usage output parameters as in the example below.

    Webhook output parameter configuration.
  6. Test and save your webhook.

Now you have a REST source and a webhook that you can import into your TypeScript repository.
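
For reference, the request and response shapes that the webhook maps correspond to OpenAI's Create chat completion endpoint. The following TypeScript sketch is illustrative only and is trimmed to the fields this tutorial uses; verify the field names against the imported curl example and OpenAI's documentation.

// Illustrative sketch of the webhook's input and output shapes,
// trimmed to the fields this tutorial relies on.
interface CreateChatCompletionInput {
  model: string;                                   // for example, "gpt-4o"
  messages: { role: string; content: string }[];   // chat history to send
}

interface CreateChatCompletionOutput {
  choices: { message: { role: string; content: string | null } }[];
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}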

Implement the ChatCompletion interface with a TypeScript function

After setting up a webhook that retrieves a chat completion from an external LLM, you can create a function that implements the ChatCompletion interface provided by Foundry and calls out to your OpenAI webhook.

AIP Logic searches for all functions that implement the ChatCompletion interface when displaying registered models, so you must declare that your function implements this interface. Declaring the interface also enforces at compile time that your function's signature matches the expected shape.

You can write your chat completion implementation in TypeScript. To do so, you will need to create a new TypeScript functions repository.

This example function will:

  • Make a call to the previously-created OpenAI webhook.
  • Implement the chat completion interface.

Import webhook and interface into a TypeScript repository

This tutorial assumes a basic understanding of writing TypeScript functions in Foundry. Review the getting started guide for an introduction to TypeScript functions in Foundry.

To start, you will need to import both the OpenAI source (which is associated with the webhook you previously created) and the ChatCompletion function interface into the repository. With the TypeScript functions repository open, select the resource imports icon and follow the steps below.

  1. Use the Add option in the Resource imports side panel to import:

    • The OpenAI REST API source that contains the CreateChatCompletion webhook

    • The ChatCompletion function interface

      Import resources into TypeScript repository.
  2. In the Resource imports panel, search for the OpenAI source that contains the CreateChatCompletion webhook and import it into your TypeScript repository. Learn more about how to import resources into Code Repositories.

    Import source and webhook into TypeScript repository.
  3. In the Resource imports panel, search for the ChatCompletion interface and import it into your TypeScript repository.

    Import function interface into TypeScript repository.

At this point, your Resource imports should include both the OpenAI source and ChatCompletion interface as seen in the following image.

Post TypeScript resource import view.

After importing resources, the Task Runner will re-run a localDev task that generates the relevant code bindings. You can check on the progress of this task by opening the Task Runner tab on the ribbon at the bottom of the page.

Task Runner view.

Write a TypeScript function

In this section, you will write a TypeScript function that calls the previously-created OpenAI webhook and implements the chat completion interface.

Implement function scaffolding

Importing both the CreateChatCompletion webhook (via the OpenAI source) and the ChatCompletion function interface will generate code bindings to interact with those resources. 

You can find code snippets to set up your function scaffolding by selecting the ChatCompletion function interface in the Resource imports panel.

Chat Completion more info navigation.

The following is an example of what your code might look like at this point (note that, as in the complete example below, the method must be declared inside an exported class):

// index.ts
import { ChatCompletion } from "@palantir/languagemodelservice/contracts";
import {
  FunctionsGenericChatCompletionRequestMessages,
  GenericCompletionParams,
  FunctionsGenericChatCompletionResponse
} from "@palantir/languagemodelservice/api";
import { OpenAI } from "@foundry/external-systems/sources";

// This decorator tells the compiler and Foundry that our function is implementing the ChatCompletion interface.
// Note that the generic @Function decorator is not required.
@ChatCompletion()
public myCustomFunction(
  messages: FunctionsGenericChatCompletionRequestMessages,
  params: GenericCompletionParams
): FunctionsGenericChatCompletionResponse {
  // TODO: Implement the body
}

Implement function body

This section contains the simplest implementation of this function that completes the request.

import { isErr, UserFacingError } from "@foundry/functions-api";
import * as FunctionsExperimentalApi from "@foundry/functions-experimental-api";
import { OpenAI } from "@foundry/external-systems/sources";
import { ChatCompletion } from "@palantir/languagemodelservice/contracts";
import {
  FunctionsGenericChatCompletionRequestMessages,
  GenericChatMessage,
  ChatMessageRole,
  GenericCompletionParams,
  FunctionsGenericChatCompletionResponse
} from "@palantir/languagemodelservice/api";

export class MyFunctions {
  @ChatCompletion()
  public async myChatCompletion(
    messages: FunctionsGenericChatCompletionRequestMessages,
    params: GenericCompletionParams
  ): Promise<FunctionsGenericChatCompletionResponse> {
    const res = await OpenAI.webhooks.CreateChatCompletion.call({
      model: "gpt-4o",
      messages: convertToWebhookList(messages)
    });
    if (isErr(res)) {
      throw new UserFacingError("Error from OpenAI.");
    }
    return {
      completion: res.value.output.choices[0].message.content ?? "No response from AI.",
      tokenUsage: {
        promptTokens: res.value.output.usage.prompt_tokens,
        maxTokens: res.value.output.usage.total_tokens,
        completionTokens: res.value.output.usage.completion_tokens,
      }
    };
  }
}

function convertToWebhookList(messages: FunctionsGenericChatCompletionRequestMessages): { role: string; content: string; }[] {
  return messages.map((genericChatMessage: GenericChatMessage) => {
    return {
      role: convertRole(genericChatMessage.role),
      content: genericChatMessage.content
    };
  });
}

function convertRole(role: ChatMessageRole): "system" | "user" | "assistant" {
  switch (role) {
    case "SYSTEM":
      return "system";
    case "USER":
      return "user";
    case "ASSISTANT":
      return "assistant";
    default:
      throw new Error(`Unsupported role: ${role}`);
  }
}
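
The implementation above hardcodes the gpt-4o model and ignores the params argument. If you add optional inputs such as temperature to the webhook configuration, you could forward values from params when calling the webhook. The sketch below is only an assumption-laden example: it presumes the webhook exposes a temperature input and that GenericCompletionParams includes a temperature field, so check the generated typings in your repository before relying on either.

// Sketch only: assumes the webhook was configured with an optional
// "temperature" input and that GenericCompletionParams exposes a
// "temperature" field. Verify both against the generated typings.
const res = await OpenAI.webhooks.CreateChatCompletion.call({
  model: "gpt-4o",
  messages: convertToWebhookList(messages),
  temperature: params.temperature,
});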

Testing

You can now test your function by selecting the Functions tab from the bottom toolbar, which will open a preview panel. Select Published, choose your function myChatCompletion, and select the option for providing your input as JSON.

You can test with a message such as:

{
  "messages": [
    {
      "role": "USER",
      "content": "hello world"
    }
  ]
}
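
If everything is wired up correctly, the preview should return a value matching the FunctionsGenericChatCompletionResponse built in the function body, along the lines of the following (the completion text and token counts are illustrative, not exact):

{
  "completion": "Hello! How can I help you today?",
  "tokenUsage": {
    "promptTokens": 9,
    "completionTokens": 10,
    "maxTokens": 19
  }
}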

Use your registered LLM in AIP Logic

You can use your function natively in AIP Logic. To do so, select the Use LLM board as you normally would, then select the Registered tab in the model dropdown and select the myChatCompletion model.

Use bring your own model in Logic.

Use your registered LLM in Pipeline Builder (beta)

This feature is currently in beta and may not be available on your enrollment.

You can use your function natively in Pipeline Builder LLM transforms. To do so, select the Use LLM transform as you normally would, then expand Show configurations in the Model section. From the Model type dropdown, select the Registered tab and choose your LLM (shown in the example below as myChatCompletion).

Use LLM interface in Pipeline Builder showing registered LLM.