Core concepts

The following core concepts are essential to understanding and getting the most out of AIP Logic. You can learn more about applying these concepts in the getting started tutorial.

Logic function

A Logic function takes inputs like Ontology objects or text strings, and returns an output that can be a string, an object, or an edit to the Ontology itself.

Logic functions can be used like any other function in the platform, such as in Workshop modules. To edit the Ontology, Logic functions must be published and called from an action. For more information, see how to use a Logic function in an action.
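
As a rough mental model, a Logic function behaves like a typed function: inputs in, a single output out. The TypeScript sketch below is purely illustrative; the types and names are hypothetical and are not the platform's API. It only shows the shape of the inputs and outputs described above.

    // Purely illustrative sketch; the types and names are hypothetical, not the platform's API.
    // A Logic function maps inputs (Ontology objects, strings, and so on) to a string,
    // an object, or a proposed Ontology edit.
    interface OntologyEdit {
      objectType: string;
      primaryKey: string;
      updatedProperties: Record<string, unknown>;
    }

    type LogicFunctionOutput = string | Record<string, unknown> | OntologyEdit;

    // Hypothetical example: summarize a support ticket passed in as an object.
    function summarizeTicket(ticket: { title: string; description: string }): LogicFunctionOutput {
      // In AIP Logic the body is composed of blocks rather than hand-written code.
      return `Summary of "${ticket.title}"`;
    }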

Blocks

Each Logic function is composed of blocks, which define how an LLM (or set of LLMs) interacts with your data; you can choose a different LLM for each block in your function. AIP Logic supports any LLM available in the platform, in keeping with Palantir's k-LLM philosophy.

AIP Logic currently supports four types of blocks.

The output of a block can be used in subsequent blocks, enabling complex operations to be constructed by chaining blocks together.
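
Conceptually, chaining blocks works like function composition: each block receives the previous block's output. The sketch below uses hypothetical helper names to convey how outputs flow between blocks; it is not how blocks are authored in AIP Logic.

    // Conceptual sketch of block chaining; the helper names are hypothetical
    // and only illustrate how one block's output feeds the next.
    type Block<In, Out> = (input: In) => Promise<Out>;

    // Compose two blocks so the first block's output becomes the second's input.
    function chain<A, B, C>(first: Block<A, B>, second: Block<B, C>): Block<A, C> {
      return async (input: A) => second(await first(input));
    }

    // Example: an extraction step followed by a classification step.
    const extractEntities: Block<string, string[]> = async (text) =>
      text.split(/\s+/).filter((word) => /^[A-Z]/.test(word));

    const classifyDocument: Block<string[], string> = async (entities) =>
      entities.length > 3 ? "entity-rich" : "entity-sparse";

    const pipeline = chain(extractEntities, classifyDocument);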

Prompts

Prompts are instructions for an LLM, written in natural language. We recommend starting with the most important information (such as an overview of the task you want the LLM to complete), followed by the data the LLM will need and guidance on when to use the tools. When composing a prompt, keep in mind that an LLM only has access to what you specifically make available to it.
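
For example, a prompt following this structure might look like the following. The scenario is hypothetical, and the {{complaint_text}} placeholder simply marks where input data would be supplied.

    Task: Summarize the customer complaint below in two sentences and
    classify its severity as LOW, MEDIUM, or HIGH.

    Data: {{complaint_text}}

    Tool guidance: If the complaint references an order number, use the
    order-lookup tool to retrieve the order status before classifying.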

Tools

Tools are the mechanism by which AIP Logic enables the LLM to read from or write to the Ontology and power real-world operations. AIP Logic uses three categories of Ontology-driven tools (data, logic, and action) to query data, execute logical operations, and take actions safely. Note that LLMs do not have direct access to tools; LLMs can only ask to use tools, and these tool calls are then executed by AIP Logic within the invoking user's permissions.

Tools are available in each of the three categories described above.
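
To make the permission model concrete, the sketch below illustrates, with hypothetical names rather than platform code, how a tool call requested by the LLM is checked against the invoking user's permissions before it runs.

    // Conceptual sketch of tool-call mediation; all names here are hypothetical.
    // The LLM only requests a tool; the runtime decides whether to execute it.
    interface ToolCallRequest {
      category: "data" | "logic" | "action"; // the three tool categories above
      tool: string;
      arguments: Record<string, unknown>;
    }

    interface UserContext {
      userId: string;
      canUse(tool: string): boolean; // stands in for the invoking user's permissions
    }

    async function executeToolCall(request: ToolCallRequest, user: UserContext): Promise<unknown> {
      // Permissions are enforced before anything reads from or writes to the Ontology.
      if (!user.canUse(request.tool)) {
        throw new Error(`User ${user.userId} is not permitted to use ${request.tool}`);
      }
      // ...dispatch to the actual data, logic, or action tool here...
      return { status: "executed", tool: request.tool };
    }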

Evaluations

After publishing a Logic function, you can configure Evaluations, which enable you to write detailed tests for your Logic functions. Evaluations for AIP Logic can be used to:

  • Debug and improve Logic functions and prompts.
  • Compare different models (for example, GPT-4 versus GPT-3.5) on your functions.
  • Examine variance across multiple runs of Logic functions.
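
For intuition, an evaluation case boils down to an input, an expected result, and a comparison. The sketch below is a simplified, hypothetical stand-in, not the Evaluations feature's actual configuration format.

    // Simplified, hypothetical sketch of an evaluation case.
    interface EvaluationCase {
      name: string;
      input: Record<string, unknown>;
      expected: string;
    }

    // Run one case and report pass/fail. Repeating runs of the same case is one way
    // to examine variance; swapping the function under test is one way to compare models.
    async function runEvaluation(
      logicFunction: (input: Record<string, unknown>) => Promise<string>,
      testCase: EvaluationCase
    ): Promise<{ name: string; passed: boolean; actual: string }> {
      const actual = await logicFunction(testCase.input);
      return { name: testCase.name, passed: actual === testCase.expected, actual };
    }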

Debugging

After composing a Logic function, you can run the function as a test. Running your function will open the Debugger panel, showing the LLM's chain-of-thought (CoT) for the component blocks in the Logic function. Examining the LLM's CoT makes debugging easier by showing each individual step of the LLM's "thought process" and providing information on any supporting tools used by the LLM.
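
As a rough picture of what the Debugger surfaces, each block's run can be thought of as a small trace: the reasoning the LLM produced plus any tool calls it made. The structure below is hypothetical and only illustrates that shape; it is not the Debugger panel's actual data format.

    // Hypothetical trace structure illustrating what a per-block debug view contains.
    interface ToolCallTrace {
      tool: string;
      arguments: Record<string, unknown>;
      result: unknown;
    }

    interface BlockTrace {
      blockName: string;
      chainOfThought: string[];   // the step-by-step reasoning shown for the block
      toolCalls: ToolCallTrace[]; // supporting tools the LLM asked to use
      output: unknown;            // the output passed to the next block
    }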