Use LLM

Supported in: Batch

Call an LLM with a configurable prompt.

Expression categories: String

Declared arguments

  • Model - The LLM model to use.
    Model
  • Prompt - The user prompt to pass to an LLM model.
    List<Expression<AnyType>>
  • optional Output mode - Choose between a simple output, where the result has the configured output type and any error is returned as null, or a struct output with the result and the error as separate fields.
    Enum<Simple, With errors>
  • optional Output type - The output type LLM responses should adhere to.
    Type<Array<AnyType> | Boolean | Date | Decimal | Double | Float | Integer | Long | Short | String | Struct | Timestamp>
  • optional System prompt - The system prompt to pass to an LLM model.
    Literal<String>

Output type: Array<AnyType> | Boolean | Date | Decimal | Double | Float | Integer | Long | Short | String | Struct | Struct<ok: AnyType | Boolean | Date | Decimal | Double | Float | Integer | Long | Short | String | Struct | Timestamp, error> | Timestamp
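
To make the two output modes concrete, the sketch below models the behavior described above in plain Python. This is an illustrative analogy only, not Pipeline Builder syntax; the call_llm helper is hypothetical and stands in for the configured model call.

    # Illustrative sketch only: models the two output modes described above,
    # not the actual Pipeline Builder API. `call_llm` is a hypothetical helper.
    from typing import Optional

    def call_llm(prompt: Optional[str]) -> str:
        """Hypothetical stand-in for the configured LLM model call."""
        if not prompt:
            raise ValueError("Empty or null prompt")
        return "5"  # e.g. a review rating, as in Example 1 below

    def use_llm_simple(prompt: Optional[str]) -> Optional[str]:
        # Simple output mode: the value has the configured output type;
        # any error is returned as null (None).
        try:
            return call_llm(prompt)
        except Exception:
            return None

    def use_llm_with_errors(prompt: Optional[str]) -> dict:
        # "With errors" output mode: a struct with ok and error fields,
        # exactly one of which is populated.
        try:
            return {"ok": call_llm(prompt), "error": None}
        except Exception as err:
            return {"ok": None, "error": str(err)}

The null and edge-case examples below follow the simple-mode behavior, while Example 6 shows the With errors struct.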

Examples

Example 1: Base case

Argument values:

  • Model:
    gpt4ChatModel(
     temperature: 0.0,
    )
  • Prompt: prompt
  • Output mode: null
  • Output type: null
  • System prompt: In the context of a food delivery app, your job is to rate reviews given in the following user promp...

Result:

  • prompt: The food was great!
  • Output: 5

Example 2: Base case

Argument values:

  • Model:
    gpt4ChatModel(
     temperature: 0.0,
    )
  • Prompt: [prompt, mediaRef]
  • Output mode: null
  • Output type: null
  • System prompt: You are a highly advanced AI designed to assist healthcare professionals by interpreting medical ima...

Result:

  • prompt: Patient: John Doe, Age: 45, Symptoms: Persistent cough, shortness of breath, and chest pain. Please analyze the attached chest X-ray for any signs of pneumonia or other abnormalities.
  • mediaRef: {"mimeType":"image/jpeg","reference":{"type":"mediaSetViewItem","mediaSetViewItem":{"mediaSetRid":"r...
  • Output: Diagnostic Report:

    Patient: John Doe
    Age: 45
    Symptoms: Persistent cough, shortness of b...

Example 3: Null case

Argument values:

  • Model:
    gpt4ChatModel(
     temperature: 0.0,
    )
  • Prompt: prompt
  • Output mode: null
  • Output type: null
  • System prompt: null

Result:

  • prompt: null
  • Output: null

Example 4: Null case

Description: A MediaSet reference alone, without a text prompt, should produce a null output.

Argument values:

  • Model:
    gpt4ChatModel(
     temperature: 0.0,
    )
  • Prompt: mediaRef
  • Output mode: null
  • Output type: null
  • System prompt: null

Result:

  • mediaRef: {"mimeType":"image/jpeg","reference":{"type":"mediaSetViewItem","mediaSetViewItem":{"mediaSetRid":"r...
  • Output: null

Example 5: Edge case

Description: An empty input string should produce a null output.

Argument values:

  • Model:
    gpt4ChatModel(
     temperature: 0.0,
    )
  • Prompt: prompt
  • Output mode: null
  • Output type: null
  • System prompt: null

Result:

  • prompt: empty string
  • Output: null

Example 6: Edge case

Description: With the With errors output mode, an input prompt surpassing model limits returns a struct whose ok field is null and whose error field is populated.

Argument values:

  • Model:
    gpt4ChatModel(
     temperature: 0.0,
    )
  • Prompt: prompt
  • Output mode: WITH_ERRORS
  • Output type: null
  • System prompt: null

Results:

  • prompt: What is the capital of France?
    Output: {
     error: null,
     ok: Paris,
    }
  • prompt: a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a ...
    Output: {
     error: Context limit exceeded.,
     ok: null,
    }
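
When Output mode is set to With errors, downstream logic can branch on the error field instead of checking the whole value for null. A minimal sketch of that pattern, using an illustrative Python list that mimics the struct rows above (not Pipeline Builder syntax):

    # Illustrative only: `results` mimics the ok/error structs shown above;
    # this is plain Python, not Pipeline Builder syntax.
    results = [
        {"ok": "Paris", "error": None},
        {"ok": None, "error": "Context limit exceeded."},
    ]

    successes = [row["ok"] for row in results if row["error"] is None]
    failures = [row["error"] for row in results if row["error"] is not None]

    print(successes)  # ['Paris']
    print(failures)   # ['Context limit exceeded.']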

Example 7: Edge case

Description: An input prompt surpassing model limits should produce a null output.

Argument values:

  • Model:
    gpt4ChatModel(
     temperature: 0.0,
    )
  • Prompt: prompt
  • Output mode: null
  • Output type: null
  • System prompt: null

Result:

  • prompt: a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a ...
  • Output: null