output-sdk

    Type Alias Prompt

    Configuration for LLM prompt generation.

    const prompt: Prompt = {
      name: 'summarizePrompt',
      config: {
        provider: 'anthropic',
        model: 'claude-opus-4-1',
        temperature: 0.7,
        maxTokens: 2048
      },
      messages: [...]
    };
    type Prompt = {
        config: {
            maxTokens?: number;
            model: string;
            provider: string;
            providerOptions?: Record<string, unknown>;
            temperature?: number;
            tools?: Record<string, Record<string, unknown>>;
        };
        messages: PromptMessage[];
        name: string;
    }

    Properties

    config: {
        maxTokens?: number;
        model: string;
        provider: string;
        providerOptions?: Record<string, unknown>;
        temperature?: number;
        tools?: Record<string, Record<string, unknown>>;
    }

    General configuration for the LLM

    Type Declaration

    • Optional maxTokens?: number

      Maximum number of tokens in the response

    • model: string

      Model name/identifier

    • provider: string

      LLM provider (built-in: 'anthropic', 'openai', 'azure', 'vertex'; or any registered custom provider)

    • Optional providerOptions?: Record<string, unknown>

      Provider-specific options

    • Optional temperature?: number

      Generation temperature (0-2). Lower values are more deterministic.

    • Optional tools?: Record<string, Record<string, unknown>>

      Provider-specific tools with configuration.

      tools:
        googleSearch:
          mode: MODE_DYNAMIC
          dynamicThreshold: 0.8

      tools:
        webSearch:
          searchContextSize: high
          filters:
            allowedDomains: [wikipedia.org]
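
      In TypeScript, the YAML tool configurations above map onto the
      `tools` field's `Record<string, Record<string, unknown>>` shape.
      A sketch (tool names and option keys are taken from the examples
      above; they are provider-specific, not validated by this type):

    ```typescript
    // Google-style dynamic search tool, mirroring the first YAML snippet.
    const googleTools: Record<string, Record<string, unknown>> = {
      googleSearch: {
        mode: 'MODE_DYNAMIC',
        dynamicThreshold: 0.8,
      },
    };

    // Web-search tool with domain filtering, mirroring the second snippet.
    const webSearchTools: Record<string, Record<string, unknown>> = {
      webSearch: {
        searchContextSize: 'high',
        filters: { allowedDomains: ['wikipedia.org'] },
      },
    };
    ```

      Because the value type is `Record<string, unknown>`, the SDK passes
      these objects through to the provider without checking the option
      names; any typo is only caught by the provider at request time.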
    messages: PromptMessage[]

    Array of messages in the conversation

    name: string

    Name of the prompt file
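
    Putting the three properties together, a complete `Prompt` value looks
    like the sketch below. The inline types mirror the declaration above;
    the `role`/`content` shape of `PromptMessage` is an assumption for
    illustration (its actual fields are defined elsewhere in the SDK):

    ```typescript
    // Assumed message shape -- see the SDK's PromptMessage docs for the real one.
    type PromptMessage = { role: string; content: string };

    // Mirrors the Prompt type alias declared above.
    type Prompt = {
      name: string;
      config: {
        provider: string;
        model: string;
        temperature?: number;
        maxTokens?: number;
        providerOptions?: Record<string, unknown>;
        tools?: Record<string, Record<string, unknown>>;
      };
      messages: PromptMessage[];
    };

    const summarizePrompt: Prompt = {
      name: 'summarizePrompt',
      config: {
        provider: 'anthropic',
        model: 'claude-opus-4-1',
        temperature: 0.7,
        maxTokens: 2048,
      },
      messages: [
        { role: 'system', content: 'Summarize the user text in two sentences.' },
        { role: 'user', content: 'Large language models generate text by ...' },
      ],
    };
    ```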