Configuration for LLM prompt generation.

- General configuration for the LLM:
  - maxTokens?: number (optional): Maximum number of tokens in the response.
  - Model name/identifier.
  - LLM provider (built-in: 'anthropic', 'openai', 'azure', 'vertex'; or any registered custom provider).
  - providerOptions?: Record<string, unknown> (optional): Provider-specific options.
  - temperature?: number (optional): Generation temperature (0-2). Lower is more deterministic.
  - tools?: Record<string, Record<string, unknown>> (optional): Provider-specific tools with configuration.
- Array of messages in the conversation.
- Name of the prompt file.
Example
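The body of the example did not survive extraction. As a minimal sketch only: the optional fields (maxTokens, providerOptions, temperature, tools) are taken from the documentation above, while the names `LLMConfig`, `PromptConfig`, `model`, `provider`, `messages`, and `name`, and all concrete values, are assumptions introduced here for illustration.

```typescript
// Hypothetical shape of the LLM configuration; only the optional
// fields below are confirmed by the documentation above.
interface LLMConfig {
  maxTokens?: number;                              // max tokens in the response
  model: string;                                   // model name/identifier (assumed name)
  provider: string;                                // 'anthropic' | 'openai' | 'azure' | 'vertex' | custom
  providerOptions?: Record<string, unknown>;       // provider-specific options
  temperature?: number;                            // 0-2; lower is more deterministic
  tools?: Record<string, Record<string, unknown>>; // provider-specific tools with configuration
}

// Hypothetical top-level prompt configuration (assumed names).
interface PromptConfig {
  llm: LLMConfig;                                     // general configuration for the LLM
  messages: Array<{ role: string; content: string }>; // conversation messages
  name: string;                                       // name of the prompt file
}

// Illustrative values only; model and provider strings are placeholders.
const config: PromptConfig = {
  name: "summarize",
  llm: {
    provider: "anthropic",
    model: "example-model",
    temperature: 0.2,  // low temperature for more deterministic output
    maxTokens: 1024,
  },
  messages: [
    { role: "user", content: "Summarize the following text: ..." },
  ],
};
```

A low temperature plus an explicit maxTokens cap is a common choice for reproducible, bounded completions; providerOptions and tools would carry anything specific to the chosen provider.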