Optional anthropic
API key for Anthropic. Required when using Anthropic/Claude models.
Optional google
API key for Google Gemini. Required when using Google models.
Optional hooks
Lifecycle hooks called at various points during execution.
Optional llama
Directory path for Llama.cpp models. Required when using the Llama.cpp client.
Optional log
Log level for internal debug logging.
Optional metadata
Arbitrary metadata passed to custom model providers.
model
The given model determines both which LLM is used and how your request is executed.
The simplest case is to specify the name of a model from lib/models.ts. Example:
model: "claude-sonnet-4-6"
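As a sketch, a config combining several of the options documented on this page might look like the following. The field names follow this reference; how the object is consumed by Smoltalk is not shown here, and the key value is a placeholder.

```typescript
// Sketch only: field names follow this reference page, but this object's
// exact consumption by Smoltalk is an assumption.
const config = {
  anthropic: "<anthropic-api-key>", // required for Anthropic/Claude models
  model: "claude-sonnet-4-6",       // a model name from lib/models.ts
};
```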
You can instead specify a strategy to execute. For example:
model: {
  type: "race",
  params: {
    strategies: ["gemini-2.5-flash-lite", "gemini-2.5-pro"],
  },
}
In this case, Smoltalk runs your request against both LLMs simultaneously and takes the response that finishes first.
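The race behavior is conceptually similar to Promise.race. A minimal sketch (not Smoltalk's actual implementation; callModel is a stand-in for a real LLM request):

```typescript
// Conceptual sketch of a "race" strategy using Promise.race.
// callModel simulates an LLM call with a fixed latency; it is illustrative only.
async function callModel(name: string, delayMs: number): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, delayMs));
  return `${name} finished`;
}

// Fire all strategies at once and keep whichever response settles first.
function raceStrategies(models: Array<[string, number]>): Promise<string> {
  return Promise.race(models.map(([name, ms]) => callModel(name, ms)));
}

raceStrategies([
  ["gemini-2.5-flash-lite", 10],
  ["gemini-2.5-pro", 50],
]).then((winner) => console.log(winner)); // the faster model wins
```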
You can also specify fallbacks in case the first model returns an error. This is a good way to try a fast model first and switch to a slower but more powerful model if it fails.
model: {
  type: "fallback",
  params: {
    primaryStrategy: "gemini-2.5-flash-lite",
    config: {
      error: ["gemini-2.5-pro"],
    },
  },
}
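Conceptually, a fallback strategy tries the primary model and, if it throws, tries each fallback in order. A sketch of that control flow (not Smoltalk's implementation; the function and its signature are illustrative):

```typescript
// Conceptual sketch of a "fallback" strategy. Illustrative only:
// withFallback is not a Smoltalk API.
async function withFallback(
  primary: () => Promise<string>,
  fallbacks: Array<() => Promise<string>>,
): Promise<string> {
  try {
    return await primary();
  } catch {
    for (const next of fallbacks) {
      try {
        return await next();
      } catch {
        // this fallback failed too; try the next one
      }
    }
    throw new Error("all models failed");
  }
}
```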
You can of course combine strategies to create more complex behavior:
const geminiLiteWithFallback = {
  type: "fallback",
  params: {
    primaryStrategy: "gemini-2.5-flash-lite",
    config: {
      error: ["gemini-2.5-pro"],
    },
  },
};

model: {
  type: "race",
  params: {
    strategies: ["gemini-2.5-pro", geminiLiteWithFallback],
  },
}
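Presumably the nesting works in the other direction as well. The following sketch assumes the same schema as the examples above but has not been verified against Smoltalk itself: a race used as the error fallback.

```typescript
// Illustrative only: follows the strategy schema shown above, but this
// particular nesting (race inside fallback) is an assumption.
const raceOnError = {
  type: "race",
  params: {
    strategies: ["gemini-2.5-pro", "claude-sonnet-4-6"],
  },
};

const model = {
  type: "fallback",
  params: {
    primaryStrategy: "gemini-2.5-flash-lite",
    config: {
      error: [raceOnError],
    },
  },
};
```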
Optional ollama
API key for Ollama. Only needed when connecting to a cloud-hosted Ollama instance.
Optional ollama
Base URL for the Ollama server. Defaults to localhost if not set. (Ollama only)
Optional open
API key for OpenAI. Required when using OpenAI models.
Optional provider
Override the provider for the given model (e.g., use a custom endpoint for an OpenAI-compatible model).
Optional statelog
Configuration for Statelog observability/tracing integration.