smoltalk

    Type Alias SmolConfig

    type SmolConfig = {
        anthropicApiKey?: string;
        googleApiKey?: string;
        hooks?: Partial<
            {
                onEnd: (result: PromptResult) => void;
                onError: (error: Error) => void;
                onStart: (config: PromptConfig) => void;
                onStrategyStart: (strategy: Strategy, config: SmolPromptConfig) => void;
                onToolCall: (toolCall: ToolCall) => void;
            }
        >;
        llamaCppModelDir?: string;
        logLevel?: LogLevel;
        metadata?: Record<string, any>;
        model: ModelParam;
        ollamaApiKey?: string;
        ollamaHost?: string;
        openAiApiKey?: string;
        provider?: string;
        statelog?: Partial<
            {
                apiKey: string;
                debugMode: boolean;
                host: string;
                projectId: string;
                traceId: string;
            }
        >;
    }
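As a sketch, a minimal configuration might look like the following. Only `model` is required; everything else is optional. The import path and environment-variable name are assumptions for illustration:

```typescript
// Import path is an assumption; adjust to how smoltalk is actually published.
import type { SmolConfig } from "smoltalk";

const config: SmolConfig = {
    // A model name from lib/models.ts selects the client and strategy.
    model: "claude-sonnet-4-6",
    // Required when using Anthropic/Claude models.
    anthropicApiKey: process.env.ANTHROPIC_API_KEY,
    // Arbitrary metadata passed through to custom model providers.
    metadata: { requestSource: "docs-example" },
};
```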
    Index

    Properties

    anthropicApiKey?: string

    API key for Anthropic. Required when using Anthropic/Claude models.

    googleApiKey?: string

    API key for Google Gemini. Required when using Google models.

hooks?: Partial<
    {
        onEnd: (result: PromptResult) => void;
        onError: (error: Error) => void;
        onStart: (config: PromptConfig) => void;
        onStrategyStart: (strategy: Strategy, config: SmolPromptConfig) => void;
        onToolCall: (toolCall: ToolCall) => void;
    }
>

    Lifecycle hooks called at various points during execution.
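Because the hooks object is wrapped in Partial, any subset of hooks may be supplied. A sketch (the hook bodies are hypothetical; only the signatures come from the type above):

```typescript
// Import path is an assumption; adjust to how smoltalk is actually published.
import type { SmolConfig } from "smoltalk";

const hooks: SmolConfig["hooks"] = {
    // Called when a prompt begins; receives the resolved config.
    onStart: (config) => console.log("prompt started"),
    // Called once per tool invocation made by the model.
    onToolCall: (toolCall) => console.log("tool call", toolCall),
    // Called with the final result when execution completes.
    onEnd: (result) => console.log("prompt finished"),
    // Called if execution fails.
    onError: (error) => console.error("prompt failed", error),
};
```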

    llamaCppModelDir?: string

    Directory path for Llama.cpp models. Required when using the Llama.cpp client.

    logLevel?: LogLevel

    Log level for internal debug logging.

    metadata?: Record<string, any>

    Arbitrary metadata passed to custom model providers.

    model: ModelParam

The given model determines two things:

• which client is used
• which strategy is executed

    The simplest case is to specify the name of a model from lib/models.ts. Example:

      model: "claude-sonnet-4-6"
    

    You can instead specify a strategy to execute. For example:

      model: {
        type: "race",
        params: {
          strategies: ["gemini-2.5-flash-lite", "gemini-2.5-pro"],
        },
      }

In this case, Smoltalk runs your request against both LLMs simultaneously and takes whichever response finishes first.

You can also specify fallbacks in case the first model returns an error. This can be a good way to try a fast model first and fall back to a slower but more powerful model if it fails.

      model: {
        type: "fallback",
        params: {
          primaryStrategy: "gemini-2.5-flash-lite",
          config: {
            error: ["gemini-2.5-pro"],
          },
        },
      }

You can, of course, combine strategies to create more complex behavior:

      const geminiLiteWithFallback = {
        type: "fallback",
        params: {
          primaryStrategy: "gemini-2.5-flash-lite",
          config: {
            error: ["gemini-2.5-pro"],
          },
        },
      };

      model: {
        type: "race",
        params: {
          strategies: ["gemini-2.5-pro", geminiLiteWithFallback],
        },
      }

    ollamaApiKey?: string

    API key for Ollama. Only needed when connecting to a cloud-hosted Ollama instance.

    ollamaHost?: string

    Base URL for the Ollama server. Defaults to localhost if not set. (Ollama only)

    openAiApiKey?: string

    API key for OpenAI. Required when using OpenAI models.

    provider?: string

    Override the provider for the given model (e.g., use a custom endpoint for an OpenAI-compatible model).
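A hedged sketch of a provider override; the model name and provider string below are hypothetical placeholders, since the accepted provider values are not listed on this page:

```typescript
// Import path is an assumption; adjust to how smoltalk is actually published.
import type { SmolConfig } from "smoltalk";

const config: SmolConfig = {
    model: "my-local-model",       // hypothetical model name
    provider: "openai-compatible", // hypothetical provider identifier
};
```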

    statelog?: Partial<
        {
            apiKey: string;
            debugMode: boolean;
            host: string;
            projectId: string;
        traceId: string;
    }
>

    Configuration for Statelog observability/tracing integration.
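Since the statelog object is also wrapped in Partial, any subset of its documented fields may be provided. A sketch with placeholder values (the host, project id, and key below are illustrative, not real endpoints or credentials):

```typescript
// Import path is an assumption; adjust to how smoltalk is actually published.
import type { SmolConfig } from "smoltalk";

const config: SmolConfig = {
    model: "gemini-2.5-flash-lite",
    statelog: {
        apiKey: "YOUR_STATELOG_API_KEY",      // placeholder credential
        host: "https://statelog.example.com", // placeholder host
        projectId: "my-project",              // placeholder project id
        debugMode: false,
    },
};
```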