smoltalk

    Type Alias SmolConfig

    type SmolConfig = {
        anthropicApiKey?: string;
        googleApiKey?: string;
        logLevel?: LogLevel;
        model: ModelParam;
        ollamaApiKey?: string;
        ollamaHost?: string;
        openAiApiKey?: string;
        provider?: string;
        statelog?: Partial<{
            apiKey: string;
            debugMode: boolean;
            host: string;
            projectId: string;
            traceId: string;
        }>;
    }
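For orientation, a minimal configuration object could look like the sketch below. Note that `SmolConfigSketch` is a trimmed-down stand-in declared locally so the snippet is self-contained, and the `LogLevel` values shown are an assumption; in real code, import `SmolConfig` from smoltalk and consult the `LogLevel` and `ModelParam` docs for the actual shapes.

```typescript
// A minimal sketch only: SmolConfigSketch is a trimmed-down local mirror
// of SmolConfig; in real code, import the SmolConfig type from smoltalk.
type SmolConfigSketch = {
    anthropicApiKey?: string;
    logLevel?: "debug" | "info" | "warn" | "error"; // assumed LogLevel shape
    model: string; // ModelParam also accepts strategy objects, described below
};

const config: SmolConfigSketch = {
    // API keys are optional here; in real code they are typically
    // read from the environment rather than hard-coded.
    model: "claude-sonnet-4-6",
    logLevel: "info",
};
```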
    Index

    Properties

    anthropicApiKey?: string
    googleApiKey?: string
    logLevel?: LogLevel
    model: ModelParam

    The given model determines both

    • which client is used
    • which strategy is executed

    The simplest case is to specify the name of a model from lib/models.ts. Example:

      model: "claude-sonnet-4-6"
    

    Alternatively, you can let Smoltalk pick the model it thinks will be best for certain parameters. For example:

      model: {
        // find the fastest model
        optimizeFor: ["speed"],

        // from either Anthropic or Google, whichever is faster
        providers: ["anthropic", "google"],

        limit: {
          // 1 mil input tokens + 1 mil output tokens together
          // should cost less than $10 for the models being considered
          cost: 10,
        },
      }

    This can be a good option because, as better models come out, you won't need to update your code: just update Smoltalk and it will pick the best model automatically.

    Finally, you can instead specify a strategy to execute. For example:

      model: {
        type: "race",
        params: {
          strategies: ["gemini-2.5-flash-lite", "gemini-2.5-pro"],
        },
      }

    In this case, Smoltalk will run your request against both LLMs simultaneously and take the response that finishes first.

    You can also choose to specify fallbacks in case the first model returns an error for some reason. This can be a good way to try something with a fast model and then use a slower but more powerful model if the first one fails.

      model: {
        type: "fallback",
        params: {
          primaryStrategy: "gemini-2.5-flash-lite",
          config: {
            error: ["gemini-2.5-pro"],
          },
        },
      }

    You can of course combine strategies together to create more complex behavior:

      const geminiLiteWithFallback = {
        type: "fallback",
        params: {
          primaryStrategy: "gemini-2.5-flash-lite",
          config: {
            error: ["gemini-2.5-pro"],
          },
        },
      };

      model: {
        type: "race",
        params: {
          strategies: ["gemini-2.5-pro", geminiLiteWithFallback],
        },
      }
    ollamaApiKey?: string
    ollamaHost?: string
    openAiApiKey?: string
    provider?: string
    statelog?: Partial<{
        apiKey: string;
        debugMode: boolean;
        host: string;
        projectId: string;
        traceId: string;
    }>
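    Since statelog is wrapped in Partial, any subset of these fields is valid. A minimal sketch (the StatelogConfig type is re-declared locally for illustration, and the projectId value is a made-up example):

```typescript
// Local mirror of the statelog shape for illustration only;
// in real code this comes from the SmolConfig type in smoltalk.
type StatelogConfig = {
    apiKey: string;
    debugMode: boolean;
    host: string;
    projectId: string;
    traceId: string;
};

// Partial<StatelogConfig> makes every field optional,
// so supplying only a subset type-checks fine.
const statelog: Partial<StatelogConfig> = {
    debugMode: true,
    projectId: "my-project", // illustrative value, not a real project id
};
```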