# Multi-LLM Providers
Teleton supports multiple LLM providers for flexibility and cost optimization.
## Supported Providers
| Provider | Models | Best For |
|---|---|---|
| `anthropic` | Claude 3.5, Claude 3 | Complex reasoning, tool use |
| `openai` | GPT-4o, GPT-4 | General tasks |
| `groq` | Llama 3, Mixtral | Fast inference, cost-effective |
| `ollama` | Any local model | Privacy, offline use |
## Anthropic (Default)
**config.yaml**

```yaml
llm:
  provider: anthropic
  model: claude-sonnet-4-20250514
  apiKey: ${ANTHROPIC_API_KEY}
  maxTokens: 4096
```

## OpenAI
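The `${ANTHROPIC_API_KEY}` syntax pulls the key from an environment variable rather than storing it in the file. A minimal sketch of that substitution, assuming Teleton's config loader follows the common `${VAR}` convention (Python's standard `os.path.expandvars` implements the same syntax):

```python
import os

# Normally exported in your shell; set here so the example is self-contained.
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-example"

raw = "apiKey: ${ANTHROPIC_API_KEY}"
print(os.path.expandvars(raw))  # -> apiKey: sk-ant-example
```

The actual loader may differ (for example, it may fail fast on unset variables), but the principle is the same: the YAML on disk never contains the secret itself.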
**config.yaml**

```yaml
llm:
  provider: openai
  model: gpt-4o
  apiKey: ${OPENAI_API_KEY}
```

## Groq
Ultra-fast inference with open-source models:
**config.yaml**

```yaml
llm:
  provider: groq
  model: llama-3.1-70b-versatile
  apiKey: ${GROQ_API_KEY}
```

## Ollama (Local)
Run models locally for privacy and offline use:
**config.yaml**

```yaml
llm:
  provider: ollama
  model: llama3.1
  baseUrl: http://localhost:11434
```
> **Hardware requirements:** Local models need sufficient hardware. Recommended: 16 GB+ RAM for 7B models, 32 GB+ for larger models.
## Fallback Configuration
Configure fallback providers for reliability:
**config.yaml**

```yaml
llm:
  provider: anthropic
  model: claude-sonnet-4-20250514
  apiKey: ${ANTHROPIC_API_KEY}
  fallback:
    provider: groq
    model: llama-3.1-70b-versatile
    apiKey: ${GROQ_API_KEY}
```
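The fallback behaviour can be pictured as "try the primary, and only call the fallback if the primary errors out." A minimal sketch with hypothetical helper names (Teleton's internal API may differ; the stub functions below stand in for real provider clients):

```python
def call_llm(prompt, providers):
    """Try each (name, call_fn) pair in order; return the first success."""
    last_err = None
    for name, call_fn in providers:
        try:
            return name, call_fn(prompt)
        except Exception as err:  # a real client would narrow this to API errors
            last_err = err
    raise RuntimeError("all providers failed") from last_err

# Stubs: the primary (anthropic) is down, so the chain falls through to groq.
def anthropic_call(prompt):
    raise TimeoutError("primary provider unavailable")

def groq_call(prompt):
    return f"echo: {prompt}"

provider_chain = [("anthropic", anthropic_call), ("groq", groq_call)]
print(call_llm("hello", provider_chain))  # -> ('groq', 'echo: hello')
```

Ordering the chain from highest-quality to cheapest/fastest (as in the config above) means the fallback trades some output quality for availability rather than failing the request outright.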