# LLM providers
Koog works with major LLM providers and also supports local models using Ollama. The following providers are currently supported:
| LLM provider | Choose for |
|---|---|
| OpenAI (including Azure OpenAI Service) | Advanced models with a wide range of capabilities. |
| Anthropic | Long contexts and prompt caching. |
| Google | Multimodal processing (audio, video), large contexts. |
| DeepSeek | Cost-effective reasoning and coding. |
| OpenRouter | A single integration with access to multiple models from multiple providers, for flexibility, provider comparison, and a unified API. |
| Amazon Bedrock | AWS-native environment, enterprise security and compliance, multi-provider access. |
| Mistral | European data hosting, GDPR compliance. |
| Alibaba (DashScope) | Large contexts and cost-efficient Qwen models. |
| Ollama | Privacy, local development, offline operation, and no API costs. |
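For orientation, creating a client for a given provider is typically a one-line constructor call. The sketch below is illustrative only: the class names follow Koog's client naming convention, but the package paths and constructor signatures are assumptions that may differ between Koog versions.

```kotlin
// Illustrative sketch; package paths and constructor signatures are
// assumptions and may differ between Koog versions.
import ai.koog.prompt.executor.clients.anthropic.AnthropicLLMClient
import ai.koog.prompt.executor.clients.openai.OpenAILLMClient
import ai.koog.prompt.executor.ollama.client.OllamaClient

// Cloud providers authenticate with an API key, typically read from the environment.
val openAIClient = OpenAILLMClient(System.getenv("OPENAI_API_KEY"))
val anthropicClient = AnthropicLLMClient(System.getenv("ANTHROPIC_API_KEY"))

// Ollama talks to a locally running server, so no API key is needed.
val ollamaClient = OllamaClient()
```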
The table below shows the LLM capabilities that Koog supports and which providers offer these capabilities in their models.
The * symbol means that the capability is supported only by specific models of the provider.
| LLM capability | OpenAI | Anthropic | Google | DeepSeek | OpenRouter | Amazon Bedrock | Mistral | Alibaba (DashScope) | Ollama (local models) |
|---|---|---|---|---|---|---|---|---|---|
| Supported input | Text, image, audio, document | Text, image, document* | Text, image, audio, video, document* | Text | Differs by model | Differs by model | Text, image, document* | Text, image, audio, video* | Text, image* |
| Response streaming | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Tools | ✓ | ✓ | ✓ | ✓ | ✓ | ✓* | ✓ | ✓ | ✓ |
| Tool choice | ✓ | ✓ | ✓ | ✓ | ✓ | ✓* | ✓ | ✓ | – |
| Structured output (JSON Schema) | ✓ | – | ✓ | ✓ | ✓* | – | ✓ | ✓* | ✓ |
| Multiple choices | ✓ | – | ✓ | – | ✓* | ✓* | ✓ | ✓* | – |
| Temperature | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Speculation | ✓* | – | – | – | ✓* | – | ✓* | ✓* | – |
| Content moderation | ✓ | – | – | – | – | ✓ | ✓ | – | ✓ |
| Embeddings | ✓ | – | – | – | – | ✓ | ✓ | – | ✓ |
| Prompt caching | ✓* | ✓ | – | – | – | – | – | – | – |
| Completion | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Local execution | – | – | – | – | – | – | – | – | ✓ |
> **Note**
> Koog supports the most commonly used capabilities for creating AI agents. LLMs from each provider may have additional features that Koog does not currently support. To learn more, refer to Model capabilities.
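As one example of these capabilities in use, the sketch below streams a response through a direct client. It assumes an `executeStreaming` method on the client that returns a flow of text chunks; treat the method name, model reference, and package paths as assumptions that may vary between Koog versions.

```kotlin
import ai.koog.prompt.dsl.prompt
import ai.koog.prompt.executor.clients.openai.OpenAILLMClient
import ai.koog.prompt.executor.clients.openai.OpenAIModels
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val client = OpenAILLMClient(System.getenv("OPENAI_API_KEY"))

    // Build a prompt with Koog's prompt DSL.
    val demoPrompt = prompt("streaming-demo") {
        system("You are a concise assistant.")
        user("Explain response streaming in one sentence.")
    }

    // executeStreaming (assumed name) emits the response incrementally
    // as a Flow of text chunks instead of one final message.
    client.executeStreaming(demoPrompt, OpenAIModels.Chat.GPT4o).collect { chunk ->
        print(chunk)
    }
}
```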
## Working with providers
Koog lets you work with LLM providers on two levels:
- Using an LLM client for direct interaction with a specific provider. Each client implements the `LLMClient` interface, handling authentication, request formatting, and response parsing for the provider. For details, see Running prompts with LLM clients.
- Using a prompt executor for a higher-level abstraction that wraps one or multiple LLM clients, manages their lifecycles, and unifies the interface across providers. It can optionally fall back to a single LLM client if a specific provider is unavailable. Prompt executors also handle failures, retries, and switching between providers. You can either create your own executor or use a predefined prompt executor for a specific provider. For details, see Running prompts with prompt executors. Both levels are shown in the sketch after this list.
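The difference between the two levels might look like the following sketch. `OpenAILLMClient`, `simpleOpenAIExecutor`, and `OpenAIModels.Chat.GPT4o` appear in Koog's documentation, but the package paths and exact call signatures here are assumptions that may vary between versions.

```kotlin
import ai.koog.prompt.dsl.prompt
import ai.koog.prompt.executor.clients.openai.OpenAILLMClient
import ai.koog.prompt.executor.clients.openai.OpenAIModels
import ai.koog.prompt.executor.llms.all.simpleOpenAIExecutor
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val apiKey = System.getenv("OPENAI_API_KEY")
    val demoPrompt = prompt("two-levels") { user("Say hello.") }

    // Level 1: an LLM client is bound to one provider and talks to it directly.
    val client = OpenAILLMClient(apiKey)
    val responses = client.execute(prompt = demoPrompt, model = OpenAIModels.Chat.GPT4o)
    println(responses)

    // Level 2: a prompt executor wraps one or more clients behind a
    // provider-independent interface, so the call shape stays the same
    // whichever provider backs it.
    val executor = simpleOpenAIExecutor(apiKey)
    val response = executor.execute(prompt = demoPrompt, model = OpenAIModels.Chat.GPT4o)
    println(response)
}
```

In agent code you would normally pass the executor rather than a client, so that providers can be swapped without touching call sites.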
## Next steps
- Create and run an agent with a specific LLM provider.
- Learn more about prompts and how to choose between LLM clients and prompt executors.