# LLM providers
Koog works with major LLM providers and also supports local models using Ollama. The following providers are currently supported:
| LLM provider | Choose for |
|---|---|
| OpenAI (including Azure OpenAI Service) | Advanced models with a wide range of capabilities. |
| Anthropic | Long contexts and prompt caching. |
| Google | Multimodal processing (audio, video), large contexts. |
| DeepSeek | Cost-effective reasoning and coding. |
| OpenRouter | One integration with access to multiple models from multiple providers, for flexibility, provider comparison, and a unified API. |
| Amazon Bedrock | AWS-native environment, enterprise security and compliance, multi-provider access. |
| Mistral | European data hosting, GDPR compliance. |
| Alibaba (DashScope OpenAI-compatible client) | Large contexts and cost-efficient Qwen models. |
| Ollama | Privacy, local development, offline operation, and no API costs. |
The table below shows the LLM capabilities that Koog supports and which providers offer these capabilities in their models.
The * symbol means that the capability is supported only by specific models from the provider.
| LLM capability | OpenAI | Anthropic | Google | DeepSeek | OpenRouter | Amazon Bedrock | Mistral | Alibaba (DashScope OpenAI-compatible client) | Ollama (local models) |
|---|---|---|---|---|---|---|---|---|---|
| Supported input | Text, image, audio, document | Text, image, document* | Text, image, audio, video, document* | Text | Differs by model | Differs by model | Text, image, document* | Text, image, audio, video* | Text, image* |
| Response streaming | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Tools | ✓ | ✓ | ✓ | ✓ | ✓ | ✓* | ✓ | ✓ | ✓ |
| Tool choice | ✓ | ✓ | ✓ | ✓ | ✓ | ✓* | ✓ | ✓ | – |
| Structured output (JSON Schema) | ✓ | – | ✓ | ✓ | ✓* | – | ✓ | ✓* | ✓ |
| Multiple choices | ✓ | – | ✓ | – | ✓* | ✓* | ✓ | ✓* | – |
| Temperature | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Speculation | ✓* | – | – | – | ✓* | – | ✓* | ✓* | – |
| Content moderation | ✓ | – | – | – | – | ✓ | ✓ | – | ✓ |
| Embeddings | ✓ | – | – | – | – | ✓ | ✓ | – | ✓ |
| Prompt caching | ✓* | ✓ | – | – | – | – | – | – | – |
| Completion | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Local execution | – | – | – | – | – | – | – | – | ✓ |
> **Note**
> Koog supports the most commonly used capabilities for creating AI agents. LLMs from each provider may have additional features that Koog does not currently support. To learn more, refer to Model capabilities.
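These capabilities surface in Koog as part of a model's definition: a model is described by its provider, an identifier, and the list of capabilities it supports. The following is a minimal sketch of a custom model definition, assuming the `LLModel`, `LLMProvider`, and `LLMCapability` types from the Koog API; the exact constructor parameters and the full capability set are documented on the Model capabilities page.

```kotlin
// Koog imports (ai.koog.* packages) omitted for brevity; see the Model capabilities page.

// Hypothetical description of a locally served Ollama model. The capability
// list mirrors the columns of the table above; constructor parameters may
// differ slightly between Koog versions.
val localModel = LLModel(
    provider = LLMProvider.Ollama,
    id = "llama3.2",
    capabilities = listOf(
        LLMCapability.Completion,  // plain text completion
        LLMCapability.Temperature, // sampling temperature control
        LLMCapability.Tools        // tool (function) calling
    )
)
```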
## Working with providers
Koog lets you work with LLM providers on two levels:
- Using an LLM client for direct interaction with a specific provider. Each client implements the `LLMClient` interface and handles authentication, request formatting, and response parsing for that provider (see the first sketch below). For details, see LLM clients.
- Using a prompt executor as a higher-level abstraction that wraps one or more LLM clients, manages their lifecycles, and provides a unified interface across providers. In multi-provider setups, it can route requests between providers and optionally fall back to a designated provider and LLM through the corresponding client (see the second sketch below). You can create your own executor or use one of the pre-defined prompt executors, which are available for both single-provider and multi-provider setups. For details, see Prompt executors.
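For the client level, here is a minimal sketch of direct, single-provider use. It assumes the `OpenAILLMClient` class, the `prompt` DSL builder, and the `OpenAIModels` model catalog from the Koog API; check the LLM clients page for the exact packages and signatures.

```kotlin
import kotlinx.coroutines.runBlocking
// Koog imports (ai.koog.* packages) omitted for brevity; see the LLM clients page.

fun main() = runBlocking {
    // A client talks to exactly one provider and handles authentication,
    // request formatting, and response parsing for it.
    val client = OpenAILLMClient(System.getenv("OPENAI_API_KEY"))

    val responses = client.execute(
        prompt = prompt("greeting") {
            system("You are a concise assistant.")
            user("Say hello in one sentence.")
        },
        model = OpenAIModels.Chat.GPT4o
    )

    responses.forEach { println(it.content) }
}
```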
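For the executor level, here is a sketch of both a pre-defined single-provider executor and a multi-provider executor. It assumes names such as `simpleOpenAIExecutor`, `MultiLLMPromptExecutor`, `LLMProvider`, and `AnthropicLLMClient`; check the Prompt executors page for the exact API.

```kotlin
import kotlinx.coroutines.runBlocking
// Koog imports (ai.koog.* packages) omitted for brevity; see the Prompt executors page.

fun main() = runBlocking {
    // Pre-defined single-provider executor: wraps one OpenAI client.
    val openAIExecutor = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY"))

    val reply = openAIExecutor.execute(
        prompt = prompt("hello") { user("Summarize Koog in one sentence.") },
        model = OpenAIModels.Chat.GPT4o
    )
    println(reply)

    // Multi-provider executor: wraps several clients and routes each request
    // to the client that matches the requested model's provider.
    val multiExecutor = MultiLLMPromptExecutor(
        LLMProvider.OpenAI to OpenAILLMClient(System.getenv("OPENAI_API_KEY")),
        LLMProvider.Anthropic to AnthropicLLMClient(System.getenv("ANTHROPIC_API_KEY"))
    )
    // Passing an Anthropic model here would route the call to the Anthropic client.
    println(multiExecutor.execute(prompt("hello") { user("Hi!") }, OpenAIModels.Chat.GPT4o))
}
```

Agents take an executor in the same way, so the same configuration applies whether you target one provider or several; see the next steps below for creating and running an agent.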
## Next steps
- Create and run an agent with a specific LLM provider.
- Learn more about prompts.