Prompts

Prompts are instructions for Large Language Models (LLMs) that guide them in generating responses. They define the content and structure of your interactions with LLMs. This section describes how to create and run prompts with Koog.

Creating prompts

In Koog, prompts are instances of the Prompt data class with the following properties:

  • id: A unique identifier for the prompt.
  • messages: A list of messages that represent the conversation with the LLM.
  • params: Optional LLM configuration parameters (such as temperature, tool choice, and others).

Although you can instantiate the Prompt class directly, the recommended way to create prompts is by using the Kotlin DSL, which provides a structured way to define the conversation.

val myPrompt = prompt("hello-koog") {
    system("You are a helpful assistant.")
    user("What is Koog?")
}
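
To tune generation, you can also pass configuration parameters when building a prompt. The following is a minimal sketch, assuming the parameters type is LLMParams with a temperature field (the exact type and field names may differ in your Koog version):

val tunedPrompt = prompt(
    "hello-koog-tuned",
    // Assumption: the prompt builder accepts an LLMParams instance
    // carrying settings such as temperature.
    params = LLMParams(temperature = 0.2)
) {
    system("You are a helpful assistant.")
    user("What is Koog?")
}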

Note

AI agents can take a simple text prompt as input. They automatically convert the text prompt to a Prompt object and send it to the LLM for execution. This is useful for a basic agent that only needs to run a single request and does not require complex conversation logic.

Running prompts

Koog provides two levels of abstraction for running prompts against LLMs: LLM clients and prompt executors. Both accept Prompt objects and can be used for direct prompt execution, without an AI agent. The execution flow is the same for both clients and executors:

flowchart TB
    A([Prompt built with Kotlin DSL])
    B{LLM client or prompt executor}
    C[LLM provider]
    D([Response to your application])

    A -->|"passed to"| B
    B -->|"sends request"| C
    C -->|"returns response"| B
    B -->|"returns result"| D

  • LLM clients

    Low-level interfaces for direct interaction with specific LLM providers. Use them when you work with a single provider and do not need advanced lifecycle management.

  • Prompt executors

    High-level abstractions that manage the lifecycles of one or multiple LLM clients. Use them when you need a unified API for running prompts across multiple providers, with dynamic switching between them and fallbacks. See the sketch after this list for both levels in action.
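
The following sketch runs the prompt from the earlier example at both levels. It assumes an OpenAILLMClient class and execute methods that take a prompt and a model; these names and signatures are assumptions and may differ by version:

// Both calls are suspending, so run them inside a coroutine.

// LLM client: direct, provider-specific execution.
val client = OpenAILLMClient(apiKey)
val clientResponse = client.execute(prompt = myPrompt, model = OpenAIModels.Chat.GPT4o)

// Prompt executor: a higher-level wrapper that can manage one or more clients.
val executor = simpleOpenAIExecutor(apiKey)
val executorResponse = executor.execute(prompt = myPrompt, model = OpenAIModels.Chat.GPT4o)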

Optimizing performance and handling failures

Koog allows you to optimize performance and handle failures when running prompts.

  • LLM response caching

    Cache LLM responses to optimize performance and reduce costs for repeated requests.

  • Handling failures

    Use built-in retries, timeouts, and other error-handling mechanisms in your application. A generic sketch of the retry pattern follows this list.
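
The linked pages cover Koog's built-in mechanisms. Purely to illustrate the underlying pattern, here is a generic plain-Kotlin sketch of retries with a per-attempt timeout; runPrompt stands for any suspending prompt execution, such as the executor call sketched earlier, and is not a Koog API:

import kotlinx.coroutines.delay
import kotlinx.coroutines.withTimeout

// Retries a suspending call with a per-attempt timeout and exponential backoff.
// Note: a production version should rethrow CancellationException instead of
// swallowing it along with other exceptions.
suspend fun <T> withRetries(
    attempts: Int = 3,
    timeoutMs: Long = 30_000,
    initialDelayMs: Long = 500,
    runPrompt: suspend () -> T
): T {
    var delayMs = initialDelayMs
    repeat(attempts - 1) {
        try {
            return withTimeout(timeoutMs) { runPrompt() }
        } catch (e: Exception) {
            delay(delayMs)  // back off before the next attempt
            delayMs *= 2    // exponential backoff
        }
    }
    return withTimeout(timeoutMs) { runPrompt() }  // last attempt propagates failures
}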

Prompts in AI agents

In Koog, AI agents maintain and manage prompts during their lifecycle. While LLM clients or executors are used to run prompts, agents handle the flow of prompt updates, ensuring the conversation history remains relevant and consistent.

The prompt lifecycle in an agent usually includes several stages:

  1. Initial prompt setup.
  2. Automatic prompt updates.
  3. Context window management.
  4. Manual prompt management.

Initial prompt setup

When you initialize an agent, you define a system message that sets the agent's behavior. Then, when you call the agent's run() method, you typically provide an initial user message as input. Together, these messages form the agent's initial prompt. For example:

// Create an agent
val agent = AIAgent(
    promptExecutor = simpleOpenAIExecutor(apiKey),
    systemPrompt = "You are a helpful assistant.",
    llmModel = OpenAIModels.Chat.GPT4o
)

// Run the agent
val result = agent.run("What is Koog?")

In the example above, the agent automatically converts the text prompt to a Prompt object and sends it to the prompt executor:

flowchart TB
    A([Your application])
    B{{Configured AI agent}}
    C["Text prompt"]
    D["Prompt object"]
    E{{Prompt executor}}
    F[LLM provider]

    A -->|"run() with text"| B
    B -->|"takes"| C
    C -->|"converted to"| D
    D -->|"sent via"| E
    E -->|"calls"| F
    F -->|"responds to"| E
    E -->|"result to"| B
    B -->|"result to"| A

For more advanced configurations, you can also use AIAgentConfig to define the agent's initial prompt.
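
As a sketch, a config-based setup might look like the following; the AIAgentConfig property names (prompt, model, maxAgentIterations) are assumptions and may differ by version:

val agentConfig = AIAgentConfig(
    // The initial prompt, built with the same Kotlin DSL as above.
    prompt = prompt("assistant") {
        system("You are a helpful assistant.")
    },
    model = OpenAIModels.Chat.GPT4o,
    // Assumption: caps the number of iterations the agent may take per run.
    maxAgentIterations = 10
)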

Automatic prompt updates

As the agent runs its strategy, predefined nodes automatically update the prompt. For example, a node that performs an LLM call appends the incoming user message to the prompt and, once the model responds, appends the response as well.
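
A minimal strategy sketch illustrating this, assuming Koog's strategy DSL and a predefined nodeLLMRequest node (the DSL and node names are assumptions and may vary by version):

val promptUpdatingStrategy = strategy("prompt-updates") {
    // Assumption: nodeLLMRequest appends the incoming user message to the
    // prompt, calls the LLM, and appends the assistant response as well.
    val callLLM by nodeLLMRequest()

    edge(nodeStart forwardTo callLLM)
    edge(callLLM forwardTo nodeFinish onAssistantMessage { true })
}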

Context window management

To avoid exceeding the LLM context window in long-running interactions, agents can use the history compression feature.
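
As a sketch, compression can be triggered inside an LLM write session (sessions are covered in the next section); the replaceHistoryWithTLDR helper shown here is an assumption and may be named differently in your version:

llm.writeSession {
    // Assumption: replaces older messages with an LLM-generated summary so
    // the prompt stays within the model's context window.
    replaceHistoryWithTLDR()
}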

Manual prompt management

For complex workflows, you can manage the prompt manually using LLM sessions. In an agent strategy or custom node, you can use llm.writeSession to access and change the Prompt object. This lets you add, remove, or reorder messages as needed.
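
For example, a write session might append a message and then rewrite the history; updatePrompt and rewritePrompt are assumed session helpers (names may differ by version), while copy comes from the Prompt data class itself:

llm.writeSession {
    // Assumption: updatePrompt appends messages using the same Kotlin DSL
    // as the prompt builder.
    updatePrompt {
        user("Additional context for the next request")
    }

    // Assumption: rewritePrompt replaces the prompt wholesale; here it drops
    // the most recent message from the history.
    rewritePrompt { oldPrompt ->
        oldPrompt.copy(messages = oldPrompt.messages.dropLast(1))
    }
}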