The Problem
A single LLM call rarely solves complex tasks. The model may need to look up information, run a calculation, check a file, and then synthesize the results — steps that can’t all happen in one prompt. Without a loop, the agent is blind: it can’t act on what it learned.
The Solution
The ReAct (Reason + Act) pattern gives the agent a tight loop. Each iteration asks the model to reason about the current state, decide on one action, observe the result, and repeat. The loop terminates when the model emits a final answer or the iteration cap is reached. In Go, this translates naturally to a `for range maxIterations` loop with a `Tool` interface for dispatch and a `[]HistoryEntry` slice for context accumulation.
Structure
Query Entry
The caller passes a natural-language query to ReactAgent.Run(). The agent initializes a History slice with one user entry and starts the loop.
```mermaid
flowchart TD
    User["User Query"]
    Agent["ReactAgent.Run()"]
    LLM["LLM (think)"]
    Parse["Parse Response"]
    Final{"IsFinal?"}
    Dispatch["Tool.Execute()"]
    History["Append to History"]
    Answer["Return Answer"]
    Error["Return Error (max iterations)"]

    User --> Agent
    Agent --> LLM
    LLM --> Parse
    Parse --> Final
    Final -->|"yes"| Answer
    Final -->|"no"| Dispatch
    Dispatch --> History
    History --> LLM
    Agent -->|"cap exceeded"| Error
```

Implementation
```go
package main

import "context"

// Tool is a named, executable capability the agent can call.
type Tool interface {
	Name() string
	Description() string
	Execute(ctx context.Context, args map[string]any) (string, error)
}

// HistoryEntry records one turn in the agent's reasoning trace.
type HistoryEntry struct {
	Role    string // "thought", "action", "observation", "answer"
	Content string
}

// Step represents one iteration of the ReAct loop.
type Step struct {
	Thought    string
	Action     string
	ActionArgs map[string]any
	IsFinal    bool
	Answer     string
}
```

Real-World Analogy
A researcher solving a problem doesn’t write the whole answer from memory. They read a source, take notes, look up a reference, cross-check a fact, and only then write the conclusion. The ReAct loop is the same workflow: think, act, observe, repeat until done.
Pros and Cons
| Pros | Cons |
|---|---|
| Handles multi-step tasks that require intermediate lookups | Each iteration costs one LLM API call |
| Clean separation between reasoning (LLM) and execution (tools) | Errors compound — a bad observation can mislead subsequent reasoning |
| History slice gives the model full context for complex chains | Requires careful prompt design for the LLM to emit structured Steps |
| Iteration cap prevents runaway loops | Long chains exhaust the context window |
Best Practices
- Keep `maxIterations` low (5–10) in production; long chains are usually a sign of unclear tool design.
- Represent `History` as a typed slice, not a raw string; it makes serialization and testing easier.
- Return structured errors from tools so the agent can reason about failures, not just observe opaque strings.
- Log every `Step` at DEBUG level; the reasoning trace is your primary debugging artifact.
- Test the `think` stub independently from the loop to avoid burning LLM tokens during unit tests.
When to Use
- Tasks that require multiple sequential information-gathering steps before an answer is possible.
- Agentic workflows where the model needs to call tools, evaluate results, and adapt its plan.
- Automated research, code generation with verification, or multi-step form filling.
When NOT to Use
- Simple single-turn question answering — one LLM call is cheaper and faster.
- Workflows where the steps are fully known in advance — use a pipeline instead.
- Real-time latency-sensitive paths where multiple sequential LLM calls are unacceptable.