Agentic · Complexity: Medium

Agent Loop (ReAct) in Go

Drive an agent by alternating reasoning and acting steps inside a structured loop that terminates when the model signals a final answer.

The Problem

A single LLM call rarely solves complex tasks. The model may need to look up information, run a calculation, check a file, and then synthesize the results — steps that can’t all happen in one prompt. Without a loop, the agent is blind: it can’t act on what it learned.

The Solution

The ReAct (Reason + Act) pattern gives the agent a tight loop. Each iteration asks the model to reason about the current state, decide on one action, observe the result, and repeat. The loop terminates when the model emits a final answer or the iteration cap is reached. In Go, this translates naturally to a for range maxIterations loop with a Tool interface for dispatch and a []HistoryEntry slice for context accumulation.

Structure

Agent Loop (ReAct) Pattern

Query Entry

The caller passes a natural-language query to ReactAgent.Run(). The agent initializes a History slice with one user entry and starts the loop.

Implementation

package main

import "context"

// Tool is a named, executable capability the agent can call.
type Tool interface {
	Name() string
	Description() string
	Execute(ctx context.Context, args map[string]any) (string, error)
}

// HistoryEntry records one turn in the agent's reasoning trace.
type HistoryEntry struct {
	Role    string // "thought", "action", "observation", "answer"
	Content string
}

// Step represents one iteration of the ReAct loop.
type Step struct {
	Thought    string
	Action     string
	ActionArgs map[string]any
	IsFinal    bool
	Answer     string
}

Real-World Analogy

A researcher solving a problem doesn’t write the whole answer from memory. They read a source, take notes, look up a reference, cross-check a fact, and only then write the conclusion. The ReAct loop is the same workflow: think, act, observe, repeat until done.

Pros and Cons

Pros

  • Handles multi-step tasks that require intermediate lookups
  • Clean separation between reasoning (LLM) and execution (tools)
  • History slice gives the model full context for complex chains
  • Iteration cap prevents runaway loops

Cons

  • Each iteration costs one LLM API call
  • Errors compound: a bad observation can mislead subsequent reasoning
  • Requires careful prompt design for the LLM to emit structured Steps
  • Long chains exhaust the context window

Best Practices

  • Keep maxIterations low (5–10) in production; long chains are usually a sign of unclear tool design.
  • Represent History as a typed slice, not a raw string — it makes serialization and testing easier.
  • Return structured errors from tools so the agent can reason about failures, not just observe opaque strings.
  • Log every Step at DEBUG level; the reasoning trace is your primary debugging artifact.
  • Test the think stub independently from the loop to avoid burning LLM tokens during unit tests.

When to Use

  • Tasks that require multiple sequential information-gathering steps before an answer is possible.
  • Agentic workflows where the model needs to call tools, evaluate results, and adapt its plan.
  • Automated research, code generation with verification, or multi-step form filling.

When NOT to Use

  • Simple single-turn question answering — one LLM call is cheaper and faster.
  • Workflows where the steps are fully known in advance — use a pipeline instead.
  • Real-time latency-sensitive paths where multiple sequential LLM calls are unacceptable.