Memory Nodes

Overview

Agent memories control how much of the session transcript is replayed to the model on each turn. Every memory node persists messages in session state so the full conversation remains available for auditing, while dynamically shaping the slice of history that appears inside CHAT_HISTORY. BotDojo ships two memory implementations you can attach to the memory input of an AI Agent node.
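
Conceptually, both implementations follow the same contract: persist every message, then hand the agent a shaped view of the transcript. A minimal TypeScript sketch of that contract follows; the interface and type names are assumptions for illustration, not BotDojo's actual node API.

```ts
// Illustrative only: these names are assumptions, not BotDojo's node API.
interface ChatMessage {
  role: "user" | "assistant" | "tool";
  content: string;
}

interface MemoryNode {
  // Persist every message to session state so the full transcript survives
  // for auditing.
  append(sessionState: Map<string, ChatMessage[]>, message: ChatMessage): void;
  // Return the shaped slice of history that gets rendered into CHAT_HISTORY.
  view(sessionState: Map<string, ChatMessage[]>): ChatMessage[];
}
```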

Buffer Memory (botdojo/node/buffer_memory)

A lightweight window over the chat transcript. It keeps the most recent messages verbatim and is ideal when the conversation fits comfortably within the model’s context window.

  • Number of Messages – Defaults to 30. The node returns the last N user/assistant pairs (internally, it slices the final numberOfMessages * 2 messages; see the sketch below).
  • Memory Key – Storage slot in session state (default buffer_memory). Use unique keys when you need multiple buffers in the same flow.
  • User Message Behavior – default stores the raw user utterance. Switching to override hides the actual user text from memory and enables a User Message string input so you can supply a sanitized version (for example, after redaction). The AI Agent still sees the real-time user message via the Start node; the override affects only the transcript replayed in later turns.

Buffer Memory writes and retrieves directly from session-state storage. Because it replays exact messages, pair it with prompt strategies that rely on verbatim history.
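
A rough sketch of the windowing behavior described above. The parameter names mirror the settings, but the code itself is an assumption, not BotDojo source; it also assumes messages alternate user/assistant, so N pairs equal 2N messages.

```ts
// Sketch of the Buffer Memory window: keep the last N user/assistant pairs.
type ChatMessage = { role: "user" | "assistant" | "tool"; content: string };

function bufferView(
  sessionState: Map<string, ChatMessage[]>,
  memoryKey = "buffer_memory",
  numberOfMessages = 30
): ChatMessage[] {
  const transcript = sessionState.get(memoryKey) ?? [];
  // N pairs => 2N messages; slice(-k) returns the whole array when k exceeds
  // the transcript length, so short conversations pass through untouched.
  return transcript.slice(-numberOfMessages * 2);
}
```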

Compact Memory (botdojo/node/compact_memory)

A dynamic memory that compresses older turns, tracks token usage, and exposes management tools.

  • Inputs – Optional storage (botdojo/interface/storage) lets the node archive large tool outputs to your storage integration and link to them from the transcript.
  • Outputs – memory for the agent, plus a modelContext that publishes MCP tools for inspecting and expanding compressed messages.
  • Memory Key – Session-state namespace (default compact_memory). Matches the key used for token-tracking metadata.
  • Max Context Window (tokens) – Hard limit on total tokens before compression triggers. If unset, the node uses the value configured at runtime or a 32k-token default.
  • Compression Percentage – Defaults to 0.7. When the transcript exceeds this fraction of the max window, the node begins compressing the oldest groups of messages (see the sketch after this list).
  • Max Default Window (chars) – Size of the context window used for character-based fallbacks (default 1000).
  • Enable Token Counting – Default true. Uses precise token counts when the connected model supports them; disable to fall back to character heuristics.
  • Min Messages to Compress – Do not compress until at least this many messages exist (default 5).
  • Compression Preservation Ratio – Percentage of original content to keep when generating summaries (default 0.3).
  • Max Tool Output Chars – Tool outputs larger than this value (default 2000) are summarized automatically.
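
Taken together, these parameters define a simple trigger rule: compress once token usage crosses the configured fraction of the max window, but never before the minimum message count is reached. A hedged sketch of that decision, with countTokens standing in for whatever counter the connected model provides (or a character heuristic when token counting is disabled):

```ts
// Sketch of the Compact Memory compression trigger. Defaults mirror the
// parameter list above; the code is illustrative, not BotDojo source.
type Msg = { role: string; content: string };

function shouldCompress(
  transcript: Msg[],
  countTokens: (text: string) => number,
  maxContextWindow = 32_000,
  compressionPercentage = 0.7,
  minMessagesToCompress = 5
): boolean {
  // Never compress very short conversations.
  if (transcript.length < minMessagesToCompress) return false;
  const used = transcript.reduce((sum, m) => sum + countTokens(m.content), 0);
  // Trigger once usage crosses the configured fraction of the max window.
  return used > maxContextWindow * compressionPercentage;
}
```

With the defaults above, compression begins once the transcript exceeds roughly 22,400 tokens (0.7 × 32,000).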

Compact Memory groups assistant/tool interactions, preserves attachment references, and exposes MCP tools such as expand_memory so human reviewers or downstream agents can restore full content for the next few turns. It keeps the entire raw transcript in session state while returning a condensed view to the agent, maintaining relevance without blowing the context window.
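
For example, a reviewer or downstream agent might invoke expand_memory with a payload shaped roughly like the following. Only the tool name comes from this page; the argument names are assumptions for illustration.

```ts
// Hypothetical MCP tool-call payload for restoring compressed messages.
// The argument shape shown here is an assumption, not a documented schema.
const toolCall = {
  name: "expand_memory",
  arguments: {
    memoryKey: "compact_memory", // session-state namespace to expand
    messageGroupId: "group-12",  // hypothetical id of a compressed group
    turns: 3,                    // restore full content for the next few turns
  },
};
```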

When to use which

  • Choose Buffer Memory when you simply need the latest turns verbatim and the conversation stays within model limits.
  • Choose Compact Memory when tools produce verbose output, conversations are long-running, or you need human-in-the-loop tooling to inspect compressed history.

Both memories plug into the same memory input on the AI Agent node and can coexist in a flow (using different memory keys) when you need multiple perspectives on the transcript.
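
A hypothetical configuration sketch of that coexistence, using distinct keys so the two nodes never overwrite each other's session-state slots (the node ids are real; the object shape is illustrative, not BotDojo's flow schema):

```ts
// Two memory nodes in one flow, each with its own session-state key.
const memories = [
  {
    node: "botdojo/node/buffer_memory",
    memoryKey: "buffer_memory_recent", // verbatim window over recent turns
    numberOfMessages: 30,
  },
  {
    node: "botdojo/node/compact_memory",
    memoryKey: "compact_memory_full",  // compressed view of the full session
    maxContextWindow: 32_000,
    compressionPercentage: 0.7,
  },
];
```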