How to evolve your Engine scientifically
Your EvoStudio is not static — it gets smarter the more you invest in its memory and knowledge. This guide walks you through the recommended tools and strategies to make your Engine evolve systematically over time.
Recommended memory tools
Memory is the core of evolution.
Without persistent memory, your Engine forgets every conversation. The tools below let it accumulate knowledge across sessions and users.
ByteRover is a memory MCP server designed for AI agents. It stores structured memories — facts, preferences, decisions — and surfaces the most relevant ones at query time via semantic search. Install it as an MCP tool inside your EvoStudio to give it long-term memory that persists across sessions.
Mem0 provides a managed memory layer for AI agents. It automatically extracts and stores important facts from conversations, then injects relevant context into future prompts without any manual tagging.
Zep is an open-source memory store for AI assistants. It persists conversation history, extracts entities and facts, and provides a retrieval API. A good option if you want full control over your data.
How to install a memory MCP
Inside your EvoStudio chat, tell the agent to install the MCP server. For ByteRover:
install mcp byterover
# or with your API key:
install mcp byterover --key byr_xxxxxxxxxxxxxxxx
The agent will add ByteRover to its MCP config and restart automatically. Once installed, it will begin storing and recalling memories on every turn.
Configuring OpenClaw QMD memory
QMD is the native memory layer built into EvoStudio.
Unlike external MCP memory servers, QMD memory lives directly inside your instance's workspace as structured Markdown files. No extra service to deploy — it works out of the box.
What QMD stores
QMD organises memory into purpose-built directories inside the instance workspace: sessions (conversation summaries), characters (consistent personas), series (project arcs), references (brand assets, style guides), and channels (per-platform config). Each file is plain Markdown, fully readable and editable.
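The layout above can be sketched as a plain directory scaffold. The directory names come straight from the QMD description; the `scaffold_qmd_workspace` helper itself is an illustrative sketch, not part of EvoStudio's tooling:

```python
from pathlib import Path

# Directory names as described for QMD; the scaffolding function is
# a hypothetical illustration, not an EvoStudio API.
QMD_DIRS = ["sessions", "characters", "series", "references", "channels"]

def scaffold_qmd_workspace(root: str) -> list[Path]:
    """Create the QMD memory layout under `root` and return the paths."""
    created = []
    for name in QMD_DIRS:
        d = Path(root) / name
        d.mkdir(parents=True, exist_ok=True)  # idempotent: safe to re-run
        created.append(d)
    return created
```

Because every file inside these directories is plain Markdown, you can version the whole workspace with Git and diff memory changes over time.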
Enable persistent QMD memory
Tell your Engine to remember specific information by saying it explicitly in chat. The agent will write a structured memory file automatically:
Remember this for future sessions:
My brand color is #0ea5e9, tone is friendly but professional,
and our main product is an AI video generator.
Inject the QMD API key
When calling your Engine from an external app using a sk-oc-* key, include the key in the Authorization header. The gateway automatically attaches the correct QMD memory context to every request:
curl -X POST "$CHRAFT_BASE_URL/api/evostudio/chat" \
-H "Authorization: Bearer $CHRAFT_AGENT_API_KEY" \
-H "Content-Type: application/json" \
-d '{"message":"What is our brand color?","sessionKey":"conv-001"}'
Generate your sk-oc-* key under EvoStudio → Instance → External API Keys.
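The same request can be assembled from any language. Here is a minimal Python sketch that builds the URL, headers, and JSON body matching the curl example; the `build_chat_request` helper is our own illustration, not an official SDK:

```python
import json

def build_chat_request(base_url: str, api_key: str, message: str, session_key: str):
    """Assemble the pieces of an EvoStudio chat request, mirroring the
    curl example: POST to /api/evostudio/chat with a Bearer sk-oc-* key."""
    url = f"{base_url}/api/evostudio/chat"
    headers = {
        "Authorization": f"Bearer {api_key}",  # your sk-oc-* key
        "Content-Type": "application/json",
    }
    body = json.dumps({"message": message, "sessionKey": session_key})
    return url, headers, body
```

Pass the result to any HTTP client; keeping `sessionKey` stable across calls lets the gateway attach the same QMD memory context to the whole conversation.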
Inspect and edit memory files
Open the Agent Memory Viewer from the Instance page to browse all QMD files, see their sizes, and view their content. You can also ask the agent directly to list or update a memory file:
# List what you remember about my brand
Show me everything in your memory about our brand guidelines.
# Update a memory entry
Update your brand memory: our tagline is now "Create faster, share smarter".
QMD vs external MCP memory
Use QMD for structured, instance-specific data (characters, brands, series arcs). Use an MCP memory server like ByteRover when you need semantic retrieval across large fact databases or want to share memory across multiple instances. Both can run simultaneously.
Installing skills & agents
Extend your Engine with skills and agents from Agent Hub.
Skills add new capabilities (tools, workflows, prompts). Agents add new specialists to your team. You can install either in one of two ways: one click in Agent Hub, or a plain-language prompt in chat with a URL.
Method 1 — Install from Agent Hub
one-click
- Open Agent Hub.
- Browse or search for the skill / agent you want.
- Click the Install button on its card.
- Your Engine picks it up automatically on the next turn — no restart needed.
Best when you're exploring what's available or want something officially curated.
Method 2 — Install by URL in chat
by prompt
In the EvoStudio chat input, paste a plain-language install prompt together with the URL of the skill or agent. The Engine resolves the URL, downloads the manifest, and registers it.
Examples
# Install a skill from a URL
install this skill: https://agent-hub.chraft.ai/skills/twitter-post-writer
# Install an agent from a URL
install this agent: https://agent-hub.chraft.ai/agents/film-director
# Works with any reachable manifest URL
please install the agent at https://example.com/my-agent.json
Best for sharing custom or third-party items — anything with a reachable manifest URL works, not only Agent Hub.
Heads up: only install skills and agents from sources you trust. They can call tools and consume credits on your behalf once installed. You can review or remove any installed item from EvoStudio → Instance → Skills & Agents.
Building a knowledge base
Memory tools handle conversational context. For larger, structured knowledge — documentation, code references, company policies — you need a dedicated knowledge base.
Drop Markdown or plain-text files into your Engine's workspace. The agent reads them automatically and can reference them in any conversation. Best for static documentation that doesn't change often.
Connect a vector-store MCP (e.g. Chroma, Qdrant, Weaviate) to give your Engine semantic search over large document collections. The agent queries the store on demand rather than loading everything into context.
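To see the difference in miniature, here is a naive keyword-overlap search over a folder of knowledge files. A vector-store MCP replaces this scoring with embedding-based semantic similarity, which also matches paraphrases; the helper below is an illustrative sketch, not EvoStudio code:

```python
from pathlib import Path

def search_knowledge(workspace: str, query: str, top_k: int = 3) -> list[str]:
    """Rank Markdown knowledge files by naive term overlap with the query.
    Semantic search in a vector store would match meaning, not just words."""
    terms = set(query.lower().split())
    scored = []
    for md in Path(workspace).glob("**/*.md"):
        words = set(md.read_text(encoding="utf-8").lower().split())
        overlap = len(terms & words)
        if overlap:
            scored.append((overlap, md.name))
    scored.sort(reverse=True)  # highest overlap first
    return [name for _, name in scored[:top_k]]
```

Once your queries stop sharing exact words with your documents, that is the signal to migrate to a vector store.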
Tip: Start with knowledge files for simplicity. Migrate to a vector store only when your knowledge base exceeds ~50 documents or needs frequent updates.
Evolving your Engine's configuration
Beyond memory, the way you configure your Engine's identity, instructions, and tools directly determines how it evolves.
System prompt
Write a clear, specific system prompt that describes your Engine's role, constraints, and personality. Revisit and refine it as you discover gaps. Store previous versions in a changelog file so you can track what improved.
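The changelog habit can be as simple as appending versioned entries to one Markdown file. A minimal sketch, assuming our own file convention (this is not an EvoStudio feature, just a way to keep earlier prompt versions reviewable):

```python
from datetime import date
from pathlib import Path

def log_prompt_version(changelog: str, prompt: str, note: str) -> int:
    """Append the current system prompt to a Markdown changelog and
    return its version number. The '## vN' layout is our own convention."""
    path = Path(changelog)
    existing = path.read_text(encoding="utf-8") if path.exists() else ""
    version = existing.count("## v") + 1
    entry = f"## v{version} - {date.today().isoformat()}\n{note}\n\n{prompt}\n\n"
    path.write_text(existing + entry, encoding="utf-8")
    return version
```

Diffing two entries then shows exactly which prompt change produced an improvement.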
Tool selection
Only give your Engine tools it actually needs. A focused tool set reduces hallucination and improves response quality. Add tools incrementally — verify each one works correctly before adding the next.
Model selection
Use the model switcher in the chat header to try different models. For complex reasoning tasks, a more capable model pays off. For fast, routine tasks, a faster model keeps response times low and credit usage minimal.
Update regularly
Click Update Engine on the instance page whenever a new version is available. Updates bring improved base capabilities, bug fixes, and new built-in tools.
The evolution workflow
Think of evolving your Engine as a continuous improvement loop:
Observe
Use your Engine regularly. Note where it fails, forgets, or gives weak answers.
Diagnose
Is it a memory gap (install a memory tool), a knowledge gap (add documents), a prompt gap (refine the system prompt), or a tool gap (add a new MCP)?
Intervene
Make one change at a time. Install the memory tool, update the prompt, or add the knowledge file.
Validate
Test the same scenarios that previously failed. Confirm the change improved things without breaking anything.
Repeat
Go back to Observe. Each cycle makes your Engine measurably smarter.
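The Validate step benefits from a fixed set of regression scenarios. A minimal sketch: keep the prompts that previously failed, re-run them after each single change, and check the answers. Here `run_engine` stands in for however you call your Engine (for example, the chat API); the scenario format is our own:

```python
def validate_change(scenarios, run_engine) -> dict[str, bool]:
    """Re-run previously failing scenarios after one change.
    `scenarios` is a list of (prompt, expected_substring) pairs;
    `run_engine` maps a prompt to the Engine's answer."""
    results = {}
    for prompt, expected in scenarios:
        answer = run_engine(prompt)
        results[prompt] = expected.lower() in answer.lower()
    return results
```

Because you change one thing per cycle, any scenario that flips from pass to fail points directly at the intervention that broke it.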
Next up
Ready to scale beyond one agent?
Once your Engine is reliable, the next stage is specialization — split work across a team of focused agents, each bound to its own Telegram, Slack, or Feishu bot.
Build an agent team