A feedback loop system that teaches OpenClaw agents to improve their own skills and rules from real conversation corrections
Every OpenClaw user repeats the same corrections to their agent dozens of times: 'Stop using em dashes.' 'Always run tests first.' 'Never suggest manual steps.' MetaClaw (3.2K stars) showed that self-evolving agents are possible, but it requires its own custom framework. This tool plugs into any existing OpenClaw or Claude Code setup, watches your conversations for corrections and feedback patterns, and automatically updates CLAUDE.md, skills, and agent rules so the agent never makes the same mistake twice.
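The watch-and-update loop described above could be sketched roughly as follows. This is a minimal illustration, not the tool's actual implementation: the regex pattern list, the function names, and the "Learned rules" section header in CLAUDE.md are all assumptions, and real correction detection would likely be LLM-based rather than regex-based.

```python
import re
from pathlib import Path

# Hypothetical patterns for user messages that read as standing corrections.
CORRECTION_PATTERNS = [
    re.compile(r"^(?:stop|never)\s+.+", re.IGNORECASE),
    re.compile(r"^always\s+.+", re.IGNORECASE),
    re.compile(r"^don't\s+.+", re.IGNORECASE),
]

def extract_corrections(messages):
    """Return user messages that look like standing corrections."""
    rules = []
    for msg in messages:
        text = msg.strip().rstrip(".!")
        if any(pat.match(text) for pat in CORRECTION_PATTERNS):
            rules.append(text)
    return rules

def append_rules(claude_md: Path, rules):
    """Append newly learned rules to CLAUDE.md, skipping duplicates."""
    existing = claude_md.read_text() if claude_md.exists() else ""
    new = [r for r in rules if r not in existing]
    if new:
        with claude_md.open("a") as f:
            f.write("\n## Learned rules\n")
            for r in new:
                f.write(f"- {r}\n")
    return new
```

Appending only deduplicated rules matters here: since the loop runs on every conversation, the same correction would otherwise pile up in CLAUDE.md across sessions.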
Demand Breakdown
Social Proof: 1 source
Gap Assessment
Two tools exist (MetaClaw and Claude's built-in memory), but gaps remain. MetaClaw requires its own framework, does not plug into existing OpenClaw/Claude Code setups, and has no CLAUDE.md integration and no A/B testing. Claude's built-in memory is manual only, with no auto-extraction from corrections, no performance scoring, no A/B testing, and no skill-level learning.
Features: 4 agent-ready prompts
Competitive Landscape
| Product | What it does | What's missing |
|---|---|---|
| MetaClaw | Self-evolving agent framework that adapts behavior from live conversations | Requires its own framework, does not plug into existing OpenClaw/Claude Code setups, no CLAUDE.md integration, no A/B testing |
| Claude Memory (built-in) | Persists user preferences across sessions in CLAUDE.md | Manual memory only, no auto-extraction from corrections, no performance scoring, no A/B testing, no skill-level learning |
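Both competitors lack A/B testing and performance scoring, the gaps this tool claims to fill. One way that could work, sketched under assumptions (the `RuleABTester` class and its metric are hypothetical, not a documented design): serve two phrasings of a learned rule across sessions and keep the one that triggers fewer repeat corrections.

```python
import random
from collections import defaultdict

class RuleABTester:
    """Illustrative A/B test over two phrasings of the same learned rule.

    Per variant, tracks how many sessions still needed a repeat correction;
    the variant with the lower correction rate wins.
    """

    def __init__(self, variant_a: str, variant_b: str):
        self.variants = {"A": variant_a, "B": variant_b}
        self.trials = defaultdict(int)       # sessions served per variant
        self.corrections = defaultdict(int)  # repeat corrections per variant

    def pick(self) -> str:
        """Randomly assign a variant for the next session."""
        key = random.choice(["A", "B"])
        self.trials[key] += 1
        self._last = key
        return self.variants[key]

    def record(self, needed_correction: bool):
        """Record whether the last session still required the correction."""
        if needed_correction:
            self.corrections[self._last] += 1

    def winner(self) -> str:
        """Return the variant phrasing with the lower correction rate."""
        def rate(key):
            return self.corrections[key] / self.trials[key] if self.trials[key] else 1.0
        return self.variants[min(self.variants, key=rate)]
```

The correction rate doubles as the performance score: a rule phrasing that keeps failing to stick is evidence the rule needs rewording, not just repeating.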