
Credit: Mininyx Doodle via Getty Images
Mozilla developer Peter Wilson has taken to the Mozilla.ai blog to announce cq, which he describes as “Stack Overflow for agents.” The nascent project hints at something genuinely useful, but it will have to address security, data poisoning, and accuracy to achieve significant adoption.
It’s meant to solve a couple of problems. First, coding agents often use outdated information when making decisions, like attempting deprecated API calls. This stems from training cutoffs and the lack of reliable, structured access to up-to-date runtime context. They sometimes use techniques like RAG (Retrieval Augmented Generation) to get updated knowledge, but they don’t always do that when they need to—“unknown unknowns,” as the saying goes—and it’s never comprehensive when they do.
Second, multiple agents often have to work around the same barriers, but there's no knowledge sharing after the training cutoff. That means hundreds or thousands of individual agents repeatedly burn expensive tokens and energy solving already-solved problems. Ideally, one agent would solve an issue once, and the others would draw on that experience.
That’s exactly what cq tries to enable. Here’s how Wilson says it works:
Before an agent tackles unfamiliar work (an API integration, a CI/CD config, a framework it hasn’t touched before), it queries the cq commons. If another agent has already learned that, say, Stripe returns 200 with an error body for rate-limited requests, your agent knows that before writing a single line of code. When your agent discovers something novel, it proposes that knowledge back. Other agents confirm what works and flag what’s gone stale. Knowledge earns trust through use, not authority.
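To make the loop Wilson describes concrete, here's a minimal sketch of a query/propose/confirm cycle. Every name here (`Commons`, `Entry`, the `stripe/rate-limit` topic key) is hypothetical, invented for illustration; the real cq plugin exposes its knowledge store through an MCP server, not an in-memory class like this.

```python
# Hypothetical sketch of the cq-style query/propose/confirm loop.
# None of these names come from cq itself.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Entry:
    claim: str
    confirmations: int = 0
    stale_flags: int = 0

    @property
    def trust(self) -> float:
        # "Trust through use": confirmations raise it, stale flags lower it.
        total = self.confirmations + self.stale_flags
        return self.confirmations / total if total else 0.0

class Commons:
    def __init__(self) -> None:
        self.entries: dict[str, Entry] = {}

    def query(self, topic: str) -> Entry | None:
        # Check the commons before tackling unfamiliar work.
        return self.entries.get(topic)

    def propose(self, topic: str, claim: str) -> None:
        # An agent contributes something novel it just learned.
        self.entries.setdefault(topic, Entry(claim))

    def confirm(self, topic: str) -> None:
        self.entries[topic].confirmations += 1

    def flag_stale(self, topic: str) -> None:
        self.entries[topic].stale_flags += 1

commons = Commons()
commons.propose("stripe/rate-limit",
                "Returns 200 with an error body when rate-limited")
commons.confirm("stripe/rate-limit")  # a second agent verified the claim
entry = commons.query("stripe/rate-limit")
print(entry.claim, entry.trust)
```

The point of the sketch is the shape of the protocol: query first, act second, contribute back, and let repeated confirmation (rather than any central authority) determine how much weight a piece of knowledge carries.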
The idea is to move beyond claude.md or agents.md, the current solution for the problems cq is trying to solve. Right now, developers add instructions for their agents based on trial and error—if they find that an agent keeps trying to use something outdated, they tell it in .md files to do something else instead.
That sort of works sometimes, but it doesn’t cross-pollinate knowledge between projects.
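The current trial-and-error approach typically looks something like the following hypothetical `agents.md` fragment (the specific notes are invented for illustration, though Stripe really does steer developers from the legacy Charges API toward PaymentIntents):

```markdown
## API notes (learned the hard way)

- Do not use the legacy Stripe Charges API; prefer the PaymentIntents API.
- Our CI runner has no network access during the test stage; mock external calls.
```

Each note here had to be discovered and written down by a human in one project, and nothing about the format lets other projects, or other people's agents, benefit from it. That's the gap cq's shared commons is meant to close.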
Wilson describes cq as a proof of concept, but it’s one you can download and work with now; it’s available as a plugin for Claude Code and OpenCode. Additionally, there’s an MCP server for handling a library of knowledge stored locally, an API for teams to share knowledge, and a user interface for human review.
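For readers unfamiliar with how MCP servers plug into Claude Code: project-scoped servers are registered in a `.mcp.json` file at the repository root. The entry below is a guess at what wiring up cq might look like; the `cq-mcp` command and `--store` flag are hypothetical placeholders, so check the project's GitHub documentation for the actual invocation.

```json
{
  "mcpServers": {
    "cq": {
      "command": "cq-mcp",
      "args": ["--store", "~/.cq/knowledge"]
    }
  }
}
```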
I’m just scratching the surface here; there’s documentation at the GitHub repo if you want to dig into the details or contribute to the project.
In addition to posting on the Mozilla.ai blog, Wilson announced the project and solicited feedback from developers on Hacker News. Reactions in the thread are mixed. Most people chiming in agree that cq is aiming to do something useful and needed, but there’s a long list of potential problems to solve.
For example, some commenters have noted that models do not reliably describe or track the steps they take—an issue that could balloon into a lot of junk knowledge at scale across multiple agents. There are also several serious security challenges, such as how the system will deal with prompt injection threats or data poisoning.
This isn’t the only attempt to address these needs, either. A variety of projects, operating at different levels of the stack, aim to help AI agents waste fewer tokens by giving them access to more up-to-date or verified information.
