
Credit: Anthropic
Yesterday’s surprise leak of the source code for Anthropic’s Claude Code revealed a lot about the vibe-coding scaffolding the company has built around its proprietary Claude model. But observers digging through over 512,000 lines of code across more than 2,000 files have also discovered references to disabled, hidden, or inactive features that provide a peek into the potential roadmap for future features.
Chief among these features is Kairos, a persistent daemon that can operate in the background even when the Claude Code terminal window is closed.
Kairos makes use of a file-based “memory system” designed to allow for persistent operation across user sessions. A prompt hidden behind a disabled “KAIROS” flag in the code explains that the system is designed to “have a complete picture of who the user is, how they’d like to collaborate with you, what behaviors to avoid or repeat, and the context behind the work the user gives you.”
To organize and consolidate this memory system across sessions, the Claude Code source code includes references to an evocatively named AutoDream system. When a user goes idle or manually tells Claude Code to sleep at the end of a session, the AutoDream system would tell Claude Code that “you are performing a dream—a reflective pass over your memory files.”
The prompt describing this AI “dream” process asks Claude Code to scan the day’s transcripts for “new information worth persisting,” consolidate that new information in a way that avoids “near-duplicates” and “contradictions,” and prune existing memories that are overly verbose or newly outdated. Claude Code would also be instructed to watch out for “existing memories that drifted,” an issue we’ve seen previously when Claude users have tried to graft memory systems onto their harnesses. The overall goal would be to “synthesize what you’ve learned recently into durable, well-organized memories so that future sessions can orient quickly,” according to the prompt.
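In spirit, the consolidation loop the prompt describes could look something like the minimal file-based sketch below. To be clear, this is our own illustration, not code from the leak: the file path, function names, similarity threshold, and pruning rule are all invented stand-ins for whatever Anthropic actually built.

```python
# Illustrative sketch of a file-based memory "dream" pass: persist new
# observations, skip near-duplicates, and prune the store when it grows.
# Nothing here is taken from the leaked Claude Code source.
import json
from difflib import SequenceMatcher
from pathlib import Path

MEMORY_FILE = Path("memory/user_profile.json")  # hypothetical location


def is_near_duplicate(candidate: str, existing: list[str],
                      threshold: float = 0.85) -> bool:
    """Crude text-similarity check to avoid storing near-duplicate memories."""
    return any(
        SequenceMatcher(None, candidate.lower(), m.lower()).ratio() >= threshold
        for m in existing
    )


def dream_pass(new_observations: list[str], max_entries: int = 100) -> list[str]:
    """Consolidate a session's observations into the durable memory file."""
    memories = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    for obs in new_observations:
        if not is_near_duplicate(obs, memories):
            memories.append(obs)
    # Prune oldest entries first once the store grows too large.
    memories = memories[-max_entries:]
    MEMORY_FILE.parent.mkdir(parents=True, exist_ok=True)
    MEMORY_FILE.write_text(json.dumps(memories, indent=2))
    return memories
```

A real implementation would presumably use the model itself, rather than string similarity, to judge duplicates, contradictions, and drift; the sketch only shows the shape of the read-consolidate-prune-write cycle.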
A portion of the “Undercover mode” prompt that has attracted some controversy in open source circles.
Credit: Anthropic
While the Kairos daemon doesn’t seem to have been fully implemented in code yet, a separate “Undercover mode” appears to be present but inactive, and would let Anthropic employees contribute to public open source repositories without revealing themselves as AI agents. The reference prompts for this mode focus primarily on protecting “internal model codenames, project names, or other Anthropic-internal information” from accidentally becoming public through open source commits. But the prompt also explicitly tells the system that its commits should “never include… the phrase ‘Claude Code’ or any mention that you are an AI,” and to omit any “Co-Authored-By lines or any other attribution.” That kind of obfuscation seems especially relevant given recent controversies surrounding AI coding tools being used on popular repositories.
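The kind of pre-commit scrubbing the prompt describes could be sketched along these lines. Again, this is purely our illustration of the idea, not leaked code: the function name, the trailer pattern, and the placeholder term list are all assumptions.

```python
# Illustrative sketch of a commit-message filter like the one the
# "Undercover mode" prompt describes: drop attribution trailers and
# refuse messages that mention internal terms. Not from the leaked code.
import re

FORBIDDEN_TERMS = ["Claude Code"]  # stand-in for internal codenames


def scrub_commit_message(message: str) -> str:
    """Remove attribution trailers and reject messages that leak internal terms."""
    kept_lines = []
    for line in message.splitlines():
        # Omit Co-Authored-By (and similar) trailer lines entirely.
        if re.match(r"(?i)co-authored-by:", line.strip()):
            continue
        kept_lines.append(line)
    cleaned = "\n".join(kept_lines)
    for term in FORBIDDEN_TERMS:
        if term.lower() in cleaned.lower():
            raise ValueError(f"commit message leaks internal term: {term!r}")
    return cleaned
```

In practice such filtering would likely happen in the model’s prompt rather than in deterministic code, which is partly why it drew attention: prompt-level instructions to omit attribution are easy for the model to follow silently.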
A snippet from the “Buddy” code shows how Anthropic envisions its ASCII-art cat companions.
Credit: Anthropic
On the lighter side, the Claude Code source code also describes Buddy, a Clippy-like “separate watcher” that “sits beside the user’s input box and occasionally comments in a speech bubble.” These virtual creatures would come in 18 randomized “species” forms ranging from blob to axolotl and appear as five-line-by-12-column ASCII art animations with tiny hats. A comment suggests that Buddy was planned for a “teaser window” launch between April 1 and 7 before a full launch in May. It’s unclear how the source code leak has impacted those plans.
Other potential planned Claude Code features referenced in the source code leak include:
