
Credit: Muhammad Shabraiz via Getty Images / Benj Edwards
An open source AI assistant called Moltbot (formerly “Clawdbot”) recently crossed 69,000 stars on GitHub after a month, making it one of the fastest-growing AI projects of 2026. Created by Austrian developer Peter Steinberger, the tool lets users run a personal AI assistant and control it through messaging apps they already use. While some say it feels like the AI assistant of the future, running the tool as currently designed comes with serious security risks.
Among the dozens of unofficial AI bot apps that never rise above the fray, Moltbot is perhaps most notable for its proactive communication with the user. The assistant works with WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, and other platforms. It can reach out to users with reminders, alerts, or morning briefings based on calendar events or other triggers. The project has drawn comparisons to Jarvis, the AI assistant from the Iron Man films, for its ability to actively attempt to manage tasks across a user’s digital life.
However, we’ll tell you up front that there are plenty of drawbacks to the still-hobbyist software: While the organizing assistant code runs on a local machine, the tool effectively requires a subscription to Anthropic or OpenAI for model access (or using an API key). Users can run local AI models with the bot, but they are currently less effective at carrying out tasks than the best commercial models. Claude Opus 4.5, which is Anthropic’s flagship large language model (LLM), is a popular choice.
A screenshot of talking with Clawdbot / Moltbot taken from its GitHub page.
Credit: Moltbot
Setting up Moltbot requires configuring a server, managing authentication, and understanding sandboxing for even a slice of security in a system that basically demands access to every facet of your digital life. Heavy use can rack up significant API costs, since agentic systems make many calls behind the scenes and use up a lot of tokens.
We’ve only just begun to try Moltbot ourselves (more on that soon), but we’ve seen plenty of discussions circulating in the AI community about the new assistant. The all-in approach has serious security drawbacks, since it requires access to messaging accounts, API keys, and in some configurations, shell commands. Also, an always-on agent with access to messaging channels and personal systems can quickly expand your attack surface.
Even with all the drawbacks listed above, people are still using Moltbot. MacStories editor Federico Viticci, who spent a week testing the tool, described it as “Claude with hands,” referring to how it connects a large language model (LLM) backend with real-world capabilities like browser control, email management, and file operations.
The project’s documentation describes it as a tool for users who want “a personal, single-user assistant that feels local, fast, and always-on.”
According to the project’s GitHub page, Steinberger designed the bot to retain long-term memory and execute commands directly on the user’s system, unlike current web-based chatbots from major AI labs. It’s closer to Claude Code and Codex CLI (which also operate on local files using a cloud AI model), but with more latitude to take local actions on the user’s behalf.
Moltbot stores memory as Markdown files and an SQLite database on the user’s machine. It auto-generates daily notes that log interactions and uses vector search to retrieve relevant context from past conversations. The memory persists across sessions because the bot runs as a background daemon.
Compared to Claude Code, which is session-based, Moltbot runs persistently (24/7) and maintains its stored memory indefinitely. It can reportedly recall what you discussed weeks ago. With Claude Code, when you close it, the conversation context is gone (unless you use CLAUDE.md files for project context).
The project’s rapid rise has not come without complications. On Monday, Anthropic asked Steinberger to change the project’s name due to trademark concerns (since “Clawd” sounds like “Claude”), prompting the rebrand from Clawdbot to Moltbot. “Clawdbot” was originally named after the ASCII art creature that appears when you launch Claude Code on a terminal.
However, the transition enabled bad actors to hijack Steinberger’s old social media and GitHub handles, reports The Register. Crypto scammers launched fake tokens using the project’s name, with one reaching a $16 million market cap before crashing. Steinberger responded on X: “Any project that lists me as a coin owner is a SCAM. No, I will not accept fees. You are actively damaging the project.”
Security researchers have also found vulnerabilities in misconfigured public deployments. Bitdefender reported that exposed dashboards allowed outsiders to view configuration data, retrieve API keys, and browse full conversation histories from private chats.
While there’s plenty of hype about Moltbot right now, be advised that any LLM that has access to your local machine is susceptible to prompt injection attacks that can “trick” the AI model to share your personal data with other people or remote servers. While Moltbot offers a glimpse at what future AI assistants from major vendors might look like, it’s still experimental and not yet ready for users who aren’t comfortable trading today’s AI convenience (such as it is) for major security risks.
