
Image Credits:Getty Images
The latest wave of AI excitement has brought us an unexpected mascot: a lobster. Clawdbot, a personal AI assistant, went viral within weeks of its launch, and will keep its crustacean theme despite having had to change its name to Moltbot after a legal challenge from Anthropic. But before you jump on the bandwagon, here’s what you’d need to know.
According to its tagline, Moltbot (formerly Clawdbot) is the “AI that actually does things” — whether it’s managing your calendar, sending messages through your favorite apps, or checking you in for flights. This promise has drawn thousands of users willing to tackle the technical setup required, even though it started as a scrappy personal project built by one developer for his own use.
That man is Peter Steinberger, an Austrian developer and founder who is known online as @steipete and actively blogs about his work. After stepping away from his previous project, PSPDFkit, Steinberger felt empty and barely touched his computer for three years, he explained on his blog. But he eventually found his spark again — which led to Moltbot.
While Moltbot is now much more than a solo project, the publicly available version still derives from Clawd, “Peter’s crusted assistant,” now called Molty, a tool he built to help him “manage his digital life” and “explore what human-AI collaboration can be.”
For Steinberger, this meant diving deeper into the momentum around AI that had reignited his builder spark. A self-confessed “Claudoholic”, he initially named his project after Anthropic’s AI flagship product, Claude. He revealed on X that Anthropic subsequently forced him to change the branding for copyright reasons. TechCrunch has reached out to Anthropic for comment. But the project’s “lobster soul” remains unchanged.
To its early adopters, Moltbot represents the vanguard of how helpful AI assistants could be. Those who were already excited at the prospect of using AI to quickly generate websites and apps are even more keen to have their personal AI assistant perform tasks for them. And just like Steinberger, they’re eager to tinker with it.
This explains how Moltbot amassed more than 44,200 stars on GitHub so quickly. So much viral attention attention has been paid Moltbot that it has even moved markets. Cloudflare’s stock surged 14% in premarket trading on Tuesday as social media buzz around the AI agent re-sparked investor enthusiasm for Cloudflare’s infrastructure, which developers use to run Moltbot locally on their devices.
Still, it’s a long way from breaking out of early adopter territory, and maybe that’s for the best. Installing Moltbot requires being tech savvy, and that also includes awareness of the inherent security risks that come with it.
On one hand, Moltbot is built with safety in mind: it is open source, meaning anyone can inspect its code for vulnerabilities, and it runs on your computer or server, not in the cloud. But on the other hand, its very premise is inherently risky. As entrepreneur and investor Rahul Sood pointed out on X, “‘actually doing things’ means ‘can execute arbitrary commands on your computer.’”
What keeps Sood up at night is “prompt injection through content” — where a malicious person could send you a WhatsApp message that could lead Moltbot to take unintended actions on your computer without your intervention or knowledge.
That risk can be mitigated partly by careful set-up. Since Moltbot supports various AI models, users may want to make setup choices based on their resistance to these kinds of attacks. But the only way to fully prevent it is to run Moltbot in a silo.
This may be obvious to experienced developers tinkering with a weeks-old project, but some of them have become more vocal in warning users attracted by the hype: things could turn ugly fast if they approach it as carelessly as ChatGPT.
Steinberger himself was served with a reminder that malicious actors exist when he “messed up” the renaming of his project. He complained on X that “crypto scammers” snatched his GitHub username and created fake cryptocurrency projects in his name, and he warned followers that “any project that lists [him] as coin owner is a SCAM.” He then posted that the GitHub issue had been fixed, but cautioned that the legitimate X account is @moltbot, “not any of the 20 scam variations of it.”
This doesn’t necessarily mean you should stay away from Moltbot at this stage if you are curious to test it. But if you have never heard of a VPS — a virtual private server, which is essentially a remote computer you rent to run software — you may want to wait your turn. (That’s where you may want to run Moltbot for now. “Not the laptop with your SSH keys, API credentials, and password manager,” Sood cautioned.)
Right now, running Moltbot safely means running it on a separate computer with throwaway accounts, which defeats the purpose of having a useful AI assistant. And fixing that security-versus-utility trade-off may require solutions that are beyond Steinberger’s control.
Still, by building a tool to solve his own problem, Steinberger showed the developer community what AI agents could actually accomplish, and how autonomous AI might finally become genuinely useful rather than just impressive.
