Moltbot, the AI agent that ‘actually does things,’ is tech’s new obsession

| 3 394


An open-source AI agent that “actually does things” is taking off, with people across the web sharing how they’re using the agent to do a whole bunch of things, like manage reminders, log health and fitness data, and even communicate with clients. The tool, called Moltbot (formerly Clawdbot), runs locally on a variety of devices, and you can ask it to perform tasks on your behalf by chatting with it through WhatsApp, Telegram, Signal, Discord, and iMessage.

Federico Viticci at MacStories highlighted how he installed Moltbot on his M4 Mac Mini and transformed it into a tool that delivers daily audio recaps based on his activity in his calendar, Notion, and Todoist apps. Another person prompted Moltbot to give itself an animated face, and said it added a sleep animation without prompting.

Moltbot routes your request through the AI provider of your choice, such as OpenAI, Anthropic, or Google. Like many of the AI agents we’ve seen so far, Moltbot can fill out forms inside your browser, send emails for you, and manage your calendar — but it does so a lot more efficiently, at least according to some of the people using the tool.

There are some caveats, though; you can also give Motlbot permission to access your entire computer system, allowing it to read and write files, run shell commands, and execute scripts. Combining admin-level access to your device and your app credentials could pose major security risks if you’re not careful.

“If your autonomous AI Agent (like MoltBot) has admin access to your computer and I can interact with it by DMing you on social media, well now I can attempt to hijack your computer in a simple direct message,” Rachel Tobac, the CEO of SocialProof Security, says in an email to The Verge. “When we grant admin access to autonomous AI agents, they can be hijacked through prompt injection, a well-documented and not yet solved vulnerability.” A prompt injection attack occurs when a bad actor manipulates AI using malicious prompts, which they can either pose to a chatbot directly or embed inside a file, email, or webpage fed to a large language model.

Jamieson O’Reilly, a security specialist and founder of the cybersecurity company Dvuln, discovered that private messages, account credentials, and API keys linked to Moltbot were left exposed on the web, potentially allowing hackers to steal this information or exploit it for other attacks. O’Reilly says he reported this issue to Moltbot’s developers, who have since issued a fix, according to The Register.

One of Moltbot’s developers said on X that the AI agent is “powerful software with a lot of sharp edges,” warning that users should “read the security docs carefully before you run it anywhere near the public internet.”