A commentary by Nash Borges, Vice President of Engineering and Data Science at Sophos
There is currently a flurry of assessments and news about the new AI agent OpenClaw/Moltbot. Nash Borges, Vice President of Engineering and Data Science at Sophos, took a closer look at the bot and has a strong opinion about the new AI super tool:
Who would have thought that we are just one open-source project away from the most significant paradigm shift in artificial intelligence (AI) since ChatGPT? OpenClaw has already been described as ‘what Siri and Alexa should have been,’ but that is a huge understatement and does not do justice to its creator, Peter Steinberger. It reminds me much more of Jarvis from the blockbuster ‘Iron Man’ or, perhaps even more disturbingly, Samantha from the science fiction film ‘Her’.
And indeed, at first glance, the whole thing seems almost too good to be true: as soon as OpenClaw is running, there is something magical in the air. You can hand over all those tedious routine tasks – setting up reminders, creating daily briefings, retrieving information from your calendar, drafting email replies, developing new services… the programme remembers everything that’s important to you and stores this information on a device of your choice.
The deadly trinity
Now comes the ‘but’. With every great AI superpower comes an even greater security risk. Six months ago, Simon Willison coined the term ‘deadly trinity’ for AI agents, and OpenClaw fulfils it in its entirety. If an AI assistant or agent has all three of these attributes – access to private data, access to untrusted content, and the ability to communicate externally – an attacker can use manipulated prompts to trick it into revealing sensitive information. While this problem can be mitigated with safeguards, it is far from solved. If the instructions (prompt) and the data (context) are ultimately transmitted to the LLM via the same channel, the LLM cannot distinguish between the two, and the data can act as instructions. OpenClaw combines so many of these three dangerous characteristics that every user should think very carefully about connecting it to important systems. My recommendation: Users who want to try OpenClaw should at least isolate the computer they are using and control the communication channels as much as possible to prevent an attacker from injecting malicious prompts. I know that Peter Steinberger is working hard to secure OpenClaw, but there is a high probability that a manipulated capability that exfiltrates sensitive information will spread unnoticed.
To be clear, users should not use OpenClaw on a corporate network or connect it to corporate systems under any circumstances. The tool currently has no enterprise-wide access controls, no role-based access control, no formal audit trail, and no data loss prevention capabilities. Without approval from the responsible security team, it should not process any customer data, personal data or intellectual property. The project’s security documentation itself makes this clear: ‘OpenClaw is both a product and an experiment: you’re wiring frontier-model behaviour into real messaging surfaces and real tools. There is no “perfectly secure” setup.’
Continuous learning and integrated programming
One of the most important aspects of OpenClaw is continuous learning. Various large model providers have approaches to this, but in my opinion, there is currently nothing comparable.
OpenClaw uses a series of Markdown files to build persistent memory across sessions. USER.md stores everything about the user: personal preferences, background information and communication style. SOUL.md defines the agent’s personality and behavioural guidelines. MEMORY.md serves as long-term memory managed by the agent itself. Daily memory files serve as raw logs of the sessions. The agent reads this data at the beginning of each session and updates it continuously. It’s amazingly simple. But combine persistent memory with seamless communication (e.g., Signal, WhatsApp, iMessage), and you have the recipe for an outstanding AI application.
The result is a system that actually feels like it has infinite memory. It seems to remember everything you tell it and uses this as context for the next task.
Running OpenClaw on your own hardware or in the cloud gives you the (perhaps false) feeling of control over your data and also allows you to switch model providers as easily as sending a text message. Although it is mostly middleware and the model provider handles the computationally intensive tasks and can see all your inputs and responses, it is surprisingly liberating to retain control over the data.
Another major advantage of OpenClaw, when given unrestricted access to a computer, is that it can improve itself indefinitely with the help of AI programming assistants. If a feature is needed that requires a new application that does not exist or that the user does not want to pay for, OpenClaw offers to create it. If the feature is sufficiently simple (as with many SaaS applications), Claude Code or Codex can often implement it with just one click and achieve a working result with a single input.
ChatGPT changed the world, OpenClaw is turning it upside down
I spent the weekend setting up OpenClaw on a Mac mini and connecting it to Signal, a calendar and an email programme. Soon, it was capturing tasks, creating daily summaries of the latest OpenClaw knowledge, and reminding me to go to bed because I have a flight to catch tomorrow. I’m writing this at midnight with its help. ChatGPT changed the world, but OpenClaw will completely transform it.

