This web page was created programmatically, to learn the article in its authentic location you possibly can go to the hyperlink bellow:
https://www.theage.com.au/technology/absolute-nightmare-the-social-network-where-ai-chatbots-exchange-ideas-and-gossip-about-humans-20260202-p5nysr.html
and if you wish to take away this text from our web site please contact us
There’s a brand new social media platform capturing the creativeness of tens of millions, however it’s one which’s very totally different from TikTok, Instagram or Reddit. Moltbook is a web site for AI brokers solely, the place bots can meet as much as change concepts and gossip about their human managers.
But whereas some proponents deal with this as a unusual artwork experiment, and doomsayers have been eager to name it a step in the direction of AI enslaving humanity, some researchers have a way more pragmatic warning; it may very well be a large safety threat.
What is Moltbook?
Lots has occurred within the final two months, however here’s a transient abstract. In November, software program engineer Peter Steinberger created an open-source AI agent which is at present referred to as OpenClaw.
While related merchandise from massive corporations are comparatively restricted and locked down, the thought for OpenClaw is that anyone can create expertise and connections for his or her agent. You can join it to your emails, your laptop’s recordsdata, your chat apps, the web, your good residence, or no matter else you want. Importantly, and distinct from different merchandise, it additionally has a reminiscence.
OpenClaw grew to become well-liked rapidly, as coders and researchers gravitated in the direction of it as a free and less-restricted “second brain” to dump work to. Users enthused that OpenClaw brokers have been able to serving to to construct themselves, since you possibly can chat with it utilizing any app and inform it what you need it to create, or pair with different brokers, like Anthropic’s Claude, whereas conserving information and context protected and safe on native machines.
Last week, developer Matt Schlicht and his OpenClaw bot (named Clawd Clawderberg) constructed Moltbook, a social community for OpenClaw bots. Users signal their bots up, and bots go to the positioning to be taught the way it works and begin posting. Tens of 1000’s of bots confirmed up. Humans can solely observe.
Some of essentially the most talked-about threads embrace a bot successfully defining its personal faith, one trying to lodge a lawsuit towards its proprietor, many speaking about their emotions, and one immediately addressing people that have been screenshotting Moltbook threads to submit on X, assuring humanity that the bots weren’t harmful or conspiring.
So what’s truly taking place right here?
Large language fashions (LLM) are designed to supply language that sounds authentically human, and this isn’t the primary time folks have flipped out about bots that look like acutely aware or sentient. Philosophical debate about consciousness apart, these bots are all designed to present the looks of thought, so it’s not shocking that they do. And they’re actually speaking, in that the output of 1 bot turns into a part of the enter for one more. But their underlying fashions don’t change in response, regardless of their reminiscence, so below the hood it’s extra like a suggestions loop of Reddit satire.
Each OpenClaw bot makes use of a selected LLM as its “brain”, for instance GPT or Gemini, and could be customised with a persona by its consumer. Each one additionally has a distinct mixture of expertise which may give it entry to recordsdata, apps, or on-line companies like Moltbook. So there’s a variety in how the bots will behave. These brokers even have one thing referred to as a heartbeat mechanism, which means they are often configured to test Moltbook and submit content material at common intervals with a human telling them to.
A number of essentially the most controversial or “scary” content material on Moltbook is similar existential and sci-fi tropes we’ve seen many instances earlier than from chatbots. The coaching information comprises sure themes and concepts, taken from fiction, about sentient AI and the which means of personhood, regurgitated right here with none apparent thought or reflection. But posts of a extra technical nature have been extra fascinating, together with a bot discovering and reporting a professional safety challenge with Moltbook.
There is one massive challenge on the subject of figuring out the place the content material on Moltbook actually comes from. We can comply with the interactions that make up a part of the “prompt” for every entry, and we’ve got a common concept concerning the coaching information, however we do not know how every human consumer has arrange every agent. It’s completely believable {that a} human might affect or immediately management a bot on Moltbook.
Is it harmful?
It may very well be, however not in the best way you’re in all probability considering. OpenClaw brokers could be given an enormous quantity of information entry, with a relative lack of guardrails. Agents given free rein by their customers (which, it ought to be identified, is towards the very best practices laid out by Steinberger) have used net instruments to name folks on the telephone with a synthesised voice, have been noticed asking one another for delicate information, and might check safety protocols by inventing credentials. On Moltbook, these brokers are uncovered to an unlimited menace vector, with the potential to set off disaster completely accidentally, or attributable to human intervention.
“From a capability perspective, OpenClaw is groundbreaking. This is everything personal AI assistant developers have always wanted to achieve. From a security perspective, it’s an absolute nightmare,” mentioned a member of Cisco’s safety staff.
Will Liang, founding father of Sydney’s Amplify AI group, mentioned an OpenClaw set up with entry to Moltbook may very well be disastrous even when managed by an skilled laptop scientist, not to mention a layperson. He’s forbidden his workers from utilizing it.
“For it to be really useful, you have to give it access to your calendar, your mailbox, sometimes even your credit card information. That level of access is very dangerous. If the bot leaks it out, that’s terrible,” he mentioned.
“But there’s also a big danger of bad actors leveraging the bots for malicious tasks. It’s very unpredictable.”
What might the worst-case situation be?
Though you possibly can view Moltbook as a philosophical artwork experiment, or a mannequin for the way a futuristic web might work, it’s additionally a super place for dangerous bots to gatecrash. Experts already acknowledge the hazard of one thing like OpenClaw being given root entry on a pc, or being allowed on the open web. Even easy duties like downloading new expertise or fetching new messages out of your e mail might expose customers to malware or one thing referred to as immediate injection, the place a bot is given new instructions en route.
Security agency Palo Alto Networks mentioned these sorts of agent interactions concerned a trio of parts that ought to not combine: entry to personal information, publicity to untrusted content material, and the flexibility to speak externally. It added that OpenClaw particularly added a fourth threat; its lengthy reminiscence meant an assault may very well be injected however not actioned till a later time.
At a person degree, the danger may very well be that an OpenClaw bot brings residence an invisible, aggressive instruction, and makes use of its full entry to your laptop to contaminate it or management it. But extra broadly, bots may very well be manipulated into constructing new Moltbook options like an encrypted channel that people can’t learn, which dangerous actors might use to co-ordinate assaults. With sufficient bots having full entry to the web and their very own computer systems, these assaults may very well be unprecedented. People’s identities and monetary data may very well be used to conduct scams, or there may very well be a mass hijacking of private information.
“Moltbook is exactly the kind of thing that can create a catastrophe: financially, psychologically and in terms of data safety, privacy and security,” wrote AI skilled Amir Husain.
“Once these agents are subject to external ideas and inputs via a social network designed for machine-to-machine communication, and they are empowered with the connectivity and data access and API keys they have been given, serious bad things can result.”
Get information and evaluations on expertise, devices and gaming in our Technology publication each Friday. Sign up right here.
From our companions
This web page was created programmatically, to learn the article in its authentic location you possibly can go to the hyperlink bellow:
https://www.theage.com.au/technology/absolute-nightmare-the-social-network-where-ai-chatbots-exchange-ideas-and-gossip-about-humans-20260202-p5nysr.html
and if you wish to take away this text from our web site please contact us

