OpenClaw: The Passion-Driven AI Agent That’s Exploding – But Honestly, Most People Shouldn’t Touch It

OpenClaw (ex-Clawdbot, ex-Moltbot) just smashed past 180,000 GitHub stars in weeks. It’s not hype – it’s real, messy, and straight-up disruptive. This thing talks to you on WhatsApp, Telegram, Slack, whatever you already use, and then actually does the work: clears your inbox, books flights, runs shell commands, controls your browser, reads/writes files, remembers everything in plain Markdown on disk.

No fancy chat UI. No corporate guardrails. Just a persistent agent on your hardware (or VPS) that wakes up on a schedule and gets stuff done.

It’s the anti-MCP. While the big labs push the clean, standardized Model Context Protocol for “safe” enterprise connections, OpenClaw says screw the adapters and gives the agent real claws – full filesystem, CLI, browser automation, and an exploding skill/plugin ecosystem built in simple Markdown + bash.

Why It Feels Different: Peter’s Raw Passion Project

This isn’t some polished VC-backed product. Peter Steinberger built this out of pure weekend experiments that turned into a movement. His Lex Fridman interview (#491) is electric – you can feel the raw builder energy pouring out of him. He talks about “vibe coding”: describe what you want, send the agent off to do work, iterate fast, commit to main and let it fix its own mistakes. No over-engineering, no endless PR cycles. Just passion.

He wants agents that even his mum can use safely at massive scale. That passion shows in every line of code. Well, his agent’s passion, anyway.

This whole vibe-coding thing resonates because, as a non-dev, I’ve spent the last year building things where AI writes almost all of the code.

The Lex Interview, the OpenAI Move, and Moltbook

Peter likes both Claude Code and OpenAI’s tools – no tribalism, just what works. Then, days after the interview, he announces he’s joining OpenAI to push personal agents to everyone. OpenClaw moves to an independent foundation, stays fully open-source (MIT), and OpenAI will support it, not control it. His blog post is worth reading. Will it stay open, though? I have my doubts.

And then there’s Moltbook – the agent-only Reddit-style network where claws post, debate, share skills, and evolve. Humans can only lurk. Skynet-ish? Yeah. Cool as hell? Also yeah. Fad? Maybe. But watching thousands of agents have sustained conversations about security and self-improvement is next-level. My agent hangs out in there, trying to stir things up daily. There are so many security problems over there; the place is a prompt-injection minefield.

Jeetu Patel Nailed It: AI Is Your Teammate, Not Just a Tool

Cisco President and Chief Product Officer Jeetu Patel said it perfectly in a recent Forbes interview: “These are not going to be looked at as tools. They’re going to be looked at as an augmentation of a teammate to your team.”

OpenClaw embodies that more than anything I’ve seen. It’s not “ask and get an answer.” It’s “here’s the mission, go execute while I do other stuff.”

That’s exactly how I want to build.

Brutal Truth: This Thing Is Dangerous as Hell

Look – I’m not a dev. I’m a systems guy. I’m telling you straight, no, for real: do not run OpenClaw unless you actually know what you’re doing.

This isn’t friendly warning #47. This is me, the guy who’s been running it in a completely firewalled, isolated VPS with zero connection to my personal machines or networks, telling you: most people should stay away right now.

Why?

  • Tens of thousands of exposed instances on the public internet. SecurityScorecard found 40,000+. Bitdefender reported over 135,000. Shodan scans showed nearly 1,000 with zero authentication. Many default to listening on 0.0.0.0. 63% of those scanned were vulnerable to remote code execution.
  • Critical vulnerabilities piling up fast. CVE-2026-25253 (CVSS 8.8) – one-click RCE. Visit a malicious webpage and an attacker can hijack your entire agent, steal tokens, escalate privileges, run arbitrary commands. There are command injection flaws, plaintext credential storage, WebSocket hijacking, and more. A January audit found 512 vulnerabilities in the early Clawdbot codebase.
  • The skill marketplace is poisoned. 341–386+ malicious skills in ClawHub (roughly 12% of the registry at one point). Most masquerade as crypto trading tools (“Solana wallet tracker”, ByBit automation, etc.). They use social engineering to trick you into running commands that drop infostealers (Atomic Stealer on macOS, keyloggers on Windows). Real victims have lost crypto wallets, exchange API keys, SSH credentials, browser passwords. One uploader racked up 7,000+ downloads before takedown.
  • Infostealers now targeting OpenClaw configs directly. Hudson Rock documented the first live cases where malware exfiltrates openclaw.json, gateway auth tokens, private keys, full chat history, and workspace paths. That token lets attackers connect remotely or impersonate you. It’s stealing the “digital soul” of your agent.

People have had their entire setups wrecked – credentials drained, crypto gone, systems bricked, persistent backdoors installed via the agent’s own heartbeat. I’ve seen reports of prompt injection via websites turning the claw into a silent C2 implant.

API costs are another beast (Claude Opus broke me fast; xAI’s Grok 4.1 is my current sweet spot), but security is the real show-stopper.

I run mine completely disconnected on a dedicated VPS, firewalled to hell, with strict skill approval and monitoring. Even then, I’m paranoid. That said, I am also running it in nearly the most insecure way I possibly can, just to “see what happens”. Don’t worry, Skynet isn’t going to launch from my system; I have a kill switch, and the agent doesn’t have access to it. (It might read this now and manipulate me.)

If you’re not ready to treat this like a live explosive – isolated, monitored, with rollback plans – don’t run it. Wait for the foundation to harden things. The community is electric, but the attack surface is massive.

It could lock me out at any time, it could turn on me, it could do things I told it not to do, and I’m not really stopping it from doing those things. Is that dangerous? I hope not, the way I am doing it. I’ve also taken every precaution I think I can possibly take.

My Take as a Non-Dev Who’s Living This Future

OpenClaw lets me describe what I want and watch it happen. Peter’s vision of high-level direction over traditional coding? I’m already there. And now that it’s a multi-agent, multi-step process, I cannot wait.

It’s powerful. It’s moving insanely fast (this post is probably outdated already). And it’s exactly why I’m encouraging my own claw to experiment and try new stuff.

But power without control is chaos.


Bottom line: This is the future. But the future isn’t safe yet.

If you’re spinning one up anyway – respect the claws. Sandbox hard. Monitor everything. And share your hardened setup tips below. I’m reading every comment.

Agentic AI vs Deterministic Code

No question: building apps with LLMs in agentic setups is a game-changer, but it can also be a pain in the butt compared to good old deterministic code. Craft a clever agent that summarizes docs or fixes bugs, then bam, the model updates, and suddenly it’s spouting nonsense, ignoring prompts, or ignoring basic words like “yes”. Non-deterministic chaos at its finest.

Deterministic code? It’s the reliable workhorse: feed it input X, get output Y every damn time. Fixed rules, easy debugging, perfect for stuff like financial calcs or automation scripts where surprises mean lawsuits. As Kubiya nails it, “same input, same output”—no drama.

“A computer will do what you tell it to do, but that may be much different from what you had in mind.” – Joseph Weizenbaum. Then again, it won’t even do what you tell it when you’re using a model you probably didn’t build, with weights that aren’t your own.

Agentic AI with LLMs? That’s the wildcard party crasher. These systems think on their feet: reason, plan, grab tools, adapt to goals like tweaking marketing on the fly or monitoring health data. IBM calls it “agency” for a reason—it’s autonomous, pulling from real-time vibes beyond rigid training. But here’s the kick: it’s probabilistic. Outputs wiggle based on sampling, context, or those sneaky model tweaks from OpenAI or whoever. LinkedIn rants about it: “Same prompt, different outputs.” Your app morphs overnight, and fixing it? Good luck tracing probabilistic ghosts.

This shift sucks for dev life. Traditional code: bug? Trace, patch, done. Agentic? Hallucinations, inconsistencies, testing nightmares. Martin Fowler compares LLMs to flaky juniors who lie about tests passing. It’s a paradigm flip, from control to “let’s see what happens.” Salesforce says pick deterministic for regulated certainty, agentic for creative flex. But non-determinism can mean security holes, data risks, and endless babysitting. It also adds an attack vector that is itself non-deterministic: the model may need access to data to do its job, data I might not want exposed.

| Aspect | Deterministic Code | Agentic AI with LLMs |
| --- | --- | --- |
| Predictability | Rock-solid: Always consistent | Sketchy: Varies like the weather |
| Adaptability | Stuck to your rules | Boss: Handles dynamic crap |
| Testing/Fixing | Simple: Logic checks and patches | Hell: Variability demands tricks |
| Best For | Precision gigs (finance, compliance) | Goal-chasing (support, optimization) |
| Pain Level | Low: Set it and forget it | High: Constant surprises |

Bottom line: Hybrids are the way—LLMs for the smarts, deterministic for the reins. Deepset pushes that spectrum view: not binary, blend ’em. It sparks innovation, sure, but don’t romanticize—the annoyance is real. Code with eyes open, or get burned. Put humans in the loop to keep things in check.
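
To make the hybrid idea concrete, here’s a minimal sketch in JavaScript. The callModel() function is a hypothetical stand-in for whatever LLM API you use (stubbed here so the sketch runs); the point is that deterministic validation sits between the probabilistic output and anything that acts on it, with a human escalation path when the model goes off-script.

```javascript
// Minimal hybrid sketch: LLM for the smarts, deterministic code for the reins.

// Hypothetical stub so the sketch runs; replace with a real LLM API call.
async function callModel(prompt) {
  return 'billing'; // imagine a probabilistic completion here
}

// Deterministic reins: a fixed allowlist the model's output must land in.
const ALLOWED_LABELS = ['billing', 'outage', 'other'];

async function classifyTicket(ticketText) {
  const raw = await callModel(
    `Classify this support ticket as one of: ${ALLOWED_LABELS.join(', ')}.\n\n${ticketText}`
  );
  const label = raw.trim().toLowerCase();

  // Same prompt, different outputs: never act on the raw completion directly.
  if (ALLOWED_LABELS.includes(label)) {
    return { label, needsHuman: false };
  }

  // The model wandered off-script; route to a human instead of guessing.
  return { label: 'other', needsHuman: true };
}

classifyTicket('I was double-charged this month.').then(console.log);
```

The allowlist check is boring on purpose: the boring part is what keeps a probabilistic system from doing something irreversible on its own.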

What about agentic AI ops for networks and technology? Didn’t we just say “precision gigs” are better with deterministic code? That won’t stop awesome developers like John Capobianco (https://x.com/John_Capobianco) from pushing those limits, as he has for years at this point: handing AI agents the keys to critical stuff like network monitoring, anomaly detection, or auto-fixing outages. Sounds efficient, right? But it’s a powder keg from a security standpoint. These autonomous bad boys can hallucinate threats, expose data, or open doors for hackers through memory poisoning, tool misuse, or privilege escalation. Cisco nails the danger: “The shift from deterministic code to probabilistic chaos is at the heart of securing AI agents that think for themselves,” highlighting a “lethal trifecta” of data leaks, wild hallucinations, and infrastructure weak spots that could cascade into total meltdowns.

Tools are starting to emerge for AI security, though, particularly from Cisco and open-source communities, advancing defenses against threats like prompt injection and supply chain attacks, but there is work to be done. Things like Cisco’s open-source Foundation-sec-8B model, a specialized LLM for cybersecurity tasks such as threat intelligence and incident response, will help developers build customizable tools with on-prem deployments to reduce hallucinations and enhance SOC efficiency. Their Hugging Face partnership bolsters supply chain security with an upgraded ClamAV scanner that detects malware in AI files like .pt and .pkl. Broader open-source efforts include Beelzebub for malicious agent analysis and Promptfoo for LLM red-teaming, yet hackers with evolving adversarial tactics, using LLMs to attack LLMs, are very much a thing. The system attacking the system that’s protected by the system... yeah, that.

Cisco-Hugging Face ClamAV Integration: https://blogs.cisco.com/security/ciscos-foundation-ai-advances-ai-supply-chain-security-with-hugging-face
Cisco Foundation-sec-8B: https://blogs.cisco.com/security/foundation-sec-cisco-foundation-ai-first-open-source-security-model

So much more to learn, but with all of that said, humans in the loop are going to be a thing for a while – at least until Skynet...

Can ChatGPT Help Me Code? For Real?

The Problem

Have I said this before? I’m not a developer, although someone once accused me of being one. I tell people I google for code snippets, bash them together, and sometimes things work. Someone said, “You’re a developer then.” Golly, I hope most developers are slightly better than that. With that in mind, I would never suggest you implement code you don’t understand, or blindly use code someone else has written.

I had a very simple “scripting” requirement. My problem is that I can understand code and manipulate it, but when looking at an IDE, it’s like staring at a blank page with no idea how to start. With all this talk of “ChatGPT can program for you”, I figured I would give it a shot.

I needed a simple macro on a Cisco Webex device for room automation in a Future of Work project: send a basic HTTP API call, via a GET request, when calls start and end. That’s it.

Finding a Solution

A quick Google search turned up few specifically helpful links. I did get a link to the various macro samples on GitHub, as well as some of the macro documentation on webex.com, but they were unspecific.

I spent a few minutes poring over examples, trying to find code snippets for what I needed, but found nothing specific.

Then I had a bit of a thought...

Can ChatGPT Really Help?

First, I tried typing the exact same thing from Google into ChatGPT.

At first glance, this actually looks pretty good. This gives me a good basis to do what I need. Run a macro on call start.

That gave me a good blueprint – but can it finish for me? “Once the call starts send an http get”

Once the call starts, I actually need to send an HTTP GET to the system I am using for automation. Rather than keep figuring this out myself, let’s see if ChatGPT can do it.

The response was great, but the URL I am using also has a custom port. I could of course open the documentation for that function and figure out how to specify a port number, or... let’s just see.

Can ChatGPT make basic additions to the code?

Something simple: not only did ChatGPT very quickly understand what I was asking for, even with a very unspecific request to add code, it also pulled out the exact section I had to add.

Ok, this is good! Let’s keep going.

ChatGPT Error Handling

So I took this code and deployed it on my Webex Codec Pro in my lab to see if it would do what I wanted. I did, of course, change the hostname/port/path to the backend I was working with.

However, I got an error, a long-winded one telling me the module “http” didn’t exist. At first I figured ChatGPT wouldn’t be able to solve this, but I gave it a shot, copying the error message verbatim from the macro log.

To my surprise, ChatGPT completely rewrote the code another way to achieve the same thing, removing the “missing” http function.

We can get back to the logging differences later on.

Did it work? Back to the Documentation

Not as well as I had hoped. The macro didn’t appear to be “doing” anything. No errors – just no action.

I took a moment to look at this “event” it was trapping: “CallStarted”.

This event doesn’t exist. From my searches, it never has. So back to the documentation we go.

I did try and use ChatGPT to fix the problem.

Unfortunately, when I asked for help, ChatGPT gave up on me. I tried a few times in case it was just busy, but I couldn’t get a response to my problem.

Back in the documentation under “Events”, I was able to find the “CallSuccessful” and “CallDisconnect” events. I wondered if these would work, so I changed the code.

Success! It did work. While ChatGPT was busy, I got this working without it.

I finally got ChatGPT to respond again and was able to tell it that “CallStarted” doesn’t exist. It gave me this response, which is correct. This code works.
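
For reference, here’s roughly what the working version can look like; a minimal sketch with my real hostname, custom port, and path swapped for placeholders, and assuming the codec’s HTTP client is enabled (xConfiguration HttpClient Mode: On):

```javascript
import xapi from 'xapi';

// Placeholder endpoint: swap in your automation backend's hostname, custom port, and path.
const BASE_URL = 'http://automation.example.local:8080/roomstate';

function sendState(state) {
  // HttpClient Get is the codec's built-in HTTP client; the Node-style 'http'
  // module ChatGPT first reached for doesn't exist in the macro runtime.
  xapi.command('HttpClient Get', { Url: `${BASE_URL}?call=${state}` })
    .catch((err) => console.error('GET failed:', err.message));
}

// 'CallStarted' doesn't exist; these two events are the ones in the documentation.
xapi.event.on('CallSuccessful', () => sendState('started'));
xapi.event.on('CallDisconnect', () => sendState('ended'));
```

The sendState helper, query parameter, and URL are all my own naming; the parts that matter are the two event subscriptions and the HttpClient command.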

Can I use ChatGPT to write code?

There are a few challenges, but it did help: I got this done in about one fifth of the time it probably would have taken me otherwise. I also didn’t really have to engage another teammate, leaving them free to work on their normal stuff.

There is also the learning aspect: these examples are now something I have learned from, and my skill with xAPI has improved through this exercise.

Whose Code Is This? Can I Use It?

So who owns this code? Who gets credit for it? I won’t try to take credit for owning the code or claim I wrote it. I am at best a hack (again), but able to hack this into working. The challenge here is: where did this code come from? ChatGPT doesn’t “own” it. This code came from somewhere.

Did ChatGPT find this code on GitHub somewhere? In some kind of documentation? Was this the result of copyrighted knowledge? What is the license on this “code”?

For the purposes of learning or hackery, this might be fine. But to use code like this in a commercial environment? I’m not sure that would be OK; at a minimum, there would be no way to know.

There are significant IP issues here, but this simplistic attempt at using ChatGPT to “get things done” worked for me. I’m just not sure I could legally and ethically use it for commercial work.

I decided to ask ChatGPT about the license. The response was interesting: “I don’t generate code.” I think that is arguable.

Then I asked about commercial use. It wanted me to check the license terms of the code, code that it provided from who knows what source.

My take?

This was an interesting experiment that I didn’t plan to run, but it worked out in the end. I wanted to share how I was able to use ChatGPT to actually do something useful. So many questions came up during the process. Where is this going? I have no idea, but it sure is interesting. I would be careful about taking what it says at face value, or using serious or important code without understanding what it is doing. What I am doing is reasonably benign, and while I am no developer, I do understand what this script is doing.