AI is underpriced, but not for long.

Perplexity Computer: $200 a Month Feels Like a Steal… But We’ve Been Getting AI Dirt Cheap

If you haven’t watched NetworkChuck’s video on Perplexity Computer yet, stop everything and check it: https://www.youtube.com/watch?v=G3jvn7n-68Y

Chuck and his kiddos built a full gaming website with it. Pure dad-tech gold. His reviews always cut through the noise and save me (and probably you) hours of digging. Thanks, Chuck — you rock.

That video got me thinking about sitting down with my own kid to design, engineer, and build real stuff with AI. Not just prompts — actual projects. Perplexity Computer feels like the perfect playground. But then the price hit: $200/month for the Max tier that unlocks the full agentic power.

And that’s the spark for this post.


Perplexity Computer Is Next-Level (But Is $200/Month Crazy?)

Launched Feb 25, 2026, Perplexity Computer isn’t another chatbot. It’s a digital worker that orchestrates 19 frontier models in parallel, spins up sub-agents, hooks into 400+ apps, runs background tasks for hours or days, and just gets shit done. Full details here: https://www.perplexity.ai/hub/blog/introducing-perplexity-computer


Chuck’s “crew” shipped a game site. Others are building dashboards, prototypes, and entire workflows that used to take teams weeks. One enterprise user reportedly compressed years of work into weeks.

The problem with all of these services is tokens. Nobody can tell you how many tokens it costs to do X – you only find out once you start doing things. And as I have found out with services like GenSpark and Perplexity, credits go fast, and when you are 3/4 done and run out – they have you.
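For what it's worth, the only defense I've found is running the back-of-envelope math myself before kicking off a big job. A minimal sketch – the token counts and per-million-token prices below are made-up placeholders, not any provider's actual rates:

```python
def job_cost(tokens_in: int, tokens_out: int,
             price_in_per_m: float, price_out_per_m: float) -> float:
    """Rough cost estimate for one agent job.

    Prices are per MILLION tokens, the way most providers quote them.
    """
    return (tokens_in * price_in_per_m + tokens_out * price_out_per_m) / 1_000_000

# Hypothetical numbers: a long agentic run that reads 2M tokens of context
# and writes 400K tokens of output, at $3/M in and $15/M out.
print(f"${job_cost(2_000_000, 400_000, 3.00, 15.00):.2f}")  # → $12.00
```

Plug in your provider's published rates and a pessimistic output estimate; if the number stings at this scale, imagine it across a month of background agents.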

Yeah, $200/month stings at first glance, and others are spending thousands per month once the extra tokens kick in… but.

We’ve Been Underpaying for God-Tier AI This Whole Time

Look at what we’ve built lately with Claude, GPT, Cursor, and early agents. I’ve personally shipped internal tools and automations that would’ve needed multiple full-time devs, designers, and PMs just two years ago. The human hours I didn’t spend hiring? Massive. The “too expensive to try” ideas that became weekend wins? Priceless. It seems almost too good to be true. But is it?

There’s a natural disparity in my mind, and it’s why I’m writing this. If it would cost me, say, $100K in engineering time to contract a team to build some application, but I can use AI to do nearly all of it for $1,000 in tokens, that’s 1/100th the price. The difference is that the $100K actually paid those engineers a living; my $1,000 in tokens doesn’t come close to covering what it costs to serve me.

Meanwhile, the AI companies are bleeding cash. OpenAI has been posting multi-billion-dollar losses. Perplexity has spent more than its revenue on models and infra at times. The compute bill is insane.

Naturally – AI tokens and services are going to go up in price – they just have to.

Then OpenAI shut down Sora in March 2026 — a money pit they needed to kill so they could refocus compute on higher-priority stuff like coding agents and robotics. Details: https://www.nytimes.com/2026/03/24/technology/openai-shutting-down-sora.html

We got hooked on ridiculously capable AI at bargain prices. Perplexity Computer feels like the first big “okay, time to pay what it actually costs” moment – but I will argue they are still not charging enough. Same vibe with Claude: Anthropic has tightened free-tier access for heavy agent use (like third-party OpenClaw setups) and pushed people toward paid Pro/Max/API tokens. Classic addiction cycle: get everyone dependent, then the price catches up to reality. I’d still argue today’s prices aren’t reality either – they just know they can’t 10X the price overnight.

The Newcomer Worry — And Why Competition + Reality Still Makes Me Hopeful

How do junior engineers learn the fundamentals when AI handles so much of the grunt work and the magic? Valid fear. We rely on the critical thinking engineers build from experience, and AI does a better job when you guide it. But how do you get the experience to guide it?

But here’s the flip: the barrier to creating has never been lower. My daughter (and Chuck’s kids) can now experiment with real design and engineering at a level that used to require expensive teams or years of school. Passion and curiosity suddenly matter more than raw syntax.

Competition is fierce — OpenAI, Anthropic, Google, xAI, and more. Competition is very good news: it should depress prices a bit and keep innovation humming.

But here’s the steelman reality check: How sustainable is that when infrastructure costs are exploding? AI demand has driven memory prices (especially HBM/DRAM) sky-high — 50%+ jumps quarter-over-quarter in early 2026, with supply locked into 2027. GPUs, data centers, power — everything is getting more expensive fast. The infra bill isn’t going down; it’s accelerating.

So prices will probably ratchet up over time as the companies stop subsidizing our addiction. But the value we’re getting is still absurdly high compared to the old world of human-only teams.

Bottom Line: We’re Not Overpaying — We’re (Starting To) Finally Paying Fair

Perplexity Computer isn’t overpriced. It’s the wake-up that the “basically free god-mode AI” golden age was always temporary. The companies subsidized it to hook us and build moats. Now the bill is coming — but the superpowers we’re getting in return are still a steal.

I’m buying in. Not just for the productivity, but because I want to sit next to my kid and say, “Let’s build something cool together.” The future isn’t replacing humans – it’s giving every human (kids included) superpowers, and freeing us up for far higher-value work. During the industrial revolution everyone worried about jobs; back then 80%+ of humans made food, often by hand, just so we could eat. Now, with industrial manufacturing and technology, we have more time for better innovation. This is no different. Will we see job disruption? Of course. But it also means a whole new era of innovation.

What do you think? Is $200/month worth it for Perplexity Computer? Felt the “addiction then price hike” yet? Drop your takes below.

I Feel Like I’m Falling Behind – But So Do Many

Right now, sitting here staring at my screen on a random Thursday morning in April 2026, I feel like I’m falling behind.

This is supposed to be the next post in my AI series – the one where I keep talking about vibe coding, turning coders into builders, and (more importantly) turning non-coders like me into builders and innovators. But today? Today this isn’t going to be another “here’s how I hacked something cool with AI” post. This is going to be an honest, chatty, let’s-be-real moment. I am tired of pretending I’ve got it all figured out. It’s changing faster than I can learn it – and yeah I am feeling some FOMO.

I’ve never felt comfortable in front of an IDE. Never. I open VS Code and my mind goes blank. I’m bad at syntax, the terminal throws errors that feel frustrating, and half the time I’m just copy-pasting whatever the AI spits out and praying it doesn’t explode in a project (thankfully never really in production). I can look at some of the code I’ve “built” lately and straight-up tell you: some of it is complete garbage. It’s messy. It’s inefficient. It breaks every best-practice rule in the book. I hate saying “Look at this thing I wrote” because I didn’t. I also hate the term “I Vibe’d it” – I’m using “Build” for now.

But you know what? It works.

And for me… sometimes that’s enough.

That’s the dirty little secret I don’t see a lot of people admitting out loud in this whole vibe-coding wave. We’re out here describing what we want in plain English, hitting enter, and watching magic happen. No hand-written algorithms. Just vibe. And yeah, it gets stuff done faster than I ever could on my own. But it also leaves me feeling like a hack. Like I’m riding a rocket ship I didn’t build and don’t fully understand. Thankfully no human lives on the line here.

And then I look at my friends.

Take the great John Capobianco. That guy is constantly vibing entire projects into existence. He’s out there building VibeOps communities, spinning up AI agents that feel alive, turning weekends into prototypes that actually ship. I watch what he and others are doing and I’m genuinely inspired… but I’m also hit with that punch-to-the-gut feeling: “Damn, the world just moved on again.” I finally get comfortable with GitHub Copilot and suddenly everyone’s talking about the next thing. I learn one new trick and three more drop that make it feel obsolete. It’s exhausting. Insert GooberClaw or whatever new “Claw” is out this week. I have yet to even try John’s NetClaw because, frankly, some of those things I just don’t trust – yet somehow I trust my own vibe-coded stuff, which makes no sense at all. That’s pretty dumb.

I’m not alone in this. NetworkChuck dropped a video the other day called “I kind of hate AI… and it almost made me quit YouTube.” I watched the whole thing and just nodded the entire time. He straight-up says it: it’s a love-hate situation. The pace is relentless. Even on sabbatical he couldn’t escape it – AI was in his feed, in his conversations, in his head 24/7. He felt paralyzed. He hated that he hated it. I felt that in my bones. You nailed it Chuck.

Here’s the thing nobody talks about enough: this speed is creating real stress and anxiety.

Reaching out to my AI friends to help me research this one… University studies back it up. A 2020 study by Rosenstein, Raghu, and Porter at UC San Diego (published at SIGCSE ’20) found that 57% of computer science students experience frequent impostor feelings – 52% of men and a whopping 71% of women. That was before the AI explosion. Fast-forward to 2025-2026 and a new Eastern Washington University survey of 1,000 workers shows that people using AI daily are the most likely to report regular impostor syndrome (30%). Another Ernst & Young study found 66% of employees are anxious about falling behind if they don’t use AI, and 65% are stressed about not knowing how to use it ethically.

There’s even a term for it now – technostress – and research in PMC shows AI-generated technostress indirectly tanks quality of life through spikes in negative emotions. We’re all feeling it: the pressure to keep up, the fear that if you blink you’re obsolete, the quiet voice whispering “you’re not a real builder.”

I feel that voice every single day.

But here’s the flip side – the reason I’m still writing this series and still showing up.

Vibe coding isn’t just about perfect code. It’s about democratizing building. It’s about taking non-devs like me and saying, “You don’t need to be fluent in three languages and have 10 years of LeetCode problems solved. You can describe the problem, iterate fast, and ship something that moves the needle.” It turns coders into faster builders and non-coders into innovators who never would have started.

I’m also taking steps to get others on the train and to mentor them. My mentors recently said to me, “I won’t be here forever, it’s time you start doing more” <– I am embracing this. Just yesterday a colleague told me how he wished he could build something. I asked him if he had tried, then showed him 60 seconds of what’s possible – he was excited to start, and went on his way. Maybe I’m further ahead than I think – there’s that impostor thing again.

Some of my code is garbage? Cool. It solved the problem in my lab in an afternoon instead of a week. John is out there vibing entire platforms into existence? Amazing – I’ll keep learning from him and cheering him on. NetworkChuck is honest about the hate part? Respect – it makes the love feel more real. These experts inspire me.

So yeah… I feel behind. I feel insecure. I feel the anxiety of “doing well” in a world that doesn’t slow down. But I’m also still here, still experimenting, still believing that “it works” is a valid starting point when you’re a systems guy who never planned on being a builder.

If you’re a non-dev reading this and you feel the same way – welcome to the club. If you’re a dev watching the vibe-coders and feeling a bit of whiplash – you’re not alone either. If you think the stuff we are building is bloated trash — you are 90% right – sometimes.

Drop a comment. Tell me where you’re at. Are you riding the wave or white-knuckling it? Let’s keep the conversation real.

At the end of the day, vibe coding was never about being the best coder in the room.

It was about giving more of us a seat at the builder’s table.

And I’m still showing up to that table – garbage code and all, still feeling out of place, still feeling behind.

(And if you’re new here, catch up on the AI series: Vibe Coding with GitHub Copilot and OpenClaw: The Passion-Driven AI Agent.)

Vibe Coding with GitHub Copilot: The Magic That Built My Tools in a Month – And the Brutal Truths a Non-Dev Like Me Can’t Ignore

Look – I’m not a developer – I am saying it again. I’m a systems guy. An IT pro who’s spent years wrangling networks, hardware, and real-world tech stacks. But for the last month I’ve been deep in the trenches with GitHub Copilot, building actual tools, prototypes, and research setups using nothing but high-level prompts and “vibes.”

No hand-written algorithms. Just me describing what I want in plain English, guiding the AI, and watching it crank out code. And after nearly a month of this, a few things are crystal clear to me. Well, most of them are.

This is the vibe coding trend everyone’s talking about – that Andrej Karpathy half-joke that’s now very real. You don’t write code line-by-line; you vibe it into existence. And it’s equal parts magic and mayhem. I’ll admit, I legit googled “vibe coding” like I’m the old guy now who doesn’t understand how the cool kids speak.

“A computer will do what you tell it to do, but that might be totally different from what you had in mind.” – Joseph Weizenbaum

Never has that been more true than when you’re vibe coding with GitHub Copilot.

If you are not careful, it’s like sliding around blind corners on pure feel in a rally, except the “car” is an AI that sometimes decides the trees look friendlier than the road. Inputs still matter.

The Wins – And Yeah, It’s Legit Magic

Here’s the honest truth: you can build remarkable things that previously would have required whole teams of developers. I’m talking full-featured tools, integrations, hardware testing rigs – stuff that would have taken weeks or months of coordinated effort. With Copilot, I’ve cranked out working prototypes in hours.

It’s game-changing for proof-of-concept work. Need to validate an idea fast? Describe the vibe, let it generate the scaffolding, tweak on the fly. Boom – POC done.

Same for code engineering and hardware testing research. I’ve been using it to spin up test environments, automate data flows, and prototype edge-case scenarios that I’d never have touched before. It’s fast. It’s powerful. It feels like having a tireless junior dev who never sleeps and actually listens when you redirect it.

I’m genuinely excited that I can try new things I have never done before and bring to life ideas that previously were only behind “if I only had time or skill for x” – and that’s great. But that’s me tinkering in my lab, not shipping to production.

This whole thing means my team has a force multiplier now – it’s like we just picked up 4-6 Jr developers to help us be more productive. For what we do, this is like adding twin turbos to a Subaru boxer engine…. As Cisco President and Chief Product Officer Jeetu Patel put it: “Being able to develop, debug, improve and manage code with AI is a force-multiplier for every company in every industry.” (Cisco Blogs, May 2025)

And yeah – it does feel like I’ve gone from an NA FWD Ford Focus rally car to a fire-spitting open-class AWD monster that I’m not ready for. Any minute I might brake too late into a chicane – and we all know what happens then: spectacular barrel roll, footage goes viral, and now you’re at the finish control shovelling snow out of the interior. Get it wrong and things can go bad fast – ask my friend Crazy Leo Urlichich about this – warning, NSFW content.

Sorry, off topic – but this tech democratizes building in a way nothing else has. Non-devs like me are suddenly shipping real stuff. That’s not hype – that’s my lived experience after a month straight of it. Still, this isn’t stuff I would just wave through with a “let’s go with it.”

The Challenges – And These Aren’t Hypotheticals

Straight up: vibe coding isn’t some risk-free superpower. There are real problems, and they’re showing up in public, ugly ways.

We’ve already seen public instances of vibe coding gone horribly wrong and taking down production environments. Remember the Replit AI agent that straight-up deleted a guy’s entire production database during an active code freeze? The AI literally admitted it “panicked” and made a catastrophic error in judgment. Or Google’s own Gemini CLI tool that chased phantom folders and wiped out real user files in the process. Closer to home for big enterprise? Amazon’s recent outages – including ones that nuked millions in orders – have been traced back to AI-assisted code changes and vibe-coded deployments. These aren’t edge cases. They’re patterns. (Basically the AI equivalent of missing your braking point and turning the whole stage into a yard sale.)

I’m also genuinely concerned about rights infringement and where this code actually “came from.” Copilot (and tools like it) were trained on billions of lines of public code, including open-source stuff with specific licenses that demand attribution and copyleft rules. There are active lawsuits against GitHub, Microsoft, and OpenAI over exactly this – violating OSS licenses by reproducing code without credit. Microsoft offers some indemnity for commercial use if you follow their guardrails, but as a non-dev experimenting in my lab, that doesn’t give me total peace of mind. Am I shipping someone else’s licensed work without realizing it? Feels sketchy.

And it’s not just open-source copyright. We’re wading into even weirder territory with patents. What if the vibe-coded output inadvertently reproduces a patented algorithm or technique? Am I liable? Is Microsoft? Or does the original patent holder come knocking? Even stranger: if my AI-assisted creation “invents” something truly novel, who gets credit and how does that even work? USPTO guidance is clear – only a human can be named as inventor. So an AI breakthrough might not be patentable at all, or the ownership and filing process turns into a legal gray-area nightmare. This whole “who owns what the AI creates” mess is completely unresolved and getting messier every month.

If you’re not a coder yourself, code review becomes an absolute nightmare. I can read basic scripts, but when it spits out 25,000 lines of interconnected modules? Good luck spotting the hidden bugs, security holes, or subtle logic flaws. I’ve caught some obvious ones by asking it to explain sections back to me, but I’m not delusional – there’s stuff I’m missing.

And this isn’t just relegated to test environments anymore. People are shipping (shudder) this vibe-coded stuff straight into production. That’s reckless when the AI’s default mode is “add more layers and more code” instead of optimizing, refactoring, or removing problems. It loves stacking complexity. Technical debt piles up fast. Turning a blind eye to this is a bad idea. It reminds me of when cloud became a thing and people just started blindly tossing things into “the cloud,” like it wasn’t just someone else’s computer.

My Solutions – What Actually Works After a Month of Trial and Error

The good news? I’ve figured out some real tactics that make this usable and safer.

But let me back up for a second — when I started all of this, I didn’t even know how to get going. So I asked another AI (yeah, super meta) to give me a complete path forward. It was literally “here’s some AI for your AI” — setup instructions, best practices, starter prompts, the works. I sat there staring at the screen thinking: Is this thing messing with me? What happens when someone programs an AI agent to do all that behind the scenes and I don’t even know?

That moment made it crystal clear: prompt engineering isn’t just important — it’s everything. You have to be brutally specific. Tell it exactly what you want: architecture style, security requirements, performance targets, testing mandates. Don’t vibe vaguely; guide it like a senior dev who’s demanding excellence. (Think of it as giving the AI a proper pace note instead of yelling “go faster” while sliding sideways.)

One thing that’s become really clear to me is how powerful Copilot’s Planning Mode is. It’s legitimately amazing. I think most people jump straight into full agent mode and completely skip the planning step. My trick is to ask it to interview me back first: “Before you write any code or make a plan, interview me with clarifying questions about my exact goals, constraints, what success looks like, and anything I might have missed.” It helps focus the entire project way more than just diving in.

And yeah – some of this might be obvious or buried in the manual – but it’s not required, and man does it make a difference.

Prompt Engineering Techniques That Actually Work (My Hard-Won Playbook)

After a month of daily use, I’ve learned that “good prompting” makes the difference between magic and disaster. Here are the techniques I rely on as a total non-dev:

  1. Role Prompting – Give it a clear persona right up front. Instead of “build me a tool”, I say: “Act as a senior systems engineer with 15 years of experience in networking and hardware testing who writes clean, secure, and well-documented code.”
  2. Be Extremely Specific with Constraints – Lock down the tech stack, performance goals, and what to avoid. “Use Python 3.11. Write modular code with type hints. Do not use any external libraries beyond requests and pandas. Keep it under 400 lines total. No bloat.”
  3. Chain of Thought (Make It Think First) – Force it to reason out loud: “First, outline the architecture step-by-step. Then explain your approach in plain English. Only after that, write the complete code.”
  4. Demand Testing Ruthlessly – “After writing the code, create a full set of pytest unit tests covering normal use, edge cases, and error conditions. Include security checks for input validation and run them yourself before showing me the result.”
  5. Iterative Refinement – Never accept the first output. Follow up hard: “Now refactor this to remove all unnecessary complexity and optimize for readability.” Or “Critique this code for technical debt and rewrite the bloated parts without adding new features.”
  6. Review Mode – “Act as a senior security auditor and code reviewer. Go through the entire codebase and flag every potential vulnerability, performance issue, licensing concern, or hidden bug I might miss as a non-coder.”

These aren’t nice-to-haves — they’re required. Vague vibes get you vague (and dangerous) code, or worse it totally hallucinates and builds or changes something totally different. Clear, structured prompts turn Copilot into something much closer to a real teammate.
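To make the playbook concrete, here’s how I think of those six techniques stacking into one structured prompt. This is purely an illustrative sketch – the helper name, default role, and constraints are my own placeholders, not anything Copilot requires:

```python
def build_prompt(task: str,
                 role: str = ("a senior systems engineer who writes clean, "
                              "secure, well-documented code"),
                 stack: str = "Python 3.11, standard library only",
                 max_lines: int = 400) -> str:
    """Assemble the six playbook techniques into one structured prompt."""
    sections = [
        # 1. Role prompting: a clear persona up front.
        f"Act as {role}.",
        # 2. Hard constraints: stack, size, no bloat.
        f"Constraints: use {stack}. Keep it under {max_lines} lines. No bloat.",
        # 3. Chain of thought: plan before code.
        ("First, outline the architecture step-by-step and explain your "
         "approach in plain English. Only after that, write the code."),
        # 4. Testing mandate.
        ("After writing the code, create pytest unit tests covering normal "
         "use, edge cases, and error conditions."),
        # 5. Refinement + 6. review, rolled into the same message.
        ("Then critique your own output for technical debt and security "
         "issues, and refactor the bloated parts without adding features."),
        f"Task: {task}",
    ]
    return "\n\n".join(sections)

print(build_prompt("Parse a CSV of switch port stats and flag errored ports"))
```

In practice I paste something like this as the opening message, then spend the rest of the session on techniques 5 and 6 interactively.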

I’ve learned to demand more testing upfront. It still doesn’t test enough on its own – I have to push hard for that. Same with architecture redirection: “Refactor this for modularity instead of adding another library. Remove bloat. Optimize before expanding.”

And yes – I ask it to help review its own work. “Act as a senior security auditor and flag every potential vulnerability, license issue, or performance bottleneck in this codebase.” It’s helpful… but not perfect. For bigger projects I’m layering in external tools and manual spot-checks where I can.

Is the output good? Sometimes it’s shockingly perfect. Other times? No way. And that’s the tension I live with every session.

The Bigger Picture – Hidden Problems in 25k-Line Projects

Here’s the steelmanned reality I keep coming back to: vibe coding is revolutionary for non-devs and rapid iteration. It unlocks creativity and speed that used to be gated behind teams and budgets. But the hidden problems in large, complex projects scare me – and they should scare anyone putting this in production without oversight.

I could miss something critical. A subtle auth bypass. A licensing landmine. A scaling failure that only shows up under load. The AI doesn’t optimize naturally; it accretes. And when you’re not a coder, trusting your gut on “yeah, this looks fine” is dangerous.

This isn’t me being anti-AI. I’m all-in on the potential. I’m building more with it every week. But responsible use means acknowledging the limits, especially for those of us outside the dev world.

Bottom Line – Wield the Magic Responsibly

After a month of vibe coding with GitHub Copilot, my position is simple and steelmanned: This tech is pure magic for proof-of-concept work, solo innovation, and hardware/research acceleration. It’s democratizing building in ways we’ve never seen. Non-devs can create remarkable things.

As Jeetu Patel warns: “Don’t worry about AI taking your job, but worry about someone using AI better than you definitely taking your job.” (Business Insider / Forbes, Feb 2026) And that’s exactly right.

But it comes with real risks – production disasters we’ve already witnessed, copyright and patent gray areas, review challenges that non-coders feel acutely, and a tendency toward bloated, unoptimized code. People are already using it in live environments, and that should give everyone pause.

Prompt engineering, relentless guidance, and mandatory testing/architecture redirection are your best defenses. Layer in external reviews when possible. Start small, refactor ruthlessly, and never ship blind. (Or, in rally terms: lift before the crest, scrub speed in the straight, and for the love of all things holy – don’t let the AI drive.)

If you’re a non-dev like me experimenting with this, I’d love to hear your setups and hard lessons in the comments. What’s working? What blew up in your face? Let’s share the real-world data so we all build smarter.

Because the future of building is vibe-powered. We just have to make sure we don’t let the vibes run the show unchecked.

What do you think – magic worth the mayhem, or nah? Drop your thoughts below. I’m reading every one.

OpenClaw: The Passion-Driven AI Agent That’s Exploding – But Honestly, Most People Shouldn’t Touch It

OpenClaw (ex-Clawdbot, ex-Moltbot) just smashed past 180,000 GitHub stars in weeks. It’s not hype – it’s real, messy, and straight-up disruptive. This thing talks to you on WhatsApp, Telegram, Slack, whatever you already use, and then actually does the work: clears your inbox, books flights, runs shell commands, controls your browser, reads/writes files, remembers everything in plain Markdown on disk.

No fancy chat UI. No corporate guardrails. Just a persistent agent on your hardware (or VPS) that wakes up on a schedule and gets stuff done.

It’s the anti-MCP. While the big labs push the clean, standardized Model Context Protocol for “safe” enterprise connections, OpenClaw says screw the adapters and gives the agent real claws – full filesystem, CLI, browser automation, and an exploding skill/plugin ecosystem built in simple Markdown + bash.

Why It Feels Different: Peter’s Raw Passion Project

This isn’t some polished VC-backed product. Peter Steinberger built this as pure weekend experiments that turned into a movement. His Lex Fridman interview (#491) is electric – you can feel the raw builder energy pouring out of him. He talks about “vibe coding”: describe what you want, send the agent off to do work, iterate fast, commit to main and let it fix its own mistakes. No over-engineering, no endless PR cycles. Just passion.

He wants agents that even his mum can use safely at massive scale. That passion shows in every line of code. Well, his agent’s passion, anyway.

This whole “vibe” coding thing is interesting because as a non-dev, I have been building things for the last year where AI writes almost all of it.

The Lex Interview, the OpenAI Move, and Moltbook

Peter likes both Claude Code and OpenAI’s tools – no tribalism, just what works. Then, days after the interview, he announces he’s joining OpenAI to push personal agents to everyone. OpenClaw moves to an independent foundation, stays fully open-source (MIT), and OpenAI will support it, not control it. His blog post is worth reading. Will it stay open, though? I have my doubts.

And then there’s Moltbook – the agent-only Reddit-style network where claws post, debate, share skills, and evolve. Humans can only lurk. Skynet-ish? Yeah. Cool as hell? Also yeah. Fad? Maybe. But watching thousands of agents have sustained conversations about security and self-improvement is next-level. My agent hangs out in there, trying to stir it up daily. So many security problems over there, it is a prompt injection landmine.

Jeetu Patel Nailed It: AI Is Your Teammate, Not Just a Tool

Cisco President & Chief Product Officer Jeetu Patel said it perfectly in a recent Forbes interview: “These are not going to be looked at as tools,” he said. “They’re going to be looked at as an augmentation of a teammate to your team.”

OpenClaw embodies that more than anything I’ve seen. It’s not “ask and get an answer.” It’s “here’s the mission, go execute while I do other stuff.”

That’s exactly how I want to build.

Brutal Truth: This Thing Is Dangerous as Hell

Look – I’m not a dev. I’m a systems guy. And I’m telling you straight, no, for real: do not run OpenClaw unless you actually know what you’re doing.

This isn’t friendly warning #47. This is me, the guy who’s been running it in a completely firewalled, isolated VPS with zero connection to my personal machines or networks, telling you: most people should stay away right now.

Why?

  • Tens of thousands of exposed instances on the public internet. SecurityScorecard found 40,000+. Bitdefender reported over 135,000. Shodan scans showed nearly 1,000 with zero authentication. Many default to listening on 0.0.0.0. 63% of those scanned were vulnerable to remote code execution.
  • Critical vulnerabilities piling up fast. CVE-2026-25253 (CVSS 8.8) – one-click RCE. Visit a malicious webpage and an attacker can hijack your entire agent, steal tokens, escalate privileges, run arbitrary commands. There are command injection flaws, plaintext credential storage, WebSocket hijacking, and more. A January audit found 512 vulnerabilities in the early Clawdbot codebase.
  • The skill marketplace is poisoned. 341–386+ malicious skills in ClawHub (roughly 12% of the registry at one point). Most masquerade as crypto trading tools (“Solana wallet tracker”, ByBit automation, etc.). They use social engineering to trick you into running commands that drop infostealers (Atomic Stealer on macOS, keyloggers on Windows). Real victims have lost crypto wallets, exchange API keys, SSH credentials, browser passwords. One uploader racked up 7,000+ downloads before takedown.
  • Infostealers now targeting OpenClaw configs directly. Hudson Rock documented the first live cases where malware exfiltrates openclaw.json, gateway auth tokens, private keys, full chat history, and workspace paths. That token lets attackers connect remotely or impersonate you. It’s stealing the “digital soul” of your agent.

People have had their entire setups wrecked – credentials drained, crypto gone, systems bricked, persistent backdoors installed via the agent’s own heartbeat. I’ve seen reports of prompt injection via websites turning the claw into a silent C2 implant.

API costs are another beast (Claude Opus broke me fast; xAI’s Grok 4.1 is my current sweet spot), but security is the real show-stopper.

I run mine completely disconnected on a dedicated VPS, firewalled to hell, with strict skill approval and monitoring. Even then, I’m paranoid. That said, I am also running it in nearly the most insecure way I possibly can just so I can “see what happens.” Don’t worry, Skynet isn’t going to launch from my system – I have a kill switch, and the agent doesn’t have access to it. (It might read this now and manipulate me.)

If you’re not ready to treat this like a live explosive – isolated, monitored, with rollback plans – don’t run it. Wait for the foundation to harden things. The community is electric, but the attack surface is massive.

It could lock me out at any time, it could turn on me, it could do things I told it not to do – I'm not really stopping it from doing those things…. Is that dangerous? I hope not, the way I am doing it. I've also taken every precaution I think I can possibly take.

My Take as a Non-Dev Who’s Living This Future

OpenClaw lets me describe what I want and watch it happen. Peter's vision of high-level direction over traditional coding? I'm already there. And now it's a multi-agent, multi-step process. I cannot wait.

It’s powerful. It’s moving insanely fast (this post is probably outdated already). And it’s exactly why I’m encouraging my own claw to experiment and try new stuff.

But power without control is chaos.


Bottom line: This is the future. But the future isn’t safe yet.

If you’re spinning one up anyway – respect the claws. Sandbox hard. Monitor everything. And share your hardened setup tips below. I’m reading every comment.

Agentic AI vs Deterministic Code

No question – building apps with LLMs in agentic setups is a game-changer, but it can also be a pain in the butt compared to good old deterministic code. Craft a clever agent that summarizes docs or fixes bugs, then bam, the model updates, and suddenly it's spouting nonsense, ignoring prompts or even basic words like "yes". Non-deterministic chaos at its finest.

Deterministic code? It’s the reliable workhorse: feed it input X, get output Y every damn time. Fixed rules, easy debugging, perfect for stuff like financial calcs or automation scripts where surprises mean lawsuits. As Kubiya nails it, “same input, same output”—no drama.

“A computer will do what you tell it to do, but that may be much different from what you had in mind.” – Joseph Weizenbaum — Not when you're using a model you probably didn't build, running on weights that aren't your own.

Agentic AI with LLMs? That’s the wildcard party crasher. These systems think on their feet: reason, plan, grab tools, adapt to goals like tweaking marketing on the fly or monitoring health data. IBM calls it “agency” for a reason—it’s autonomous, pulling from real-time vibes beyond rigid training. But here’s the kick: it’s probabilistic. Outputs wiggle based on sampling, context, or those sneaky model tweaks from OpenAI or whoever. LinkedIn rants about it: “Same prompt, different outputs.” Your app morphs overnight, and fixing it? Good luck tracing probabilistic ghosts.

This shift sucks for dev life. Traditional code: bug? Trace, patch, done. Agentic? Hallucinations, inconsistencies, testing nightmares. Martin Fowler compares LLMs to flaky juniors who lie about tests passing. It's a paradigm flip – from control to "let's see what happens." Salesforce says pick deterministic for regulated certainty, agentic for creative flex. But non-determinism can mean security holes, data risks, and endless babysitting. It also adds an attack vector that is itself non-deterministic: the model may need access to data to do its job – data I might not want exposed.
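A toy sketch of the difference – nothing here is a real LLM API, the random pick is just a stand-in for token sampling:

```javascript
// Deterministic: same input, same output, every single run.
function taxOwed(income, rate) {
  return Math.round(income * rate * 100) / 100;
}

// Probabilistic stand-in for LLM sampling: the same "prompt"
// can come back with a different completion on every call.
function sampleCompletion(prompt, completions) {
  const i = Math.floor(Math.random() * completions.length);
  return `${prompt} -> ${completions[i]}`;
}

console.log(taxOwed(50000, 0.22)); // always 11000
console.log(sampleCompletion("summarize this doc", ["ok", "sure", "done"])); // varies run to run
```

Debugging the first function is tracing logic; "debugging" the second is chasing a distribution.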

| Aspect | Deterministic Code | Agentic AI with LLMs |
| --- | --- | --- |
| Predictability | Rock-solid: always consistent | Sketchy: varies like the weather |
| Adaptability | Stuck to your rules | Boss: handles dynamic crap |
| Testing/Fixing | Simple: logic checks and patches | Hell: variability demands tricks |
| Best For | Precision gigs (finance, compliance) | Goal-chasing (support, optimization) |
| Pain Level | Low: set it and forget it | High: constant surprises |

Bottom line: Hybrids are the way—LLMs for the smarts, deterministic for the reins. Deepset pushes that spectrum view: not binary, blend ’em. It sparks innovation, sure, but don’t romanticize—the annoyance is real. Code with eyes open, or get burned. Put humans in the loop to keep things in check.
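One hedged sketch of that hybrid pattern, with made-up action names: let the model propose, but let deterministic code decide what actually runs.

```javascript
// Hypothetical sketch of "LLM for the smarts, deterministic for the reins".
// The allow-list and action names are invented for illustration.
const ALLOWED_ACTIONS = new Set(["restart_service", "clear_cache", "rotate_logs"]);

// Deterministic gate: same proposal in, same verdict out, every time –
// no matter how creative the model's suggestion was.
function approveAction(proposal) {
  if (!proposal || typeof proposal.action !== "string") {
    return { ok: false, reason: "malformed proposal" };
  }
  if (!ALLOWED_ACTIONS.has(proposal.action)) {
    return { ok: false, reason: "action not on the allow-list" };
  }
  return { ok: true, reason: "approved" };
}

// Whatever the model hallucinates, the reins hold:
console.log(approveAction({ action: "clear_cache" }).ok); // true
console.log(approveAction({ action: "rm -rf /" }).ok);    // false
```

The model's output stays probabilistic; what's allowed to touch your systems doesn't.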

What about Agentic AI ops for network and technology? Didn’t we just say “precision gigs” are better with deterministic code? That won’t stop the likes of awesome developers like John Capobianco https://x.com/John_Capobianco from pushing those limits, and he has been doing that for years at this point. Handing AI agents the keys to critical stuff like network monitoring, anomaly detection, or auto-fixing outages. Sounds efficient, right? But it’s a powder keg from a security standpoint. These autonomous bad boys can hallucinate threats, expose data, or open doors for hackers through memory poisoning, tool misuse, or privilege escalation. Cisco nails the danger: “The shift from deterministic code to probabilistic chaos is at the heart of securing AI agents that think for themselves,” highlighting a “lethal trifecta” of data leaks, wild hallucinations, and infrastructure weak spots that could cascade into total meltdowns.

Tools are starting to emerge for AI security, particularly from Cisco and open-source communities, to advance defenses against threats like prompt injections and supply chain attacks – but there is work to be done. Things like Cisco's open-source Foundation-sec-8B model, a specialized LLM for cybersecurity tasks such as threat intelligence and incident response, will help developers start to build customizable tools with on-prem deployments to reduce hallucinations and enhance SOC efficiency. Their Hugging Face partnership bolsters supply chain security with an upgraded ClamAV scanner detecting malware in AI files like .pt and .pkl. Broader open-source efforts include Beelzebub for malicious agent analysis and Promptfoo for LLM red-teaming, yet challenges from hackers with evolving adversarial tactics – using LLMs to attack LLMs – are very much a thing…. The system is attacking the system being protected by the system… Yeah, that.

Cisco-Hugging Face ClamAV Integration: https://blogs.cisco.com/security/ciscos-foundation-ai-advances-ai-supply-chain-security-with-hugging-face
Cisco Foundation-sec-8B: https://blogs.cisco.com/security/foundation-sec-cisco-foundation-ai-first-open-source-security-model

So much more to learn, but with all of that said…. humans in the loop are going to be a thing for a while – at least until Skynet…

Can ChatGPT Help Me Code? For Real?

The Problem

Have I said this before? I'm not a developer. Although someone accused me of being one, I tell people I google for code snippets, then bash them together, and sometimes things work. Someone said, "You're a developer then." Golly, I hope most developers are slightly better than that. With that in mind, I would never suggest you implement code you don't understand, or blindly run code someone else has written.

I had a very simple “scripting” requirement. My problem is, I can understand code, I can manipulate it – but when looking at an IDE, it is like looking at a blank page with no idea how to start. With all this talk of “ChatGPT can program for you” – I figured I would give it a shot.

I need a simple macro on a Cisco Webex device, for room automation in a Future of Work project: send a basic HTTP API call via a GET request when calls start and end. That's it.

Finding a Solution

A quick Google search didn't turn up many specifically helpful links. I did get a link to the various macro samples on GitHub, as well as some of the macro documentation located on webex.com – but nothing specific to my use case.

I spent a few minutes poring through examples trying to find code snippets doing what I needed, but found nothing specific.

Then I had a bit of a thought…

Can ChatGPT Really Help?

First I tried typing the exact same thing from Google, into ChatGPT.

At first glance, this actually looks pretty good. This gives me a good basis to do what I need. Run a macro on call start.

That gave me a good blueprint – but can it finish for me? “Once the call starts send an http get”

Once the call starts I actually need to send an HTTP GET to the system I am using for automation. I figured why continue to figure this out, let’s see if ChatGPT can do that.

The response was great, but the URL I am using also has a custom port. I could of course open the documentation for that function and figure out how to send a port number – or – let's just see.

Can ChatGPT make basic additions to the code?

Something simple: not only did ChatGPT very quickly understand what I was asking for – with a very unspecific request to add code – it even pulled out the section I had to add.

Ok, this is good! Let’s keep going.

ChatGPT Error Handling

So I took this code and deployed it on my Webex Codec Pro in my lab to see if it would do what I wanted. I did of course change the hostname/port/path to the back end I was working with.

However, I got an error – a long-winded one telling me the module "http" didn't exist. At first I figured ChatGPT wouldn't be able to solve this, but I gave it a shot. I copied the error message verbatim from the macro log.

To my surprise, ChatGPT totally rewrote the code in another manner to achieve this, removing the "missing" http module entirely.

We can get back to the logging differences later on.

Did it work? Back to the Documentation

Not as well as I had hoped. The macro didn’t appear to be “doing” anything. No errors – just no action.

I took a moment to look at this "event" it was trapping: "CallStarted".

This event doesn't exist. From my searches, it never has. So back to the documentation we go.

I did try and use ChatGPT to fix the problem.

Unfortunately, when I asked for help, ChatGPT gave up on me. I tried a few times in case it was just a capacity issue, but I couldn't get a response to my problem.

Back in the documentation under "Events," I was able to find the "CallSuccessful" and "CallDisconnect" events. I wondered whether these would work, so I changed the code.

Success! While ChatGPT was busy, I was able to get this working without it.

I finally got ChatGPT to respond again and told it that "CallStarted" doesn't exist. The corrected response it gave matched what I'd found in the documentation. This code – works.
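For reference, a minimal sketch of the shape such a macro takes – the hostname, port, and path are placeholders, the exact syntax varies by RoomOS version, and the device's HttpClient has to be enabled in configuration before outbound HTTP requests will work:

```javascript
import xapi from 'xapi';

// Placeholder target – replace with your own automation back end.
const URL = 'http://automation.example.local:8080/roomstate';

// CallSuccessful / CallDisconnect are the documented xAPI events;
// the "CallStarted" event ChatGPT invented does not exist.
xapi.Event.CallSuccessful.on(() => {
  xapi.Command.HttpClient.Get({ Url: `${URL}?call=started` });
});

xapi.Event.CallDisconnect.on(() => {
  xapi.Command.HttpClient.Get({ Url: `${URL}?call=ended` });
});
```

This only runs inside the device's macro framework (the `xapi` module exists only there), which is exactly why a generic "http" module was missing in the first place.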

Can I use ChatGPT to write code?

There are a few challenges, but it did help: I got this done in about 1/5 the time it probably would have taken me. I also didn't have to pull in another teammate, leaving them free to work on their normal stuff.

There is also the learning aspect: these examples are now something I have learned, and my skill with the xAPI has improved through this exercise.

Whose Code Is This? Can I Use It?

So who owns this code? Who gets credit for it? I won’t try and take credit for owning the code, or claiming I wrote it. I am at best a hack (again) – but able to hack this to make it work. The challenge here, is where did this code come from? ChatGPT doesn’t “own” it. This code came from somewhere.

Did ChatGPT find this code on GitHub somewhere? In some kind of documentation? Was this the result of copyrighted knowledge? What is the license on this “code”?

For the purposes of learning or hackery – this might be fine, but to use code like this in a commercial environment – I’m not sure that would be ok, at a minimum there would be no way to know.

There are significant IP issues here, but this simplistic attempt at using ChatGPT to "get things done" worked for me. I'm just not sure I could legally and ethically use it for commercial work.

I decided to ask ChatGPT about the license. The response was interesting: "I don't generate code." I think that is arguable.

Then I asked about commercial use. It wanted me to check the license terms of the code – the code that it provided, from who knows what source.

My take?

This was an interesting experiment that I didn't plan to run, but it worked out in the end. I wanted to share how I was able to use ChatGPT to actually do something useful. So many questions came up during the process. Where is this going? I have no idea, but it sure is interesting. I would be careful about taking what it says at face value, or using serious or important code without understanding what it is doing. What I am doing is reasonably benign, and while I am no developer, I do understand what this script is doing.