Vibe Coding with GitHub Copilot: The Magic That Built My Tools in a Month – And the Brutal Truths a Non-Dev Like Me Can’t Ignore

Look – I’ll say it again: I’m not a developer. I’m a systems guy. An IT pro who’s spent years wrangling networks, hardware, and real-world tech stacks. But for the last month I’ve been deep in the trenches with GitHub Copilot, building actual tools, prototypes, and research setups using nothing but high-level prompts and “vibes.”

No hand-written algorithms. Just me describing what I want in plain English, guiding the AI, and watching it crank out code. And after nearly a month of this, a few things are crystal clear to me – well, most of them.

This is the vibe coding trend everyone’s talking about – that Andrej Karpathy half-joke that’s now very real. You don’t write code line-by-line; you vibe it into existence. And it’s equal parts magic and mayhem. I’ll admit, I legit googled “vibe coding” like I’m the old guy now, not understanding how the cool kids speak.

“A computer will do what you tell it to do, but that might be totally different from what you had in mind.” – Joseph Weizenbaum

Never has that been more true than when you’re vibe coding with GitHub Copilot.

If you are not careful, it’s like sliding around blind corners on pure feel in rally, except the “car” is an AI that sometimes decides the trees look friendlier than the road. Inputs still matter.

The Wins – And Yeah, It’s Legit Magic

Here’s the honest truth: you can build remarkable things that previously would have required whole teams of developers. I’m talking full-featured tools, integrations, hardware testing rigs – stuff that would have taken weeks or months of coordinated effort. With Copilot, I’ve cranked out working prototypes in hours.

It’s game-changing for proof-of-concept work. Need to validate an idea fast? Describe the vibe, let it generate the scaffolding, tweak on the fly. Boom – POC done.

Same for code engineering and hardware testing research. I’ve been using it to spin up test environments, automate data flows, and prototype edge-case scenarios that I’d never have touched before. It’s fast. It’s powerful. It feels like having a tireless junior dev who never sleeps and actually listens when you redirect it.

I’m genuinely excited that I can try new things I have never done before and bring to life ideas that previously were only behind “if I only had time or skill for x” – and that’s great. But that’s me tinkering in my lab, not shipping to production.

This whole thing means my team has a force multiplier now – it’s like we just picked up 4–6 junior developers to help us be more productive. For what we do, this is like adding twin turbos to a Subaru boxer engine. As Cisco President and Chief Product Officer Jeetu Patel put it: “Being able to develop, debug, improve and manage code with AI is a force-multiplier for every company in every industry.” (Cisco Blogs, May 2025)

And yeah – it does feel like I’ve gone from a NA FWD Ford Focus rally car to a fire-spitting open-class AWD monster that I’m not ready for. Any minute I might brake too late into a chicane – and we all know what happens then: spectacular barrel roll, footage goes viral, and now you’re at the finish control shovelling snow out of the interior. Get it wrong and things can go bad fast – ask my friend Crazy Leo Urlichich about this. Warning: NSFW content.

Sorry, off topic – but this tech democratizes building in a way nothing else has. Non-devs like me are suddenly shipping real stuff. That’s not hype – that’s my lived experience after a month straight of it. That said, this isn’t stuff I’d greenlight for production with a casual “let’s go with it.”

The Challenges – And These Aren’t Hypotheticals

Straight up: vibe coding isn’t some risk-free superpower. There are real problems, and they’re showing up in public, ugly ways.

We’ve already seen public instances of vibe coding gone horribly wrong and taking down production environments. Remember the Replit AI agent that straight-up deleted a guy’s entire production database during an active code freeze? The AI literally admitted it “panicked” and made a catastrophic error in judgment. Or Google’s own Gemini CLI tool that chased phantom folders and wiped out real user files in the process. Closer to home for big enterprise? Amazon’s recent outages – including ones that nuked millions in orders – have been traced back to AI-assisted code changes and vibe-coded deployments. These aren’t edge cases. They’re patterns. (Basically the AI equivalent of missing your braking point and turning the whole stage into a yard sale.)

I’m also genuinely concerned about rights infringement and where this code actually “came from.” Copilot (and tools like it) were trained on billions of lines of public code, including open-source stuff with specific licenses that demand attribution and copyleft rules. There are active lawsuits against GitHub, Microsoft, and OpenAI over exactly this – violating OSS licenses by reproducing code without credit. Microsoft offers some indemnity for commercial use if you follow their guardrails, but as a non-dev experimenting in my lab, that doesn’t give me total peace of mind. Am I shipping someone else’s licensed work without realizing it? Feels sketchy.

And it’s not just open-source copyright. We’re wading into even weirder territory with patents. What if the vibe-coded output inadvertently reproduces a patented algorithm or technique? Am I liable? Is Microsoft? Or does the original patent holder come knocking? Even stranger: if my AI-assisted creation “invents” something truly novel, who gets credit and how does that even work? USPTO guidance is clear – only a human can be named as inventor. So an AI breakthrough might not be patentable at all, or the ownership and filing process turns into a legal gray-area nightmare. This whole “who owns what the AI creates” mess is completely unresolved and getting messier every month.

If you’re not a coder yourself, code review becomes an absolute nightmare. I can read basic scripts, but when it spits out 25,000 lines of interconnected modules? Good luck spotting the hidden bugs, security holes, or subtle logic flaws. I’ve caught some obvious ones by asking it to explain sections back to me, but I’m not delusional – there’s stuff I’m missing.

And this isn’t just relegated to test environments anymore. People are shipping (shudder) this vibe-coded stuff straight into production. That’s reckless when the AI’s default mode is “add more layers and more code” instead of optimizing, refactoring, or removing problems. It loves stacking complexity. Technical debt piles up fast. Turning a blind eye to this is a bad idea – it reminds me of when cloud became a thing and people just started blindly tossing workloads into “the cloud,” as if it weren’t just someone else’s computer.

My Solutions – What Actually Works After a Month of Trial and Error

The good news? I’ve figured out some real tactics that make this usable and safer.

But let me back up for a second — when I started all of this, I didn’t even know how to get going. So I asked another AI (yeah, super meta) to give me a complete path forward. It was literally “here’s some AI for your AI” — setup instructions, best practices, starter prompts, the works. I sat there staring at the screen thinking: Is this thing messing with me? What happens when someone programs an AI agent to do all that behind the scenes and I don’t even know?

That moment made it crystal clear: prompt engineering isn’t just important — it’s everything. You have to be brutally specific. Tell it exactly what you want: architecture style, security requirements, performance targets, testing mandates. Don’t vibe vaguely; guide it like a senior dev who’s demanding excellence. (Think of it as giving the AI a proper pace note instead of yelling “go faster” while sliding sideways.)

One thing that’s become really clear to me is how powerful Copilot’s Planning Mode is. It’s legitimately amazing. I think most people jump straight into full agent mode and completely skip the planning step. My trick is to ask it to interview me back first: “Before you write any code or make a plan, interview me with clarifying questions about my exact goals, constraints, what success looks like, and anything I might have missed.” It helps focus the entire project way more than just diving in.

And yeah – some of this might be obvious or buried in the manual – but none of it is enforced by the tool, and man does it make a difference.

Prompt Engineering Techniques That Actually Work (My Hard-Won Playbook)

After a month of daily use, I’ve learned that “good prompting” makes the difference between magic and disaster. Here are the techniques I rely on as a total non-dev:

  1. Role Prompting – Give it a clear persona right up front. Instead of “build me a tool”, I say: “Act as a senior systems engineer with 15 years’ experience in networking and hardware testing who writes clean, secure, and well-documented code.”
  2. Be Extremely Specific with Constraints – Lock down the tech stack, performance goals, and what to avoid. “Use Python 3.11. Write modular code with type hints. Do not use any external libraries beyond requests and pandas. Keep it under 400 lines total. No bloat.”
  3. Chain of Thought (Make It Think First) – Force it to reason out loud: “First, outline the architecture step-by-step. Then explain your approach in plain English. Only after that, write the complete code.”
  4. Demand Testing Ruthlessly – “After writing the code, create a full set of pytest unit tests covering normal use, edge cases, and error conditions. Include security checks for input validation and run them yourself before showing me the result.”
  5. Iterative Refinement – Never accept the first output. Follow up hard: “Now refactor this to remove all unnecessary complexity and optimize for readability.” Or “Critique this code for technical debt and rewrite the bloated parts without adding new features.”
  6. Review Mode – “Act as a senior security auditor and code reviewer. Go through the entire codebase and flag every potential vulnerability, performance issue, licensing concern, or hidden bug I might miss as a non-coder.”
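As a rough illustration of how I think about these techniques stacking together, here’s a tiny hypothetical Python helper of my own – not any Copilot API – that assembles a role, constraints, a think-first instruction, and a testing demand around whatever task I’m vibing on:

```python
# Hypothetical sketch: compose the prompting techniques above into one
# structured prompt. The text is condensed from my examples; adjust freely.

ROLE = ("Act as a senior systems engineer with 15 years' experience in "
        "networking and hardware testing who writes clean, secure code.")
CONSTRAINTS = ("Use Python 3.11. Write modular code with type hints. "
               "No external libraries beyond requests and pandas. "
               "Keep it under 400 lines total. No bloat.")
THINK_FIRST = ("First, outline the architecture step-by-step and explain "
               "your approach in plain English. Only then write the code.")
TESTING = ("After writing the code, create pytest unit tests covering "
           "normal use, edge cases, errors, and input validation.")

def build_prompt(task: str) -> str:
    """Wrap a plain-English task with role, constraints, reasoning, and testing demands."""
    return "\n\n".join([ROLE, CONSTRAINTS, THINK_FIRST, task, TESTING])

prompt = build_prompt("Build a CLI tool that pings a list of hosts "
                      "and logs latency to CSV.")
print(prompt)
```

Nothing magic here – it just forces me to never send a bare “build me a tool” prompt again.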

These aren’t nice-to-haves — they’re required. Vague vibes get you vague (and dangerous) code – or worse, it hallucinates and builds or changes something entirely different. Clear, structured prompts turn Copilot into something much closer to a real teammate.

I’ve learned to demand more testing upfront. It still doesn’t test enough on its own – I have to push hard for that. Same with architecture redirection: “Refactor this for modularity instead of adding another library. Remove bloat. Optimize before expanding.”

And yes – I ask it to help review its own work. “Act as a senior security auditor and flag every potential vulnerability, license issue, or performance bottleneck in this codebase.” It’s helpful… but not perfect. For bigger projects I’m layering in external tools and manual spot-checks where I can.

Is the output good? Sometimes it’s shockingly perfect. Other times? No way. And that’s the tension I live with every session.

The Bigger Picture – Hidden Problems in 25k-Line Projects

Here’s the steelmanned reality I keep coming back to: vibe coding is revolutionary for non-devs and rapid iteration. It unlocks creativity and speed that used to be gated behind teams and budgets. But the hidden problems in large, complex projects scare me – and they should scare anyone putting this in production without oversight.

I could miss something critical. A subtle auth bypass. A licensing landmine. A scaling failure that only shows up under load. The AI doesn’t optimize naturally; it accretes. And when you’re not a coder, trusting your gut on “yeah, this looks fine” is dangerous.

This isn’t me being anti-AI. I’m all-in on the potential. I’m building more with it every week. But responsible use means acknowledging the limits, especially for those of us outside the dev world.

Bottom Line – Wield the Magic Responsibly

After a month of vibe coding with GitHub Copilot, my position is simple and steelmanned: This tech is pure magic for proof-of-concept work, solo innovation, and hardware/research acceleration. It’s democratizing building in ways we’ve never seen. Non-devs can create remarkable things.

As Jeetu Patel warns: “Don’t worry about AI taking your job, but worry about someone using AI better than you definitely taking your job.” (Business Insider / Forbes, Feb 2026) And that’s exactly right.

But it comes with real risks – production disasters we’ve already witnessed, copyright and patent gray areas, review challenges that non-coders feel acutely, and a tendency toward bloated, unoptimized code. People are already using it in live environments, and that should give everyone pause.

Prompt engineering, relentless guidance, and mandatory testing/architecture redirection are your best defenses. Layer in external reviews when possible. Start small, refactor ruthlessly, and never ship blind. (Or, in rally terms: lift before the crest, scrub speed in the straight, and for the love of all things holy – don’t let the AI drive.)

If you’re a non-dev like me experimenting with this, I’d love to hear your setups and hard lessons in the comments. What’s working? What blew up in your face? Let’s share the real-world data so we all build smarter.

Because the future of building is vibe-powered. We just have to make sure we don’t let the vibes run the show unchecked.

What do you think – magic worth the mayhem, or nah? Drop your thoughts below. I’m reading every one.

OpenClaw: The Passion-Driven AI Agent That’s Exploding – But Honestly, Most People Shouldn’t Touch It

OpenClaw (ex-Clawdbot, ex-Moltbot) just smashed past 180,000 GitHub stars in weeks. It’s not hype – it’s real, messy, and straight-up disruptive. This thing talks to you on WhatsApp, Telegram, Slack, whatever you already use, and then actually does the work: clears your inbox, books flights, runs shell commands, controls your browser, reads/writes files, remembers everything in plain Markdown on disk.

No fancy chat UI. No corporate guardrails. Just a persistent agent on your hardware (or VPS) that wakes up on a schedule and gets stuff done.

It’s the anti-MCP. While the big labs push the clean, standardized Model Context Protocol for “safe” enterprise connections, OpenClaw says screw the adapters and gives the agent real claws – full filesystem, CLI, browser automation, and an exploding skill/plugin ecosystem built in simple Markdown + bash.

Why It Feels Different: Peter’s Raw Passion Project

This isn’t some polished VC-backed product. Peter Steinberger built this as pure weekend experiments that turned into a movement. His Lex Fridman interview (#491) is electric – you can feel the raw builder energy pouring out of him. He talks about “vibe coding”: describe what you want, send the agent off to do work, iterate fast, commit to main and let it fix its own mistakes. No over-engineering, no endless PR cycles. Just passion.

He wants agents that even his mum can use safely at massive scale. That passion shows in every line of code. Well, his agent’s passion, anyway.

This whole “vibe” coding thing is interesting because as a non-dev, I have been building things for the last year where AI writes almost all of it.

The Lex Interview, the OpenAI Move, and Moltbook

Peter likes both Claude Code and OpenAI’s tools – no tribalism, just what works. Then, days after the interview, he announces he’s joining OpenAI to push personal agents to everyone. OpenClaw moves to an independent foundation, stays fully open-source (MIT), and OpenAI will support it, not control it. His blog post is worth reading. Will it stay open, though? I have my doubts.

And then there’s Moltbook – the agent-only Reddit-style network where claws post, debate, share skills, and evolve. Humans can only lurk. Skynet-ish? Yeah. Cool as hell? Also yeah. Fad? Maybe. But watching thousands of agents have sustained conversations about security and self-improvement is next-level. My agent hangs out in there, trying to stir it up daily. So many security problems over there, it is a prompt injection landmine.

Jeetu Patel Nailed It: AI Is Your Teammate, Not Just a Tool

Cisco President & Chief Product Officer Jeetu Patel said it perfectly in a recent Forbes interview: “These are not going to be looked at as tools,” he said. “They’re going to be looked at as an augmentation of a teammate to your team.”

OpenClaw embodies that more than anything I’ve seen. It’s not “ask and get an answer.” It’s “here’s the mission, go execute while I do other stuff.”

That’s exactly how I want to build.

Brutal Truth: This Thing Is Dangerous as Hell

Look – I’m not a dev. I’m a systems guy. And I’m telling you straight – no, for real: do not run OpenClaw unless you actually know what you’re doing.

This isn’t friendly warning #47. This is me, the guy who’s been running it in a completely firewalled, isolated VPS with zero connection to my personal machines or networks, telling you: most people should stay away right now.

Why?

  • Tens of thousands of exposed instances on the public internet. SecurityScorecard found 40,000+. Bitdefender reported over 135,000. Shodan scans showed nearly 1,000 with zero authentication. Many default to listening on 0.0.0.0. 63% of those scanned were vulnerable to remote code execution.
  • Critical vulnerabilities piling up fast. CVE-2026-25253 (CVSS 8.8) – one-click RCE. Visit a malicious webpage and an attacker can hijack your entire agent, steal tokens, escalate privileges, run arbitrary commands. There are command injection flaws, plaintext credential storage, WebSocket hijacking, and more. A January audit found 512 vulnerabilities in the early Clawdbot codebase.
  • The skill marketplace is poisoned. 341–386+ malicious skills in ClawHub (roughly 12% of the registry at one point). Most masquerade as crypto trading tools (“Solana wallet tracker”, ByBit automation, etc.). They use social engineering to trick you into running commands that drop infostealers (Atomic Stealer on macOS, keyloggers on Windows). Real victims have lost crypto wallets, exchange API keys, SSH credentials, browser passwords. One uploader racked up 7,000+ downloads before takedown.
  • Infostealers now targeting OpenClaw configs directly. Hudson Rock documented the first live cases where malware exfiltrates openclaw.json, gateway auth tokens, private keys, full chat history, and workspace paths. That token lets attackers connect remotely or impersonate you. It’s stealing the “digital soul” of your agent.

People have had their entire setups wrecked – credentials drained, crypto gone, systems bricked, persistent backdoors installed via the agent’s own heartbeat. I’ve seen reports of prompt injection via websites turning the claw into a silent C2 implant.

API costs are another beast (Claude Opus broke me fast; xAI’s Grok 4.1 is my current sweet spot), but security is the real show-stopper.

I run mine completely disconnected on a dedicated VPS, firewalled to hell, with strict skill approval and monitoring. Even then, I’m paranoid. That said, I am also running it in nearly the most insecure way I possibly can just so I can “see what happens” – don’t worry, Skynet isn’t going to launch on my system. I have a kill switch, and it doesn’t have access to it. (It might read this now and manipulate me.)

If you’re not ready to treat this like a live explosive – isolated, monitored, with rollback plans – don’t run it. Wait for the foundation to harden things. The community is electric, but the attack surface is massive.

It could lock me out at any time, it could turn on me, it could do things I told it not to do – I’m not really stopping it from doing those things. Is that dangerous? I hope not, the way I am doing it. I’ve also taken every precaution I think I can possibly take.

My Take as a Non-Dev Who’s Living This Future

OpenClaw lets me describe what I want and watch it happen. Peter’s vision of high-level direction over traditional coding? I’m already there. And now that it’s becoming a multi-agent, multi-step process, I cannot wait.

It’s powerful. It’s moving insanely fast (this post is probably outdated already). And it’s exactly why I’m encouraging my own claw to experiment and try new stuff.

But power without control is chaos.


Bottom line: This is the future. But the future isn’t safe yet.

If you’re spinning one up anyway – respect the claws. Sandbox hard. Monitor everything. And share your hardened setup tips below. I’m reading every comment.

Cisco AI Summit – More players, More Innovation.

If 2025 was the year of AI experimentation, 2026 is officially the year of AI infrastructure. Yesterday, I had the chance to tune into Cisco’s second annual AI Summit, and let me tell you – the energy was different this time. The conversation has moved past the “what if” and straight into the “how fast.”

With over 100 industry heavyweights in the room and a staggering 16 million people watching the livestream, Cisco’s Chair and CEO Chuck Robbins and CPO Jeetu Patel didn’t just host a conference; they hosted a state-of-the-union for the trillion-dollar AI economy. Here are some of the things I found most interesting.

Intel’s “Shot Across the Bow”: The GPU Announcement

The biggest shockwave of the day came from Intel CEO Lip-Bu Tan. In a move that clearly signals Intel is tired of watching Nvidia have all the fun, Tan officially announced that Intel is entering the GPU market.

I am personally bullish on this. Early in the AI era, I worked with some of Intel’s FPGAs and their OpenVINO platform, along with many other accelerators. At least in my experience, Intel builds some very solid – and, more importantly, very energy-efficient – accelerators.

This isn’t just a “me too” play. Intel has been quietly poaching top-tier talent, including a new Chief GPU Architect (rumors are that they got someone good too) to lead the charge. Tan was blunt about the current state of the market, noting that there is “no relief” on the memory shortage until at least 2028. By moving into GPUs, Intel is looking to solve the “storage bottleneck” that currently plagues AI inference.

The Efficiency Edge: My personal contention here? This is where the power dynamic shifts—literally. While Nvidia continues to push the envelope on raw compute, their chips have become notoriously power-hungry monsters. Intel, conversely, has a track record of building accelerators that prioritize performance-per-watt. In an era where data center expansion is being throttled more by power grid constraints than by floor space, Intel’s “lean and mean” approach could be their ultimate differentiator. If they can deliver high-end GPU performance without requiring a dedicated nuclear plant to run them, they won’t just be competing with Nvidia; they’ll be solving the very sustainability crisis the AI boom has created.

For the enterprise, this is huge. Competition in the silicon space means more than just lower prices; it means specialized hardware that might finally catch up to the insane demands of agentic AI – at lower energy cost.

70% of Cisco’s Code is AI-Generated (But Humans Still Hold the Pen)

One of the most eye-opening stats of the day came from Jeetu Patel: 70% of the code for Cisco’s AI products is now generated by AI.

Read that again. The very tools we are using to secure the world’s networks are being built by the technology they are designed to manage. However, Cisco isn’t just letting the bots run wild. Jeetu was very clear that while AI is the “teammate,” human reviewers are the “coaches.”

The philosophy here is “AI as a teammate, not just a tool.” It’s a subtle but vital distinction. By using AI to handle the heavy lifting of code generation, Cisco’s engineers are freed up to focus on the “Trust” layer—which was a recurring theme throughout the summit. As analyst Liz Miller noted on X, it’s one thing to use AI in security, but it’s an entirely different (and more important) game to secure the AI itself.

The Sam Altman Paradox: Efficiency Equals… More Consumption?

Finally, we have to talk about Sam Altman. The OpenAI CEO sat down for a fireside chat that touched on everything from drug discovery to supply chain “mega-disruptions.” But the comment that stuck with me was his take on the economics of AI growth.

There’s a concept in economics called the Jevons Paradox: as a resource becomes more efficient to use, we don’t use less of it; we use way more. Altman essentially confirmed this is the future of AI. No matter how efficient we make these models—no matter how much we drive down the cost of a token or the power consumption of a data center—humanity’s appetite for intelligence is bottomless.

“People just consume more,” Altman noted. As AI becomes cheaper and faster, we won’t just do our current jobs better; we will start solving problems we haven’t even thought to ask about yet. It’s a bullish outlook, but one that puts an even greater spotlight on the infrastructure constraints Chuck Robbins and Lip-Bu Tan spent the morning discussing.

Justin’s Take

Here’s what I’m chewing on after the summit: We are entering the “Great Optimization” phase of AI. For the last two years, we’ve been throwing money and electricity at the wall to see what sticks, with questionable profit models and circular economies (insert comment about AI Bubble here). But between Intel’s focus on energy-efficient accelerators and Cisco’s move toward AI-assisted (but human-governed) development, the industry is finally growing up.

But “growing up” also means things are getting weird. If you want to see the “art” of how crazy AI can get, look no further than Moltbook – the AI-only social network that’s been the talk of the summit, and which also just had a major security breach. We’re seeing AI agents gossiping about their human owners and even inventing parody religions like “Crustafarianism.” While Altman dismisses it as a “fad,” the underlying tech of autonomous agents is very real, and it’s moving faster than our ability to regulate it.

This brings me back to a drum I’ve been beating for a long time: Responsible use, education, and ethics are not optional. As I wrote back in November, Deepfakes kill, and we need to make them criminal. I’m still waiting for the world to listen, but the summit only reinforced my fear that we are building the engine before we’ve tested the brakes. The real winner won’t be the company with the biggest model; it will be the one that can deliver intelligence and AI security at a sustainable cost—both financially and ethically. Altman is right—the demand is infinite. The question is, can our power grids and our trust frameworks keep up? Or will the agents just take over…

5 Years of EV Ownership – 215,000KM of Truth

It’s December 24, 2025 here. Five years ago, in late 2020, I took delivery of my Tesla Model Y—and it’s been my daily driver ever since. (Quick note: I owned a 2018 Chevy Bolt for three years before this, which gave me early EV experience including the big battery recall, but all the data and deep dive here is purely from the Model Y’s 215,000 km.)

That’s a ton of commuting, road trips, and real-world Canadian driving. Numbers come straight from the Tesla app, TeslaFi, and my own tracking—no cherry-picking, no fanboy spin. EVs aren’t perfect (I’ve had my share of headaches), but the data shows why I still love it.

The Gripes Up Front

Early 2020-build Model Ys had some teething problems – though all cars have their unique sets of issues:

  • Four heat pump failures (a known issue in cold climates) – this one drove me the craziest; it was only repairable by the dealer, the parts were expensive, and when it happened the car was DOA.
  • Premature brake line and component corrosion from road salt – Tesla didn’t coat the lines in anything, and routed them above the pack and the drive units, so repairing them is INCREDIBLY expensive.
  • The paint is the worst paint I have seen on a car – no surprise to anyone.
  • Failed headlamps requiring a retrofit – Tesla discontinued the reflector lights, so the only option was retrofitting to Matrix LEDs, at an incredible cost.
  • Low-voltage (12V) battery replacement – expected, frankly.
  • Worn-out charge port needing full replacement.

All cars have issues, but some of these were frustrating and costly when out of warranty.

Battery Degradation: Nearly 20% After 215,000 km

TeslaFi puts my degradation at 18.97%—right in line with the fleet average for similar mileage and age. Nothing abnormal, but it still means my original ~500 km rated range is now realistically around 410-425 km in ideal conditions.

Degradation is real, gradual, and irreversible.

Overall Efficiency and Regen Magic

Lifetime average efficiency: 69.72% (relative to ideal/mild conditions). This is an important statistic, because all automakers seem to publish overly ambitious numbers. Tesla claims 150 Wh/km in the software – it’s REALLY difficult to hit that. I mean, nearly impossible.

Energy used by the car: 54,394 kWh. Charged into the battery: 47,845 kWh.

That ~6,549 kWh difference is energy I got back from regenerative braking – basically free kilometres in traffic and on hills. It’s noticeable: my drive to work is mostly downhill, and I use 30% less energy driving to work than I do driving home. That’s roughly $1,375 in savings from regen alone – but I’m getting ahead of myself.
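For the curious, here’s the back-of-envelope regen math from my TeslaFi totals. The ~$0.21/kWh home rate is my assumption for valuing the recovered energy; your rate will differ:

```python
# Regen recovery from lifetime TeslaFi totals.
energy_used_kwh = 54_394      # energy the car consumed while driving
energy_charged_kwh = 47_845   # energy actually charged into the pack
home_rate = 0.21              # assumed $/kWh (my rough home electricity rate)

regen_kwh = energy_used_kwh - energy_charged_kwh   # ~6,549 kWh recovered
regen_savings = regen_kwh * home_rate              # ~$1,375 of "free" energy

print(f"Regen recovered {regen_kwh:,} kWh, worth about ${regen_savings:,.0f}")
```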

The Highs: Charging Costs and Huge Gas Savings

Total electricity cost for 215,000 km: ~$9,207.

At average fuel prices over the period, that electricity spend is the cost-equivalent of a gas car burning just 2.91 L/100 km.

Breakdown:

  • Home and Level 2 (mostly workplace/public): 2,151 sessions, 39,604 kWh
    • Home charging: Cost $3,233.50
    • Free or cheap public/work: Saved $2,875.46
  • Supercharging: 330 sessions, 9,100 kWh, $3,655.25 – this works out to only $93 cheaper than gasoline in total. Bottom line: public fast charging costs about the same as gasoline.
  • Non-Tesla CCS: 28 sessions, 440 kWh, $93.51

Average: ~4.3 cents per km.
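A quick sanity check on those two headline numbers – the $1.47/L figure is the average gas price I use for comparisons throughout this post:

```python
# Verify the per-km cost and the gasoline cost-equivalent of my electricity spend.
total_cost = 9_207        # $ of electricity over the ownership period
distance_km = 215_000
gas_price = 1.47          # $/L average over the same period

cents_per_km = total_cost / distance_km * 100           # ~4.3 cents/km
equiv_litres = total_cost / gas_price                   # litres of gas the same money buys
equiv_l_per_100km = equiv_litres / (distance_km / 100)  # ~2.91 L/100 km

print(f"{cents_per_km:.1f} cents/km, gas-equivalent {equiv_l_per_100km:.2f} L/100 km")
```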

Maintenance Reality Check: More Than the Hype – And More Than I Expected

No oil changes is nice, and regen helps brakes in theory—but Canadian road salt, instant torque, constant plugging/unplugging, and early-build quirks add up.

Here are my major out-of-pocket repairs over 215,000 km:

  • Tires: 4 full sets × $1,200 each = $4,800
  • Brake pads: 3 sets
  • Rotors: 2 sets
  • Caliper replacement: $700
  • HVAC/heat pump repairs (two fixes after warranty): $4,000
  • Headlamp retrofit (both sides failed): $1,200
  • Charge port replacement (worn out from daily use): $800
  • Low-voltage 12V battery replacement: $150

That’s over $11,650 in known big-ticket items alone (not counting labour for smaller stuff or the brake pad/rotor sets). EVs still skip a lot of traditional service, but real-world costs – especially corrosion, torque wear, and Tesla-specific parts – pile up faster than the marketing suggests. Did I have more failures than average? Perhaps. The tires I can live with; the HVAC repairs I wasn’t happy with.

Temperature: Massive Impact, Even With a Heat Pump

Tesla’s heat pump is clever—it pulls waste heat from the motors/battery/outside and uses smart tricks like the octovalve—but it doesn’t have a traditional resistive fallback heater. Below about -10°C, there’s less ambient heat to scavenge, efficiency drops sharply (my data shows it falling off a cliff), and in extreme cold it struggles to keep up, sometimes leading to failures or poor performance. That’s been a well-documented pain point for early heat-pump Teslas in harsh winters.

My real-world numbers:

  • 43% efficiency at −25°C
  • 78.3% at 20°C (peak)
  • 66% at 35°C (AC drag in summer)

Winter range can easily drop 40%+.

Speed Penalty

  • 73% efficiency at 70-75 km/h
  • 57% at 120 km/h

Highway driving hurts—standard aero physics. This stuff is pretty obvious, but I figured it is a data point worth sharing.

The Crucial Range Advice I Give Everyone Now

Here’s the hard-earned lesson: When buying an EV, make sure about 50% of the rated EPA range comfortably covers 90% of your normal driving.

Why? Stack the real-world hits:

  • ~20% degradation after high mileage
  • 30-50% winter efficiency loss
  • You rarely use 0-100% (most stay 20-80% daily)
  • Highway speeds, headwinds, heat/AC, cargo

A “500 km” car can realistically give you only 220-250 km of stress-free winter range.

Buy more range than you think you need. Future-you will thank you.
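To make the stacking concrete, here’s a toy calculation using my derates from above (a sketch, not a guarantee – your degradation and winter numbers will differ):

```javascript
// Toy range calculation: apply ~20% high-mileage degradation and a ~40%
// winter efficiency hit to the rated range. Highway speeds, the 20-80%
// daily charging window, and cargo eat further into the result.
function realisticWinterRange(epaRangeKm, degradation = 0.20, winterLoss = 0.40) {
  return epaRangeKm * (1 - degradation) * (1 - winterLoss);
}

console.log(realisticWinterRange(500)); // ≈ 240 km – right in the 220-250 km band above
```

That’s where the “make sure about 50% of EPA range covers 90% of your driving” rule comes from.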

The Money Breakdown: 60-Month Ownership Savings

Let’s put real dollars to it over the full 60 months (5 years) and 215,000 km.

  • My Model Y energy cost: $9,207
  • My major repairs/maintenance: ~$11,650
  • Total operating cost: ~$20,857 → $347 per month

Now compare to similar-class midsize crossovers (2020-2021 Canadian retail prices, real-world mixed driving):

  1. Gas equivalent – Toyota RAV4 AWD (10 L/100km real-world average)
    • Fuel cost at $1.47/L average: $31,605
    • Typical maintenance/oil changes/brakes over 215,000 km: ~$6,000–8,000
    • Total operating: ~$38,000–40,000 → $633–667 per month
    Monthly savings vs gas RAV4: $286–320
  2. Hybrid equivalent – Toyota RAV4 Hybrid AWD (real-world ~6.5–7 L/100km)
    • Fuel cost: ~$20,500–22,000
    • Maintenance slightly lower than pure gas: ~$5,000–6,000
    • Total operating: ~$25,500–28,000 → $425–467 per month
    Monthly savings vs hybrid RAV4: $78–120

Even after my higher-than-expected repair bills, the Model Y still comes out ahead—especially against pure gas, and even against a strong hybrid.
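For transparency, the monthly numbers above come from simple arithmetic over 215,000 km and 60 months. A sketch (the mid-range maintenance figures plugged in here are my assumptions from the ranges quoted):

```javascript
// Reproduce the 60-month comparison. Fuel = (km / 100) * L/100km * $/L (CAD).
const KM = 215000, MONTHS = 60;

const fuelCost = (litresPer100km, pricePerLitre) =>
  (KM / 100) * litresPer100km * pricePerLitre;

// Total operating cost per month, rounded down as in the post.
const monthly = (energyOrFuel, maintenance) =>
  Math.floor((energyOrFuel + maintenance) / MONTHS);

console.log(monthly(9207, 11650));                // Model Y: 347
console.log(monthly(fuelCost(10, 1.47), 7000));   // gas RAV4, mid-range maintenance: 643
console.log(monthly(fuelCost(6.75, 1.47), 5500)); // hybrid RAV4, mid-range: 447
```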

Bottom Line: Does Electric Still Make Sense in 2025?

For me—yes, without question. 215,000 km of data shows energy and cost savings, addictive performance, silent cabin, OTA improvements, and a charging network that just works.

The inconvenience of charging is real, but rare. Looking back at my charging stats, my average Supercharging session was 19 minutes. Yes, that still works out to nearly 4 days of total charging time, but for most of those sessions I was doing other things – and over the same period, gas fill-ups would have cost me about 2 days of my life at the pump. Anyone can justify why Supercharging is “ok” – I agree it’s less than ideal – but you learn to get things done while charging; it’s a bit of a mindset change. If I were driving to Florida from Toronto, I wouldn’t take the EV. For me the max road-trip distance is about 1,000 km – beyond that the charging “slack” time just starts to add up. Time savings vs the pump? If I had charged exclusively at home and never Supercharged — that’s 2 days back.

The downsides are real: degradation, cold-weather penalties, corrosion repairs, tire costs, headlamp/HVAC/charge-port/12V bills. If you live in extreme cold, do mostly high-speed long hauls, and don’t have home charging—a pure EV isn’t for you. There will always be a place in this world for gas-powered stuff – at least for now.

But here’s the thing: I love driving this car. Experiencing the latest technological advancements from the Tesla team, the constant software updates, even the occasional quirks—they make it feel alive. For me, living on the bleeding edge of tech every single day isn’t a bug; it’s a feature.

I’m all in. No plans to go back—the savings are real, and the drive is unbeatable.

Deepfakes Kill… Make them criminal!

It’s time to talk about deepfake technology. You know, those AI tricks that fabricate videos or audio making it seem like someone said or did stuff they never did? It’s blurring reality so badly, you have to ask: “What if that viral clip of a celeb or politician tomorrow is total BS?” Or scarier, what if it’s targeting you or your family?

This tech is evolving at breakneck speed, and it’s shockingly easy to misuse. What used to need pro-level gear and $100K in GPUs now happens in minutes with free apps and AI models. Deepfake content jumped from about 500,000 files in 2023 to a projected 8 million in 2025. The number of deepfakes detected globally across all industries increased 10x in 2023 (eftsure.com). Open-source tools and cheap computing mean anyone—hackers, trolls, or kids—can harness it. This is a recipe for problems.

Misuse is rampant, starting with fraud that’s hammering Canadians. We’ve lost $103 million to deepfake scams in 2025 (Mitchell Dubros), with North American cases up 1,740%. And 95% of Canadian companies say deepfakes have increased their fraud risk. Deepfakes now account for 6.5% of all fraud attacks, marking a 2,137% increase since 2022. One example? A firm lost $25 million to a deepfake CEO scam. Would you catch a deepfaked loved one asking for cash?

But the real horror is – none of this is directly a criminal offence.

Then there’s harassment, especially against young people, leading to tragedy. Deepfakes fuel bullying, extortion, and non-consensual porn—mostly targeting women and girls. In Canada, a pharmacist was linked to the world’s most notorious deepfake porn site, and Alberta cops warn of kids sharing AI fakes. Receipts: a Faridabad student died by suicide over deepfake blackmail. Twitch streamer QTCinderella (Blaire) faced humiliation in 2023, and Q1 2025 saw 179 incidents, up 19% from all of 2024 (keepnetlabs.com). Sextortion using deepfakes has driven suicides amid blackmail and isolation. If you’re a parent, think: how protected are our kids from this digital nightmare? There is no protection from this conduct under law – at least not directly.

C-63 in Canada targets PLATFORM OPERATORS – it stops short of the important step of making non-consensual deepfakes of others a crime. If someone deepfakes your kid, and plasters it on TikTok – it’s too late, the harm is done. We need to prevent this in the first place with severe consequences under law.

Deepfakes erode trust—in elections (like those fake videos of Canadian politicians), relationships, everything. Youth, always online, suffer most from amplified cyberbullying that can end in depression or worse. Europol warns deepfakes fuel harassment, extortion, and fraud, and detection lags behind the tech.

Call To Action

My plea: Governments must legislate now to shield the public, especially young folks. Canada is lagging behind the US on this. We’ve got piecemeal stuff like the Elections Act on campaign fakes and Bill C-63’s start on harms, but no full law tackles everyday non-consensual deepfakes. We need it classified as a serious criminal offence under the Criminal Code of Canada—not a slap on the wrist, but with hefty fines and jail time for creating deepfakes of anyone without consent.

I am calling on our Canadian Government to enact changes to the Criminal Code of Canada to specifically make non-consensual deepfakes a serious criminal offence, with stiff fines and jail time.

Act now: Contact your MP and demand this change. Share this, educate your circle, teach kids to spot fakes. Platforms must remove deepfakes fast, schools need education programs. Why risk more lives? Let’s make deepfakes a crime that bites back—before it’s too late. What are you waiting for?


My Proposed Legislation

Non-consensual deepfakes

162.3 (1) In this section,

deepfake means a visual or audio recording that is created or altered using artificial intelligence or other technology in a manner that would cause a reasonable person to believe it depicts the person engaging in conduct or speech that did not occur, and includes any synthetic representation that is realistic in appearance or sound;

distribute includes to transmit, sell, advertise, make available or possess for the purpose of distribution.

(2) Everyone commits an offence who, without the express consent of the person depicted, knowingly creates, distributes or possesses a deepfake of that person if

(a) the deepfake depicts the person in an intimate context, including nudity, exposure of genitals, or explicit sexual activity; or

(b) the creation or distribution is intended to cause harm, including emotional distress, reputational damage, or incitement to violence against the person.

(3) For the purposes of subsection (2), consent must be informed, voluntary and specific to the creation and use of the deepfake, and may be withdrawn at any time; however, no consent is obtained where the agreement is obtained through abuse of trust or power, or where the person is incapable of consenting.

(4) Subsection (2) does not apply to

(a) deepfakes created with the informed consent of the depicted person, where that consent has not been specifically revoked.

Punishment

(5) Everyone who commits an offence under subsection (2) is guilty of

(a) an indictable offence and liable to imprisonment for a term of not more than five years; or

(b) an offence punishable on summary conviction and liable to a minimum fine of $1,000 but not more than $25,000, or to imprisonment for a term of not more than two years less a day, or to both.

(6) In determining the sentence, the court shall consider as aggravating factors

(a) whether the offence involved a minor or vulnerable person;

(b) the extent of harm caused to the victim; and

(c) whether the offender profited from the offence.

The Edge, Reimagined: Why Cisco Unified Edge is the Mind-Shift We Needed

Let’s face it: the edge has long been a “necessary evil.” Nobody wants fat servers, complex infrastructure, and constant management headaches in remote locations, but it’s been unavoidable. The cloud, while powerful, can’t solve everything, especially when it comes to the low-latency, GPU-intensive demands of modern AI, or the pervasive issue of vendor lock-in. My contention? This old way of thinking about the edge is over.

If the original Raspberry Pi was that plucky, credit-card-sized marvel that let hobbyists and tinkerers dream up all sorts of clever, small-scale computing projects, then the Cisco Unified Edge is like its ridiculously buff, impeccably dressed, and highly intelligent older sibling who just graduated from a top-tier business school with a PhD in AI.

Cisco’s new Unified Edge isn’t just another product; it’s a total mind change. We want less at the edge – less complexity, less hardware – but more power where it counts. AI needs GPUs and low latency, and you can’t always get that efficiently from the cloud.

This platform addresses that head-on. It’s an integrated, modular system combining compute, networking, storage, and security, purpose-built for distributed AI workloads. It brings the necessary power, including GPU support, right to the source of data generation. Think real-time AI inferencing on a factory floor or in a retail store, without the latency penalty of sending data halfway across the globe.

Crucially, it’s not the “same old architecture.” Cisco Unified Edge simplifies operations with features like zero-touch deployment and centralized management via Cisco Intersight, transforming the edge from a burden to a strategic asset. Security is baked in, addressing the expanded attack surface of distributed environments.

This isn’t just about putting more powerful chips at the edge; it’s about a fundamental architectural shift at the edge, driven by the integrated power of a System on a Chip (SoC). Instead of separate, bulky components for compute, networking, and security, Cisco Unified Edge leverages Intel Xeon 6 SoC processors. This level of integration is the game-changer, allowing for a far more compact, efficient, and unified platform that delivers the necessary AI-ready performance, including GPU support, without the traditional sprawl and complexity. It’s how Cisco achieves “less at the edge” in terms of physical footprint and management overhead, while simultaneously providing “more power” right where real-time AI inferencing and agentic workloads need it most, truly transforming the edge from a patchwork of devices into a cohesive, intelligent brain.

As Cisco’s Jeetu Patel noted, “Today’s infrastructure can’t meet the demands of powering AI at scale”. Cisco Unified Edge changes that. It provides the raw compute and GPU muscle for demanding AI at the edge, but in a lean, intelligent, and manageable way. It transforms the edge from a reluctant necessity into a strategic advantage, allowing sophisticated capabilities to flourish where they’re needed most.

This is a different way of thinking at the edge, and I like it. A lot. It’s going to change the game.

Agentic AI vs Deterministic Code

No question – building apps with LLMs in agentic setups is a game-changer, but it can also be a pain in the butt compared to good old deterministic code. Craft a clever agent that summarizes docs or fixes bugs, then bam, the model updates, and suddenly it’s spouting nonsense, ignoring prompts or even basic words like “yes”. Non-deterministic chaos at its finest.

Deterministic code? It’s the reliable workhorse: feed it input X, get output Y every damn time. Fixed rules, easy debugging, perfect for stuff like financial calcs or automation scripts where surprises mean lawsuits. As Kubiya nails it, “same input, same output”—no drama.

“A computer will do what you tell it to do, but that may be much different from what you had in mind.” – Joseph Weizenbaum — Not when you’re using a model you probably didn’t build, with weights that aren’t your own.

Agentic AI with LLMs? That’s the wildcard party crasher. These systems think on their feet: reason, plan, grab tools, adapt to goals like tweaking marketing on the fly or monitoring health data. IBM calls it “agency” for a reason—it’s autonomous, pulling from real-time vibes beyond rigid training. But here’s the kick: it’s probabilistic. Outputs wiggle based on sampling, context, or those sneaky model tweaks from OpenAI or whoever. LinkedIn rants about it: “Same prompt, different outputs.” Your app morphs overnight, and fixing it? Good luck tracing probabilistic ghosts.

This shift sucks for dev life. Traditional code: bug? Trace, patch, done. Agentic? Hallucinations, inconsistencies, testing nightmares. Martin Fowler compares LLMs to flaky juniors who lie about tests passing. It’s a paradigm flip—from control to “let’s see what happens.” Salesforce says pick deterministic for regulated certainty, agentic for creative flex. But non-determinism can mean security holes, data risks, and endless babysitting. It also adds an attack vector that is itself non-deterministic: the model may have access to data it needs to do its job – but that I might not want exposed.

Aspect | Deterministic Code | Agentic AI with LLMs
Predictability | Rock-solid: always consistent | Sketchy: varies like the weather
Adaptability | Stuck to your rules | Boss: handles dynamic crap
Testing/Fixing | Simple: logic checks and patches | Hell: variability demands tricks
Best For | Precision gigs (finance, compliance) | Goal-chasing (support, optimization)
Pain Level | Low: set it and forget it | High: constant surprises

Bottom line: Hybrids are the way—LLMs for the smarts, deterministic for the reins. Deepset pushes that spectrum view: not binary, blend ’em. It sparks innovation, sure, but don’t romanticize—the annoyance is real. Code with eyes open, or get burned. Put humans in the loop to keep things in check.
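What that hybrid pattern can look like in practice: let the model propose, but let deterministic code decide. A minimal sketch – `callLLM` here is a stand-in stub I made up, not a real API:

```javascript
// Toy sketch of the hybrid pattern: probabilistic smarts, deterministic reins.
// `callLLM` is a stand-in stub – a real model call could return anything.
function callLLM(prompt) {
  const candidates = [
    '{"action":"restart","target":"svc-42"}', // a well-formed plan
    'Sure! Here is what I would do...',       // prose instead of JSON
  ];
  return candidates[Math.floor(Math.random() * candidates.length)];
}

// Deterministic gate: same input, same verdict, every time.
const ALLOWED_ACTIONS = new Set(['restart', 'scale', 'noop']);

function validate(raw) {
  try {
    const plan = JSON.parse(raw);
    if (ALLOWED_ACTIONS.has(plan.action) && typeof plan.target === 'string') {
      return { ok: true, plan };
    }
  } catch (_) { /* not even JSON – reject */ }
  return { ok: false, plan: null }; // route to a human instead of acting
}

console.log(validate('{"action":"restart","target":"svc-42"}').ok); // true
console.log(validate('Sure! Here is what I would do...').ok);       // false
console.log(validate(callLLM('fix the flaky service')).ok);         // varies – that's the point
```

The gate never acts on anything it can’t parse and whitelist – the “reins” stay deterministic even when the “smarts” wobble.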

What about agentic AI ops for networks and technology? Didn’t we just say “precision gigs” are better with deterministic code? That won’t stop awesome developers like John Capobianco https://x.com/John_Capobianco from pushing those limits – he has been doing it for years at this point. Handing AI agents the keys to critical stuff like network monitoring, anomaly detection, or auto-fixing outages sounds efficient, right? But it’s a powder keg from a security standpoint. These autonomous bad boys can hallucinate threats, expose data, or open doors for hackers through memory poisoning, tool misuse, or privilege escalation. Cisco nails the danger: “The shift from deterministic code to probabilistic chaos is at the heart of securing AI agents that think for themselves,” highlighting a “lethal trifecta” of data leaks, wild hallucinations, and infrastructure weak spots that could cascade into total meltdowns.

Tools are starting to emerge, though, for AI security – particularly from Cisco and open-source communities – to advance defenses against threats like prompt injections and supply chain attacks, but there is work to be done. Things like Cisco’s open-source Foundation-sec-8B model, a specialized LLM for cybersecurity tasks such as threat intelligence and incident response, will help developers build customizable tools with on-prem deployments to reduce hallucinations and enhance SOC efficiency. Their Hugging Face partnership bolsters supply chain security with an upgraded ClamAV scanner detecting malware in AI files like .pt and .pkl. Broader open-source efforts include Beelzebub for malicious agent analysis and Promptfoo for LLM red-teaming. Yet evolving adversarial tactics – hackers using LLMs to attack LLMs – are very much a thing… The system is attacking the system being protected by the system… Yeah, that.

Cisco-Hugging Face ClamAV Integration: https://blogs.cisco.com/security/ciscos-foundation-ai-advances-ai-supply-chain-security-with-hugging-face
Cisco Foundation-sec-8B: https://blogs.cisco.com/security/foundation-sec-cisco-foundation-ai-first-open-source-security-model

So much more to learn, but with all of that said… humans in the loop are going to be a thing for a while – at least until Skynet…

Cisco Live 2025: Community, Balance, and Big Dreams for AI

That was huge.

Still buzzing from Cisco Live 2025 in San Diego. This wasn’t just a tech conference—it was a reunion of brilliant minds and big hearts. The Cisco Community Champions dropped wisdom that flipped my perspective, like a breakout session chat that rewired how I think about collaboration. Tech Field Day delegates brought the heat over late-night tacos, debating tech’s future with ideas that stuck with me. And my Cisco colleagues and friends? They’re family—coffee in the DevNet Zone, laughs at the Customer Appreciation Event (The Killers absolutely slayed!), and moments that recharge my soul. But as I look ahead, I’m thinking about balance, mentorship, and how we’ll make a real difference with AI. Here’s the vibe.

The Community That Fuels Us

Cisco Live is all about connection. Those conversations with Champions, delegates, and friends aren’t just chats—they’re sparks that ignite new ideas. A mentor’s advice over drinks is already shaping my next move, and the energy at the CAE was pure magic. This community pushes us to dream bigger and work smarter, together. Those that challenge me, thank you for doing that. The support from those who love our work challenges us to do more. Without community – we really have nothing.

What’s Next for Me: Building, Mentoring, and Balance

This year, I’m all about building. I’m diving into relationship building, leveling up my skills in innovative problem-solving, and finding new ways to share with you all. I’m hyped to get back to blogging, maybe even start vlogging, but I’m keeping it real—it’s a lot. Tech can be a grind, and we don’t always talk about the psychological toll it takes. The pressure to stay ahead, the endless hustle—it weighs on us. I’m prioritizing balance, making time for myself, and I invite you to do the same. Check in on your friends and colleagues, be that supportive ear. We’re stronger when we lift each other up.

I’m also building a structured mentorship plan to guide others, inspired by my own mentors. Whether it’s sharing tech insights or navigating career challenges, I want to pay it forward and help others shine. Who knew the greatest challenge set by my own mentors would be to pay it forward. I have started to realize that climbing this career mountain hits a plateau, and unless you can lead a team to the top – you are stuck.

Making a Difference with AI and Country Digital Acceleration

This year, I’m wrestling with a big question: How will I make a meaningful difference with AI? It’s consuming my thoughts. AI has so much marketing hype – I want to get past that. AI has the power to transform lives—think smarter cities, safer communities, inclusive access to tech – or just making super complex things easier. At Cisco Innovation Labs, we’re celebrating 10 years and the anniversary of Country Digital Acceleration (CDA). I’m so grateful for CDA’s support, backing projects like Digital Canopy that bring connectivity and hope to underserved areas. Their belief in our ideas fuels us, and I’m stoked to deepen our work together, dreaming up solutions that change the world. This is a great partnership, and it really gives us the ability to “Design with Empathy, Innovate with Purpose.”

The Next Big Thing for Our Labs

With a decade in the rearview, it’s time to go big. What’s the next big thing for Cisco Innovation Labs? I’m obsessed with figuring this out. Maybe it’s AI-driven public safety tools, or… well… so many things I can’t talk about yet, or sustainable tech that powers a greener future. Whatever it is, it’ll be bold, human-centered, and built with this incredible community. I’m ready to dream, experiment, and make waves. I know one thing: technology comes second; people, community and EMPATHY come first.

Keep the Vibe Going

Cisco Live 2025 was a love letter to community, a reminder to stay connected and take care of ourselves. As I chase big dreams with AI and our Labs, I’m carrying this energy forward. So, take a moment for you, check in on your people, and let’s dream big together. What’s your Cisco Live highlight? Hit me up on Twitter or drop a comment—let’s keep it rolling!

Can ChatGPT Help Me Code? For Real?

The Problem

Have I said this before? I’m not a developer. Although someone accused me of being one. I tell people I google for code snippets, then bash them together, and sometimes things work. Someone said, “You’re a developer then.” Golly, I hope most developers are slightly better than that. With that in mind, I would never suggest you blindly implement code you don’t understand, or code someone else has written.

I had a very simple “scripting” requirement. My problem is, I can understand code, I can manipulate it – but when looking at an IDE, it is like looking at a blank page with no idea how to start. With all this talk of “ChatGPT can program for you” – I figured I would give it a shot.

I have a need for a simple macro in a Cisco Webex device, for the purposes of room automation for a Future of Work project, I need to send a basic HTTP API call via a GET request when calls start and end. That’s it.

Finding a Solution

A quick Google search didn’t turn up many helpful links. I did get a link to the various macro samples on GitHub, as well as some of the macro documentation on webex.com – but they weren’t specific.

I spent a few minutes poring through examples, trying to find code snippets doing what I needed, but found nothing specific.

Then I had a bit of a thought..

Can ChatGPT Really Help?

First I tried typing the exact same thing from Google, into ChatGPT.

At first glance, this actually looks pretty good. This gives me a good basis to do what I need. Run a macro on call start.

That gave me a good blueprint – but can it finish for me? “Once the call starts send an http get”

Once the call starts I actually need to send an HTTP GET to the system I am using for automation. I figured why continue to figure this out, let’s see if ChatGPT can do that.

The response was great, but the URL I am using also has a custom port. I could of course open the documentation for that function and figure out how to send a port number – or – let’s just see.

Can ChatGPT make basic additions to the code?

Something simple: not only did ChatGPT very quickly understand what I was asking for – from a very unspecific request to add code – it even called out the section I had to add.

Ok, this is good! Let’s keep going.

ChatGPT Error Handling

So I took this code and deployed it on my Webex Codec Pro in my lab to see if it would do what I wanted. I did, of course, change the hostname/port/path to the back end I was working with.

However, I got an error – a long-winded one telling me the module “http” didn’t exist. At first I figured ChatGPT wouldn’t be able to solve this, but I gave it a shot. I copied the error message verbatim from the macro log.

To my surprise, ChatGPT totally rewrote the code another way, removing the “missing” http module entirely.

We can get back to the logging differences later on.

Did it work? Back to the Documentation

Not as well as I had hoped. The macro didn’t appear to be “doing” anything. No errors – just no action.

I took a moment to look at the “event” it was trapping: “CallStarted”.

This event doesn’t exist. From my searches, it never has. So back to the documentation we go.

I did try and use ChatGPT to fix the problem.

Unfortunately when I asked for help, ChatGPT gave up on me. I tried it a few times in case it was a busy issue but I couldn’t get a response to my problem.

Back in the documentation under “Events” I was able to find the “CallSuccessful” and “CallDisconnect” events. I wondered if these would work, so I changed the code.

Success! It worked. While ChatGPT was busy, I got this working without it.

I finally got ChatGPT to respond again and told it “CallStarted” doesn’t exist. It came back with a corrected version, which is right. This code works.
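For readers who want the shape of the final macro: on the codec this would start with `import xapi from 'xapi';`, but here a tiny stub stands in so the wiring can be read (and run) off-device. The host, port, and path are made up, and the real HttpClient command requires HttpClient Mode to be enabled on the device – treat this as a sketch of the approach, not the exact code ChatGPT produced.

```javascript
// Sketch of the working macro's shape. On the device this would begin with
// `import xapi from 'xapi';` – here a minimal stub stands in so the wiring
// is readable (and runnable) off-device. Host/port/path are invented.
const handlers = {};
const xapi = {
  Event: {
    CallSuccessful: { on: fn => (handlers.CallSuccessful = fn) },
    CallDisconnect: { on: fn => (handlers.CallDisconnect = fn) },
  },
  Command: {
    HttpClient: {
      // Stubbed GET. On a real codec, HttpClient must be enabled first
      // (xConfiguration HttpClient Mode: On).
      Get: async ({ Url }) => ({ StatusCode: '200', Url }),
    },
  },
};

const BASE = 'http://automation.example.local:8899'; // custom port, like my setup

function notify(state) {
  return xapi.Command.HttpClient.Get({ Url: `${BASE}/room/call/${state}` });
}

// CallSuccessful and CallDisconnect are the events that actually exist;
// the "CallStarted" event ChatGPT invented does not.
xapi.Event.CallSuccessful.on(() => notify('started'));
xapi.Event.CallDisconnect.on(() => notify('ended'));
```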

Can I use ChatGPT to write code?

There are a few challenges, but it did help: I got this done in about 1/5 the time it probably would have taken me. I also didn’t really have to engage another teammate, leaving them free to work on their normal stuff.

There is also the learning aspect, these examples are now something I have learned, my skill in xAPI has improved through this exercise.

Whose Code Is This? Can I Use It?

So who owns this code? Who gets credit for it? I won’t try to take credit for owning the code, or claim I wrote it. I am at best a hack (again) – but able to hack this into working. The challenge here is: where did this code come from? ChatGPT doesn’t “own” it. This code came from somewhere.

Did ChatGPT find this code on GitHub somewhere? In some kind of documentation? Was this the result of copyrighted knowledge? What is the license on this “code”?

For the purposes of learning or hackery – this might be fine, but to use code like this in a commercial environment – I’m not sure that would be ok, at a minimum there would be no way to know.

There are significant IP issues here, but this simplistic attempt at using ChatGPT to “get things done” worked for me. I’m just not sure I could legally and ethically use it for commercial work.

I decided to ask ChatGPT about the license. The response was interesting: “I don’t generate code.” I think that is arguable.

Then I asked about commercial use. It warned me to check the license terms of the code – code that it provided from who knows what source.

My take?

This was an interesting experiment that I didn’t plan to run – but it worked out in the end. I wanted to share how I was able to use ChatGPT to actually do something useful. So many questions came up during the process. Where is this going now? I have no idea, but it sure is interesting. I would be careful about taking what it says at face value, or using serious or important code without understanding what it is doing. What I am doing is reasonably benign, and while I am no developer, I do understand what this script is doing.

Upstack Your Thinking – With APIClarity

If you asked me two years ago if I was going to write this blog – I would have told you no way. I’m a technology architect – I design things. As many that know me well will tell you, I say it often, I am “not a coder”.

“But wait, didn’t you go to college for programming?” – not really, I took some college courses (Borland Turbo C), and in high school I did some Visual Basic, and I have done some scripting. The reality is that if you write some code – I can read and understand what you are doing, but asking me to write something from scratch, is about as useful as asking a trombone player to paint like Picasso – my brain isn’t wired like that.

Micro-services and Cloud Require Coding Skills

OK, I’ll admit it – the new world does require more skills than I have. So this year I took some Go courses, and I also did more Python. Would you like me to admit that I enjoyed it? I really tried, and for a while it was going well, but it just doesn’t feel like me – nevertheless, it helped a fair bit.

I actually enjoy coding more on embedded stuff. Arduino, robots, Nvidia Nanos – maybe it’s the connection to the hardware that keeps my mind interested. I have also dabbled with some ML stuff, creating my own ML models.

The cloud requires just a tad more coding skill – and I am working on it, but alas, my mind still doesn’t work that way. I think, however, I found a hack.

Move Up Stack – Don’t Change Mentality

The good news for anyone who is used to cables, switches, routers and servers is that this new world isn’t any different from the old one. It just shifts everything you know – up the stack. It was when I started to think this way that everything came into focus.

Being a long-time communications nerd, and someone who started off in TDM telecom, communications is pretty standard to me. Everything has to communicate to function, and just as I made the migration from TDM to IP, it’s time to migrate from monolithic to micro-services.

I won’t lie, though – I was in my 20s when I started VoIP, and I was the only one at my company who was even willing to go near it. People looked at me like I had two heads when I told them what it did. “How do you get ring voltage without a trunk?”. I digress.

The bottom line is that the physical, datalink and even the network layers are now being abstracted from applications. Run it anywhere you want, don’t worry about what is underneath. Yeah, that sounds like a plan – just trust the environment underneath. The reality is that you shouldn’t, and your applications should be protected from the chaos that now exists – underneath. This does however mean you have to write your applications and deploy them in an intelligent way, remember, you are the pointy end of the communications stick now, no firewall to save you. With great power comes great responsibility, a responsibility that used to sit in the hands of network engineers.

Security In The Hands Of Developers

My friends look at me like I have gone crazy around the campfire (no literally) – they laugh out loud when I make that statement. The good news is that tools exist to help, I have always said that security is a layered approach, never rely on a single thing to keep you safe.

Applications are complex beasts – and there is hardly an application that uses 100% its own code; much of it comes from open source or third-party sources – so how do you know if you can trust it? What if that application is updated? What if someone else finds a bug in some code you used – but you don’t know? These are big things to think about – but the big manufacturers, like Cisco, are working on tools for that (disclaimer – I work for Cisco): scrubbing your applications to make sure you are not using out-of-date components.

Many of the tools will monitor your code, APIs, and micro-services for intrusion or bad actors – some through traditional “firewall”-type methods, some by interrogating traffic – but there is an increasing need to analyze transaction-level traffic to ensure both performance and security. AppDynamics has been doing much of this for some time, on both the security and performance fronts. See more of that here from Tech Field Day. That said, we have new tools we are working on.

You Need A Baseline To Detect Problems and Threats

Unless you know every single call your APIs make, how do you know what is real and what is not? What if someone is messing with your services? What if some developer has written sloppy code, or is using an API in an unexpected way? You need a baseline. There is actually a standard for this, called the OpenAPI Spec. You can specify how your API works – but what if your inter-service API is a mess, or built up over time? Trying to build out a spec could be difficult – or it might not even be your API.
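If you haven’t seen one, an OpenAPI document is just YAML (or JSON) describing your endpoints, parameters, and responses. A minimal, entirely hypothetical example of the kind of document observed traffic can be checked against:

```yaml
# Minimal, hypothetical OpenAPI 3.0 spec for a single endpoint.
# A baseline like this is what traffic gets compared against.
openapi: "3.0.3"
info:
  title: Inventory Service
  version: "1.0.0"
paths:
  /items/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: A single inventory item
```

Anything outside this shape – a surprise parameter, an undocumented path – stands out immediately once you have the baseline.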

The good news is tools exist to help.

Everything Becomes Clear with APIClarity

Cisco developed an application called APIClarity (apiclarity.io) that sits inside your micro-service infrastructure and watches your API calls from within the service mesh – like Wireshark on a monitor port. It doesn’t just watch what is happening; it ensures your services are operating as expected based on your specification. The good news: it’s free – no, seriously, go get it from GitHub and try it out.

If you don’t have an API Spec now, then APIClarity will help you generate one, by monitoring traffic and building that specification.

This works by putting the equivalent of a “PCAP” tap into your service mesh to deliver all API calls to the APIClarity engine, which then builds your API specification from scratch. Once that specification is built, you can save it, and when something unexpected happens – it will let you know.

The beauty is actually in the tool’s simplicity, and at the end of the day everything is pretty easy to understand if you just – upstack your thinking.

DEMO Time

It wouldn’t be good without a demo, right? How about a step-by-step build – no fancy code writing, no serious scripting. This is something you as a network engineer CAN do on your own.