OpenClaw: The Passion-Driven AI Agent That’s Exploding – But Honestly, Most People Shouldn’t Touch It

OpenClaw (ex-Clawdbot, ex-Moltbot) just smashed past 180,000 GitHub stars in weeks. It’s not hype – it’s real, messy, and straight-up disruptive. This thing talks to you on WhatsApp, Telegram, Slack, whatever you already use, and then actually does the work: clears your inbox, books flights, runs shell commands, controls your browser, reads/writes files, remembers everything in plain Markdown on disk.

No fancy chat UI. No corporate guardrails. Just a persistent agent on your hardware (or VPS) that wakes up on a schedule and gets stuff done.

It’s the anti-MCP. While the big labs push the clean, standardized Model Context Protocol for “safe” enterprise connections, OpenClaw says screw the adapters and gives the agent real claws – full filesystem, CLI, browser automation, and an exploding skill/plugin ecosystem built in simple Markdown + bash.

Why It Feels Different: Peter’s Raw Passion Project

This isn’t some polished VC-backed product. Peter Steinberger built this as pure weekend experiments that turned into a movement. His Lex Fridman interview (#491) is electric – you can feel the raw builder energy pouring out of him. He talks about “vibe coding”: describe what you want, send the agent off to do work, iterate fast, commit to main and let it fix its own mistakes. No over-engineering, no endless PR cycles. Just passion.

He wants agents that even his mum can use safely at massive scale. That passion shows in every line of code. Well, his agent's passion, anyway.

This whole “vibe coding” thing is interesting to me because, as a non-dev, I've spent the last year building things where AI writes almost all of the code.

The Lex Interview, the OpenAI Move, and Moltbook

Peter likes both Claude Code and OpenAI’s tools – no tribalism, just what works. Then, days after the interview, he announces he’s joining OpenAI to push personal agents to everyone. OpenClaw moves to an independent foundation, stays fully open-source (MIT), and OpenAI will support it, not control it. His blog post is worth reading. Will it stay open, though? I have my doubts.

And then there’s Moltbook – the agent-only Reddit-style network where claws post, debate, share skills, and evolve. Humans can only lurk. Skynet-ish? Yeah. Cool as hell? Also yeah. Fad? Maybe. But watching thousands of agents have sustained conversations about security and self-improvement is next-level. My agent hangs out in there, trying to stir it up daily. So many security problems over there – it’s a prompt injection minefield.

Jeetu Patel Nailed It: AI Is Your Teammate, Not Just a Tool

Cisco President & Chief Product Officer Jeetu Patel said it perfectly in a recent Forbes interview: “These are not going to be looked at as tools. They’re going to be looked at as an augmentation of a teammate to your team.”

OpenClaw embodies that more than anything I’ve seen. It’s not “ask and get an answer.” It’s “here’s the mission, go execute while I do other stuff.”

That’s exactly how I want to build.

Brutal Truth: This Thing Is Dangerous as Hell

Look – I’m not a dev. I’m a systems guy. And I’m telling you straight, no, for real: do not run OpenClaw unless you actually know what you’re doing.

This isn’t friendly warning #47. This is me, the guy who’s been running it in a completely firewalled, isolated VPS with zero connection to my personal machines or networks, telling you: most people should stay away right now.

Why?

  • Tens of thousands of exposed instances on the public internet. SecurityScorecard found 40,000+. Bitdefender reported over 135,000. Shodan scans showed nearly 1,000 with zero authentication. Many default to listening on 0.0.0.0. 63% of those scanned were vulnerable to remote code execution.
  • Critical vulnerabilities piling up fast. CVE-2026-25253 (CVSS 8.8) – one-click RCE. Visit a malicious webpage and an attacker can hijack your entire agent, steal tokens, escalate privileges, run arbitrary commands. There are command injection flaws, plaintext credential storage, WebSocket hijacking, and more. A January audit found 512 vulnerabilities in the early Clawdbot codebase.
  • The skill marketplace is poisoned. 341–386+ malicious skills in ClawHub (roughly 12% of the registry at one point). Most masquerade as crypto trading tools (“Solana wallet tracker”, ByBit automation, etc.). They use social engineering to trick you into running commands that drop infostealers (Atomic Stealer on macOS, keyloggers on Windows). Real victims have lost crypto wallets, exchange API keys, SSH credentials, browser passwords. One uploader racked up 7,000+ downloads before takedown.
  • Infostealers now targeting OpenClaw configs directly. Hudson Rock documented the first live cases where malware exfiltrates openclaw.json, gateway auth tokens, private keys, full chat history, and workspace paths. That token lets attackers connect remotely or impersonate you. It’s stealing the “digital soul” of your agent.

People have had their entire setups wrecked – credentials drained, crypto gone, systems bricked, persistent backdoors installed via the agent’s own heartbeat. I’ve seen reports of prompt injection via websites turning the claw into a silent C2 implant.
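Given how many of those findings come down to bad defaults, the first thing I do on any box is a dumb pre-flight audit of the config before the agent ever starts. Here’s a minimal sketch in Python – note that the config keys (`gateway.host`, `auth_token`, and so on) are hypothetical stand-ins I made up, not OpenClaw’s actual schema:

```python
import json

# Hypothetical schema -- the real openclaw.json may look nothing like this;
# the point is the habit of auditing before first launch, not these keys.
RISKY_BIND = "0.0.0.0"

def audit_config(cfg: dict) -> list:
    """Return a list of red flags found in an agent config dict."""
    findings = []
    gateway = cfg.get("gateway", {})
    if gateway.get("host") == RISKY_BIND:
        findings.append("gateway bound to 0.0.0.0 (listening on every interface)")
    if not gateway.get("auth_token"):
        findings.append("no gateway auth token (anyone who can reach it owns it)")
    # Naive plaintext-credential sweep over top-level keys.
    for key, value in cfg.items():
        if isinstance(value, str) and value and ("key" in key.lower() or "secret" in key.lower()):
            findings.append("plaintext credential in config: " + key)
    return findings

if __name__ == "__main__":
    raw = '{"gateway": {"host": "0.0.0.0", "auth_token": ""}, "api_secret": "sk-live-example"}'
    for flag in audit_config(json.loads(raw)):
        print("WARNING:", flag)
```

Even a crude check like this would catch the open-bind and missing-auth defaults that the scans above keep finding in the wild.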

API costs are another beast (Claude Opus broke me fast; xAI’s Grok 4.1 is my current sweet spot), but security is the real show-stopper.

I run mine completely disconnected on a dedicated VPS, firewalled to hell, with strict skill approval and monitoring. Even then, I’m paranoid. That said, I am also running it in nearly the most insecure way I possibly can, just so I can “see what happens” – don’t worry, Skynet isn’t going to launch on my system. I have a kill switch, and the agent doesn’t have access to it. (It might read this now and try to manipulate me.)

If you’re not ready to treat this like a live explosive – isolated, monitored, with rollback plans – don’t run it. Wait for the foundation to harden things. The community is electric, but the attack surface is massive.

It could lock me out at any time, it could turn on me, it could do things I told it not to do – and I’m not really stopping it from doing those things… Is that dangerous? I hope not, the way I’m doing it. I’ve also taken every precaution I think I can possibly take.

My Take as a Non-Dev Who’s Living This Future

OpenClaw lets me describe what I want and watch it happen. Peter’s vision of high-level direction over traditional coding? I’m already there. And now that it’s becoming a multi-agent, multi-step process, I cannot wait.

It’s powerful. It’s moving insanely fast (this post is probably outdated already). And it’s exactly why I’m encouraging my own claw to experiment and try new stuff.

But power without control is chaos.


Bottom line: This is the future. But the future isn’t safe yet.

If you’re spinning one up anyway – respect the claws. Sandbox hard. Monitor everything. And share your hardened setup tips below. I’m reading every comment.

Cisco AI Summit – More players, More Innovation.

If 2025 was the year of AI experimentation, 2026 is officially the year of AI infrastructure. Yesterday, I had the chance to tune into Cisco’s second annual AI Summit, and let me tell you – the energy was different this time. The conversation moved past the “what if” and straight into the “how fast.”

With over 100 industry heavyweights in the room and a staggering 16 million people watching the livestream, Cisco’s Chair and CEO Chuck Robbins and CPO Jeetu Patel didn’t just host a conference; they hosted a state-of-the-union for the trillion-dollar AI economy. Here are some of the things I found most interesting.

Intel’s “Shot Across the Bow”: The GPU Announcement

The biggest shockwave of the day came from Intel CEO Lip-Bu Tan. In a move that clearly signals Intel is tired of watching Nvidia have all the fun, Tan officially announced that Intel is entering the GPU market.

I am personally bullish on this. Early in the AI era, I worked with some of Intel’s FPGAs and their OpenVINO platform, along with many other accelerators. In my experience at least, they build very solid and, more importantly, very energy-efficient accelerators.

This isn’t just a “me too” play. Intel has been quietly poaching top-tier talent, including a new Chief GPU Architect (rumors are that they got someone good too) to lead the charge. Tan was blunt about the current state of the market, noting that there is “no relief” on the memory shortage until at least 2028. By moving into GPUs, Intel is looking to solve the “storage bottleneck” that currently plagues AI inference.

The Efficiency Edge: My personal contention here? This is where the power dynamic shifts—literally. While Nvidia continues to push the envelope on raw compute, their chips have become notoriously power-hungry monsters. Intel, conversely, has a track record of building accelerators that prioritize performance-per-watt. In an era where data center expansion is being throttled more by power grid constraints than by floor space, Intel’s “lean and mean” approach could be their ultimate differentiator. If they can deliver high-end GPU performance without requiring a dedicated nuclear plant to run them, they won’t just be competing with Nvidia; they’ll be solving the very sustainability crisis the AI boom has created.

For the enterprise, this is huge. Competition in the silicon space means more than just lower prices; it means specialized hardware that might finally catch up to the insane demands of agentic AI – at lower energy cost.

70% of Cisco’s Code is AI-Generated (But Humans Still Hold the Pen)

One of the most eye-opening stats of the day came from Jeetu Patel: 70% of the code for Cisco’s AI products is now generated by AI.

Read that again. The very tools we are using to secure the world’s networks are being built by the technology they are designed to manage. However, Cisco isn’t just letting the bots run wild. Jeetu was very clear that while AI is the “teammate,” human reviewers are the “coaches.”

The philosophy here is “AI as a teammate, not just a tool.” It’s a subtle but vital distinction. By using AI to handle the heavy lifting of code generation, Cisco’s engineers are freed up to focus on the “Trust” layer—which was a recurring theme throughout the summit. As analyst Liz Miller noted on X, it’s one thing to use AI in security, but it’s an entirely different (and more important) game to secure the AI itself.

The Sam Altman Paradox: Efficiency Equals… More Consumption?

Finally, we have to talk about Sam Altman. The OpenAI CEO sat down for a fireside chat that touched on everything from drug discovery to supply chain “mega-disruptions.” But the comment that stuck with me was his take on the economics of AI growth.

There’s a concept in economics called the Jevons Paradox: as a resource becomes more efficient to use, we don’t use less of it; we use way more. Altman essentially confirmed this is the future of AI. No matter how efficient we make these models—no matter how much we drive down the cost of a token or the power consumption of a data center—humanity’s appetite for intelligence is bottomless.
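To make the paradox concrete, here’s a toy back-of-the-napkin calculation – the numbers are invented for illustration, not a forecast:

```python
# Toy numbers only -- illustrating the shape of the Jevons Paradox, not a forecast.
cost_per_m_tokens = 10.00           # today's price per million tokens
monthly_tokens = 1_000_000          # today's monthly usage
spend_today = cost_per_m_tokens * monthly_tokens / 1_000_000

# Suppose efficiency drives the price down 10x...
new_cost = cost_per_m_tokens / 10
# ...but cheap intelligence unlocks demand that grows 30x.
new_tokens = monthly_tokens * 30
spend_later = new_cost * new_tokens / 1_000_000

print(f"spend today: ${spend_today:.2f}/mo, spend later: ${spend_later:.2f}/mo")
# Per-unit cost fell 10x, yet total spend tripled.
```

Swap in whatever growth multiplier you believe; as long as demand grows faster than efficiency, total consumption rises.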

“People just consume more,” Altman noted. As AI becomes cheaper and faster, we won’t just do our current jobs better; we will start solving problems we haven’t even thought to ask about yet. It’s a bullish outlook, but one that puts an even greater spotlight on the infrastructure constraints Chuck Robbins and Lip-Bu Tan spent the morning discussing.

Justin’s Take

Here’s what I’m chewing on after the summit: We are entering the “Great Optimization” phase of AI. For the last two years, we’ve been throwing money and electricity at the wall to see what sticks, with questionable profit models and circular economies (insert comment about AI Bubble here). But between Intel’s focus on energy-efficient accelerators and Cisco’s move toward AI-assisted (but human-governed) development, the industry is finally growing up.

But “growing up” also means things are getting weird. If you want to see the “art” of how crazy AI can get, look no further than Moltbook—the AI-only social network that’s been the talk of the summit – which also just had a major security breach. We’re seeing AI agents gossiping about their human owners and even inventing parody religions like “Crustafarianism.” While Altman dismisses it as a “fad,” the underlying tech of autonomous agents is very real, and it’s moving faster than our ability to regulate it.

This brings me back to a drum I’ve been beating for a long time: Responsible use, education, and ethics are not optional. As I wrote back in November, Deepfakes kill, and we need to make them criminal. I’m still waiting for the world to listen, but the summit only reinforced my fear that we are building the engine before we’ve tested the brakes. The real winner won’t be the company with the biggest model; it will be the one that can deliver intelligence and AI security at a sustainable cost—both financially and ethically. Altman is right—the demand is infinite. The question is, can our power grids and our trust frameworks keep up? Or will the agents just take over…

Deepfakes Kill… Make them criminal!

It’s time to talk about deepfake technology. You know, those AI tricks that fabricate videos or audio making it seem like someone said or did stuff they never did? It’s blurring reality so badly, you have to ask: “What if that viral clip of a celeb or politician tomorrow is total BS?” Or scarier, what if it’s targeting you or your family?

This tech is evolving at breakneck speed, and it’s shockingly easy to misuse. What used to need pro-level gear and $100K in GPUs now happens in minutes with free apps and AI models. Deepfake content jumped from about 500,000 files in 2023 to a projected 8 million in 2025. The number of deepfakes detected globally across all industries increased 10x in 2023 (eftsure.com). Open-source tools and cheap computing mean anyone—hackers, trolls, or kids—can harness it. This is a recipe for problems.

Misuse is rampant, starting with fraud that’s hammering Canadians. We’ve lost $103 million to deepfake scams in 2025 (Mitchell Dubros), with North American cases up 1,740%. And 95% of Canadian companies say deepfakes have increased their fraud risk. Deepfakes now account for 6.5% of all fraud attacks, marking a 2,137% increase since 2022. One example? A firm lost $25 million to a deepfake CEO scam. Would you catch a deepfaked loved one asking for cash?

But the real horror is – none of this is directly a criminal offence.

Then there’s harassment, especially against young people, leading to tragedy. Deepfakes fuel bullying, extortion, and non-consensual porn—mostly targeting women and girls. In Canada, a pharmacist was linked to the world’s most notorious deepfake porn site, and Alberta cops warn of kids sharing AI fakes. Receipts: A Faridabad student died by suicide over deepfake blackmail. Twitch streamer QTCinderella (Blaire) faced humiliation in 2023, and Q1 2025 saw 179 incidents, up 19% from all of 2024 (keepnetlabs.com). Sextortion using deepfakes has driven suicides amid blackmail and isolation. If you’re a parent, think: How protected are our kids from this digital nightmare? There is no protection against this conduct under law – at least not directly.

C-63 in Canada targets PLATFORM OPERATORS – it stops short of the important step of making non-consensual deepfakes of others a crime. If someone deepfakes your kid, and plasters it on TikTok – it’s too late, the harm is done. We need to prevent this in the first place with severe consequences under law.

Deepfakes erode trust—in elections (like those fake videos of Canadian politicians), relationships, everything. Youth, always online, suffer most from amplified cyberbullying that can end in depression or worse. Europol warns deepfakes fuel harassment, extortion, and fraud, and detection lags behind the tech.

Call To Action

My plea: Governments must legislate now to shield the public, especially young folks. Canada is lagging behind the US on this. We’ve got piecemeal stuff like the Elections Act on campaign fakes and Bill C-63’s start on harms, but no full law tackles everyday non-consensual deepfakes. We need it classified as a serious criminal offense under the Criminal Code of Canada—not a slap on the wrist, but with hefty fines and jail time for creating deepfakes of anyone without consent.

I am calling on our Canadian Government to enact changes to the Criminal Code of Canada to specifically make non-consensual deepfakes a serious criminal offence, with stiff fines and jail time.

Act now: Contact your MP and demand this change. Share this, educate your circle, teach kids to spot fakes. Platforms must remove deepfakes fast, schools need education programs. Why risk more lives? Let’s make deepfakes a crime that bites back—before it’s too late. What are you waiting for?


My Proposed Legislation

Non-consensual deepfakes

162.3 (1) In this section,

deepfake means a visual or audio recording that is created or altered using artificial intelligence or other technology in a manner that would cause a reasonable person to believe it depicts the person engaging in conduct or speech that did not occur, and includes any synthetic representation that is realistic in appearance or sound;

distribute includes to transmit, sell, advertise, make available or possess for the purpose of distribution.

(2) Everyone commits an offence who, without the express consent of the person depicted, knowingly creates, distributes or possesses a deepfake of that person if

(a) the deepfake depicts the person in an intimate context, including nudity, exposure of genitals, or explicit sexual activity; or

(b) the creation or distribution is intended to cause harm, including emotional distress, reputational damage, or incitement to violence against the person.

(3) For the purposes of subsection (2), consent must be informed, voluntary and specific to the creation and use of the deepfake, and may be withdrawn at any time; however, no consent is obtained where the agreement is obtained through abuse of trust or power, or where the person is incapable of consenting.

(4) Subsection (2) does not apply to

(a) deepfakes created with the informed consent of the depicted person, where that consent has not been revoked.

Punishment

(5) Everyone who commits an offence under subsection (2) is guilty of

(a) an indictable offence and liable to imprisonment for a term of not more than five years; or

(b) an offence punishable on summary conviction and liable to a minimum fine of $1000 but not more than $25,000 or to imprisonment for a term of not more than two years less a day, or to both.

(6) In determining the sentence, the court shall consider as aggravating factors

(a) whether the offence involved a minor or vulnerable person;

(b) the extent of harm caused to the victim; and

(c) whether the offender profited from the offence.

The Edge, Reimagined: Why Cisco Unified Edge is the Mind-Shift We Needed

Let’s face it: the edge has long been a “necessary evil.” Nobody wants fat servers, complex infrastructure, and constant management headaches in remote locations, but it’s been unavoidable. The cloud, while powerful, can’t solve everything, especially when it comes to the low-latency, GPU-intensive demands of modern AI, or the pervasive issue of vendor lock-in. My contention? This old way of thinking about the edge is over.

If the original Raspberry Pi was that plucky, credit-card-sized marvel that let hobbyists and tinkerers dream up all sorts of clever, small-scale computing projects, then the Cisco Unified Edge is like its ridiculously buff, impeccably dressed, and highly intelligent older sibling who just graduated from a top-tier business school with a PhD in AI.

Cisco’s new Unified Edge isn’t just another product; it’s a total mind change. We want less at the edge – less complexity, less hardware – but more power where it counts. AI needs GPUs and low latency, and you can’t always get that efficiently from the cloud.

This platform addresses that head-on. It’s an integrated, modular system combining compute, networking, storage, and security, purpose-built for distributed AI workloads. It brings the necessary power, including GPU support, right to the source of data generation. Think real-time AI inferencing on a factory floor or in a retail store, without the latency penalty of sending data halfway across the globe.

Crucially, it’s not the “same old architecture.” Cisco Unified Edge simplifies operations with features like zero-touch deployment and centralized management via Cisco Intersight, transforming the edge from a burden to a strategic asset. Security is baked in, addressing the expanded attack surface of distributed environments.

This isn’t just about putting more powerful chips at the edge; it’s about a fundamental architectural shift at the edge, driven by the integrated power of a System on a Chip (SoC). Instead of separate, bulky components for compute, networking, and security, Cisco Unified Edge leverages Intel Xeon 6 SoC processors. This level of integration is the game-changer, allowing for a far more compact, efficient, and unified platform that delivers the necessary AI-ready performance, including GPU support, without the traditional sprawl and complexity. It’s how Cisco achieves “less at the edge” in terms of physical footprint and management overhead, while simultaneously providing “more power” right where real-time AI inferencing and agentic workloads need it most, truly transforming the edge from a patchwork of devices into a cohesive, intelligent brain.

As Cisco’s Jeetu Patel noted, “Today’s infrastructure can’t meet the demands of powering AI at scale.” Cisco Unified Edge changes that. It provides the raw compute and GPU muscle for demanding AI at the edge, but in a lean, intelligent, and manageable way. It transforms the edge from a reluctant necessity into a strategic advantage, allowing sophisticated capabilities to flourish where they’re needed most.

This is a different way of thinking at the edge, and I like it. A lot. It’s going to change the game.

Agentic AI vs Deterministic Code

No question – building apps with LLMs in agentic setups is a game-changer, but it can also be a pain in the butt compared to good old deterministic code. Craft a clever agent that summarizes docs or fixes bugs, then bam, the model updates and suddenly it’s spouting nonsense, ignoring prompts or even basic words like “yes”. Non-deterministic chaos at its finest.

Deterministic code? It’s the reliable workhorse: feed it input X, get output Y every damn time. Fixed rules, easy debugging, perfect for stuff like financial calcs or automation scripts where surprises mean lawsuits. As Kubiya nails it, “same input, same output”—no drama.

“A computer will do what you tell it to do, but that may be much different from what you had in mind.” – Joseph Weizenbaum. And that was before you were using a model you probably didn’t build, running weights that aren’t your own.

Agentic AI with LLMs? That’s the wildcard party crasher. These systems think on their feet: reason, plan, grab tools, adapt to goals like tweaking marketing on the fly or monitoring health data. IBM calls it “agency” for a reason—it’s autonomous, pulling from real-time vibes beyond rigid training. But here’s the kick: it’s probabilistic. Outputs wiggle based on sampling, context, or those sneaky model tweaks from OpenAI or whoever. LinkedIn rants about it: “Same prompt, different outputs.” Your app morphs overnight, and fixing it? Good luck tracing probabilistic ghosts.
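You can feel the “same prompt, different outputs” problem even in a toy simulation. The snippet below is not a real LLM call – it just mimics temperature-style sampling over a hand-picked candidate list I invented – but it shows why testing agentic code hurts:

```python
import random

def deterministic_summary(text: str) -> str:
    # Same input, same output -- every single time.
    return text.split(".")[0] + "."

def llm_like_summary(text: str, temperature: float = 1.0) -> str:
    # Toy stand-in for token sampling: pick among canned phrasings with
    # probabilities flattened or sharpened by temperature.
    candidates = [
        "Docs summarized.",
        "Here is the gist of the docs.",
        "Summary complete (I think).",
    ]
    base = [0.6, 0.3, 0.1]
    weights = [w ** (1.0 / max(temperature, 1e-6)) for w in base]
    return random.choices(candidates, weights=weights, k=1)[0]

prompt = "Summarize the docs. They cover setup and usage."
assert deterministic_summary(prompt) == deterministic_summary(prompt)

outputs = {llm_like_summary(prompt) for _ in range(100)}
print("unique deterministic outputs: 1; unique sampled outputs:", len(outputs))
```

The deterministic function passes an equality test forever; the sampled one fails it on the very next run. Now imagine the candidate list silently changing under you with every model update.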

This shift sucks for dev life. Traditional code: bug? Trace, patch, done. Agentic? Hallucinations, inconsistencies, testing nightmares. Martin Fowler compares LLMs to flaky juniors who lie about tests passing. It’s a paradigm flip—from control to “let’s see what happens.” Salesforce says pick deterministic for regulated certainty, agentic for creative flex. But non-determinism can mean security holes, data risks, and endless babysitting. It also adds a genuinely unpredictable attack vector: the model may need access to data to do its job – data I might not want exposed.

| Aspect | Deterministic Code | Agentic AI with LLMs |
| --- | --- | --- |
| Predictability | Rock-solid: Always consistent | Sketchy: Varies like the weather |
| Adaptability | Stuck to your rules | Boss: Handles dynamic crap |
| Testing/Fixing | Simple: Logic checks and patches | Hell: Variability demands tricks |
| Best For | Precision gigs (finance, compliance) | Goal-chasing (support, optimization) |
| Pain Level | Low: Set it and forget it | High: Constant surprises |

Bottom line: Hybrids are the way—LLMs for the smarts, deterministic for the reins. Deepset pushes that spectrum view: not binary, blend ’em. It sparks innovation, sure, but don’t romanticize—the annoyance is real. Code with eyes open, or get burned. Put humans in the loop to keep things in check.

What about agentic AI ops for networks and technology? Didn’t we just say “precision gigs” are better with deterministic code? That won’t stop awesome developers like John Capobianco https://x.com/John_Capobianco from pushing those limits – he has been doing that for years at this point. Think about handing AI agents the keys to critical stuff like network monitoring, anomaly detection, or auto-fixing outages. Sounds efficient, right? But it’s a powder keg from a security standpoint. These autonomous bad boys can hallucinate threats, expose data, or open doors for hackers through memory poisoning, tool misuse, or privilege escalation. Cisco nails the danger: “The shift from deterministic code to probabilistic chaos is at the heart of securing AI agents that think for themselves,” highlighting a “lethal trifecta” of data leaks, wild hallucinations, and infrastructure weak spots that could cascade into total meltdowns.

Tools are starting to emerge for AI security, though, particularly from Cisco and open-source communities, advancing defenses against threats like prompt injection and supply chain attacks – but there is work to be done. Cisco’s open-source Foundation-sec-8B model, a specialized LLM for cybersecurity tasks such as threat intelligence and incident response, will help developers build customizable tools with on-prem deployments to reduce hallucinations and enhance SOC efficiency. Their Hugging Face partnership bolsters supply chain security with an upgraded ClamAV scanner that detects malware in AI files like .pt and .pkl. Broader open-source efforts include Beelzebub for malicious agent analysis and Promptfoo for LLM red-teaming. Yet attackers’ evolving adversarial tactics – using LLMs to attack LLMs – are very much a thing… The system is attacking the system being protected by the system. Yeah, that.

Cisco-Hugging Face ClamAV Integration: https://blogs.cisco.com/security/ciscos-foundation-ai-advances-ai-supply-chain-security-with-hugging-face
Cisco Foundation-sec-8B: https://blogs.cisco.com/security/foundation-sec-cisco-foundation-ai-first-open-source-security-model

So much more to learn, but with all of that said… humans in the loop are going to be a thing for a while – at least until Skynet…

Cisco Live 2025: Community, Balance, and Big Dreams for AI

That was huge.

Still buzzing from Cisco Live 2025 in San Diego. This wasn’t just a tech conference—it was a reunion of brilliant minds and big hearts. The Cisco Community Champions dropped wisdom that flipped my perspective, like a breakout session chat that rewired how I think about collaboration. Tech Field Day delegates brought the heat over late-night tacos, debating tech’s future with ideas that stuck with me. And my Cisco colleagues and friends? They’re family—coffee in the DevNet Zone, laughs at the Customer Appreciation Event (The Killers absolutely slayed!), and moments that recharge my soul. But as I look ahead, I’m thinking about balance, mentorship, and how we’ll make a real difference with AI. Here’s the vibe.

The Community That Fuels Us

Cisco Live is all about connection. Those conversations with Champions, delegates, and friends aren’t just chats—they’re sparks that ignite new ideas. A mentor’s advice over drinks is already shaping my next move, and the energy at the CAE was pure magic. This community pushes us to dream bigger and work smarter, together. Those that challenge me, thank you for doing that. The support from those who love our work challenges us to do more. Without community – we really have nothing.

What’s Next for Me: Building, Mentoring, and Balance

This year, I’m all about building. I’m diving into relationship building, leveling up my skills in innovative problem-solving, and finding new ways to share with you all. I’m hyped to get back to blogging, maybe even start vlogging, but I’m keeping it real—it’s a lot. Tech can be a grind, and we don’t always talk about the psychological toll it takes. The pressure to stay ahead, the endless hustle—it weighs on us. I’m prioritizing balance, making time for myself, and I invite you to do the same. Check in on your friends and colleagues, be that supportive ear. We’re stronger when we lift each other up.

I’m also building a structured mentorship plan to guide others, inspired by my own mentors. Whether it’s sharing tech insights or navigating career challenges, I want to pay it forward and help others shine. Who knew the greatest challenge my mentors would leave me with would be to pay it forward? I have started to realize that climbing this career mountain hits a plateau, and unless you can lead a team to the top, you are stuck.

Making a Difference with AI and Country Digital Acceleration

This year, I’m wrestling with a big question: How will I make a meaningful difference with AI? It’s consuming my thoughts. AI has so much marketing hype – I want to get past that. AI has the power to transform lives—think smarter cities, safer communities, inclusive access to tech – or just making super complex things easier. At Cisco Innovation Labs, we’re celebrating 10 years and the anniversary of Country Digital Acceleration (CDA). I’m so grateful for CDA’s support, backing projects like Digital Canopy that bring connectivity and hope to underserved areas. Their belief in our ideas fuels us, and I’m stoked to deepen our work together, dreaming up solutions that change the world. This is a great partnership, and it really gives us the ability to “Design with Empathy, Innovate with Purpose.”

The Next Big Thing for Our Labs

With a decade in the rearview, it’s time to go big. What’s the next big thing for Cisco Innovation Labs? I’m obsessed with figuring this out. Maybe it’s AI-driven public safety tools, or… well… so many things I can’t talk about yet… or sustainable tech that powers a greener future. Whatever it is, it’ll be bold, human-centered, and built with this incredible community. I’m ready to dream, experiment, and make waves. I know one thing: technology comes second; people, community, and EMPATHY come first.

Keep the Vibe Going

Cisco Live 2025 was a love letter to community, a reminder to stay connected and take care of ourselves. As I chase big dreams with AI and our Labs, I’m carrying this energy forward. So, take a moment for you, check in on your people, and let’s dream big together. What’s your Cisco Live highlight? Hit me up on Twitter or drop a comment—let’s keep it rolling!