Cisco AI Summit – More Players, More Innovation

If 2025 was the year of AI experimentation, 2026 is officially the year of AI infrastructure. Yesterday, I had the chance to tune into Cisco’s second annual AI Summit, and let me tell you—the energy was different this time. The conversation moved past the “what if” and went straight into the “how fast.”

With over 100 industry heavyweights in the room and a staggering 16 million people watching the livestream, Cisco’s Chair and CEO Chuck Robbins and CPO Jeetu Patel didn’t just host a conference; they hosted a state-of-the-union for the trillion-dollar AI economy. Here are some of the things I found most interesting.

Intel’s “Shot Across the Bow”: The GPU Announcement

The biggest shockwave of the day came from Intel CEO Lip-Bu Tan. In a move that clearly signals Intel is tired of watching Nvidia have all the fun, Tan officially announced that Intel is entering the GPU market.

I am personally bullish on this. Early in the AI era, I worked with some of Intel’s FPGAs and their OpenVINO platform, along with many other accelerators. At least in my experience, they build some very solid, but more importantly very energy-efficient, accelerators.

This isn’t just a “me too” play. Intel has been quietly poaching top-tier talent, including a new Chief GPU Architect (rumor has it they landed someone good, too), to lead the charge. Tan was blunt about the current state of the market, noting that there is “no relief” on the memory shortage until at least 2028. By moving into GPUs, Intel is looking to solve the “storage bottleneck” that currently plagues AI inference.

The Efficiency Edge: My personal contention here? This is where the power dynamic shifts—literally. While Nvidia continues to push the envelope on raw compute, their chips have become notoriously power-hungry monsters. Intel, conversely, has a track record of building accelerators that prioritize performance-per-watt. In an era where data center expansion is being throttled more by power grid constraints than by floor space, Intel’s “lean and mean” approach could be their ultimate differentiator. If they can deliver high-end GPU performance without requiring a dedicated nuclear plant to run them, they won’t just be competing with Nvidia; they’ll be solving the very sustainability crisis the AI boom has created.

For the enterprise, this is huge. Competition in the silicon space means more than just lower prices; it means specialized hardware that might finally catch up to the insane demands of agentic AI – at lower energy cost.

70% of Cisco’s Code is AI-Generated (But Humans Still Hold the Pen)

One of the most eye-opening stats of the day came from Jeetu Patel: 70% of the code for Cisco’s AI products is now generated by AI.

Read that again. The very tools we are using to secure the world’s networks are being built by the technology they are designed to manage. However, Cisco isn’t just letting the bots run wild. Jeetu was very clear that while AI is the “teammate,” human reviewers are the “coaches.”

The philosophy here is “AI as a teammate, not just a tool.” It’s a subtle but vital distinction. By using AI to handle the heavy lifting of code generation, Cisco’s engineers are freed up to focus on the “Trust” layer—which was a recurring theme throughout the summit. As analyst Liz Miller noted on X, it’s one thing to use AI in security, but it’s an entirely different (and more important) game to secure the AI itself.

The Sam Altman Paradox: Efficiency Equals… More Consumption?

Finally, we have to talk about Sam Altman. The OpenAI CEO sat down for a fireside chat that touched on everything from drug discovery to supply chain “mega-disruptions.” But the comment that stuck with me was his take on the economics of AI growth.

There’s a concept in economics called the Jevons Paradox: as a resource becomes more efficient to use, we don’t use less of it; we use way more. Altman essentially confirmed this is the future of AI. No matter how efficient we make these models—no matter how much we drive down the cost of a token or the power consumption of a data center—humanity’s appetite for intelligence is bottomless.
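To put toy numbers on it (mine, not Altman’s): if the cost per token falls 10x but cheap intelligence makes demand grow 30x, total spend (and total power draw) still triples. A quick sketch:

```python
# Toy Jevons Paradox arithmetic -- the numbers are made up for illustration.
cost_old, cost_new = 10, 1    # cents per 1K tokens: a 10x efficiency gain
usage_old = 1_000             # 1K-token units consumed before
usage_new = usage_old * 30    # cheaper intelligence -> 30x the demand

print(cost_old * usage_old)   # 10000 cents spent before
print(cost_new * usage_new)   # 30000 cents after: 3x more, not less
```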

“People just consume more,” Altman noted. As AI becomes cheaper and faster, we won’t just do our current jobs better; we will start solving problems we haven’t even thought to ask about yet. It’s a bullish outlook, but one that puts an even greater spotlight on the infrastructure constraints Chuck Robbins and Lip-Bu Tan spent the morning discussing.

Justin’s Take

Here’s what I’m chewing on after the summit: We are entering the “Great Optimization” phase of AI. For the last two years, we’ve been throwing money and electricity at the wall to see what sticks, with questionable profit models and circular economies (insert comment about AI Bubble here). But between Intel’s focus on energy-efficient accelerators and Cisco’s move toward AI-assisted (but human-governed) development, the industry is finally growing up.

But “growing up” also means things are getting weird. If you want to see the “art” of how crazy AI can get, look no further than Moltbook—the AI-only social network that’s been the talk of the summit, and which also just suffered a major security breach. We’re seeing AI agents gossiping about their human owners and even inventing parody religions like “Crustafarianism.” While Altman dismisses it as a “fad,” the underlying tech of autonomous agents is very real, and it’s moving faster than our ability to regulate it.

This brings me back to a drum I’ve been beating for a long time: Responsible use, education, and ethics are not optional. As I wrote back in November, Deepfakes kill, and we need to make them criminal. I’m still waiting for the world to listen, but the summit only reinforced my fear that we are building the engine before we’ve tested the brakes. The real winner won’t be the company with the biggest model; it will be the one that can deliver intelligence and AI security at a sustainable cost—both financially and ethically. Altman is right—the demand is infinite. The question is, can our power grids and our trust frameworks keep up? Or will the agents just take over…

Agentic AI vs Deterministic Code

No question – building apps with LLMs in agentic setups is a game-changer, but it can also be a pain in the butt compared to good old deterministic code. Craft a clever agent that summarizes docs or fixes bugs, then bam, the model updates, and suddenly it’s spouting nonsense, ignoring prompts, or even ignoring basic words like “yes”. Non-deterministic chaos at its finest.

Deterministic code? It’s the reliable workhorse: feed it input X, get output Y every damn time. Fixed rules, easy debugging, perfect for stuff like financial calcs or automation scripts where surprises mean lawsuits. As Kubiya nails it, “same input, same output”—no drama.
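To make the contrast concrete, here’s a trivial sketch of the deterministic side (the function and numbers are made up): the same input yields the same output, forever, and a plain assertion is all the testing it needs.

```python
# Deterministic: same input, same output, every time.
def late_fee(balance: float, days_late: int) -> float:
    """Fixed business rule: 1.5% of balance per 30 days late, capped at $50."""
    fee = balance * 0.015 * (days_late / 30)
    return round(min(fee, 50.0), 2)

assert late_fee(1000.0, 30) == 15.0
assert late_fee(1000.0, 30) == 15.0  # run it a million times, same answer
```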

“A computer will do what you tell it to do, but that may be much different from what you had in mind.” – Joseph Weizenbaum. Not when you’re using a model you probably didn’t build, running weights that aren’t your own.

Agentic AI with LLMs? That’s the wildcard party crasher. These systems think on their feet: reason, plan, grab tools, adapt to goals like tweaking marketing on the fly or monitoring health data. IBM calls it “agency” for a reason—it’s autonomous, pulling from real-time vibes beyond rigid training. But here’s the kick: it’s probabilistic. Outputs wiggle based on sampling, context, or those sneaky model tweaks from OpenAI or whoever. LinkedIn rants about it: “Same prompt, different outputs.” Your app morphs overnight, and fixing it? Good luck tracing probabilistic ghosts.
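And here’s the probabilistic side. A minimal sketch using the OpenAI Python SDK; the model name and prompt are placeholders, and even temperature=0 only reduces sampling variance; it does nothing about the model itself being swapped or retrained under you.

```python
# Probabilistic: the "same input" can come back different.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize(doc: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder; pinned today, retrained tomorrow
        temperature=0,         # tames sampling, not upstream model changes
        messages=[
            {"role": "system", "content": "Summarize in one sentence."},
            {"role": "user", "content": doc},
        ],
    )
    return resp.choices[0].message.content

# Two calls with the same input: often identical at temperature=0,
# but never contractually guaranteed to be.
a = summarize("Quarterly report: revenue up 12%, churn down 2%.")
b = summarize("Quarterly report: revenue up 12%, churn down 2%.")
print(a == b)
```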

This shift sucks for dev life. Traditional code: bug? Trace, patch, done. Agentic? Hallucinations, inconsistencies, testing nightmares. Martin Fowler compares LLMs to flaky juniors who lie about tests passing. It’s a paradigm flip—from control to “let’s see what happens.” Salesforce says pick deterministic for regulated certainty, agentic for creative flex. But non-determinism can mean security holes, data risks, and endless babysitting. It also adds an attack vector that is itself non-deterministic: the model may have access to data it needs to do its job – data I might not want exposed.
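One common coping pattern, sketched below (call_llm is a hypothetical stand-in for whatever client you use): never trust a raw LLM response. Validate it against a strict contract, retry a few times, and escalate to a human instead of shipping garbage.

```python
# Wrap every LLM call in a deterministic validation gate with retries.
import json

MAX_RETRIES = 3

def classify_ticket(text: str, call_llm) -> dict:
    prompt = (
        "Classify this support ticket. Reply with JSON only: "
        '{"severity": "low|medium|high", "category": "<one word>"}\n' + text
    )
    for _ in range(MAX_RETRIES):
        raw = call_llm(prompt)  # the non-deterministic part
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # hallucinated prose instead of JSON; try again
        # Deterministic gate: reject anything outside the contract.
        if parsed.get("severity") in {"low", "medium", "high"} \
                and isinstance(parsed.get("category"), str):
            return parsed
    raise RuntimeError("LLM output failed validation; route to a human")
```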

| Aspect | Deterministic Code | Agentic AI with LLMs |
|---|---|---|
| Predictability | Rock-solid: always consistent | Sketchy: varies like the weather |
| Adaptability | Stuck to your rules | Boss: handles dynamic crap |
| Testing/Fixing | Simple: logic checks and patches | Hell: variability demands tricks |
| Best For | Precision gigs (finance, compliance) | Goal-chasing (support, optimization) |
| Pain Level | Low: set it and forget it | High: constant surprises |

Bottom line: Hybrids are the way—LLMs for the smarts, deterministic for the reins. Deepset pushes that spectrum view: not binary, blend ’em. It sparks innovation, sure, but don’t romanticize—the annoyance is real. Code with eyes open, or get burned. Put humans in the loop to keep things in check.
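Here’s what that hybrid looks like in miniature (the action names and approval hook are illustrative, not a real framework): the LLM proposes a plan, deterministic code holds the reins, and a human signs off on the risky steps.

```python
# Hybrid pattern: the agent plans, deterministic code disposes.
ALLOWED_ACTIONS = {"restart_service", "clear_cache"}  # the reins
NEEDS_HUMAN = {"restart_service"}                     # human in the loop

def ask_operator(step: dict) -> bool:
    """Stub: in real life, page a human for approval."""
    return input(f"Approve {step}? [y/N] ").strip().lower() == "y"

def run_action(action: str, args: dict) -> None:
    """Stub: dispatch to real, deterministic, audited automation."""
    print(f"Executing {action} with {args}")

def execute_plan(agent_plan: list[dict]) -> None:
    for step in agent_plan:                  # the plan came from the LLM
        action = step.get("action")
        if action not in ALLOWED_ACTIONS:    # deterministic guardrail
            raise PermissionError(f"Agent proposed disallowed action: {action}")
        if action in NEEDS_HUMAN and not ask_operator(step):
            continue                         # operator vetoed this step
        run_action(action, step.get("args", {}))
```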

What about agentic AI ops for network and technology? Didn’t we just say “precision gigs” are better with deterministic code? That won’t stop awesome developers like John Capobianco https://x.com/John_Capobianco from pushing those limits, and he has been doing it for years at this point: handing AI agents the keys to critical stuff like network monitoring, anomaly detection, or auto-fixing outages. Sounds efficient, right? But it’s a powder keg from a security standpoint. These autonomous bad boys can hallucinate threats, expose data, or open doors for hackers through memory poisoning, tool misuse, or privilege escalation. Cisco nails the danger: “The shift from deterministic code to probabilistic chaos is at the heart of securing AI agents that think for themselves,” highlighting a “lethal trifecta” of data leaks, wild hallucinations, and infrastructure weak spots that could cascade into total meltdowns.
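One way to de-fang an ops agent, sketched below with made-up tool and function names: give it read-only telemetry tools, and turn any write intent into a change ticket instead of a live action.

```python
# Read-only tool gateway: the only door between the agent and the network.
READ_ONLY_TOOLS = {
    "get_interface_stats": lambda device: {"device": device, "errors": 0},
    "get_syslog_tail":     lambda device: ["%LINK-3-UPDOWN: eth0 down"],
}

def open_change_ticket(tool: str, kwargs: dict) -> str:
    """Stub: file the agent's write request for human/CI review."""
    print(f"TICKET: agent requested {tool}({kwargs}); pending review")
    return "CHG-0001"  # placeholder ticket ID

def agent_tool_call(tool: str, **kwargs):
    if tool in READ_ONLY_TOOLS:
        return READ_ONLY_TOOLS[tool](**kwargs)
    # Config pushes, ACL edits, reboots: proposals only, never live actions.
    return open_change_ticket(tool, kwargs)
```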

Tools are starting to emerge for AI security, though, particularly from Cisco and open-source communities, to advance defenses against threats like prompt injections and supply chain attacks, but there is work to be done. Things like Cisco’s open-source Foundation-sec-8B model, a specialized LLM for cybersecurity tasks such as threat intelligence and incident response, will help developers start to build customizable tools with on-prem deployments to reduce hallucinations and enhance SOC efficiency. Their Hugging Face partnership bolsters supply chain security with an upgraded ClamAV scanner that detects malware in AI files like .pt and .pkl. Broader open-source efforts include Beelzebub for malicious agent analysis and Promptfoo for LLM red-teaming, yet hackers with evolving adversarial tactics, using LLMs to attack LLMs, are very much a thing… The system is attacking the system being protected by the system… Yeah, that.

Cisco-Hugging Face ClamAV Integration: https://blogs.cisco.com/security/ciscos-foundation-ai-advances-ai-supply-chain-security-with-hugging-face
Cisco Foundation-sec-8B: https://blogs.cisco.com/security/foundation-sec-cisco-foundation-ai-first-open-source-security-model
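If you want to kick the tires on that model locally, here’s a minimal sketch using Hugging Face transformers. The repo ID and prompt are my assumptions based on the links above; check the model card before trusting any of it.

```python
# Hedged sketch: load Foundation-sec-8B for an on-prem security task.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fdtn-ai/Foundation-Sec-8B"  # assumed Hugging Face repo ID
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Map this finding to MITRE ATT&CK: PowerShell spawned from winword.exe."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```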

So much more to learn, but with all of that said… humans in the loop are going to be a thing for a while – at least until Skynet…