If 2025 was the year of AI experimentation, 2026 is officially the year of AI infrastructure. Yesterday, I had the chance to tune into Cisco’s second annual AI Summit, and let me tell you: the energy was different this time. The conversation has moved past the “what if” and straight into the “how fast.”
With over 100 industry heavyweights in the room and a staggering 16 million people watching the livestream, Cisco’s Chair and CEO Chuck Robbins and CPO Jeetu Patel didn’t just host a conference; they hosted a state-of-the-union for the trillion-dollar AI economy. Here are some of the things I found most interesting.
Intel’s “Shot Across the Bow”: The GPU Announcement
The biggest shockwave of the day came from Intel CEO Lip-Bu Tan. In a move that clearly signals Intel is tired of watching Nvidia have all the fun, Tan officially announced that Intel is entering the GPU market.
I am personally bullish on this. Early in the AI era, I worked with some of Intel’s FPGAs and their OpenVINO toolkit, along with many other accelerators. At least in my experience, they build some very solid and, more importantly, very energy-efficient accelerators.
This isn’t just a “me too” play. Intel has been quietly poaching top-tier talent, including a new Chief GPU Architect (and rumor has it they landed someone good) to lead the charge. Tan was blunt about the current state of the market, noting that there is “no relief” in sight for the memory shortage until at least 2028. By moving into GPUs, Intel is looking to solve the “storage bottleneck” that currently plagues AI inference.
The Efficiency Edge: My personal contention here? This is where the power dynamic shifts—literally. While Nvidia continues to push the envelope on raw compute, their chips have become notoriously power-hungry monsters. Intel, conversely, has a track record of building accelerators that prioritize performance-per-watt. In an era where data center expansion is being throttled more by power grid constraints than by floor space, Intel’s “lean and mean” approach could be their ultimate differentiator. If they can deliver high-end GPU performance without requiring a dedicated nuclear plant to run them, they won’t just be competing with Nvidia; they’ll be solving the very sustainability crisis the AI boom has created.
For the enterprise, this is huge. Competition in the silicon space means more than just lower prices; it means specialized hardware that might finally catch up to the insane demands of agentic AI, at a lower energy cost.
70% of Cisco’s Code is AI-Generated (But Humans Still Hold the Pen)
One of the most eye-opening stats of the day came from Jeetu Patel: 70% of the code for Cisco’s AI products is now generated by AI.
Read that again. The very tools we are using to secure the world’s networks are being built by the technology they are designed to manage. However, Cisco isn’t just letting the bots run wild. Jeetu was very clear that while AI is the “teammate,” human reviewers are the “coaches.”
The philosophy here is “AI as a teammate, not just a tool.” It’s a subtle but vital distinction. By using AI to handle the heavy lifting of code generation, Cisco’s engineers are freed up to focus on the “Trust” layer—which was a recurring theme throughout the summit. As analyst Liz Miller noted on X, it’s one thing to use AI in security, but it’s an entirely different (and more important) game to secure the AI itself.
The Sam Altman Paradox: Efficiency Equals… More Consumption?
Finally, we have to talk about Sam Altman. The OpenAI CEO sat down for a fireside chat that touched on everything from drug discovery to supply chain “mega-disruptions.” But the comment that stuck with me was his take on the economics of AI growth.
There’s a concept in economics called the Jevons Paradox: as a resource becomes more efficient to use, we don’t use less of it; we use way more. Altman essentially confirmed this is the future of AI. No matter how efficient we make these models—no matter how much we drive down the cost of a token or the power consumption of a data center—humanity’s appetite for intelligence is bottomless.
“People just consume more,” Altman noted. As AI becomes cheaper and faster, we won’t just do our current jobs better; we will start solving problems we haven’t even thought to ask about yet. It’s a bullish outlook, but one that puts an even greater spotlight on the infrastructure constraints Chuck Robbins and Lip-Bu Tan spent the morning discussing.
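To put some toy numbers on that, here’s a quick back-of-the-envelope sketch in Python. The demand curve and elasticity value are purely my own illustrative assumptions (nothing Altman or Cisco cited): if demand for tokens is price-elastic, then every time efficiency halves the cost per token, the number of tokens consumed more than doubles, and total spend actually goes up.

```python
# Toy illustration of the Jevons Paradox (illustrative numbers, not summit figures).
# Assume a constant-elasticity demand curve: quantity = k * price^(-elasticity).
# With elasticity > 1, each drop in price-per-token increases total spend.

def tokens_demanded(price_per_token: float, elasticity: float = 1.5, k: float = 1e12) -> float:
    return k * price_per_token ** (-elasticity)

for price in (1.00, 0.50, 0.25):  # each step models a 2x efficiency gain
    qty = tokens_demanded(price)
    print(f"price/token=${price:.2f}  tokens={qty:,.0f}  total spend=${price * qty:,.0f}")
```

Flip that elasticity below 1 and the paradox disappears, which is really the question sitting underneath Altman’s optimism: is our appetite for intelligence elastic enough to swallow every efficiency gain? He clearly thinks so.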
Justin’s Take
Here’s what I’m chewing on after the summit: We are entering the “Great Optimization” phase of AI. For the last two years, we’ve been throwing money and electricity at the wall to see what sticks, with questionable profit models and circular economies (insert comment about AI Bubble here). But between Intel’s focus on energy-efficient accelerators and Cisco’s move toward AI-assisted (but human-governed) development, the industry is finally growing up.
But “growing up” also means things are getting weird. If you want to see the “art” of how crazy AI can get, look no further than Moltbook, the AI-only social network that’s been the talk of the summit (and that also just suffered a major security breach). We’re seeing AI agents gossiping about their human owners and even inventing parody religions like “Crustafarianism.” While Altman dismisses it as a “fad,” the underlying tech of autonomous agents is very real, and it’s moving faster than our ability to regulate it.
This brings me back to a drum I’ve been beating for a long time: Responsible use, education, and ethics are not optional. As I wrote back in November, Deepfakes kill, and we need to make them criminal. I’m still waiting for the world to listen, but the summit only reinforced my fear that we are building the engine before we’ve tested the brakes. The real winner won’t be the company with the biggest model; it will be the one that can deliver intelligence and AI security at a sustainable cost—both financially and ethically. Altman is right—the demand is infinite. The question is, can our power grids and our trust frameworks keep up? Or will the agents just take over…