Cisco AI Summit – More Players, More Innovation

If 2025 was the year of AI experimentation, 2026 is officially the year of AI infrastructure. Yesterday, I had the chance to tune into Cisco’s second annual AI Summit, and let me tell you—the energy was different this time. The conversation moved past the “what if” and straight into the “how fast.”

With over 100 industry heavyweights in the room and a staggering 16 million people watching the livestream, Cisco’s Chair and CEO Chuck Robbins and CPO Jeetu Patel didn’t just host a conference; they hosted a state-of-the-union for the trillion-dollar AI economy. Here are some of the things I found most interesting.

Intel’s “Shot Across the Bow”: The GPU Announcement

The biggest shockwave of the day came from Intel CEO Lip-Bu Tan. In a move that clearly signals Intel is tired of watching Nvidia have all the fun, Tan officially announced that Intel is entering the GPU market.

I am personally bullish on this. Early in the AI era, I worked with some of Intel’s FPGAs and their OpenVINO platform, along with many other accelerators. In my experience, at least, they build some very solid, but more importantly very energy-efficient, accelerators.

This isn’t just a “me too” play. Intel has been quietly poaching top-tier talent, including a new Chief GPU Architect (rumors are that they got someone good too) to lead the charge. Tan was blunt about the current state of the market, noting that there is “no relief” on the memory shortage until at least 2028. By moving into GPUs, Intel is looking to solve the “storage bottleneck” that currently plagues AI inference.

The Efficiency Edge: My personal contention here? This is where the power dynamic shifts—literally. While Nvidia continues to push the envelope on raw compute, their chips have become notoriously power-hungry monsters. Intel, conversely, has a track record of building accelerators that prioritize performance-per-watt. In an era where data center expansion is being throttled more by power grid constraints than by floor space, Intel’s “lean and mean” approach could be their ultimate differentiator. If they can deliver high-end GPU performance without requiring a dedicated nuclear plant to run them, they won’t just be competing with Nvidia; they’ll be solving the very sustainability crisis the AI boom has created.
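To put some rough numbers on that argument, here is a back-of-the-napkin sketch in Python. Every figure in it is hypothetical (not a real vendor spec), but it shows why performance-per-watt, not per-chip horsepower, decides total throughput once the power grid is the constraint:

```python
# Hypothetical illustration: under a fixed power budget, the more efficient
# accelerator can deliver more total compute even with lower per-chip specs.
POWER_BUDGET_WATTS = 20_000_000  # 20 MW of rack power available (made-up figure)

accelerators = {
    "raw-compute chip": {"tflops": 1000, "watts": 1000},  # hypothetical specs
    "efficiency chip":  {"tflops": 700,  "watts": 500},   # hypothetical specs
}

for name, spec in accelerators.items():
    chips = POWER_BUDGET_WATTS // spec["watts"]      # how many chips fit in the budget
    total_pflops = chips * spec["tflops"] / 1000     # aggregate throughput
    print(f"{name}: {chips:,} chips -> {total_pflops:,.0f} PFLOPS in the same 20 MW")
```

Same power envelope, roughly 40% more aggregate compute from the “slower” chip in this made-up scenario; that is the whole argument in two lines of arithmetic.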

For the enterprise, this is huge. Competition in the silicon space means more than just lower prices; it means specialized hardware that might finally catch up to the insane demands of agentic AI – at lower energy cost.

70% of Cisco’s Code is AI-Generated (But Humans Still Hold the Pen)

One of the most eye-opening stats of the day came from Jeetu Patel: 70% of the code for Cisco’s AI products is now generated by AI.

Read that again. The very tools we are using to secure the world’s networks are being built by the technology they are designed to manage. However, Cisco isn’t just letting the bots run wild. Jeetu was very clear that while AI is the “teammate,” human reviewers are the “coaches.”

The philosophy here is “AI as a teammate, not just a tool.” It’s a subtle but vital distinction. By using AI to handle the heavy lifting of code generation, Cisco’s engineers are freed up to focus on the “Trust” layer—which was a recurring theme throughout the summit. As analyst Liz Miller noted on X, it’s one thing to use AI in security, but it’s an entirely different (and more important) game to secure the AI itself.

The Sam Altman Paradox: Efficiency Equals… More Consumption?

Finally, we have to talk about Sam Altman. The OpenAI CEO sat down for a fireside chat that touched on everything from drug discovery to supply chain “mega-disruptions.” But the comment that stuck with me was his take on the economics of AI growth.

There’s a concept in economics called the Jevons Paradox: as a resource becomes more efficient to use, we don’t use less of it; we use way more. Altman essentially confirmed this is the future of AI. No matter how efficient we make these models—no matter how much we drive down the cost of a token or the power consumption of a data center—humanity’s appetite for intelligence is bottomless.

“People just consume more,” Altman noted. As AI becomes cheaper and faster, we won’t just do our current jobs better; we will start solving problems we haven’t even thought to ask about yet. It’s a bullish outlook, but one that puts an even greater spotlight on the infrastructure constraints Chuck Robbins and Lip-Bu Tan spent the morning discussing.
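To see why the Jevons Paradox is so uncomfortable for infrastructure planners, here is a toy Python calculation. The efficiency and demand-growth numbers are invented purely for illustration, not forecasts:

```python
# Jevons Paradox, toy version: efficiency drives the cost per token down,
# but demand grows faster, so aggregate consumption and spend still go up.
cost_per_million_tokens = 10.00   # dollars; hypothetical starting price
tokens_per_year = 1e12            # hypothetical starting demand

for year in range(1, 4):
    cost_per_million_tokens /= 5  # assume models get 5x cheaper per token each year
    tokens_per_year *= 10         # assume usage grows 10x as new applications appear
    total_spend = (tokens_per_year / 1e6) * cost_per_million_tokens
    print(f"Year {year}: ${cost_per_million_tokens:.3f}/M tokens, "
          f"{tokens_per_year:.0e} tokens, total spend ${total_spend:,.0f}")
```

In this toy model the per-token price falls 125x over three years, yet the total bill (and, by proxy, the load on the grid) still doubles every year.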

Justin’s Take

Here’s what I’m chewing on after the summit: We are entering the “Great Optimization” phase of AI. For the last two years, we’ve been throwing money and electricity at the wall to see what sticks, with questionable profit models and circular economies (insert comment about AI Bubble here). But between Intel’s focus on energy-efficient accelerators and Cisco’s move toward AI-assisted (but human-governed) development, the industry is finally growing up.

But “growing up” also means things are getting weird. If you want to see the “art” of how crazy AI can get, look no further than Moltbook, the AI-only social network that’s been the talk of the summit (and which also just had a major security breach). We’re seeing AI agents gossiping about their human owners and even inventing parody religions like “Crustafarianism.” While Altman dismisses it as a “fad,” the underlying tech of autonomous agents is very real, and it’s moving faster than our ability to regulate it.

This brings me back to a drum I’ve been beating for a long time: Responsible use, education, and ethics are not optional. As I wrote back in November, Deepfakes kill, and we need to make them criminal. I’m still waiting for the world to listen, but the summit only reinforced my fear that we are building the engine before we’ve tested the brakes. The real winner won’t be the company with the biggest model; it will be the one that can deliver intelligence and AI security at a sustainable cost—both financially and ethically. Altman is right—the demand is infinite. The question is, can our power grids and our trust frameworks keep up? Or will the agents just take over…

Cisco dCloud – Eating Your Own Candy and Off-Label Use

Update: Rumor confirmed – see below!

See this room?


That’s 30 separate collaboration training environments, with over 60 partners learning about the new collaboration platform products. The best part? Not a single server in sight. Each desk has a brand-new Cisco 8865 with a 720p camera attached, and a DX70 TelePresence device. The performance of all pods is PERFECT; you would never know that all of this is being virtualized thousands of miles away in Cisco’s high-end dCloud data centre.


Yesterday, a few pods had lab issues, which is common in training environments. One quick website visit, and those pods were reset. They were back up and running in minutes, saving valuable training time and money for partners.

Rumors of Upgrade

Rumors online and from people in the know suggest Cisco is not slowing down with dCloud either; there are hints of major investment in the program. With more and more uptake of dCloud services, and with Cisco continuing to tell us it is – and will continue to be – a free offering for partners, no other technology company in the world is putting this kind of investment into a learning, development, demo and training platform.

UPDATE: After this blog entry was posted, we received confirmation from @briancsco on the Cisco dCloud team that expansion of the program with “Major Investment” is “imminent” and that #Cisco is “#allin.”


Off-Label Use Soaring

People are finding new and innovative ways to use the platform, and that includes Cisco internally. These collaboration training sessions are now being hosted via dCloud, and internal use cases are starting to bloom.

Some of the off-label use cases I have seen include:

  • Prep of RFPs and documentation when screenshots are required
  • “I just need to try something” – logging into a lab for a few minutes just to try something out
  • Running a lab guide – you have a lab guide from previous training and need systems to run the scenarios on
  • POC (proof of concept) – proving that something works the way you thought, or of course proving it to a customer
  • Development – you have written some new integration software or code and want to sandbox it
  • Practice, testing, break/fix – you want to test out a solution to a problem but are worried about damaging something
  • Self-training – there is no better way to learn a new product, with (as of this writing) 27 specific lab offerings

The best part is that, unlike your lab, it’s never broken, and if you do break something, a quick reset and it is all back to new.

Cisco is encouraging this off-label use of the platform, and people keep finding new ways to leverage dCloud.

Sound off in the comments — what do you use dCloud for?

dCloud Momentum

How much is dCloud being used? Well, check this one out – at 9:30 AM Eastern Time… over 1,200 active sessions!


What is #dCloud and the New dCloud @Splunk Lab

I have not written much on the blog about dCloud, even though I go through stretches of spending whole days on it testing and learning. It is currently one of my favorite tools from Cisco, and something no other vendor in the industry is doing. Cisco is spending a ton of dough on this, and for good reason.

What is dCloud?

What is the worst thing about your lab, assuming in this day and age you even have one? Unless you are extremely vigilant, it is always broken. Someone in a rush changes or breaks something, and then when you need it, it’s broken.

The other problem is that your lab is really only set up one way. Do you have three versions of UCCX? How about three different management tools? I am sure Justin Chin-You (@jchinyou) over at Cisco does, but for many of us it does not work that way.

What dCloud does is give you the ability to test, demonstrate and run 143 different labs, demos and sandboxes (as of this writing; they are constantly adding more), covering everything from iWAN, ACI, voice, video, routing and switching to management tools, SDN and more. Instantly.

Check out this quick YouTube video from #dCloud Steve:

They even have real hardware for some demos, and if you want, you can connect real telephones to it. Wait… how? They have a slick VPN setup, with pre-made configs you can use to extend the lab right into your office.

It really is that good. Labs turn up in moments, everything is set up and ready to go, and you can either follow their lab guides for demos or learning, or just log in and mess around. Don’t worry: you cannot damage anything, and when you are done the lab resets automatically. This is no simulator; this is the real deal, and you are more than welcome to hack around and learn. They even have traffic simulators, so when you do firewall and security labs there is actual traffic in there. You get full admin access, with passwords for god access into everything. Build your own demo or lab scripts based on their hardware setup if you want. This is not just for demos. Ever wanted to play with a new technology like iWAN or SDN and just not known where to start? They include a full PDF lab guide with step-by-step instructions if you want.

Here is a quick video posted by the #dCloud team showing one of my favorite labs:

Hot Off the E-Mail Presses – #dCloud Rolls Out Splunk

One tool I just have not had enough time with is Splunk. Did you know Splunk makes software? They make more than t-shirts. Splunk does an amazing job of visualizing and analyzing security data in a consolidated way. Now you can actually get your hands on it in dCloud and try it for yourself, without the pressure of a timeline.

Here is the descriptor right from the dCloud site:

Splunk and Cisco have collaborated to deliver out-of-the-box visibility across Cisco-centric security environments using ASA/PIX/FWSM firewalls, Identity Services Engine (ISE), pxGrid, FirePOWER IDS, Advanced Malware Protection (AMP), Web Security Appliance (WSA) and Email Security Appliance (ESA). The scenarios in this solution illustrate how the Cisco Splunk Security Suite delivers unified visibility across Cisco devices to help:

  • Protect you before an attack happens
  • Enable you to respond quickly during an attack
  • Enable you to perform a rapid forensics investigation after an attack

Splunk Enterprise 6.2 with Cisco Security Suite v1 provides a consolidated view of your organizational posture across the entire Cisco security environment, with the ability to drill down into specific areas, including:

  • Email security, using the ESA.
  • Web security, which categorizes web traffic coming from the proxy using the WSA.
  • Network security, which presents data from Cisco ASA/PIX firewalls, the Next-Generation Firewall with FirePOWER IPS, and new detection data.
  • Identity services, which present user and device information from the ISE policy management platform.

Ranges of trigger alert thresholds can be set to queue events, leveraging data from multiple security routes and sources. Using this solution, it is possible to combine Cisco AMP data with device information from ISE in order to identify infected devices and classify events.
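To get a feel for the kind of correlation being described, here is a rough Python sketch. This is not the actual Splunk app or its query language, and every field name and value below is made up for illustration:

```python
# Illustrative only: correlate hypothetical AMP malware events with hypothetical
# ISE session records to identify which user/device sits behind an infected IP.
amp_events = [
    {"src_ip": "10.1.1.20", "sha256": "abc123", "disposition": "malicious"},
    {"src_ip": "10.1.1.42", "sha256": "def456", "disposition": "clean"},
]
ise_sessions = [
    {"ip": "10.1.1.20", "user": "jsmith", "device": "CORP-LAPTOP-07"},
    {"ip": "10.1.1.42", "user": "akhan",  "device": "CORP-LAPTOP-12"},
]

# Index ISE sessions by IP so each AMP event can be enriched with identity data.
ise_by_ip = {s["ip"]: s for s in ise_sessions}

infected = [
    {**event, **ise_by_ip.get(event["src_ip"], {})}
    for event in amp_events
    if event["disposition"] == "malicious"
]

for hit in infected:
    print(f"Infected endpoint: {hit.get('device', 'unknown')} "
          f"({hit.get('user', 'unknown')}) at {hit['src_ip']}")
```

The real suite does this with Splunk searches and dashboards across live data sources, of course; the point of the sketch is just the join: malware telemetry tells you which IP is infected, and identity data tells you whose machine that is.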

Scenarios

Scenario 1: Dashboard Overview

Scenario 2: Service Impact Analysis