No question: building apps with LLMs in agentic setups is a game-changer, but it can also be a pain in the butt compared to good old deterministic code. Craft a clever agent that summarizes docs or fixes bugs, then bam, the model updates, and suddenly it’s spouting nonsense, ignoring prompts, or tripping over basic words like “yes”. Non-deterministic chaos at its finest.
Deterministic code? It’s the reliable workhorse: feed it input X, get output Y every damn time. Fixed rules, easy debugging, perfect for stuff like financial calcs or automation scripts where surprises mean lawsuits. As Kubiya nails it, “same input, same output”—no drama.
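To make “same input, same output” concrete, here’s a toy Python sketch: pure function, integer math, zero surprises. (The fee policy is invented; the determinism is the point.)

```python
# Deterministic: identical input always produces identical output.
def late_fee(balance_cents: int, days_late: int) -> int:
    """Hypothetical policy: 1.5% fee per full 30 days late, capped at 25%."""
    periods = days_late // 30
    fee = balance_cents * 15 * periods // 1000  # integer math, no float drift
    return min(fee, balance_cents // 4)

# Run it a million times on any machine, any year: the answer never changes.
assert late_fee(10_000, 90) == 450
```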
“A computer will do what you tell it to do, but that may be much different from what you had in mind.” – Joseph Weizenbaum. Not when you’re using a model you probably didn’t build, running on weights that aren’t your own.
Agentic AI with LLMs? That’s the wildcard party crasher. These systems think on their feet: reason, plan, grab tools, adapt to goals like tweaking marketing on the fly or monitoring health data. IBM calls it “agency” for a reason—it’s autonomous, pulling from real-time vibes beyond rigid training. But here’s the kick: it’s probabilistic. Outputs wiggle based on sampling, context, or those sneaky model tweaks from OpenAI or whoever. LinkedIn rants about it: “Same prompt, different outputs.” Your app morphs overnight, and fixing it? Good luck tracing probabilistic ghosts.
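Here’s a toy sketch of where “same prompt, different outputs” comes from. LLM decoding ends in a softmax over logits followed by a sample; this standalone snippet fakes the logits but samples the same way. And note: temperature 0 only pins down this step. A model update shifts the logits underneath you, and you’re back to square one.

```python
import math
import random

# Toy next-token sampler: softmax over logits, then sample.
# Real LLM decoding works the same way at the final step, which is
# exactly where the non-determinism sneaks in.
def sample_token(logits: dict[str, float], temperature: float, rng: random.Random) -> str:
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy decoding: deterministic
    scaled = [l / temperature for l in logits.values()]
    z = sum(math.exp(v) for v in scaled)
    probs = [math.exp(v) / z for v in scaled]
    return rng.choices(list(logits), weights=probs, k=1)[0]

# Made-up logits for the next token after some prompt.
logits = {"yes": 2.1, "Yes": 2.0, "yep": 1.4, "no": 0.3}
rng = random.Random()
print([sample_token(logits, 0.0, rng) for _ in range(5)])  # always 'yes'
print([sample_token(logits, 1.0, rng) for _ in range(5)])  # mixed bag, run to run
```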
This shift sucks for dev life. Traditional code: bug? Trace, patch, done. Agentic? Hallucinations, inconsistencies, testing nightmares. Martin Fowler compares LLMs to flaky juniors who lie about tests passing. It’s a paradigm flip, from control to “let’s see what happens.” Salesforce says pick deterministic for regulated certainty, agentic for creative flex. But non-determinism can mean security holes, data risks, and endless babysitting. It also adds an attack vector that is itself non-deterministic: the model may need access to data to do its job, data I might not want exposed.
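One coping strategy for the testing nightmare: stop asserting exact strings and start asserting invariants. A rough sketch, assuming (hypothetically) your agent is instructed to reply in JSON with a summary and a confidence field:

```python
import json

def check_summary(raw_output: str) -> list[str]:
    """Return a list of invariant violations; empty list means 'plausible'."""
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError:
        return ["output is not valid JSON"]
    problems = []
    summary = data.get("summary")
    if not isinstance(summary, str) or not summary.strip():
        problems.append("missing or empty 'summary'")
    elif len(summary) > 1000:
        problems.append("summary exceeds length budget")
    if data.get("confidence") not in ("high", "medium", "low"):
        problems.append("confidence not one of high/medium/low")
    return problems

# The exact wording varies run to run; these properties shouldn't.
print(check_summary('{"summary": "Doc covers Q3 numbers.", "confidence": "high"}'))
```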
| Aspect | Deterministic Code | Agentic AI with LLMs |
|---|---|---|
| Predictability | Rock-solid: Always consistent | Sketchy: Varies like the weather |
| Adaptability | Stuck to your rules | Boss: Handles dynamic crap |
| Testing/Fixing | Simple: Logic checks and patches | Hell: Variability demands tricks |
| Best For | Precision gigs (finance, compliance) | Goal-chasing (support, optimization) |
| Pain Level | Low: Set it and forget it | High: Constant surprises |
Bottom line: Hybrids are the way—LLMs for the smarts, deterministic for the reins. Deepset pushes that spectrum view: not binary, blend ’em. It sparks innovation, sure, but don’t romanticize—the annoyance is real. Code with eyes open, or get burned. Put humans in the loop to keep things in check.
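What that hybrid looks like in practice: the LLM proposes, deterministic code dispatches through an allowlist, and a human signs off on anything risky. A minimal sketch; every tool name and policy here is hypothetical:

```python
from dataclasses import dataclass, field

RISKY = {"delete_record", "send_payment", "restart_service"}

@dataclass
class Proposal:
    tool: str                 # what the LLM wants to run
    args: dict = field(default_factory=dict)

def human_approves(p: Proposal) -> bool:
    answer = input(f"Agent wants {p.tool}({p.args}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def dispatch(p: Proposal, tools: dict) -> str:
    if p.tool not in tools:                       # allowlist, not denylist
        return f"blocked: unknown tool '{p.tool}'"
    if p.tool in RISKY and not human_approves(p): # human in the loop for the scary stuff
        return "blocked: human said no"
    return tools[p.tool](**p.args)

tools = {"lookup_order": lambda order_id: f"order {order_id}: shipped"}
print(dispatch(Proposal("lookup_order", {"order_id": 42}), tools))  # runs freely
print(dispatch(Proposal("drop_database"), tools))                   # blocked
```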
What about agentic AI ops for network and technology? Didn’t we just say “precision gigs” are better with deterministic code? That won’t stop awesome developers like John Capobianco (https://x.com/John_Capobianco) from pushing those limits, and he has been doing it for years at this point. Handing AI agents the keys to critical stuff like network monitoring, anomaly detection, or auto-fixing outages sounds efficient, right? But it’s a powder keg from a security standpoint. These autonomous bad boys can hallucinate threats, expose data, or open doors for hackers through memory poisoning, tool misuse, or privilege escalation. Cisco nails the danger: “The shift from deterministic code to probabilistic chaos is at the heart of securing AI agents that think for themselves,” highlighting a “lethal trifecta” of data leaks, wild hallucinations, and infrastructure weak spots that could cascade into total meltdowns.
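If you do hand an agent the network keys, least privilege is the cheapest insurance: read-only tools run freely, config changes only inside a change window and only against lab gear. Another hedged sketch; the prefixes, tool names, and window are all invented:

```python
import ipaddress
from datetime import datetime, timezone

LAB_PREFIX = ipaddress.ip_network("10.99.0.0/16")            # invented lab range
READ_ONLY = {"show_interfaces", "ping", "get_bgp_summary"}

def allowed(tool: str, target_ip: str, now: datetime | None = None) -> bool:
    if tool in READ_ONLY:
        return True                                          # read-only: always fine
    now = now or datetime.now(timezone.utc)
    in_window = 2 <= now.hour < 4                            # 02:00-04:00 UTC change window
    in_lab = ipaddress.ip_address(target_ip) in LAB_PREFIX
    return in_window and in_lab                              # writes: window AND lab only

print(allowed("show_interfaces", "192.0.2.10"))  # True: harmless read
print(allowed("apply_config", "192.0.2.10"))     # False: prod box, no exceptions
```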
Tools for AI security are starting to emerge, though, particularly from Cisco and open-source communities, advancing defenses against threats like prompt injections and supply chain attacks, but there is work to be done. Things like Cisco’s open-source Foundation-sec-8B model, a specialized LLM for cybersecurity tasks such as threat intelligence and incident response, will help developers build customizable tools with on-prem deployments to reduce hallucinations and boost SOC efficiency. Their Hugging Face partnership bolsters supply chain security with an upgraded ClamAV scanner that detects malware in AI files like .pt and .pkl. Broader open-source efforts include Beelzebub for malicious agent analysis and Promptfoo for LLM red-teaming, yet hackers with evolving adversarial tactics, using LLMs to attack LLMs, are very much a thing... the system attacking the system that’s protected by the system. Yeah, that.
Cisco-Hugging Face ClamAV Integration: https://blogs.cisco.com/security/ciscos-foundation-ai-advances-ai-supply-chain-security-with-hugging-face
Cisco Foundation-sec-8B: https://blogs.cisco.com/security/foundation-sec-cisco-foundation-ai-first-open-source-security-model
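On the supply chain side, the pattern is simple even if the signatures are hard: scan every model artifact before you load it, and side-eye pickle-based formats. A sketch assuming the stock clamscan CLI is installed (the Cisco/Hugging Face work above is about making those signatures smarter for AI file formats); the paths and policy are illustrative:

```python
import pathlib
import subprocess

PICKLE_SUFFIXES = {".pt", ".pkl", ".bin"}  # formats that can run code on load

def vet_model_file(path: str) -> bool:
    p = pathlib.Path(path)
    if p.suffix in PICKLE_SUFFIXES:
        print(f"warning: {p.name} is pickle-based; prefer .safetensors when you can")
    # clamscan exit codes: 0 = clean, 1 = infected, 2 = error
    result = subprocess.run(["clamscan", "--no-summary", str(p)],
                            capture_output=True, text=True)
    return result.returncode == 0

if vet_model_file("models/example-model.pt"):   # hypothetical path
    print("clean: ok to load")
else:
    print("refusing to load")
```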
So much more to learn, but with all of that said... humans in the loop is going to be a thing for a while, at least until Skynet...