MY TAKE: Dispatch from OktoberTech 2025 -- AI adoption is racing ahead, far outpacing control

By Jack Poller

MOUNTAIN VIEW, Calif. -- The crowd stirred when Ameca blinked.

We were gathered inside the Computer History Museum for Infineon Technologies' OktoberTech 2025 conference. Onstage, marketing chief Andreas Urschitz carried on a light chat with a humanoid robot whose gray face tilted and scrunched in near-human expression.

The effect was uncanny. Ameca nodded, reacted, and responded. The face moved subtly. Eyes shifted. The mouth tracked speech. There was even something like micro-expression happening -- that ephemeral moment between thought and response expressed via physical hardware.

Granted, Ameca's body movements are limited and battery-hungry. And if you listen closely, the actual content of her replies -- while grammatically flawless -- isn't all that different from what ChatGPT and Gemini routinely spit out. In other words: the physical hardware is slick, although still rudimentary. And the AI brain behind it all delivers the illusion of confidence -- fluent, but often ungrounded.

The robot drew the clicks. But what came next is what stuck with me: two speakers, from opposite ends of the AI ecosystem, laying out -- with impressive clarity -- just how unfinished the underlying systems remain.

We've grown used to AI putting on a show. But at OktoberTech, the deeper story was unfolding in the candid remarks of two insiders, each describing, in different ways, a field sprinting ahead of its own guardrails.

Beneath the shiny surface

The first was Deepu Talla, Vice President of Robotics and Edge AI at NVIDIA, who spoke with the clear authority of someone deep in the trenches.

"There are two technologies that have reached a tipping point in the last 12 months," he said. "One is, of course, the whole large language model... and the second is simulation."

Talla pointed to simulation as the next big step forward -- the missing piece that could eventually make robots like Ameca more capable. For years, training robots meant working slowly with real hardware. Now, engineers can do much of that training in digital form -- letting machines "practice" in virtual environments before they ever touch the real world.

Add large language models to the mix, he said, and something new becomes possible: instead of hardwiring robots for one task at a time, we can begin to give them a flexible, conversational interface -- one that can draw on a library of learned behaviors and adapt on the fly. That's exactly what Ameca attempted to do on stage -- not flawlessly, but impressively.

Talla wasn't just talking about humanoids. He noted how tools like ChatGPT are already embedded in everyday workflows -- routinely helping with tasks like summarizing or drafting email.

But then came the grounding pivot.

"Much of the AI that's been deployed today," Talla said, "is extremely brittle and very specific."

Talla was flagging a structural reality: today's AI tools, for all their promise, tend to function only under narrowly defined conditions.

To be sure, there are a lot of narrowly defined conditions in the modern enterprise. And that's the hidden story here -- the one that isn't getting full credit. Large language models are already serving as quiet accelerators across dozens of everyday workstreams.

Talla's comment about email landed because I'd seen the same pattern elsewhere -- and lived it myself. I use GenAI daily to accelerate research, distill complex material, and tighten early drafts -- all under my close direction. It's a practical tool, not a replacement for judgment, voice, or final edits.

I saw the same arc at Black Hat USA 2025, in a conversation with software architect Brandon Hamric, CTO at 360 Privacy. His workplace setup was telling: multiple live terminals open, GenAI prompting in near-constant use. "Typing is the bottleneck now," he told me. Hamric made this point matter-of-factly -- and he's not alone. This way of working has quietly become standard practice across the software field.

As powerful as it is, GenAI does not yet handle nuance very well. And when deployed in physical environments -- robots, cars, warehouses -- the stakes are far higher than overstepping while autocorrecting a clumsy email.

Even the low-risk use cases, Talla noted, come with caveats.

"We all use ChatGPT or something equivalent for summarizing emails or writing an email," Talla observed. "Even if it is 90 to 95% accurate, that saves us time. And as a human, we finish, close the loop, and make it perfect."

That's fine for inbox triage. But not so much for surgery, or self-driving forklifts.

Talla's message was clear: we are moving fast. And we are nowhere near done solving the core engineering problems. The "general purpose brain" remains a frontier. The data needed to train it doesn't yet exist. The performance requirements -- especially for embodied intelligence -- remain orders of magnitude above what today's AI can reliably achieve.

The second speaker, Raj Hazra, brought a different vantage point. As CEO of Quantinuum, he has a front-row seat to what's happening not just in applied AI, but in foundational compute architecture -- namely, the fusion of quantum computing and artificial intelligence.

Hazra opened with a telling metaphor. "Objects in mirror are closer than they appear," he said, referencing a slide used often in his briefings. "That's really the state with quantum today... it's not a stay-tuned, but a hurry-up-and-listen."

In Hazra's telling, quantum computing is no longer theoretical. It is, in fact, being quietly folded into the long-range planning of global companies across sectors -- finance, pharma, materials science, energy, cybersecurity. These firms aren't just watching; they're investing.

"Quantum... is transformational to businesses," Hazra said. "This is not about the next 10 or 15 percent of efficiency. This is about fundamentally recharging and rejuvenating the process of discovery."

But as with Talla, the deeper truth emerged between the lines.

Hazra described quantum computing as entering the "age of acceleration," with reliable million-qubit systems within reach in the next five years. But he was blunt about the uncertainties.

On any given day, he said, the process can feel "exhilarating," "frustrating," or "depressing." Why? Because this isn't mature tech being refined -- it's nascent infrastructure being discovered in real time, often without clear business models or technical guarantees.

And like Talla, Hazra made the critical distinction between confidence in potential and clarity in reliability.

"In classical computing, the hardware was always reliable," he said. "But in quantum, you can have lots of qubits... and then when the result comes out, you don't know whether it's right or wrong."

He wasn't trying to raise alarms. But for those listening closely, he was acknowledging the same tension Talla flagged from the robotics edge: we are embedding systems we can't fully validate -- and building businesses around their presumed infallibility.

Familiar cycle -- amplified

Watching these two talks, I couldn't help but reflect on the last 25 years of covering infotech -- the dot-com boom, the mobile wave, the rise of cloud, the platform wars, the cybersecurity reckoning.

Each of those waves came with promises of revolution. Each left behind both transformed landscapes and unaddressed consequences.

This one feels different. Not because it's overhyped -- but because the hype is being matched by real adoption, and real dependency, far faster than the systems can mature.

In a sense, what we're seeing now is BYOD in cognitive form. The bring-your-own-device revolution snuck up on enterprises in the early 2010s -- employees bringing smartphones and consumer apps into work before IT departments had policies or protections in place.

Now it's bring-your-own-AI. Knowledge workers using ChatGPT and Gemini to write memos. Software engineers using Claude and Mistral to prototype. Employees feeding confidential data into Microsoft copilots to prep board decks -- often without approvals, or awareness of the risks.

From the consumer and eager employee side, that's already normalized. From the infrastructure side, as Talla and Hazra quietly acknowledged, we are building the airplane while it's in the air.

The missing trust layer

Both speakers also hinted at -- but did not directly dwell on -- the missing layer of trust and governance.

Talla gestured at the complexity of training general-purpose models for physical AI, noting that high-quality training data simply doesn't exist for many of the situations robots will face. He noted that efforts are underway to generate synthetic data and simulate scenarios -- but that the gap remains wide.

Hazra, for his part, took on the post-quantum security challenge, stating plainly that "quantum computing... can be used to decrypt and destroy existing security infrastructure." He underscored the importance of post-quantum cryptography, as well as quantum-hardened key management, but acknowledged that these, too, are in early stages.

In both cases, the message was clear: we are racing toward dependency before durability.

That's not to say the effort is misplaced. Innovation at the edge always outpaces the handbook. But for those of us tracking the long arc of digital transformation, it's worth pausing on the stakes.

Where this leaves us

Ameca's blinking face was charming. It got the clicks and the applause. And rightly so -- the sophistication of physical design, the timing of eye motion, the integration of an LLM to simulate dialogue... these are not small feats.

But the story behind the curtain is more sobering. New architectures need to be fashioned. Infrastructure needs to become stable and predictable. At this moment, public adoption is far outpacing institutional readiness.

Encouragingly, Infineon's own executives spent much of OktoberTech 2025 demonstrating how they're actively enabling this shift -- with semiconductors built for ultra-efficient edge AI, with Ethernet-based backbones for EVs, with power infrastructure tailored for energy-hungry data centers.

The framing throughout the day -- from edge inference to grid readiness -- acknowledged the complexity ahead. There was visible recognition that what's coming is not just a race for bottom-line gains, but a wholesale reconfiguration of how systems work together.

At a time when Google, Amazon, and Microsoft seem content to bolt generative AI onto everything they already control, it was notable to see the semiconductor ecosystem -- the segment that powers the AI stack -- actively thinking in terms of real-world constraints, interdependencies, and systemic design.

Where will things go from here? I'll keep watch and keep reporting.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
