
The last few Prompt Economy Weekly features have focused on trust and technical security for agentic AI. Since security is a prerequisite for consistent adoption, that focus was warranted, and the issue isn't going away. But this week was something of a litmus test for the Prompt Economy.
It marked the beginning of the holiday shopping season, and time will tell whether agentic AI was a factor.
In the meantime, several new use cases came to the forefront over the past week. That's where we will focus as more companies develop the trust and security necessary to fulfill agentic AI's promise. The first comes from Harvard Business Review, which published a report last week spelling out agentic AI's potential as an internal enterprise workhorse.
The report takes the stance that while companies are eager to apply agentic AI to customer-facing operations, those environments are too variable and error-sensitive for current systems. It argues that the real near-term value lies in internal workflows where tasks are structured, repetitive, and supported by humans in the loop. Agentic AI is progressing through a clear maturity curve, from prompting, to retrieval-augmented generation, to multi-agent architectures that divide work into small, supervised steps. These systems can meaningfully raise accuracy and efficiency, but only when deployed in controlled settings with defined inputs and strong guardrails.
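To make that "small, supervised steps" idea concrete, here is a minimal sketch in Python of how such a pipeline might be wired up, assuming a hypothetical field-operations ticket flow; the two-agent split, function names, and approval prompt are illustrative assumptions, not a design the HBR report prescribes.

```python
# Minimal sketch of the "small, supervised steps" pattern: each agent handles
# one narrow sub-task, and a human reviewer gates the result before it moves
# downstream. All names and the ticket scenario are hypothetical.

from dataclasses import dataclass


@dataclass
class StepResult:
    step: str
    output: str
    approved: bool = False


def extract_ticket_fields(ticket_text: str) -> StepResult:
    """Agent 1: pull structured fields out of a free-text field-ops ticket."""
    fields = {"site": "unknown", "issue": ticket_text[:80]}
    return StepResult(step="extract", output=str(fields))


def propose_resolution(extracted: StepResult) -> StepResult:
    """Agent 2: map the structured ticket to a candidate resolution."""
    return StepResult(step="resolve", output=f"Dispatch technician for: {extracted.output}")


def human_review(result: StepResult) -> StepResult:
    """Human in the loop: approve or reject before anything moves downstream."""
    answer = input(f"[{result.step}] {result.output}\nApprove? (y/n): ")
    result.approved = answer.strip().lower() == "y"
    return result


def run_pipeline(ticket_text: str) -> None:
    extracted = human_review(extract_ticket_fields(ticket_text))
    if not extracted.approved:
        print("Stopped at extraction step.")
        return
    resolution = human_review(propose_resolution(extracted))
    print("Executed." if resolution.approved else "Stopped at resolution step.")


if __name__ == "__main__":
    run_pipeline("Router at depot 12 drops connectivity every evening.")
```

The point of the pattern is that each agent's output is narrow enough for a reviewer to check quickly, which lines up with the report's emphasis on controlled settings, defined inputs, and strong guardrails.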
HBR highlights case evidence showing that multi-agent systems can reduce resolution times, improve data quality, and save costs when embedded in back-office processes such as technical field operations. Still, the authors stress that building and scaling these systems requires significant organizational effort: deep process literacy, cross-functional governance, integration with legacy systems, and ongoing experimentation. True autonomy remains distant; in the near term, value comes from augmenting workers rather than replacing them. Companies that develop internal capabilities—data engineers, context designers, and what the authors call “gen AI black belts”—will be best positioned to capture the next decade of AI-driven operational gains.
“Customer-facing contexts are a bad fit for the current capabilities of AI agents,” the article states. “They’re messy and unpredictable… Backend and operational processes are fertile ground because they are structured and repetitive—much better suited for agentic workflow automation.”
Insurance, Agentic Style
But apparently the insurance business didn't get the memo. It is zooming ahead in the agentic revolution, and a major trade publication is both sounding a note of caution about adoption and detailing some use cases. Insurance Business reports that major global insurers are accelerating their shift toward agentic AI, moving from controlled pilots to real operational deployment. While early adopters such as Allianz are beginning with highly specific tasks—like automating food spoilage claims—insurers across the industry are now exploring how autonomous agents can reshape customer interactions, underwriting, and claims workflows. Competitive pressure is rising as insurtechs test AI agents capable of handling live customer conversations, pushing traditional carriers to evaluate where and how agentic systems should fit within their technology stacks. Early gains are compelling: analysis cited in the article shows that insurers deploying agentic AI across dynamic workflows may achieve productivity improvements of 20% to 30%.
The article emphasizes that the long-term transformation will depend as much on people and process as on technology. Zurich’s Tim Kane argues that insurers must rethink distribution models, redesign workflow orchestration, and adopt hybrid architectures that blend customer-facing automation with deeper “core” decisioning systems. But successful rollout demands a workforce trained not only to use agentic AI but also to supervise, refine, and co-manage it. Even after deployment, significant effort goes into continuously training and calibrating agents, ensuring compliance, and preserving human judgment where empathy or nuance is required. The insurers that adapt fastest—both technologically and organizationally—are poised to lead as agentic AI becomes embedded in the industry’s operational core.
Financial Services
Insurance also figured heavily in Capgemini's prospective use cases for agentic AI in financial services. The firm argues that agentic AI represents a major shift for financial services, enabling systems that can plan, act, and adapt across complex workflows in banking and insurance. Unlike generative AI, which assists with narrow tasks, agentic AI is designed to make autonomous decisions and manage end-to-end processes such as claims triage, fraud checks, loan onboarding, underwriting, and personalized customer engagement.
Yet most financial institutions struggle to move beyond pilots. Only 26% have the capabilities to scale AI effectively, with many stalling due to project complexity, regulatory demands, and the challenge of integrating governance, data, and model controls from day one. Capgemini stresses that the opportunity is meaningful—cycle-time reductions, higher straight-through processing, and consistent decisioning—but firms need structured methods and experienced partners to avoid stalled programs and unrealized ROI.
The article highlights that agentic AI is already improving performance across the financial services value chain. Insurers are using agents to accelerate claims, enhance underwriting accuracy, personalize distribution, and improve servicing. Banks are deploying agentic systems in retail engagement, wealth management, investment research, cards, and payments, with one Capgemini client reporting a 20–30% increase in developer throughput using agentic workflows. Capgemini also details how agents are reshaping cloud modernization by autonomously assessing legacy systems, assisting production teams, and orchestrating hybrid environments. Strong governance—explainability, auditability, human-in-the-loop design, and model risk controls—is essential as EU and U.S. regulators tighten oversight.
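For a sense of what those governance traits can look like in practice, here is a minimal Python sketch of a decision gate, assuming a hypothetical claims scenario; the risk threshold, field names, and log format are assumptions for illustration, not Capgemini's design.

```python
# Minimal sketch of the governance traits named above: every agent decision
# carries a rationale (explainability), is written to an audit trail
# (auditability), and anything above a risk threshold is held for human
# sign-off (human in the loop). All names and values are hypothetical.

import json
import time
from dataclasses import dataclass, asdict

RISK_THRESHOLD = 0.7  # hypothetical cutoff for autonomous approval


@dataclass
class AgentDecision:
    case_id: str
    action: str
    rationale: str
    risk_score: float
    auto_approved: bool = False


def audit_log(decision: AgentDecision, path: str = "decisions.log") -> None:
    """Append every decision, approved or held, to an audit trail."""
    record = {"ts": time.time(), **asdict(decision)}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")


def route_decision(decision: AgentDecision) -> AgentDecision:
    """Auto-approve low-risk decisions; hold high-risk ones for a human."""
    decision.auto_approved = decision.risk_score < RISK_THRESHOLD
    audit_log(decision)
    if not decision.auto_approved:
        print(f"Case {decision.case_id} held for human review: {decision.rationale}")
    return decision


if __name__ == "__main__":
    route_decision(AgentDecision(
        case_id="CLM-1042",
        action="approve claim payout",
        rationale="Invoice matches policy coverage; no fraud flags.",
        risk_score=0.35,
    ))
    route_decision(AgentDecision(
        case_id="CLM-1043",
        action="approve claim payout",
        rationale="Amount exceeds historical pattern for this policyholder.",
        risk_score=0.85,
    ))
```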
Ultimately, Capgemini concludes that firms win not by flashy demonstrations, but by disciplined engineering, clear guardrails, and measurable outcomes that scale responsibly. “Agentic AI isn’t magic – it’s disciplined engineering and change management,” it states. “The winners… deploy with strong guardrails, prove impact, and scale responsibly.”
Source: https://www.pymnts.com/
