Meet 4 developers leading the way with AI agents

Employees trying to answer questions as part of their jobs, researchers wading through medical records, developers analyzing client needs – everybody wants information at their fingertips. AI has made that possible in many ways.

Now, AI agents are helping people take the next step in delivering business value. Agents use AI to automate and execute business processes, working alongside or on behalf of a person, team or organization. Companies are asking developers to create entire teams of AI agents specialized in certain tasks, and developers are using agents themselves to work faster.

According to Microsoft’s 2025 Work Trend Index, 46 percent of leaders say their companies are using agents to automate workflows or processes. Some 43 percent of global leaders already use multi-agent systems that collaborate to achieve a goal or execute a complex workflow, and 82 percent of leaders expect their organization to adopt an agentic workforce with agents as digital team members over the next 12 to 18 months.

At Microsoft Build today, Microsoft unveiled new agents, tools and features to help developers work more efficiently and build capable and secure AI agents.

Here’s how four developers at the forefront of this transformation are using agents to code faster as well as using Microsoft’s agent-building and orchestration tools to solve business problems.

Timothy Keyes: Using agents to put cancer patient info at clinicians’ fingertips

A man with an asymmetrical haircut, wearing a black T-shirt that says Stanford Medicine, stands with arms crossed in a glass office space
From left, Anurang Revri, vice president and chief enterprise architect, Stanford Health Care, Timothy Keyes, a data scientist at Stanford Health Care and a combined MD and PhD candidate in cancer biology and biomedical informatics at Stanford University School of Medicine, and Nerissa Ambers, director, Health Information Transformation, Stanford Health Care. Photo by John Brecher for Microsoft.

Cancer care has made huge progress in recent years. But some cases don’t fit the tried-and-true treatments. These patients’ cases are sent to a “tumor board,” a team of specialists – a radiologist, a pathologist, an oncologist, a surgeon and others – who bring their expertise together to provide the best care.

Tumor boards are “high-stakes, high-cost meetings for high-risk patients,” says Timothy Keyes, a data scientist at Stanford Health Care and a combined MD and PhD candidate in cancer biology and biomedical informatics at Stanford University School of Medicine.

As a medical student, Keyes helped attending oncologists prepare cases to be presented to the tumor board. “They take a ton of time to prep,” he says. Information must be culled from many different sources, from electronic health records to imaging scans to medical literature, and it might not be easy to find. It all has to be summarized for presentation to the tumor board.

Now, a new tool developed by Microsoft is enabling Stanford data scientists and developers to build and test AI agents to help alleviate this administrative burden and speed up the workflow for tumor board preparation. Microsoft’s new healthcare agent orchestrator is now available to others in the Agent Catalog in Azure AI Foundry.

The healthcare agent orchestrator has helped the Stanford team build and test autonomous AI agents that consult disparate data sources and collaborate on tasks that might otherwise take hours – building a chronological patient timeline, synthesizing current literature, referencing treatment guidelines, sourcing clinical trials and generating reports – using clinically grounded knowledge to deliver accurate and reliable results. Stanford Health Care is still testing its application of the healthcare agent orchestrator in a research setting and has not yet put it to real-time clinical use.

All the agents work with Microsoft 365 Copilot so that busy clinicians don’t have to spend precious time onboarding to use the agents – they can simply type what they want in natural language in apps like Teams or Word without having to add another application to their workflow. Stanford Health Care is just one institution putting Microsoft’s healthcare agent orchestrator in Foundry through its paces.

Agents can get past the fragmentation of data that comes from clinician notes, notes from insurance staff, nurses’ notes, images such as CT scans that are very different from pathology slides, and more, Keyes says.

“It’s really hard to get a chat model to do this,” he says. But agents can each focus on a specialized task, with the healthcare agent orchestrator directing requests to the appropriate agent. Getting started is easy: Stanford Health Care set up the initial agents from the Azure AI Foundry Agent Catalog and deployed them into Microsoft Teams for testing in about 10 minutes, Keyes says.
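Conceptually, the orchestrator’s job is routing: each request goes to the specialist agent best suited to handle it. Here is a minimal sketch of that pattern in Python. The agent names and keyword-based routing are illustrative only – the actual healthcare agent orchestrator classifies requests with AI models, not a lookup table:

```python
# Minimal sketch of an orchestrator routing requests to specialist agents.
# Agent names and routing rules are illustrative, not the real product API.

def radiology_agent(request: str) -> str:
    return f"[radiology] findings for: {request}"

def pathology_agent(request: str) -> str:
    return f"[pathology] slide analysis for: {request}"

def trials_agent(request: str) -> str:
    return f"[trials] matching studies for: {request}"

# Keyword routing table; a real orchestrator would use a model to classify intent.
ROUTES = {
    "scan": radiology_agent,
    "ct": radiology_agent,
    "slide": pathology_agent,
    "biopsy": pathology_agent,
    "trial": trials_agent,
}

def orchestrate(request: str) -> str:
    text = request.lower()
    for keyword, agent in ROUTES.items():
        if keyword in text:
            return agent(request)
    return f"[general] no specialist matched: {request}"

print(orchestrate("Summarize the latest CT scan"))
# → [radiology] findings for: Summarize the latest CT scan
```

The value of the pattern is that each specialist stays small and testable, while the orchestrator is the only component that needs to understand the full range of requests.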

The data organizer agent brings in clinical notes, labs, medications and genomic data, all of which arrive in different formats, and structures the information into a succinct abstract, with citations so the clinician can quickly verify it or read the relevant section in depth.

Keyes recalls being with other medical trainees and his attending physician asking for a radiology report in the electronic health record. “And it’s like, click, click, click, click, click, click – 100 different clicks versus ‘oh, it’s right here in front of me.’” When he checked the agent’s citations against the actual notes, they were correct.

The radiology agent reads radiology images using the leading specialized AI models on Azure AI Foundry, and the pathology agent analyzes the whole-slide images and provides relevant pathology findings. Another agent identifies which clinical trials the patient is eligible for.

The medical research agent uses reasoning models to search over scientific papers on cancer, again giving links for quick retrieval of the full documents.

At the end of the process, a report creation agent summarizes the key components of the patient’s case to be discussed at the tumor board, turning it into a Word document or PowerPoint.

Preparing a single patient’s case for a tumor board could take Keyes several hours; in testing, AI agents might make the work 10 times faster, he says. Stanford Health Care has more than a dozen tumor boards serving about 4,000 patients, so the time savings would multiply quickly.

“The agents will enable the work to be done easier, faster and more efficiently, which really matters when you’re talking about meetings with 10 clinicians in them, where time is really precious,” Keyes says. Time is precious, too, for the patients.

“I think in a lot of industries when they think agentic, they get very excited about, ‘it’s going to work very autonomously. It’s going to be making decisions, and I can just look at what it’s doing every once in a while.’ That is not really what we’re envisioning. We do want the clinicians in charge of a patient’s care. We always want them to be able to check.”

“I would be excited at the idea of AI helping my doctors to be the best version of themselves and to liberate them from some of the time-consuming components of documentation so they can spend more time with me, the patient,” he says.

Xavier Portilla Edo: Cutting the time to proof of concept

A man with dark, curly hair and a short beard, wearing a khaki shirt open over a white shirt, works on a laptop.
Xavier Portilla Edo, head of cloud infrastructure at Voiceflow. Photo by Borja Merino for Microsoft.

“Everything is possible, but with AI agents it’s just faster and simpler,” says Xavier Portilla Edo, head of cloud infrastructure at Voiceflow, a platform that allows customers to create AI agents and conversational experiences without coding.

Voiceflow’s customers range from big international brands to agencies that develop bespoke AI agents for small companies or local businesses, such as restaurants. These AI agents automate business tasks for enterprises via spoken or written conversations.

Building a no-code platform requires a lot of coding. That’s why Portilla has been taking advantage of agent mode with GitHub Copilot – which frees up developers to focus on other tasks while an agent works in the background – to speed his development work.

Agent mode in Visual Studio Code provides developers with an editing experience where GitHub Copilot gathers context across multiple files, external systems, and data sources to apply code changes, suggest commands, and iterate to resolve issues. And Copilot Edits makes inline changes in a developer’s workspace, across multiple files, using natural language. With both agent mode and Copilot Edits, the developer remains in control – reviewing changes, accepting ones that work, and iterating with follow-up asks – while staying in the flow.  

The new agentic features within GitHub Copilot have helped developers working on Voiceflow’s platform create and iterate on proofs of concept much faster, Portilla says. Several times, Voiceflow developers have wanted to validate a proof of concept, and rather than building that validation process from scratch themselves, they have had agent mode take a first crack at it. “And it went really well,” Portilla says.

Another benefit of using AI agents, he says, is that they allow developers to work outside their field of knowledge. “Let’s say that you are a back-end engineer and you have a solution that you have built in the back end and you don’t have a user interface to test it,” Portilla says. “We usually have the GitHub Copilot agent build that UI. And the other way around – let’s say that you are a front-end developer and you need a back end to test your new UI features. We have used an agent to help create those simple back ends.”

As with any new process, it takes time to understand AI agents so that requests and questions are presented in a way that will generate the desired results, he says. Like colleagues, every agent tool is different. “If you are using a specific agent, then you know how to interact with that agent, but probably when you use another agent, you will need to learn how to interact with that agent,” he says.

However, overall, the learning curve for the GitHub agentic tools was smooth. Proofs of concept that used to take a full morning or even an entire day now can be done in a couple of hours, he says.

Amit Sethi: Agents drive efficiency for JM Family’s business and quality analysts

A man in a black shirt types on a keyboard at a desk crowded with four computer screens.
Amit Sethi, principal, AI and ML research scientist at JM Family. Photo by Nathan Lindstrom.

Like many large companies, JM Family Enterprises’ subsidiaries engage in a wide range of business activities. The privately held, diversified company – home to the world’s largest independent distributor of Toyotas – also operates in vehicle processing and parts distribution, financial services, retail automotive sales, home services and more.

Developers working for JM Family could be asked to develop solutions or modernize processes across any part of the business. When tackling a new project, the company’s business analysts gather information from those teams and create a concise “user story” that explains what they want to achieve and why. This informs more detailed requirements describing the technical functionality that the software needs to have, which developers use to write code. Quality assurance experts then generate tests to verify the results.

JM Family introduced AI agents that can work collectively with users to help standardize and speed this software development lifecycle, from writing stories to designing test plans and documentation, says Amit Sethi, principal, AI and ML research scientist at JM Family.

JM Family has developed a multi-agent solution that has shrunk the process of writing requirements from many days to a few, Sethi says. The company found a 40 percent time savings for business analysts and a 60 percent time savings in designing test cases for quality assurance. Even if they have to refine the work manually, business analysts prefer starting with AI-generated cases that can pull together all the datapoints that might be necessary to, say, forecast delivery of cars, rather than starting from scratch, Sethi says.

Another benefit from the multi-agent solution they have named the BAQA Genie (for business analyst/quality assurance) is standardization, “because everyone has their own way of doing things. When you have a large project, this becomes an issue,” Sethi says.

The “crystal-clear” requirements resulting from the AI agents make the entire development cycle much faster. “It’s an overall process efficiency gain for us,” he says.

JM Family began its AI agent journey in February 2024 when it demonstrated Microsoft’s open source AutoGen tool to senior management. “It was an ‘aha’ moment, that agents can communicate with each other and then take action on your behalf,” Sethi says.

At first, management of the AI agents was complicated, he acknowledges. However, Azure AI Foundry Agent Service and its multi-agent workflows now “takes care of all these issues. And since we are backed on the Microsoft platform, it integrates very well with all the touchpoints.”

JM Family has tapped a suite of Microsoft tools in Azure AI Foundry to build its agents with different specialties: a requirement agent, a story writer agent, a coding agent, a documenter agent and others. An orchestrator agent helps them all work together.

While the current agents handle different processes and then hand off to a person to do other steps manually, JM Family wants to evolve toward agents that will do more of the work and rely on people to solidify or verify requirements. “As we are fully committed to responsible AI, one of the principles we always want to have is a human in the loop,” Sethi says.

JM Family has had such success with its multi-agent solution so far that it is planning to commercialize its BAQA Genie. “We have seen the value directly unlocked by this capability,” Sethi says. “Since this is a challenge in any enterprise technology effort, we wanted to offer it to other clients to share in that value.”

Rob Bos: Life in agent mode is a feast of new possibilities

A man with his shirt sleeves rolled up works on a laptop.
Rob Bos, a DevOps consultant and GitHub trainer at Xebia. Photo courtesy of Microsoft.

“Basically, I live in agent mode,” says Rob Bos, a DevOps consultant and GitHub trainer at Xebia, a global provider of software engineering, IT consulting, training and managed services to corporate clients. “These days, it’s not that common that I even turn it off.”

Bos, a Microsoft MVP and GitHub Star who helps clients stay on top of the latest development trends, has had a front-row seat in witnessing the recent evolution of AI. “We started off with regular chats – that is interacting and copying and pasting. Then we got to apply the current chat to your current code. That already helped,” he says. “Then we had edit mode where it goes faster and immediately starts to change the things and files that you’re trying to add functionality to.  And for agent mode, it’s just to focus on speed.”

After a developer asks agent mode to do certain things, it makes those changes, just as in edit mode. “The best thing is that it can start validating if the changes actually make sense. It can actually execute the script you are working on and validate the output. If you have something like a unit test or regression test, it can execute those tests and then learn from the results and continue until the task is completed,” he says.
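The iterate-until-tests-pass loop Bos describes can be sketched as a simple control flow. The functions below are hypothetical stand-ins – in agent mode, a model proposes each fix and a real test runner reports failures – but the loop structure is the point:

```python
# Sketch of an agent loop: apply a change, run the tests, and keep fixing
# until the tests pass or the attempt budget runs out.
# run_tests and propose_fix are illustrative stand-ins for a test runner
# and a model call, respectively.

def run_tests(code: str) -> list[str]:
    """Pretend test runner: reports a failure while a bug marker remains."""
    return ["test_output_format failed"] if "BUG" in code else []

def propose_fix(code: str, failures: list[str]) -> str:
    """Pretend model call: repairs one issue based on the failure messages."""
    return code.replace("BUG", "FIXED", 1)

def agent_loop(code: str, max_attempts: int = 5) -> tuple[str, bool]:
    for _ in range(max_attempts):
        failures = run_tests(code)
        if not failures:
            return code, True               # tests pass; task complete
        code = propose_fix(code, failures)  # learn from results, try again
    return code, False                      # budget exhausted; hand back to the human

final_code, passed = agent_loop("print('BUG')")
```

The attempt budget matters: without it, an agent that keeps proposing bad fixes would loop forever instead of handing the problem back to the developer.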

With the advent of Model Context Protocol (MCP) servers, actions are possible across software boundaries, he adds.

In one of the pipelines he was building, Bos gathered data into GitHub repositories, then asked agent mode to take over and create a report based on that data, or told the agent to create a new script based on what was already done, knowing it would infer the next steps from past patterns in his repository. “In my opinion, it works best if you give it a lot of context of existing code in your repository, or similar setups that you used elsewhere.”

For example, he can tell an agent to look at all the failing availability tests that are running on his Azure Web Application and to gather as much information as possible about bugs from the logs and then translate it into a good set of issues.

AI agents are empowering everyone who works with software engineers to start contributing as well. “If you’re, for example, in operations and you’re looking at these errors, now you can bring in a lot of extra information and bring that work back to the start of your software development lifecycle, enabling product owners to really write user stories of ‘this is what I want to achieve and these are the things that you need to think of in the context of this application,’” Bos says. “That’s a revolution in enabling way more people to contribute to the software development process and changing the way that we as engineers operate on a daily basis as well.”

Bos tells his trainees to start small when using AI agents because if the original instructions are too broad, the agent might make assumptions that go in a different direction than desired.

Instead, he advises starting a conversation with the agent. “You get a response back. You see if that actually works and with that you start building up a whole story line until it makes sense and then you want to act on certain things there,” he says. “This allows you to course correct along the way.”  

With so many agent features coming out, Bos encourages his clients to stay curious and not just stick to what’s familiar. “There is just a constant feast of extra things that are possible right now.”

Top image: Timothy Keyes, a data scientist at Stanford Health Care and a combined MD and PhD candidate in cancer biology and biomedical informatics at Stanford University School of Medicine. Photo by John Brecher for Microsoft.

Source: https://news.microsoft.com/