LAS VEGAS, United States, Jan. 7 (Xinhua) — Technology chief executives and experts gathered here for the 2026 Consumer Electronics Show (CES) focused their discussions on the reliability and safety of artificial intelligence (AI) during a panel session on Monday.
While AI is rapidly spreading into daily work and operations, its wider adoption will depend on whether AI systems can be trusted to behave reliably, protect sensitive data and operate within clear guardrails in high-stakes environments, speakers said at the panel session on AI's real-world challenges. The session was held on the sidelines of CES, the world's largest and most influential technology trade show, which runs from Tuesday to Friday.
The panel brought together industry leaders, including Ola Kallenius, CEO of the Mercedes-Benz Group, Harjot Gill, CEO of CodeRabbit, Deepak Pathak of Skild AI, Sridhar Ramaswamy of Snowflake, and Shiv Rao of Abridge.
Panelists agreed that AI is no longer confined to chat-based applications but is increasingly used in areas that directly affect business decisions, medical documentation, software development and physical systems in motion.
Speakers described “guardrails” as practical controls that define what AI systems can do, how they access data and when human oversight is required. In the discussion, trust was closely linked to whether AI systems can be measured, monitored and constrained in real-world settings.
In healthcare, Rao, founder and CEO of Abridge, described trust as a foundational requirement in a sector governed by high risk and strict regulation. Abridge, which provides ambient listening technology for clinical documentation to more than 150 health systems, has seen its valuation reach 5.3 billion U.S. dollars.
He said the broader adoption of AI-powered clinical transcription software has raised concerns about patient privacy.
The challenge also extends beyond privacy and security compliance to factors such as latency and the medical artifacts generated during clinical encounters, he said.
In enterprise data systems, Ramaswamy, CEO of Snowflake, described trust in more institutional terms, focusing on data ownership and where information is processed, as questions of data sovereignty grow more prominent.
He said customers want to know what happens to their data and who controls it. Ramaswamy said broader geographic deployment of computing infrastructure could help address sovereignty expectations, especially when customers prefer data to remain within specific regions.
Turning to physical systems, Kallenius said safety requirements for vehicles are significantly higher than those for consumer AI tools. Achieving a "99 percent demonstrator" is relatively straightforward, he said, compared to managing the long tail of rare and dangerous scenarios, which makes safety a central concern.
Kallenius also discussed safety and reliability in industrial settings where AI is deployed before vehicles reach the road. He said manufacturers can build factories in virtual environments, simulate production digitally and use AI to debug processes before physical construction begins.
In robotics, Pathak, co-founder and CEO of Skild AI, described data challenges with direct safety implications. He said there is no “magic bullet” for robotics data, outlining a training approach that starts with human videos, moves to simulation where large-scale failure is possible, and gradually incorporates real robot data.
The panel discussion took place against a backdrop of strong AI investment and ongoing debate about whether the market is experiencing a “bubble.”
The moderator cited a Bloomberg report that 12,000 articles published in November used the word "bubble," but argued that the current infrastructure cycle differs from past ones, pointing to seamless adoption, heavy utilization of computing resources and funding backed by companies with strong free cash flow.■
Source: https://english.news.cn/20260107/87884fd7271448aaa0f660abf601bcbe/c.html
