Ahead of CES 2026 next week in Las Vegas, several consumer electronics companies are showing off hardware products that embed AI directly in the device. The products include wearables, mixed-reality systems, home security tools, appliances, and robotics. Each relies on cameras, sensors, processors, and AI models to gather data from the physical world, analyze it locally, and trigger responses.
Wearables and Headsets
Rokid is previewing AI glasses that feature a first-person camera, microphones, a micro-display, and an internal computing unit, all designed to fit like regular glasses. According to the company’s CES preview, these glasses have entered mass production, moving past the stage of developer kits or concept wearables.
The camera continuously captures what the wearer sees. AI models process this input, performing tasks like text recognition, object identification, and scene interpretation. Voice commands picked up by the microphones are managed by speech recognition models, enabling users to interact hands-free. The outputs appear as short visual overlays on the display instead of full-screen interfaces.
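The glasses' flow described above (camera frame in, voice command in, short overlay out) can be sketched in a few lines. This is purely illustrative: the model functions below are hypothetical stubs, not Rokid's actual on-device APIs.

```python
# Hypothetical sketch of an AI-glasses pipeline: a camera frame and a voice
# command feed separate models, and the result surfaces as a short overlay
# string rather than a full-screen interface. All names are stand-ins.

from dataclasses import dataclass


@dataclass
class Frame:
    pixels: bytes  # raw camera frame from the first-person camera


def recognize_text(frame: Frame) -> str:
    # Stand-in for an on-device text-recognition (OCR) model.
    return "EXIT 12"


def transcribe(audio: bytes) -> str:
    # Stand-in for an on-device speech-recognition model.
    return "what does that sign say"


def handle(frame: Frame, audio: bytes) -> str:
    """Route a voice command against the current camera frame and
    return a short overlay string for the micro-display."""
    command = transcribe(audio)
    if "sign" in command or "say" in command:
        return f"Sign: {recognize_text(frame)}"
    return "No action"


overlay = handle(Frame(pixels=b""), audio=b"")
```

The key design point the article describes is that both models run on the internal computing unit, so the loop above never needs a network round-trip.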
Play For Dream is presenting a mixed-reality headset that operates independently on Android, allowing for spatial computing without any additional hardware. It features dual 4K-per-eye micro-OLED displays powered by onboard processors to create digital images directly within the headset. Cameras and motion sensors track head position and orientation, while software calculates how digital objects should be placed in relation to the user’s physical environment. This ensures virtual content stays in place as users move, which is crucial for mixed-reality applications like simulations and interactive settings.
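The anchoring step described above, keeping a virtual object fixed in the room as the wearer's head moves, amounts to re-expressing a world-space position in the headset's local frame every frame. A minimal 2D sketch, with made-up coordinates and a yaw-only head pose:

```python
# Illustrative world anchoring: a virtual object has a fixed world position,
# and each frame it is transformed into head-relative coordinates from the
# tracked head pose. Conventions and numbers here are hypothetical.

import math


def world_to_head(point, head_pos, head_yaw):
    """Transform a world-space (x, y) point into head-relative coordinates
    for a head at head_pos rotated head_yaw radians counterclockwise."""
    dx = point[0] - head_pos[0]
    dy = point[1] - head_pos[1]
    # Apply the inverse rotation: undo the head's yaw to get view space.
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    return (c * dx - s * dy, s * dx + c * dy)


anchor = (2.0, 0.0)  # virtual object fixed 2 m along the world x-axis

# Facing along +x with no yaw, the anchor is straight ahead of the wearer.
ahead = world_to_head(anchor, (0.0, 0.0), 0.0)

# After the wearer turns 90 degrees left, the same anchor appears to the
# wearer's right; its world position never changed.
to_right = world_to_head(anchor, (0.0, 0.0), math.pi / 2)
```

Real headsets do this in 3D with full rotation and translation from camera and motion-sensor tracking, but the principle is the same: the object's world coordinates stay fixed and only the head pose changes.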
AI in Security Cameras
Home and small business security systems are another area where AI is built directly into the hardware. Reolink is previewing security cameras that run AI models on the device rather than depending solely on cloud processing. The cameras use image sensors paired with processors to analyze video streams in real time. Object detection and classification models are trained to tell apart people, vehicles, and animals, allowing the system to identify which events need attention.
Processing AI on the device enables immediate alerts and reduces the need to send continuous video footage offsite. This method also supports features like object-based search and customizable detection rules, even when network connectivity is weak or unavailable.
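The alerting logic described in the two paragraphs above can be sketched as a detector plus user-defined rules, all evaluated on the device. The detector below is a deterministic stand-in, not Reolink's model; the class names and threshold-free rules are illustrative only.

```python
# Hedged sketch of on-device alert filtering: a detection model (stubbed
# here) labels objects in each frame, and customizable rules decide which
# events raise an alert locally, with no cloud round-trip required.

CLASSES = {"person", "vehicle", "animal"}


def detect(frame: list[str]) -> list[str]:
    # Stand-in for an on-device object detection and classification model;
    # for determinism this stub accepts pre-labeled "frames".
    return [label for label in frame if label in CLASSES]


def should_alert(frame: list[str], rules: set[str]) -> bool:
    """Return True if any detected object class matches the user's rules."""
    return any(label in rules for label in detect(frame))


# Example rule set: alert on people and vehicles, but ignore animals.
rules = {"person", "vehicle"}
pet_event = should_alert(["animal"], rules)            # no alert for a pet
person_event = should_alert(["person", "animal"], rules)  # person -> alert
```

The same per-class labels also make object-based search possible: stored events can be filtered by class without re-scanning the raw footage.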
AI in Household Appliances
In the kitchen, Samsung is showing AI Vision systems developed with Google Gemini, integrating cameras and AI models into appliances such as refrigerators. Internal cameras take pictures of stored food items and containers. Computer vision models analyze these images to identify what is inside, while a language model interprets the findings and provides responses through the appliance interface.
According to the company, this system can recognize a broader range of food items without requiring users to manually label or register them beforehand. It relies on visual identification instead.
This arrangement connects perception and language processing within the appliance. The system is designed to operate continuously as the contents change.
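The two-stage perception-plus-language arrangement described above can be sketched as a simple pipeline. Both model functions below are hypothetical stubs standing in for the appliance's vision model and language model; they are not the Samsung or Gemini APIs.

```python
# Illustrative two-stage appliance pipeline: a vision model labels items in
# an internal camera image, then a language model turns those labels into a
# user-facing answer. Both stages are stubbed for determinism.

def vision_model(image: bytes) -> list[str]:
    # Stand-in: a real system would classify food items from the image
    # without requiring the user to register them beforehand.
    return ["milk", "eggs", "spinach"]


def language_model(prompt: str) -> str:
    # Stand-in: a real system would generate a conversational response.
    return "You have: " + prompt


def whats_inside(image: bytes) -> str:
    """Run perception, then language, and return the interface response."""
    items = vision_model(image)
    return language_model(", ".join(items))


answer = whats_inside(b"")
```

Because identification is visual, the pipeline can simply be re-run whenever the camera captures a new image, which is how the system keeps up as contents change.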
LG’s Home Robot
LG is incorporating similar AI features into robotics with a home robot unveiled before CES. The robot includes cameras, proximity sensors, motors, and an onboard AI processor on a mobile platform meant for indoor use. Vision systems help the robot recognize people and objects around it, while navigation software processes sensor data to move through rooms and avoid obstacles.
The robot also has articulated arms for basic manipulation of its environment. Voice input and audio output let it respond to spoken requests while following predefined guidelines.
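The navigation behavior described above, using sensor data to move through rooms and avoid obstacles, can be sketched as a simple control loop. The sensor layout and clearance threshold below are made up for illustration; LG has not published this logic.

```python
# Minimal sketch of sensor-driven obstacle avoidance, the kind of loop a
# home robot's navigation software might run: read proximity distances,
# go forward when clear, otherwise turn toward the more open side.

def steer(left_cm: float, front_cm: float, right_cm: float) -> str:
    """Pick a motion command from three proximity-sensor distances (cm)."""
    SAFE = 50.0  # hypothetical minimum clearance before turning
    if front_cm >= SAFE:
        return "forward"
    # Front blocked: turn toward whichever side has more clearance.
    return "turn_left" if left_cm > right_cm else "turn_right"


clear_path = steer(120, 200, 80)   # nothing ahead within 50 cm
blocked = steer(30, 20, 90)        # obstacle ahead, right side is clearer
```

A real robot fuses camera and proximity data into a map rather than reacting to raw distances, but the decision at each step reduces to the same question: is the path ahead clear, and if not, where is it clearer?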
Source: https://www.pymnts.com/
