Google wants to integrate radar into its connected devices, but what for?
Google’s connected speakers and screens could, in the coming years, be equipped with radar technology. But what would be the point of such an approach? The user is at the heart of this ambition.
Today, owners of a Google Nest speaker must activate their device by voice with the “Ok Google” or “Hey Google” trigger to interact with the Google Assistant. But that could change in the future. Indeed, the Mountain View firm is exploring new avenues to make future generations of its connected devices more attentive to their owners. And the way it intends to get there may be surprising.
Google devices able to understand their users
In a blog post published on Tuesday, Google explains that the teams responsible for developing its connected device systems are exploring “new ideas to design and build technology that helps devices understand us without verbal commands”. The problem raised is the following: even though our environment is increasingly connected thanks to devices designed to make our daily lives easier, we still have to constantly call on them for that to happen.
Having to say “Ok Google” to activate one of its devices is presented by the company as particularly off-putting: “The repetition of these commands is tedious and interferes with the natural flow of daily life.” That is what Google wants to address by developing smarter connected objects, capable of anticipating users’ intentions without them necessarily having to use a voice activation command.
A radar detection system in development at Google
Google has been developing Soli, a radar detection technology designed for “creating these new interaction techniques”.
“The radar sensor uses radio waves to detect presence, body language and gestures within its detection area.”
For Google, the primary benefit of this technology is that the Soli radar sensor “is not a camera”, which addresses the concerns of people who want to preserve their privacy. The system is also able to learn from the user’s habits, in particular the way in which they enter a device’s “personal space”, also called its “fields”, in order to activate it. “Through our studies, we have found that the degree of overlap between these fields is a good indicator of the level of interest between a user and a device,” explains Google.
The company uses its studies to develop what it calls a “social intelligence”, which should, in the future, allow its devices to “participate in daily life in a more harmonious and considerate way”. How could this translate in practice? For example, let’s say you’re using a Google Nest device with a screen to follow a cooking recipe you’re preparing in real time. The device would be able to identify when you walk away to get an ingredient, and it would then pause the video. When you returned, the video would resume without you having to say anything.
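To make the idea concrete, the pause-on-departure behaviour described above could be sketched as follows. This is purely illustrative: none of these class or method names come from Google's Soli API, and the "field radius" threshold and radar readings are invented stand-ins for whatever presence signal the real sensor would provide.

```python
# Hypothetical sketch of presence-based playback control.
# All names and values here are assumptions, not Google's actual API.

class VideoPlayer:
    """Minimal stand-in for a media player on a smart display."""
    def __init__(self):
        self.playing = False

    def play(self):
        self.playing = True

    def pause(self):
        self.playing = False


class PresenceController:
    """Pauses playback when the user leaves the device's 'field'
    and resumes it when they step back in."""
    def __init__(self, player, field_radius_m=1.5):
        self.player = player
        self.field_radius_m = field_radius_m  # assumed interaction range
        self.user_in_field = False

    def on_radar_reading(self, distance_m):
        in_field = distance_m <= self.field_radius_m
        if in_field and not self.user_in_field:
            self.player.play()    # user returned: resume playback
        elif not in_field and self.user_in_field:
            self.player.pause()   # user walked away: pause playback
        self.user_in_field = in_field


player = VideoPlayer()
controller = PresenceController(player)

# Simulated distance readings in metres: the user starts close,
# walks away to fetch an ingredient, then comes back.
for distance in [0.8, 0.9, 3.0, 2.5, 1.0]:
    controller.on_radar_reading(distance)

print(player.playing)  # True: the user is back within the field
```

The key design choice, mirroring Google's description, is that the device reacts to how the user's position overlaps with its field rather than to any explicit command.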
Google gives no details about when connected devices equipped with such a system might reach the market. But there is little doubt that this will be one of the key challenges of the coming years as “benevolent” connected objects continue to make their way into homes around the world.