Innovation unlocks new ways for companies to capture markets, onboard users, and, in turn, improve their bottom lines. That is why, like every year, the Mobile World Congress recently held in Barcelona was greeted with the usual fanfare as it showcased path-breaking innovations. A vivid example is the Metaverse, a whole new virtual world where things work beyond imagination. Beyond that, applications powered by AI (Artificial Intelligence), IoT (Internet of Things), and other emerging technologies surprise us at regular intervals and point to a more tech-enabled future. Along these lines, Google has introduced a technology that can read body language without needing any cameras: Google's Soli.
Soli is a purpose-built chip that tracks body motion at a microscopic scale, using a miniature radar to follow the movement of the human body in real time. It can track sub-millimeter motion, no matter how swift, with high precision. The chip measures just 8 x 10 mm and combines the sensor and antenna array in a single package, so it can fit into even the smallest devices. Since it has no moving parts, its energy consumption is comparatively low. These qualities make Google's Soli chip an attractive choice for makers of upcoming flagship devices.
Aiming to make a huge leap, Google's ATAP (Advanced Technology and Projects) division is working on a language to facilitate interaction with the Soli sensor. The goal is to establish a universal set of gestures through which people can instruct their Soli-equipped devices to perform certain actions.
How Soli tracks movements
According to Google ATAP, the Soli sensor emits electromagnetic waves and identifies movement from their reflections. The information carried by the reflected signal, such as its time delay and frequency shift, tells the sensor how far away an object is and how it is moving, and therefore what action needs to be taken.
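As a rough illustration of that principle, the echo's round-trip time encodes distance and its Doppler shift encodes radial velocity. The sketch below uses textbook radar formulas; the helper names are assumptions for illustration, not Soli's actual interface (Soli is known to operate in the 60 GHz band).

```python
# Illustrative radar math: distance from round-trip delay,
# radial velocity from Doppler shift. Not Soli's real API.

C = 3.0e8            # speed of light, m/s
CARRIER_HZ = 60e9    # Soli operates in the 60 GHz band

def range_from_delay(round_trip_delay_s: float) -> float:
    """Distance to the target from the echo's round-trip time."""
    return C * round_trip_delay_s / 2

def velocity_from_doppler(doppler_shift_hz: float) -> float:
    """Radial velocity from the Doppler frequency shift."""
    return C * doppler_shift_hz / (2 * CARRIER_HZ)

# A hand 30 cm away returns its echo after roughly 2 ns:
d = range_from_delay(2e-9)        # ≈ 0.3 m
v = velocity_from_doppler(400.0)  # ≈ 1.0 m/s toward the sensor
```

Real sensors estimate these quantities from many reflections at once, but the two formulas capture why time delay and frequency fluctuation are enough to recover position and motion.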
What is remarkable about the Soli sensor is that it can sense subtle changes in the received signal, which allows it to distinguish complicated finger movements easily. It understands gestures by interpreting the signal in every possible way, which makes it highly precise.

Wearables and other likely applications
Google's Soli is garnering attention across the tech world because of its small size and efficiency. Wearables are the most natural fit for the technology: such devices usually have small displays, and a sensor like Soli can make them richer in features than ever.
Take the Apple Watch, which comes with a physical Digital Crown that lets users interact with the watchOS interface in multiple ways. Apple could take its game to the next level by incorporating a Soli sensor into its smartwatch, eliminating the need for the Digital Crown: users would simply wave their fingers to get the desired task done.
For instance, they could rotate a finger clockwise to increase the volume, or mimic pressing a button to turn the watch on or off. In a nutshell, the sensor could enhance the user experience, make the product look futuristic, and help justify the premium price tag.
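A gesture vocabulary like that naturally maps onto a dispatch table. The sketch below is purely hypothetical; the gesture names and the `Watch` class are assumptions for illustration, not any real Soli or watchOS API.

```python
# Hypothetical sketch: routing recognized gestures to device actions.

class Watch:
    def __init__(self):
        self.volume = 5
        self.on = True

    def handle_gesture(self, gesture: str) -> None:
        # Map each recognized gesture to the action it should trigger.
        actions = {
            "rotate_clockwise": lambda: self._set_volume(self.volume + 1),
            "rotate_counterclockwise": lambda: self._set_volume(self.volume - 1),
            "button_press": self._toggle_power,
        }
        action = actions.get(gesture)
        if action:
            action()

    def _set_volume(self, level: int) -> None:
        self.volume = max(0, min(10, level))  # clamp to 0..10

    def _toggle_power(self) -> None:
        self.on = not self.on

w = Watch()
w.handle_gesture("rotate_clockwise")  # volume: 5 -> 6
w.handle_gesture("button_press")      # turns the watch off
```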
Applications in the market
The Soli sensor debuted in the Pixel 4 smartphone. It made waves in the smartphone industry because customers could perform gesture controls such as waving a hand to skip songs, silence the phone, or snooze an alarm. Google even announced that these capabilities were just a glimpse of what was coming its users' way. But Google did not ship an improved version of the Soli technology in the Pixel 5 series. After dropping the idea of using Soli in its smartphones, Google decided to use the technology in other devices, such as the Google Nest Hub, with the aim of giving its users a whole new experience. Radar sensors are embedded in the second-generation Google Nest Hub smart display, which detects body movement and breathing patterns when someone sleeps close to it. Remarkably, the device observes sleep patterns without any physical contact, unlike wearables.
What’s next for Soli
Beyond gesture recognition, Google wants this technology to have other game-changing applications, which is why it is putting the sensors in devices like desktops to recognize users' everyday movements and make new kinds of decisions rather than simply reacting to gestures. This has the potential to revolutionize everyday tech products. Some of the advanced applications in the pipeline include a thermostat that shares weather reminders or suggests taking an umbrella before leaving the house, a TV that turns itself off after detecting no movement from the user, and a computer that clears its cache without needing a specific keyboard command.
Leonardo Giusti, head of design at ATAP, revealed that a major portion of the research revolves around proxemics, the study of how people use the space around them. In layman's terms, when we come close to another person, we expect certain things, such as engagement. The ATAP team has applied this concept to establish a similar connection between humans and devices.
The ATAP team envisions machines that give their users immersive experiences. For instance, when a computer detects the user approaching and entering its personal space, it takes the initiative and performs specific actions, such as turning on the screen, without waiting for any button to be pressed.
But it is important to understand that proximity alone won't do the job, because there will be scenarios in which machines perform undesirable actions based on proximity cues alone. This is where the Soli sensor comes into the picture: it closely monitors subtleties in body movement to infer intent. It tries to understand which path the user will take next, the direction of the head, and other fine details.
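One way to picture combining those cues is to require proximity, approach, and head orientation to agree before the device acts. The thresholds and field names below are illustrative assumptions, not how Soli actually fuses its signals.

```python
# Hypothetical sketch: fuse proximity with motion and orientation cues
# so a device only wakes when the user actually intends to engage.

from dataclasses import dataclass

@dataclass
class Reading:
    distance_m: float       # how far away the person is
    radial_velocity: float  # negative = moving toward the device
    facing_device: bool     # inferred head orientation

def should_wake(r: Reading) -> bool:
    """Wake the screen only when all three cues agree."""
    close_enough = r.distance_m < 1.2
    approaching = r.radial_velocity < -0.1
    return close_enough and approaching and r.facing_device

# Walking past the desk without looking at the screen: stay asleep.
should_wake(Reading(1.0, -0.5, facing_device=False))  # False
# Walking up to the desk while looking at it: wake.
should_wake(Reading(1.0, -0.5, facing_device=True))   # True
```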
The origins of the chip
The ATAP team developed these capabilities by having a dedicated team perform a series of choreographed tasks in a living room while multiple cameras tracked their movements.
Lauren Bedal, a senior interaction designer at ATAP, has a background in dance, which helped her explain how choreographers take a basic movement idea and develop variations on it. She emphasized how crucial it was to observe how a dancer manages their weight while changing body position and orientation. Such studies helped the ATAP team create a set of movements so that nonverbal communication could take place easily between humans and machines.
She also shed light on a few examples: a computer that shows unread emails after sensing the user's presence, pauses a video when no one is looking at the screen and resumes it when a face turns back toward it, or suggests switching to audio-only mode after sensing a lack of attention during a video call.
How far is Soli from being a mainstay
After reading this far, it is natural to want to experience this technology, but it is not ready yet; several challenges still need to be addressed. For example, if multiple people are present in a room, it would be difficult for a device to decide which action to take. This is the prime reason Bedal stressed that the technology is still in the research phase and that there is no point in launching it hastily, as that would ruin the user experience and invite backlash. That is understandable: none of us would react calmly if our computer deleted an ultra-important file, performed an unintended action on the internet, or did something worse.
This is why the ATAP team has to take every small detail into account before launching this technology globally. The most important thing for them is to get the timing right, because if a device reacts too early or too late, users won't have a pleasant experience.
Say a TV is equipped with a Soli sensor, and the user leaves the couch to get some snacks. In the meantime, the TV automatically turns off because it no longer detects anybody in front of it. Is that acceptable? Of course not: it would be frustrating to watch the TV turn on and off every few minutes. Problems like these are keeping the ATAP team busy in the study phase.
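A common remedy for that kind of flicker is a grace period: power down only after a sustained absence. This is a minimal sketch of the idea under assumed values; the five-minute timeout and the class itself are illustrative, not anything Google has described.

```python
# Illustrative sketch: hysteresis keeps a presence-aware TV from
# flickering off the moment the viewer briefly steps away.

GRACE_PERIOD_S = 300  # assumed: wait five minutes before powering down

class PresenceAwareTV:
    def __init__(self):
        self.screen_on = True
        self.last_seen_s = 0.0

    def update(self, now_s: float, person_detected: bool) -> None:
        if person_detected:
            self.last_seen_s = now_s
            self.screen_on = True
        elif now_s - self.last_seen_s > GRACE_PERIOD_S:
            self.screen_on = False  # only after a sustained absence

tv = PresenceAwareTV()
tv.update(0.0, True)     # viewer on the couch: screen on
tv.update(60.0, False)   # stepped away briefly: still on
tv.update(400.0, False)  # gone for over five minutes: powers down
```

The same pattern, with tuned thresholds, applies to any of the presence-triggered behaviors described above: the cost of a false "absent" is high, so the device should err on the side of staying in its current state.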
Beyond such complicated challenges, privacy is a major concern: people may hesitate to sit in front of sensor-equipped devices for fear of exposing sensitive information.
However, Chris Harrison, a researcher from Carnegie Mellon University and Future Interfaces Group’s director, has thrown its weight behind Google and said that devices equipped with a Soli sensor will be user-friendly and privacy-centric. So, people would trust this technology sooner or later. All in all, we will have to wait a bit longer to see this technology running at its full potential.