Google recently held its I/O 2022 conference, where it unveiled a slew of software improvements. As it pushes further into artificial intelligence, Google is adding new capabilities across its products, including Google Maps, Google Meet, and Google Assistant.
Continue reading to learn more about the new features that will be added to these Google products soon.
Google Maps Immersive View
Google Maps is getting a new feature called Immersive View, which Google describes as a wholly new way of exploring with Maps. According to the company, Immersive View will let users experience what a neighborhood, landmark, restaurant, or popular venue is like, and even feel as if they are there.
Google essentially uses artificial intelligence to fuse billions of Street View and aerial images into a digital model of a location. The feature will let users view a landmark, building, or place at various times of the day. Google is also introducing other features such as eco-friendly routing and Live View.
Google Meet Portrait restore
First, Google Meet is getting a new feature called Portrait restore, which uses Google AI to improve video quality. If a user is in a dimly lit setting or on a slow Wi-Fi connection, Google will detect this and enhance the user's video automatically.
Google is also introducing a feature called Portrait Light, which lets users add AI-simulated, studio-quality lighting to their video feed. Other additions to Google Meet include audio de-reverberation, live sharing, automated transcriptions, and new security safeguards.
No more “Hey Google” on Google Assistant
Google Assistant is also gaining new capabilities. Starting May 11, 2022, users of the Nest Hub Max in the United States can issue Google Assistant commands without saying ‘Hey Google’. The feature, called Look and Talk, uses both facial and voice recognition to identify the user and carry out spoken commands. Google described the technology behind the feature in its official blog post:
“There’s a lot going on behind the scenes to recognize whether you’re actually making eye contact with your device rather than just giving it a passing glance. In fact, it takes six machine learning models to process more than 100 signals from both the camera and microphone — like proximity, head orientation, gaze direction, lip movement, context awareness and intent classification — all in real time.”
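Google has not published how those models are combined, but the general idea of fusing several per-frame signal scores into one activation decision can be sketched roughly as follows. Everything here is hypothetical: the signal names, thresholds, and gating logic are illustrative stand-ins, not Google's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Signals:
    """Hypothetical per-frame scores in [0, 1], standing in for the
    outputs of separate models (proximity, head pose, gaze, etc.)."""
    proximity: float
    head_orientation: float
    gaze_direction: float
    lip_movement: float
    intent: float


def should_activate(s: Signals, threshold: float = 0.7) -> bool:
    """Decide whether to start listening, fusing multiple cues."""
    # Require sustained attention cues (gaze AND head pose); a high
    # gaze score alone, with the head turned away, is just a glance.
    attention = min(s.gaze_direction, s.head_orientation)
    # Evidence the user is about to speak: lip movement or intent.
    speaking = max(s.lip_movement, s.intent)
    # Proximity gates everything: far-away faces never trigger.
    return s.proximity > 0.5 and attention > threshold and speaking > threshold


# A passing glance: gaze is high but the head is turned away.
glance = Signals(proximity=0.9, head_orientation=0.2,
                 gaze_direction=0.9, lip_movement=0.1, intent=0.1)
print(should_activate(glance))  # False: attention cue fails
```

The point of the sketch is the gating structure: no single signal can trigger activation on its own, which is how a system distinguishes genuine eye contact from a passing glance.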
Which of these new features will you try first? Let us know in the comments section below.