Google teased translation glasses at last week’s Google I/O developer conference, promising that one day you’ll be able to hear someone speak a foreign language and see the English translation in your glasses.
Company executives demonstrated the glasses in a video. It showed not only “closed captions” (real-time text spelling out, in the same language, what another person is saying) but also translation between English and Mandarin or Spanish, enabling people who speak two different languages to carry on a conversation, while also letting hearing-impaired users see what others are saying to them.
As Google Translate hardware, the glasses would solve a major pain point with using Google Translate today: if you use audio translation, the translated audio steps on the real-time conversation. By presenting translation visually, the glasses would let you follow conversations far more easily and naturally.
Unlike Google Glass, the translation-glasses prototype is also augmented reality (AR). Let me explain what I mean.
Augmented reality happens when a device captures data from the world and, based on its recognition of what that data means, adds information that the user can see or hear.
Google Glass wasn’t augmented reality; it was a heads-up display. The only contextual or environmental awareness it could deal with was location. Based on location, it could give turn-by-turn directions or location-based reminders. But it couldn’t ordinarily harvest visual or audio data and then return information to the user about what they were seeing or hearing.
Google’s translation glasses, by contrast, do capture audio data from the environment and return to the user a transcript of what is being said, in the language of their choice.
Audience members and the tech press have reported on the translation function as the exclusive application for these glasses, without any analytical or critical exploration as far as I can tell. The most glaring fact that should have been mentioned in every report is that translation is just one arbitrary choice for processing audio data in the cloud. The glasses could do so much more!
They could easily process any audio for any application and return any text or audio to be consumed by the wearer. Isn’t that obvious?
In reality, the hardware sends sound to the cloud and displays whatever text the cloud sends back. That’s all the glasses do: send audio, receive and display text.
The applications for processing audio and returning actionable or informational text are practically unlimited. The glasses could send any sound and then display whatever text comes back from the remote application.
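To make that loop concrete, here is a minimal sketch of the client side in Python. Everything specific in it is an assumption for illustration: the endpoint URL, the request format, and the `app` parameter are hypothetical, not anything Google has published.

```python
import requests  # assumed HTTP client for this sketch; a real device would differ

# Hypothetical endpoint: Google has published no API for these glasses.
AUDIO_SERVICE_URL = "https://audio-service.example.com/process"

def stream_audio_to_display(microphone, display, app="translate", lang="en"):
    """Send microphone audio to a cloud service; show whatever text comes back.

    `microphone` is any iterable yielding raw audio chunks, and `display`
    is any object that renders text in the lenses -- both are placeholders.
    """
    for chunk in microphone:
        response = requests.post(
            AUDIO_SERVICE_URL,
            params={"app": app, "lang": lang},  # "translate" is just one choice
            data=chunk,
            headers={"Content-Type": "application/octet-stream"},
        )
        display.show(response.json()["text"])
```

The `app` parameter is the argument in miniature: point the same audio stream at a different cloud service and the glasses become a different product, with no hardware change at all.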
The sound could even be encoded, like an old-fashioned modem signal. A noise-generating device or smartphone app could emit R2D2-like beeps and whistles that could be processed in the cloud like an audio QR code which, once interpreted by the server, could return any information to be displayed on the glasses. That text could be instructions for operating a piece of equipment. It could be information about a specific artifact in a museum. It could be information about a specific product in a store.
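That kind of audio encoding is technically mundane. Below is a toy version in Python (using numpy): a short ID encoded as a sequence of tones, one per 4-bit symbol, roughly the frequency-shift keying that old modems used. The frequencies, symbol rate, and payload are all invented for illustration, not any real standard.

```python
import numpy as np

# A toy "audio QR code": each 4-bit symbol becomes one tone at its own frequency.
SAMPLE_RATE = 16_000      # samples per second
SYMBOL_SECONDS = 0.05     # duration of each tone
BASE_FREQ = 1_000.0       # frequency of symbol 0, in Hz
FREQ_STEP = 200.0         # spacing between adjacent symbols, in Hz

def encode(payload: bytes) -> np.ndarray:
    """Turn bytes into a waveform: each 4-bit nibble becomes one tone."""
    t = np.arange(int(SAMPLE_RATE * SYMBOL_SECONDS)) / SAMPLE_RATE
    tones = []
    for byte in payload:
        for nibble in (byte >> 4, byte & 0x0F):
            freq = BASE_FREQ + FREQ_STEP * nibble
            tones.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(tones)

def decode(signal: np.ndarray) -> bytes:
    """Recover bytes by finding the dominant frequency in each symbol frame."""
    frame = int(SAMPLE_RATE * SYMBOL_SECONDS)
    freqs = np.fft.rfftfreq(frame, d=1 / SAMPLE_RATE)
    nibbles = []
    for i in range(0, len(signal) - frame + 1, frame):
        spectrum = np.abs(np.fft.rfft(signal[i:i + frame]))
        peak = freqs[np.argmax(spectrum)]
        nibbles.append(int(round((peak - BASE_FREQ) / FREQ_STEP)))
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

# A museum kiosk could broadcast encode(b"exhibit-42"); a cloud service hearing
# it through the glasses would decode the ID and return the exhibit's text.
assert decode(encode(b"exhibit-42")) == b"exhibit-42"
```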
This is the kind of application we’ll be waiting five years or more for visual AR to deliver. In the interim, most of it could be done with audio.
One obvious use for Google’s “translation glasses” would be to pair them with Google Assistant. It would be just like using a smart display with Google Assistant, a home appliance that delivers visual data along with the usual audio data from Assistant queries. But that visual data would be available in your glasses, hands-free, no matter where you are. (That would be a heads-up display application, rather than AR.)
But imagine if the “translation glasses” were paired with a smartphone. With permission granted by others, Bluetooth transmissions of contact data could display (on the glasses) who you’re talking to at a business event, along with your history with them.
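As a sketch of how that might work, here is a hypothetical opt-in beacon scanner built on the real bleak BLE library; the manufacturer ID, token format, and contact store are all invented, since no such protocol exists.

```python
from bleak import BleakScanner  # real BLE scanning library (pip install bleak)

# Made-up manufacturer ID under which attendees would broadcast an opt-in token.
EVENT_COMPANY_ID = 0xFFFF

# Illustrative contact store; in practice this would live on your phone.
CONTACTS = {
    b"tok-ada": ("Ada", "Met at I/O 2019; discussed speech APIs."),
}

async def show_nearby_contacts(display):
    """Scan for opt-in contact beacons and show any matches on the glasses."""
    found = await BleakScanner.discover(timeout=5.0, return_adv=True)
    for _device, adv in found.values():
        token = adv.manufacturer_data.get(EVENT_COMPANY_ID)
        if token and bytes(token) in CONTACTS:
            name, history = CONTACTS[bytes(token)]
            display.show(f"{name}: {history}")

# Usage, given some `display` object that renders text in the lenses:
#   import asyncio; asyncio.run(show_nearby_contacts(display))
```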
Why the tech press broke Google Glass
Critics slammed Google Glass for two main reasons. First, a forward-facing camera mounted on the headset made people uncomfortable. If you were talking to a Google Glass wearer, the camera was pointed right at you, making you wonder whether you were being recorded. (Google hasn’t said whether its “translation glasses” will have a camera, but the prototype doesn’t have one.)
Second, the excessive and conspicuous hardware made wearers look like cyborgs.
The combination of these two hardware transgressions led critics to assert that Google Glass was not socially acceptable in polite company.
Google’s “translation glasses,” on the other hand, have no camera and don’t look like cyborg implants; they look pretty much like ordinary glasses. And the text visible to the wearer is not visible to the person they’re talking to. It simply looks like the wearer is making eye contact.
The only remaining point of social unacceptability for Google’s “translation glasses” hardware is that Google would essentially be “recording” the words of others without permission, uploading them to the cloud for translation, and presumably retaining those recordings as it does with its other voice-related products.
Still, the fact of the matter is that augmented reality, and even heads-up displays, are super compelling, if only makers could get the feature set right. One day, ordinary-looking glasses will offer full visual AR. In the meantime, the right AR glasses would have the following features:
- They look like ordinary glasses.
- They can take prescription lenses.
- They have no camera.
- They process audio with AI and return data via text.
- And they offer assistant functionality, returning results as text.
To date, no such product exists. But Google has demonstrated that it has the technology to build one.
While in-language captioning and translation may be the most compelling feature, it is, or should be, just a Trojan horse for many other compelling business applications.
Google has not announced when, or even whether, the “translation glasses” will ship as a commercial product. But if Google doesn’t make them, someone else will, and they’ll prove a killer category for business users.
The ability of ordinary-looking glasses to give you the visual results of AI interpretation of whom and what you’re hearing, plus the visual and audio results of assistant queries, would be a total game changer.
We’re in an awkward period of technology development, where AR applications exist mainly as smartphone apps (where they don’t belong) while we wait for the mobile, socially acceptable AR glasses that are still many years away.
In the meantime, the solution is clear: audio-centric AR glasses that capture sound and display words.
That’s exactly what Google demonstrated.
Copyright © 2022 IDG Communications, Inc.