As reported by Android Authority, fragments of code for Google's companion app for its upcoming smart glasses, named the 'Google Glass Companion,' have surfaced in the Android Studio preview build. The discovery sheds light on the app's core features and its underlying design philosophy. The latest iteration of the device puts a premium on privacy protection, with many data-processing tasks executed on-device to minimize external data transfers.
The code snippets suggest that the glasses include a 'conversation detection' capability: when a conversation is detected, the Gemini assistant automatically pauses spoken notifications. Notably, conversation detection runs entirely on the glasses themselves, with no audio uploaded to Google's servers. Users can also manually enable a Do Not Disturb mode for durations ranging from 1 to 8 hours.
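The reported gating behavior can be pictured as a small piece of decision logic. The sketch below is purely hypothetical (the class name, method names, and structure are invented for illustration and do not come from the leaked code); it only models the two rules the report describes: suppress spoken notifications while a conversation is detected, and honor a user-set 1-to-8-hour Do Not Disturb window.

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical model of the reported notification-gating rules.
// Not actual Google Glass Companion code.
public class NotificationGate {
    // No DND window active by default.
    private Instant dndUntil = Instant.MIN;

    // The report describes a user-selectable DND duration of 1 to 8 hours.
    public void enableDoNotDisturb(int hours) {
        if (hours < 1 || hours > 8) {
            throw new IllegalArgumentException("DND duration must be 1-8 hours");
        }
        dndUntil = Instant.now().plus(Duration.ofHours(hours));
    }

    // Spoken notifications are suppressed during a detected conversation
    // (detection itself would run on-device) or while DND is active.
    public boolean shouldSpeakNotification(boolean conversationDetected) {
        if (conversationDetected) {
            return false;
        }
        return Instant.now().isAfter(dndUntil);
    }

    public static void main(String[] args) {
        NotificationGate gate = new NotificationGate();
        System.out.println(gate.shouldSpeakNotification(true));   // false: conversation in progress
        System.out.println(gate.shouldSpeakNotification(false));  // true: no conversation, no DND
        gate.enableDoNotDisturb(2);
        System.out.println(gate.shouldSpeakNotification(false));  // false: DND window active
    }
}
```

The key point the report emphasizes is that the `conversationDetected` signal would be produced locally on the glasses, so no conversation audio ever leaves the device.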
Additionally, the glasses may offer different AI experiences depending on the hardware, implying that some devices might not fully support every Gemini function due to hardware limitations. The code also reveals an 'audio-only mode' that lets users turn off the screen and interact exclusively through voice commands.
Turning to imaging, the glasses support 1080p video recording and also offer an 'experimental' 3K resolution setting. To deter covert filming, the system incorporates an LED recording-indicator light that must remain visible during recording; if the light is obscured, the system automatically disables the recording function.
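The indicator behavior amounts to a simple hardware interlock: recording can only run while the LED is unobstructed. The sketch below is a hypothetical illustration of that rule (the class, method names, and the idea of an "obscured" sensor callback are assumptions for clarity, not details from the leaked code).

```java
// Hypothetical model of the reported LED-interlock rule.
// Not actual Google Glass Companion code.
public class RecordingIndicatorInterlock {
    private boolean recording = false;

    // Recording may only start if the indicator LED is visible.
    public void startRecording(boolean ledVisible) {
        if (!ledVisible) {
            throw new IllegalStateException("Cannot record: indicator LED is obscured");
        }
        recording = true;
    }

    // Invoked by a (hypothetical) sensor callback when the LED is covered;
    // per the report, recording is disabled automatically.
    public void onIndicatorObscured() {
        recording = false;
    }

    public boolean isRecording() {
        return recording;
    }

    public static void main(String[] args) {
        RecordingIndicatorInterlock interlock = new RecordingIndicatorInterlock();
        interlock.startRecording(true);
        System.out.println(interlock.isRecording());  // true
        interlock.onIndicatorObscured();
        System.out.println(interlock.isRecording());  // false: auto-disabled
    }
}
```

The design choice mirrors similar anti-covert-recording measures on other wearable cameras: rather than trusting software alone, the visible indicator is made a precondition of the recording state itself.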
