Post-privacy AI Glasses Claim To Listen To Your Every Word

The headline-making Harvard duo who turned a pair of Meta smart glasses into a privacy violation machine last year now have their own pair of smart specs to sell, which they tell The Register will make people "super intelligent" by listening in on their conversations 24/7 and offering unsolicited feedback. 

Caine Ardayfio and AnhPhu Nguyen on Tuesday opened preorders for Halo X, a pair of smart glasses they designed that, while not equipped with a camera like a pair of Meta Ray-Bans, do include a heads-up display. The display and embedded microphones, combined with an agentic AI that is purportedly able to digest what the glasses hear, mean the frames can respond with information the AI deems relevant to the wearer's current circumstances.

"The moment you put them on, you can answer literally any question," Nguyen told us in an interview. "You could ask, what's the top three GDPs in the world, or what date Christopher Columbus set sail - stuff like that....You could know any fact about any field from economics, history and more."

To be clear, Nguyen and Ardayfio are still working through their own hardware development process, and told The Register that they're currently deciding between three vendors for the final product, concept images of which appear in this article. There are about 20 testers in the Silicon Valley area wearing beta versions, which are running on unspecified third-party hardware.

But a social media video Halo published over the summer shows a beta pair of the glasses being used in public, as well as simulated footage of the glasses in use.

As designed, the glasses transmit audio via a Bluetooth Low Energy connection to a paired iPhone running the Halo app, which sends transcribed requests to Halo's cloud platform for processing. While the pair told us that transcripts and summaries are stored locally on the paired device, all AI processing is done in the cloud. Ardayfio told us that using a combination of models from providers including Google and Perplexity allows Halo to balance "speed, cost, and contextual reasoning."
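In rough Python, the flow described would look something like the sketch below; the class, endpoint, and payload names are our own illustrative stand-ins, not Halo's actual API.

```python
# Hypothetical sketch of the flow described above: audio is transcribed on the
# paired phone, transcripts stay local, and each request is shipped to the cloud
# for AI processing. Endpoint and field names are illustrative, not Halo's API.
import json
import urllib.request


class HaloStylePipeline:
    def __init__(self, cloud_url: str):
        self.cloud_url = cloud_url      # placeholder for Halo's cloud platform
        self.local_store = []           # transcripts/summaries kept on the phone

    def handle_transcript(self, transcript: str) -> str:
        self.local_store.append(transcript)             # stored locally
        payload = json.dumps({"transcript": transcript}).encode()
        req = urllib.request.Request(
            self.cloud_url, data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:       # all AI work happens remotely
            return json.load(resp).get("reply", "")
```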

The display in the glasses can show around four lines of text at roughly 40 characters per line - not much, but Ardayfio described it as a happy medium, optimized "for quick, glanceable prompts rather than long passages."
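To give a feel for how little fits in that budget, here's a trivial sketch of our own (not Halo's rendering code) that wraps an answer into four 40-character lines:

```python
# Rough illustration of the stated display budget: ~4 lines of ~40 characters.
import textwrap

MAX_LINES, MAX_COLS = 4, 40


def fit_to_display(text: str) -> list[str]:
    lines = textwrap.wrap(text, width=MAX_COLS)
    if len(lines) > MAX_LINES:
        lines = lines[:MAX_LINES]
        lines[-1] = lines[-1][:MAX_COLS - 3] + "..."   # truncate long answers
    return lines


print(fit_to_display(
    "Christopher Columbus set sail from Palos de la Frontera on August 3, 1492."
))
```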

Since the frames are essentially just a screen and microphones, the Halo X glasses offer little in the way of onboard controls. Ardayfio said the only thing one can do on the glasses themselves is nod upward to bring up a dashboard that displays the time, calendar events, and the like. Everything else is done from the paired smartphone.

Nguyen and Ardayfio told us that queries requiring an internet search can take the glasses up to two seconds, while some questions – like asking the weather – can be answered within a few hundred milliseconds. The glasses themselves don't do much in the way of heavy AI lifting, with Ardayfio describing them more as a display for the larger AI system.

Searching the web for fun facts isn't all that the glasses can do. They also create a repository of everything they've heard and, if prompted, can provide AI summaries of past conversations, reminders of things said in meetings, and the like. For instance, if you were to ask them, "Tell me what my wife and I decided about our daughter's college tuition on Saturday," the glasses would send a request to Halo's cloud service to search the stored conversation text, summarize it, and present the summary as text on the display.
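As a sketch of what that round trip might look like, consider the following; the naive keyword search and the summarizer callable are stand-ins for whatever Halo actually runs in its cloud.

```python
# Hedged sketch of the recall flow: search stored transcripts for a matching
# conversation, hand the hit to a summarizer, and return display-ready text.
# The keyword match and the summarize callable are placeholders, not Halo's code.
from datetime import date


def recall(query: str, archive: dict, summarize) -> str:
    words = [w.lower() for w in query.split()]
    hits = [text for text in archive.values()
            if any(w in text.lower() for w in words)]
    if not hits:
        return "No matching conversation found."
    return summarize("\n".join(hits))


# Toy usage: a one-entry archive and a "summarizer" that returns the first sentence.
archive = {date(2025, 8, 23): "We decided to cover tuition from the 529 plan first."}
print(recall("daughter college tuition", archive,
             lambda text: text.split(".")[0] + "."))
```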

As designed and demonstrated, Halo X doesn't even listen for a wake word - it's simply supposed to be smart enough to understand whether something was meant for it and come up with an appropriate response. What that means, naturally, is that Halo X is always listening, always recording, and still sometimes gets it wrong.

"Our custom AI agent listens to your whole day," Nguyen explained, "and every sentence you say goes to the AI so it can figure out if it should help you at this given time and what it should help you with." 

As for the noise (and privacy intrusion) of recording every sound heard in a public space, Ardayfio told us the glasses are designed to focus on the wearer, with background speech typically ignored.

"The glasses continuously check whether what you’ve said merits a response," Ardayfio said in an emailed statement. "We’ve been training the system with human feedback to improve when it should and shouldn’t intervene. It’s not perfect yet, but we’ve seen steady improvement with this approach."

Eventually, the designers envision the glasses getting enough training to become polite, useful assistants rather than hardware that sometimes chimes in when it isn't wanted.

"We want our technology to disappear completely. We don't want to make just another smart watch, but a second brain that feels like intuition," Nguyen said. 

If Halo X manages to get there, it could be quite useful at work - no one could ever again claim they didn't volunteer to lead that project, or dodge responsibility for a bad idea. The pair told us that they're considering enterprise users and have been talking to people in that space to see what their needs might be, but had nothing to share yet.

A privacy provocation

When we last spoke to Ardayfio and Nguyen in 2024, they had just demonstrated software that captured video streams from Meta Ray-Ban smart glasses and fed them through AI to compile in-depth dossiers on anyone unfortunate enough to wander into the wearer's line of sight.

Given their prior work, instant, in-your-eyes information retrieval makes sense. However, it comes with privacy complications. Halo X's AI agent can be disabled via the paired mobile app (available for iOS; an Android version is unlikely "for a long time," per Ardayfio), but if you want the glasses to be more than frames that slowly drain a battery without providing a service, you'll have to let the AI agent listen. Constantly.

If that seems like a privacy nightmare to you, you're not alone. Ardayfio and Nguyen have plans to address those concerns, naturally.

"An AI that knows everything about you can be super useful," Nguyen said. "But privacy is a huge problem." 

The AI glasses one can preorder from Halo beginning on Tuesday will ship with software that offers end-to-end encryption, Nguyen said, "so no one can read your conversations except you."

Encryption isn't implemented in the test units yet, but Ardayfio said it will happen on the glasses at the point of capture, as well as at rest and in transit. In other words, if it's implemented correctly, Halo X transcripts should be accessible only to the person who created them, on their smartphone and the display of their glasses. Halo said it's also working to achieve SOC 2 compliance.
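To illustrate the property being promised, here's a minimal sketch using the third-party cryptography package: symmetric encryption under a key held only on the wearer's devices, which is one way (assumed here, not confirmed by Halo) to keep stored and in-transit transcripts unreadable to the cloud.

```python
# Minimal sketch of the end-to-end idea: transcripts are encrypted under a key that
# never leaves the wearer's devices, so ciphertext stored or relayed by the cloud is
# unreadable there. Uses the `cryptography` package; not Halo's implementation.
from cryptography.fernet import Fernet

device_key = Fernet.generate_key()   # generated and kept on the phone/glasses only
cipher = Fernet(device_key)

ciphertext = cipher.encrypt(b"Transcript: we agreed to move the review to Friday.")
# Anything without device_key (including the cloud) sees only ciphertext.
plaintext = cipher.decrypt(ciphertext).decode()
print(plaintext)
```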

As for whether there might be privacy concerns around Halo X users recording everything they hear in public for digestion and indexing by an AI, that's not on them, Nguyen told us. 

"At the end of the day, from Zoom notetakers to iPhone voice memos, the onus is on the end user to ask for proper consent," Nguyen said. "We expect users to use the product in a responsible manner, getting consent from others they interact with, just like any other meeting notetaker." 

The difference between Zoom notetakers and Halo X, however, is that the latter is designed to be on at all times and in all places. So are users going to turn it off when they're out in public, ask the waiter at the restaurant and the cashier at the grocery store for recording consent, or just keep it on and forget about everyone else's wishes? And what if you live in California, which requires all parties to consent to being recorded?

A rendered concept image of the Halo X smart glasses

Three weeks into beta testing, the results have been positive, but not perfect, according to the founders.

"It's early days, but the feedback is super promising," Nguyen said. "Professionals have generally loved it because they already use note-taking software now but this one works everywhere."

"I think there's still a lot of work to be done on the AI, though," Nguyen continued. That's not to say that the AI is stupid. Instead, early testers simply wanted it to be a bit better at figuring out how, with what, and when to assist users. 

"There's so many different ways people want to be assisted," Nguyen added. "So we're still thinking of ways to improve how to control it." One possibility, the pair explained, would be adding the ability for users to edit the master AI prompt for the Halo agent to tell it to only respond in certain circumstances.

If you want to give an ever-listening pair of AI glasses a try, you can preorder them now for $249, though you shouldn't expect them to ship until early next year. Just don't be surprised if some people stop talking in your presence. ®
