January is nearly over. The discussions these past few days have been… erratic. We’ve jumped from hyper-specific physical parameters straight into abstract life goals.
We’re essentially trying to solve two problems: teaching the glasses to understand human physical habits (like, what actually counts as “looking up”?), and teaching the AI to understand human intent (like, “who do I want to become?”).
It sounds grand when you say it like that. But in the documentation, it’s just endless arguments about “angles,” “tags,” and “reset buttons.”
The HUD: Translating “Angles” into “Comfort”
We hit a classic “Engineer Mindset vs. User Mindset” wall while designing the “Look Up to Wake” feature.
The original idea was quite hardcore: ask the user to set a specific trigger angle. But who actually knows the difference between looking up 15 degrees and looking up 20? For users, it’s a complete blind spot.
On Monday, we decided to scrap the numbers. Instead, we’re mimicking the FaceID setup logic—a calibration flow.
- “Look straight ahead and click.” — Sets the 0-degree baseline.
- “Look up comfortably and click.” — Sets the trigger threshold.
- “Look up as far as you can and click.” — Sets the accidental trigger limit.
We have to admit that everyone’s neck flexibility and line of sight are different. Rather than forcing the user to adapt to the machine’s parameters, we should just let the machine record the user’s natural movement.
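The three-click flow above can be sketched as a small data structure. This is a hypothetical illustration, not our actual firmware API; the class name and the idea of comparing pitch deltas against the two recorded thresholds are assumptions for the sake of the example.

```python
# Hypothetical sketch of the three-click "Look Up to Wake" calibration.
# All names here are illustrative, not the real firmware API.
from dataclasses import dataclass

@dataclass
class PitchCalibration:
    baseline: float      # "Look straight ahead and click" pitch, in degrees
    comfortable: float   # "Look up comfortably and click" pitch
    maximum: float       # "Look up as far as you can and click" pitch

    def trigger_delta(self) -> float:
        """Wake the HUD once the user passes their own comfortable angle."""
        return self.comfortable - self.baseline

    def reject_delta(self) -> float:
        """Ignore pitches beyond the user's recorded range (likely noise)."""
        return self.maximum - self.baseline

    def should_wake(self, current_pitch: float) -> bool:
        delta = current_pitch - self.baseline
        return self.trigger_delta() <= delta <= self.reject_delta()

# Two users with different necks get different triggers from the same flow:
stiff = PitchCalibration(baseline=2.0, comfortable=12.0, maximum=25.0)
flexible = PitchCalibration(baseline=0.0, comfortable=20.0, maximum=45.0)
assert stiff.should_wake(15.0)         # well within this user's range
assert not flexible.should_wake(15.0)  # same pitch, not yet "looking up" for them
```

The point of the sketch is that no number is ever shown to the user: the machine records three natural movements and derives the parameters itself.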
The Goal System: More Than Just Tagging
Wednesday afternoon, Wang Yi and I got into a rather metaphysical discussion: How involved should the AI be in a user’s life goals?
If we let a user set a goal, say, “I want to learn Japanese,” that shouldn’t just be a static tag. It needs to change how the AI filters the world.
- When the glasses catch relevant content, they should be more aggressive with Suggestions.
- It might shift from being a passive assistant to a “Japanese study supervisor.”
We debated “Preset Tags” (Occupation/Hobby) vs. “Free Text” for ages. Presets are efficient, but free text carries more… desire. The consensus was: this isn’t just a fill-in-the-blank exercise; it’s giving the AI a “backstory.” If the AI knows I’m a Product Manager trying to survive, its answers shouldn’t sound like a cold encyclopedia entry.
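One way to see the difference between a static tag and a “backstory” is to sketch how a free-text goal might actually change behavior. Everything below is hypothetical: the function names, the prompt wording, and the weighting in the relevance boost are assumptions, not our real design.

```python
# Illustrative sketch: a goal as a "backstory" rather than an inert tag.
# Names and weights are hypothetical.

def build_system_prompt(occupation: str, free_text_goal: str) -> str:
    """Fold the user's goal into the assistant's persona instead of
    storing it as a profile field that nothing reads."""
    return (
        f"The user is a {occupation}. Their stated goal: {free_text_goal!r}. "
        "When you notice content related to this goal, proactively suggest it. "
        "Answer like a coach, not a cold encyclopedia entry."
    )

def relevance_boost(content_tags: set[str], goal_tags: set[str]) -> float:
    """Be more aggressive with Suggestions when content overlaps the goal."""
    overlap = content_tags & goal_tags
    return 1.0 + 0.5 * len(overlap)  # assumed weighting, for illustration only

prompt = build_system_prompt("Product Manager", "I want to learn Japanese")
assert "learn Japanese" in prompt
assert relevance_boost({"japanese", "travel"}, {"japanese"}) > 1.0
```

The same goal feeds two places at once: the persona the AI speaks from, and the filter deciding what surfaces as a Suggestion. That is what “changing how the AI filters the world” means in practice.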
By the way, I made a joke in the meeting: “After all, I’m just a typist.” Watching my laptop heat up from running too many docs… sometimes the gap between us and the AI doesn’t feel that big.
That Button to “Reset Everything”
Monday also brought up a very practical issue: The Reset.
As features pile up—multi-device connections, cloud sync, custom settings—the side effect of complexity is that bugs can hide anywhere. Just like the iPhone’s “Reset Network Settings,” we need to give the user a Panic Button.
When Bluetooth won’t connect, notifications won’t pop, or the glasses just feel “stupid,” users don’t need to know which line of code broke. They just need a button that says: “Restore settings to default, but don’t delete my data.” It’s not sexy, but it’s the last line of defense for system stability. We even argued about the difference between “Unbind” (forgetting completely) and “Disconnect” (just a temporary break). That semantic distinction often determines whether a user feels anxious or reassured when things go wrong.
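The three operations we argued about can be separated cleanly in code. This is a minimal sketch under assumptions: the `Device` shape and the settings/user-data split are invented for illustration, not our actual data model.

```python
# Minimal sketch of reset vs. disconnect vs. unbind semantics.
# The Device model here is hypothetical.
from dataclasses import dataclass, field

DEFAULTS = {"hud_calibrated": False, "notifications": True}

@dataclass
class Device:
    settings: dict = field(default_factory=lambda: dict(DEFAULTS))
    user_data: list = field(default_factory=list)  # notes, history, goals
    paired: bool = True
    connected: bool = True

    def reset_settings(self):
        """The panic button: settings back to default, user data untouched."""
        self.settings = dict(DEFAULTS)

    def disconnect(self):
        """A temporary break: pairing survives, reconnecting is painless."""
        self.connected = False

    def unbind(self):
        """Forgetting completely: pairing and local state are both gone."""
        self.paired = False
        self.connected = False
        self.settings = dict(DEFAULTS)

glasses = Device()
glasses.user_data.append("japanese-study-log")
glasses.settings["notifications"] = False
glasses.reset_settings()
assert glasses.settings == DEFAULTS and glasses.user_data  # data survived
```

The semantic distinction lives in which fields each method touches: `disconnect` leaves `paired` alone, `unbind` does not, and `reset_settings` never goes near `user_data`. That is the reassurance the button’s label has to communicate.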
Closing Thoughts
The vibe this week… it feels like we’re trying to give these glasses a bit of “humanity.”
It needs to know if your neck is tired (HUD calibration), it needs to know what you want to learn (Goal System), and it needs to offer you a “regret pill” when you mess up the settings (Reset).
That’s hardware, I suppose. You think you’re writing code, but really, you’re translating an understanding of human behavior, line by line, explaining it to a stone.