Blog

  • 2026.1.26-2026.1.30

    January is nearly over. The discussions these past few days have been… erratic. We’ve jumped from hyper-specific physical parameters straight into abstract life goals.

    We’re essentially trying to solve two problems: teaching the glasses to understand human physical habits (like, what actually counts as “looking up”?), and teaching the AI to understand human intent (like, “who do I want to become?”).

    It sounds grand when you say it like that. But in the documentation, it’s just endless arguments about “angles,” “tags,” and “reset buttons.”

    The HUD: Translating “Angles” into “Comfort”

    We hit a classic “Engineer Mindset vs. User Mindset” wall while designing the “Look Up to Wake” feature.

    The original idea was quite hardcore: ask the user to set a specific trigger angle. But who actually knows the difference between looking up 15 degrees and 20 degrees? It’s a blind spot for users.

    On Monday, we decided to scrap the numbers. Instead, we’re mimicking the FaceID setup logic—a calibration flow.

    • “Look straight ahead and click.” — Sets the 0-degree baseline.
    • “Look up comfortably and click.” — Sets the trigger threshold.
    • “Look up as far as you can and click.” — Sets the accidental trigger limit.

    We have to admit that everyone’s neck flexibility and line of sight are different. Rather than forcing the user to adapt to the machine’s parameters, we should just let the machine record the user’s natural movement.
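For the curious, here’s roughly how those three clicks might turn into thresholds. This is a sketch only — the names, the margin value, and the hysteresis logic are all my own invention, not the actual firmware:

```python
from dataclasses import dataclass

@dataclass
class TiltCalibration:
    baseline: float      # "look straight ahead" pitch, in degrees
    comfortable: float   # "look up comfortably" pitch
    maximum: float       # "look up as far as you can" pitch

def trigger_thresholds(cal: TiltCalibration, margin: float = 0.15):
    """Derive wake/ignore angles relative to the user's own range.

    The trigger fires a little below the comfortable angle so the
    gesture feels easy; anything past the user's maximum is treated
    as an accidental or exaggerated movement and ignored.
    """
    comfort_range = cal.comfortable - cal.baseline
    wake_at = cal.baseline + comfort_range * (1 - margin)
    return wake_at, cal.maximum

def should_wake(pitch: float, cal: TiltCalibration) -> bool:
    wake_at, ignore_above = trigger_thresholds(cal)
    return wake_at <= pitch <= ignore_above
```

The point of the sketch is that no absolute angle appears anywhere — everything is relative to what this particular neck recorded.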

    The Goal System: More Than Just Tagging

Wednesday afternoon, Wang Yi and I got into a rather metaphysical discussion: How involved should the AI be in a user’s life goals?

    If we let a user set a goal, say, “I want to learn Japanese,” that shouldn’t just be a static tag. It needs to change how the AI filters the world.

    • When the glasses catch relevant content, it should be more aggressive with Suggestions.
    • It might shift from being a passive assistant to a “Japanese study supervisor.”

    We debated “Preset Tags” (Occupation/Hobby) vs. “Free Text” for ages. Presets are efficient, but free text carries more… desire. The consensus was: this isn’t just a fill-in-the-blank exercise; it’s giving the AI a “backstory.” If the AI knows I’m a Product Manager trying to survive, its answers shouldn’t sound like a cold encyclopedia entry.

    By the way, I made a joke in the meeting: “After all, I’m just a typist.” Watching my laptop heat up from running too many docs… sometimes the gap between us and the AI doesn’t feel that big.

    That Button to “Reset Everything”

    Monday also brought up a very practical issue: The Reset.

    As features pile up—multi-device connections, cloud sync, custom settings—the side effect of complexity is that bugs can hide anywhere. Just like the iPhone’s “Reset Network Settings,” we need to give the user a Panic Button.

    When Bluetooth won’t connect, notifications won’t pop, or the glasses just feel “stupid,” users don’t need to know which line of code broke. They just need a button that says: “Restore settings to default, but don’t delete my data.” It’s not sexy, but it’s the last line of defense for system stability. We even argued about the difference between “Unbind” (forgetting completely) and “Disconnect” (just a temporary break). That semantic distinction often determines whether a user feels anxious or reassured when things go wrong.

    Closing Thoughts

    The vibe this week… it feels like we’re trying to give these glasses a bit of “humanity.”

    It needs to know if your neck is tired (HUD calibration), it needs to know what you want to learn (Goal System), and it needs to offer you a “regret pill” when you mess up the settings (Reset).

    That’s hardware, I suppose. You think you’re writing code, but really, you’re translating an understanding of human behavior, line by line, explaining it to a stone.

  • 2026.1.12-2026.1.23

    If the previous phase was about drawing the blueprints, then these last two weeks? It feels like we’ve been paving a road in the mud.

    It’s the details. The ones that look insignificant until you actually dig in and realise there’s a massive sinkhole underneath. We’ve spent the last ten days or so dealing almost exclusively with Edge Cases—connection drops when switching devices, cross-border data compliance, and that Android permission headache that just won’t go away.

    It’s a bit tedious. Maybe even a bit maddening. But I suppose this is just the reality of shipping hardware.

    The Boundaries of the Physical World (And That Wall)

    The most “absurd yet real” discussion this week was about data storage and networks.

    When we designed the Global Version, we took it for granted that the “Cloud” is omnipresent. But reality is… the world is chopped up by invisible borders. We were debating a specific scenario: What happens if a user registered in the US flies to China with their glasses?

    We were half-joking in the meeting about VPNs, about “tunneling out” or “swimming back”. While we know the technical side—latency, data residency compliance—there’s something quite jarring about having to write code that checks “Are you in China right now?” to decide which server to hit.

    The conclusion was pragmatic: We have to keep data where it was registered (e.g., APAC data stays in Singapore) to follow the law. As for the latency of accessing it across borders? That’s physics. We just have to try and optimise it.
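The rule itself is almost embarrassingly small once written down. A sketch — the region names and hostnames are made up for illustration:

```python
# Region is fixed at sign-up; the user's current location never changes it.
HOME_SERVERS = {
    "US": "us.api.example.com",
    "EU": "eu.api.example.com",
    "APAC": "sg.api.example.com",  # APAC data stays in Singapore
}

def api_host(registered_region: str, current_country: str) -> str:
    """Always route to the home region's server.

    current_country is accepted only so callers can log the
    cross-border case (e.g. for latency monitoring) -- it must
    never influence where the data lives.
    """
    return HOME_SERVERS[registered_region]
```

All the real complexity lives in the latency you eat when `registered_region` and `current_country` disagree — and that part, as noted, is physics.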

    The Cost of “Double Dipping” (Multi-Device Support)

    Another one that’s causing headaches: Multi-device support.

    A user might have two phones, but only one pair of glasses. When they switch between iPhone A and iPhone B, what happens to the audio they’re currently recording?

    We thought this was a simple Bluetooth switching issue. Turns out, it’s more like “data surgery”. If Bluetooth cuts out, the glasses cache a chunk of audio, the phone has another chunk. When the user reconnects to this (or the other) device, we have to stitch these fragments back together like a jigsaw puzzle.

    We argued for ages: Auto-repair? A pop-up asking for confirmation? We even got into the weeds of a “Repair and Transcribe” button logic. My gut feeling is to hide the complexity from the user as much as possible, but the technical boundaries are there—if it breaks, it breaks. We just need to find a way to patch the gap so it doesn’t look too ugly.
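The “data surgery” step, very roughly, looks like this. A sketch under my own assumptions — each cached fragment is a `(start_ms, end_ms, source)` tuple, and I’ve invented the function shape:

```python
def plan_stitch(fragments):
    """Decide which cached audio spans to keep and where the
    unrecoverable gaps are.

    Overlaps are resolved in favour of whichever fragment starts
    first; gaps are surfaced explicitly so the app can show a
    "repair" prompt instead of pretending the recording is whole.
    """
    ordered = sorted(fragments, key=lambda f: f[0])
    keep, gaps = [], []
    cursor = None  # end of the last span we've committed to
    for start, end, source in ordered:
        if cursor is not None and start > cursor:
            gaps.append((cursor, start))  # missing audio here
        clipped_start = start if cursor is None else max(start, cursor)
        if end > clipped_start:
            keep.append((clipped_start, end, source))
            cursor = end
    return keep, gaps
```

Returning the gaps rather than hiding them is the whole argument in miniature: if it breaks, it breaks — but the seam should at least be an honest one.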

    Who Are You? (AI Memory & Preferences)

    On the feature side, we’re pushing ahead with the Personalization module.

    To make the AI actually smart, we need the user to feed it some “Facts”—job, nickname, or just “Tell me more about you”.

    It’s tricky, though. We want the AI to be clever, so we need data; but asking a user to fill out a “Who am I?” section feels a bit like a census interrogation. We went back and forth on the UI—tags or free text? We’re leaning towards giving the AI a “Persona” entry point. Tell it “I’m a Product Manager,” so its answers sound like a capable assistant, not a search engine.

    That Annoying “Forced Update”

    Finally, the infrastructure work we can’t avoid—Forced Updates for the App and Firmware.

    The ecosystem gap between Android and iOS is… vivid here. To ensure users are on a critical version, we need various pop-up strategies. Full-screen block? Standard pop-up?

    Honestly, I detest that “update or don’t use it” logic. It feels aggressive. But reviewing the Product Update Page, I have to admit… sometimes for safety (or to stop a critical bug), we have to be the “bad guy.” All we can really do is make the pop-up text sound sincere and make the changelog clear. At least let the user know that the 10-minute wait is worth it.

    Closing Thoughts

    The documents from these past few weeks are full of “What if…?”

    • What if the net cuts during recording?
    • What if they haven’t enabled Bluetooth permissions?
    • What if the user has two accounts?

    Product management is sometimes just battling these 1% probabilities. It’s exhausting—arguing all afternoon over the logic of a single pop-up. But seeing these holes get plugged one by one… it brings a bit of peace.

    This is “filling in the cracks,” I suppose. The road still needs paving.

  • 08.12.2025 – 09.01.2026

    If the last few weeks were about… cutting things down, finding out what we couldn’t do… then this week? This week has felt more like… filling in the cracks. Or maybe digging up the foundations and pouring the concrete again.

    We’ve moved away from the high-level concepts now. We’re in the weeds. The gritty, invisible stuff that—if we get it wrong—is an absolute disaster.

    The Pairing Process: A bit of a rethink

    I’ve been obsessing over the pairing flow. We had this Version 1.0, and looking back at it… well, it was a bit of a monster, honestly. We were trying to get the user to do everything—notifications, AI permissions, data sharing—before they’d even really used the glasses. It was just… too heavy.

    So, we scrapped it. We’ve gone for a Version 2.0.

    The intuition here was simple: just let them connect. We’ve decoupled the “connection” from the “setup”. All those permissions—the notifications, the contacts—we’ve pushed them back. Let’s wait until the user actually needs them, or when they enter the Dashboard. It feels… lighter, somehow. Less of a barrier. I think it’s the right call. It gets them to that “magic moment” of connection much faster.

    The Invisible Safety Nets

    Then there’s the stuff that keeps me up at night. The “infrastructure.”

    We spent a ridiculous amount of time on Account Security. It sounds dry, I know. But when you’re dealing with overseas markets… the GDPR stuff is a minefield. We had this whole debate about how to determine a user’s region—IP address versus manual selection. We settled on a mix.

And it gets oddly specific. Like… did you know we have to build specific logic just to stop a teenager in Italy from using the LLM features? We have to verify their birthday in the backend and silently gate those features if they’re under 18. It’s a lot of logic for something you hope most users never notice.
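For flavour, the gate boils down to something like this — the region table and threshold here are illustrative, not our actual compliance rules:

```python
from datetime import date

# Regions where LLM features require the user to be an adult.
# Illustrative values only, not the real compliance table.
ADULT_ONLY_LLM_REGIONS = {"IT": 18}

def llm_features_enabled(birthday: date, region: str, today: date) -> bool:
    """Silently gate LLM features for minors in restricted regions.

    The rest of the device keeps working; this flag only hides
    the AI entry points.
    """
    min_age = ADULT_ONLY_LLM_REGIONS.get(region)
    if min_age is None:
        return True
    # Age in whole years, accounting for whether the birthday
    # has happened yet this year.
    age = today.year - birthday.year - (
        (today.month, today.day) < (birthday.month, birthday.day)
    )
    return age >= min_age
```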

    The Fear of the “Paperweight”

    Firmware updates (OTA). This is the one that makes me nervous.

    We had a review on the update logic, and the mood was… cautious. Rightly so. We’re basically terrified of “bricking” the device. So we’ve put in these strict gates: battery must be over 50%, network must be stable.

    But the big one is the MD5 integrity check. We’re now forcing a local self-check after the download. Basically, the app checks the package hasn’t been corrupted before it even thinks about sending it to the glasses. It’s a necessary safety rail.
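Put together, the preflight is a short checklist. A sketch — the function shape is mine, though the gates mirror the ones above:

```python
import hashlib

def ota_preflight(battery_pct: int, network_stable: bool,
                  package: bytes, expected_md5: str) -> bool:
    """Gate an OTA push: battery over 50%, stable network, and the
    downloaded package must match its published MD5 before anything
    is sent to the glasses.
    """
    if battery_pct <= 50 or not network_stable:
        return False
    # Local self-check: refuse to forward a corrupted download.
    return hashlib.md5(package).hexdigest() == expected_md5
```

The MD5 line is the cheap insurance against the “paperweight” scenario — a corrupted package never leaves the phone.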

    We also figured out the Beta channel. I’m quite keen on this. It lets us push updates to the enthusiasts (and ourselves) without risking the general public. But the version sorting logic… making sure a Beta user can slide back to a Stable release without things breaking… that was a headache.
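The headache, concretely, is making `1.4.0-beta.2` sort below `1.4.0` so the stable build is offered as an upgrade. One way to do it — a sketch assuming a simple `major.minor.patch[-tag.n]` scheme, not our actual versioning code:

```python
def version_key(version: str):
    """Sort key where '1.4.0-beta.2' < '1.4.0' (stable).

    A missing pre-release tag ranks highest, so a beta user is
    offered the stable build of the same version as an upgrade
    rather than being stranded on the beta channel.
    """
    core, _, pre = version.partition("-")
    nums = tuple(int(p) for p in core.split("."))
    if not pre:
        return nums + ((1,),)  # stable outranks any pre-release
    tag, _, n = pre.partition(".")
    return nums + ((0, tag, int(n or 0)),)
```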

    Closing Thoughts

    It’s been a week of arguments, honestly. Arguing with Rongke and Jialiang about pop-up logic or error codes. But seeing the PRD evolve from that bloated 1.0 to this sharper, safer 2.0… it’s satisfying. In a quiet way.

    It’s like building a road. No one thanks you for the tarmac, do they? They only scream if there’s a pothole. We’re just trying to make sure there aren’t any potholes.

    Next week… privacy policies. Joy.

    Onwards.

  • 1.12.2025-4.12.2025

    This week has been focused on moving from the “what if” phase to the “exactly how” phase. We’ve been locking down complex logic flows, defining edge cases, and making sure the core user experience is airtight for launch.

The Command Centre: Finalising the Controls

The highlight of the week was the detailed review of our hardware control page within the app. This is the user’s primary interface for managing their device, and we’ve aimed for a balance of visual feedback and utility. We finalised a design featuring a 3D model that reflects the device’s status in real-time.

    Key adjustments reviewed include:

    • Visual Precision: Refining how users adjust display brightness and the perceived position of content within their field of view.
    • Quick Toggles: Ensuring stable entry points for core modes, such as focus/DND settings and recording features.
    • Interaction Standards: Aligning hardware controls, such as scroll wheel directions and head gestures, with established mental models to ensure the learning curve is as flat as possible.

Smart and Secure Updates

Defining a reliable firmware update process was a significant part of our logic discussions. We’ve established a robust sequence of checks—verifying battery levels and network stability—before any transfer begins. To guarantee security and package stability, we are implementing a local integrity check to prevent corrupted data from being sent to the hardware. We also mapped out a channel system that allows for distinct update paths for standard users versus those on early-access beta versions.

Experimental Foundations

While not every feature will launch on day one, we spent time building the foundation for data-driven iteration. We discussed the infrastructure needed for A/B testing different hardware algorithms. This allows us to potentially split user groups to compare performance in areas like audio capture or display clarity, ensuring that future updates are backed by real-world usage data.

Privacy and Stability

We continued to refine focus modes to ensure user privacy. Discussions focused on “locking” the device display when removed or put into specific silence modes, requiring a phone-based re-authorisation. We also confirmed granular privacy controls for hardware sensors, giving users clear visibility into which specific functions are accessing data at any given time.

Closing Thoughts

It was a week of deep dives and technical scrutiny. The product is becoming leaner and more defined. While some of the more complex customisation features were trimmed to focus on launch stability, the vision for a solid, intelligent assistant remains clear.

    Next week, we turn our attention to finalising the remaining UI elements and syncing with the hardware teams on feasibility for our refined control schemes.

  • 24.11.2025-28.11.2025

    This week has been a lesson in the harsh reality of product management. If I had to sum it up, it would be: “The Art of the Cut.”

    We went from high-level dreaming to the granular, sometimes painful, ground reality of shipping a Version 1.0 product. Here is the breakdown of a week spent navigating compromises and complex logic.

    Killing the Darlings (Again)

    The biggest headline this week is a tough one. Remember the modular, widget-based homepage I was so excited about? The one that would allow users to fully customise their dashboard?

    It’s been cut from Phase 1.

    Leadership reviewed the scope and decided it was too heavy for our initial launch timeline. We are reverting to a simpler, static list structure. I won’t lie—I’m gutted. Seeing weeks of interaction logic and design work get “deprioritised” (read: put in the freezer) is a bitter pill to swallow. But, stepping back, I understand it. Stability and speed must come first. We need to ship a solid product, not a perfect concept that never launches.

    The “Invisible” Mountain: Login & Permissions

    With the fancy UI features trimmed, my focus shifted to the unsexy but critical backend logic: Onboarding.

    You wouldn’t believe how complicated “just signing up” can be when you’re building a global product. We spent hours debating:

    • Region Selection: We can’t just let users sign in; we have to route them to the correct server cluster (GDPR, etc.) before they even enter a password. We explored using IP detection vs. manual selection to make this seamless.
    • Age Restrictions: We had to implement logic to handle users under 18 in certain regions (like Italy), restricting access to specific LLM features while keeping the rest of the device functional.
    • The “Email vs. Social” Matrix: Handling edge cases where a user signs up with Google, then tries to sign in with the same email address manually. It’s a logic maze.

    Android vs. iOS: The Notification Nightmare

    We also dove deep into the Notification System. The disparity between iOS and Android continues to be my biggest headache.

    iOS gives us neat categories (Social, News, etc.). Android? It’s the Wild West. We had to design a fallback logic where we maintain our own list of app packages to categorise notifications intelligently.
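That fallback is, in essence, a lookup table we maintain ourselves. A sketch — the package names and buckets below are illustrative, not our real mapping:

```python
# On iOS the OS supplies categories; on Android we fall back to our
# own package-to-category table. Illustrative entries only.
PACKAGE_CATEGORIES = {
    "com.whatsapp": "social",
    "com.twitter.android": "social",
    "com.google.android.gm": "email",
    "com.slack": "work",
}

def categorize(package_name: str) -> str:
    """Bucket an Android notification by its source package,
    defaulting to 'other' for anything we don't recognise."""
    return PACKAGE_CATEGORIES.get(package_name, "other")
```

The unglamorous part is that this table has to be shipped, updated, and argued over — which is exactly the Wild West tax iOS users never pay.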

    We also made a firm decision on Replies. We debated letting users reply to messages directly from the glasses. Ultimately, we decided no. The interaction cost is just too high with our current hardware controls. It’s better to offer a perfect viewing experience than a clumsy, frustrating replying experience.

    Team Banter & Reality Checks

    Despite the cuts, the team spirit is still there. I had some long sessions with the development team (and some solid banter with my colleague Shaozheng). There’s a running joke now about the “intern shield”—if things go wrong, I’m just here to learn, right? (Though in reality, the pressure is definitely on).

    We wrapped up the week feeling exhausted but clearer. The product is leaner. The logic is tighter. The “bells and whistles” are gone, but the engine is being built properly.

    Next week: locking down the final UI for the simplified dashboard and hopefully getting some of these logic flows approved by legal.

    Onwards.

  • 17/11/2025-21/11/2025

    This week has been a proper rollercoaster, to say the least. If I had to sum it up in one phrase, it would be “killing your darlings.”

    We started the week with high energy, diving deep into the architecture of the new AI homepage. My vision—and what we spent hours refining—was a fully modular, widget-based dashboard. The idea was brilliant: users could customise their feed with dynamic cards for their daily stories, upcoming schedules, spinal health stats, and project updates. We spent days nailing down the interaction logic, figuring out how the “Daily Story” would pull in data and how the calendar permissions would sync seamlessly with the system. I even mapped out the “newbie village” strategy to ensure the dashboard didn’t look empty for first-time users. It felt solid. It felt complete.

    However, the reality of product management is rarely just about what looks good in a design file.

    During the final review with the department head, the decision came down hard: the widget system is too heavy for the Phase 1 launch. To ensure stability and meet our timeline, we have to strip it back.

    I won’t lie, I’m absolutely gutted. Seeing weeks of logic flows, component designs, and interaction specs effectively put on the shelf—or “deprioritised to V2,” which often feels like the same thing—is a tough pill to swallow. We are reverting to a much simpler, static list structure. It’s cleaner, yes, and safer for development, but it lacks the soul and customisability I was championing.

    That said, the show must go on.

    While the dashboard took a hit, we made significant progress on the backend and logic side of things. We spent a lot of time sorting out the “invisible” parts of the product. We locked down the logic for notification forwarding, specifically how to handle message queuing so we don’t bombard the user’s eyes when they put the glasses on after a break. We also hammered out the nitty-gritty of data permissions, server selection for GDPR compliance, and how we handle log uploads for feedback without freezing the user interface.

    It wasn’t all dry technical talks, though. We had some interesting debates about whether we should support replying to messages directly from the device. The consensus is shifting towards “no” for now—the interaction cost is just too high for the hardware we have. It’s better to do a great notification experience than a clumsy reply experience.

    So, as I wrap up this week, I’m feeling a mix of exhaustion and resignation. The product is leaner, sharper, and definitely more shippable, but I’m still mourning the loss of my widgets. I guess that’s the job—sometimes you have to cut the features you love the most to get the product out the door.

    On to the next sprint. Hopefully, with fewer cuts next time.

  • 10/11/2025 – 14/11/2025

    This week was, in a word, massive. After weeks of debating individual features and philosophies, this was the week we pulled it all together for the main project framework review. I’m thrilled (and slightly relieved) to say it landed incredibly well. We presented the new app structure, and the team is now fully aligned on the path forward.

    A huge part of this new framework is the main AI dashboard. We’re moving away from a static feed to a much more flexible and personalized main screen. It finally feels like a proper, intelligent hub rather than just a list of files. We’ve mapped out how this new dashboard will intelligently surface different kinds of information to the user at the right time. It’s a big step up for the whole experience.

    Of course, before we could finalise a framework that relies so heavily on AI, we had to get philosophical. A huge chunk of the week was spent in a marathon session on privacy and security. It’s not enough to just tick a legal box. My position, which I argued for, is that we have to be user-centric from the absolute start. We must build transparency and trust into the why of our design, making our processes visible and speaking to users in plain English. This philosophy will now underpin every privacy feature we build.

    While the review was the main event, we were also in the weeds nailing down other critical details. We had a fascinating debate on the scope of our AI assistant, particularly about how the mobile and glasses experiences should differ. We also (finally!) got alignment on how to display sync status to the user, which is vital for managing expectations.

    It was an exhausting but incredibly productive week. The framework is locked, the vision is clear, and the team is aligned. Now, the real fun begins.

  • Smashing the First Review & Musing on the Future

    This week was dominated by a significant milestone: our first major project review. I’ll be honest, I was bracing myself for a proper grilling, but the entire process was surprisingly smooth. No curveball questions from the development team, which is a victory I’ll gladly take. My mentor and I managed to field all project queries, and we’re now fully aligned on the path forward.

    With the review ticked off, my focus snapped back to the core of the project: the AI.

    A large chunk of the week was spent nailing down its core capabilities. This isn’t just about what it can do, but perhaps more importantly, what it shouldn’t. We’re drawing very clear privacy boundaries. The next challenge, which I’m still mulling over, is how to translate this into the homepage UI. The goal is to create a dashboard that elegantly displays AI-gathered insights alongside a user’s daily tasks and general “busy-ness” level. It’s a tricky balance to get right, and it’s not quite there yet.

    On the collaboration front, things are looking brilliant. I had a great chat with our Interaction Designer, and she’s fully on board with the design direction. Our working styles seem to click perfectly, which feels refreshingly harmonious. Here’s hoping that smooth collaboration continues.

    The week ended not with a whimper, but with a deep-dive conversation. On Friday, a colleague and I found ourselves chatting until 10:30 PM, mapping out the future of the AR industry. We inevitably landed on Apple, and you just have to admire the sheer depth and muscle they’re putting into the field. It was a good reminder that, at the end of the day, Apple is still Apple.

  • So, I Tried to Score Myself. It Got… Real.

    Okay, so I sat down with that MyCAF framework and tried to honestly score myself on this whole placement experience. No tool, just me and a blinking cursor. It was… illuminating. And a little bit painful.

    It turns out I’m living a life of extremes.

Here’s how I rated myself: I gave myself a solid 8/10 for Proactivity and 8/10 for Enterprise. That felt right. It’s exactly what I was feeling when I wrote about my app framework getting picked over my mentor’s. I didn’t just wait; I was proactive, I did treat it like my own business, and I got the win. A high-five from me, to me.

    But then I had to score the other stuff.

    I gave myself a 6/10 for Resilience and a 6/10 for Agility.

    That’s me being brutally honest about that “Product Furnace” week. When the friction was high, I didn’t feel “agile.” I felt like I was about to break. My resilience was running on fumes. It’s a pretty humbling thing to admit, but it’s the truth. I’m just not there yet.

The most interesting part was trying to score my communication. I gave myself a 9/10 for Connection (I can talk to anyone) but only a 6/10 for Storytelling.

    And that’s the whole ball game right there. I’m learning that being a good PM isn’t just about connecting with people. It’s about convincing them. It’s about being a storyteller who can sell a vision, not just a team member who can chat.

    So yeah. That’s my self-assessment. I’ve got the engine, but I’m clearly still in the shop getting the armour fitted.