Surgery Planner

Mixed Reality, Interaction Design

Surgery Planner

A Mixed Reality medical app for brain surgeons to plan and visualise surgeries, designed for and built on Snap Spectacles smart glasses. 

Role Interaction Designer;
XR Developer
Timeline 5 months
Team Solo
Skills Lens Studio;
Snap Spectacles
Surgery Planner preview

Background

This project was my graduation internship with Augmedit (The Netherlands) ("the client"), completed as part of the XR Creative Developer education program at Hyper Island (Sweden).

I was tasked with redesigning, developing, and evaluating some existing features of a medical AR app for Microsoft HoloLens as a new version on Snap Spectacles smart glasses, assisting surgeons in planning brain surgeries.

It was also my first time working with Spectacles (how fun! 🤩), a new AR hardware platform that will become publicly available in late 2026.


Product Requirements

Per the client's requirements, my prototype should enable users to:

  • move, rotate, and scale brain hologram models
  • select parts of the brain hologram models, then hide/show them, toggle their labels, and make them see-through or opaque
  • place markers on a brain hologram's surface to indicate locations of interest
  • draw surgical paths on a brain hologram's surface to indicate the surgical plan

My Design Analysis

In order to summarise all required features together and to visualise the dependencies between them, I first drew a User Flow chart ✨

User Flow chart

Then to design fitting interactions, I defined and aligned with the client WHO uses the app, WHERE they use it, and WHY they use it:

  1. User profile: medical professionals who are proficient in brain surgery planning but not necessarily familiar with XR technologies.
  2. Scenario: a surgeon is planning a brain surgery solo or collaboratively in a hospital office (well-lit, quiet environment, no need to wear PPE, able to use two-handed interaction).
  3. User needs: efficient and intuitive interaction with complex brain holograms to plan brain surgeries.

Finally, I translated these definitions into a design space and UX metrics, taking Snap Spectacles' hardware capabilities into consideration 🤔

Design Space UX Metrics

Now let's zoom in to some of the design highlights 👀!

Highlight 1: Selecting Small Part(s) Of A Complex 3D Model

There are many small components nested in a brain hologram model, and selecting desired component(s) is a prerequisite for component-level interactions (e.g., show/hide, turn on/off labels, make them look see-through/opaque).

I designed and prototyped three interaction paradigms for component-selection: 1) Voice, 2) Raycast, and 3) Traditional Menu.

Voice Interaction Demo

Voice Interaction

I used Spectacles' built-in ASR module and wrote custom logic to translate user speech into interaction commands.
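The speech-to-command step can be sketched as a plain lookup, independent of any particular ASR API. This is a minimal, hypothetical illustration (the names and command set are assumptions, not the project's actual code or the Lens Studio API): a transcript from the speech recognizer is normalized and matched against a synonym table before being dispatched.

```typescript
// Hypothetical sketch: map an ASR transcript to an interaction command.
// The command names and synonym table are illustrative assumptions.
type Command = "show" | "hide" | "transparent" | "opaque";

const SYNONYMS: Record<string, Command> = {
  "show": "show", "display": "show", "reveal": "show",
  "hide": "hide", "remove": "hide",
  "transparent": "transparent", "see through": "transparent",
  "opaque": "opaque", "solid": "opaque",
};

function parseCommand(transcript: string): Command | null {
  const text = transcript.toLowerCase().trim();
  // Try longer phrases first so "see through" wins over shorter matches.
  const keys = Object.keys(SYNONYMS).sort((a, b) => b.length - a.length);
  for (const key of keys) {
    if (text.includes(key)) return SYNONYMS[key];
  }
  return null; // no recognized command → keep listening
}
```

Matching on synonyms rather than exact phrases is what later made the voice interaction tolerant of the varied wording test users actually produced.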

Raycast Interaction Demo

Raycast Interaction

Inspired by Blender- and Maya-style contextual menus, I prototyped from scratch a donut-shaped menu that appears around the user's index fingertip after a wrist-finger raycast dwell.

I also added raycast line visual feedback and colour-coded menu buttons for quicker visual search.
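The core of a donut-shaped menu is mapping the cursor's offset from the menu center to one of its slices. Here is a small, framework-agnostic sketch of that math (the function name, 2D signature, and dead-zone radius are my assumptions for illustration; the actual prototype works in Lens Studio's 3D scene space):

```typescript
// Hypothetical sketch: which sector of a radial (donut) menu is selected,
// given a 2D offset (dx, dy) from the menu center and equal slices
// starting at 12 o'clock, proceeding clockwise.
function pickSector(dx: number, dy: number, sectorCount: number): number | null {
  const r = Math.hypot(dx, dy);
  if (r < 0.02) return null; // inside the donut hole → nothing selected
  // Angle measured clockwise from "up" (+y), wrapped into [0, 2π)
  let angle = Math.atan2(dx, dy);
  if (angle < 0) angle += 2 * Math.PI;
  return Math.floor((angle / (2 * Math.PI)) * sectorCount) % sectorCount;
}
```

The dead zone in the middle is what makes the donut shape forgiving: a slightly shaky fingertip near the center selects nothing instead of flickering between slices.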

Traditional Menu Demo

Traditional Menu

Our "old friend", the flat UI panel, served as a usability benchmark.


User Test

I conducted remote user tests with the help of my supervisor and the medical experts on the product team, video-recording participants' in-app behaviour and asking them to share their thoughts and feedback in a post-test survey.

User Test Results

I iterated my design and prototype based on user test results. Examples of improvements include:

  • Voice Interaction
    • Added voice command synonyms in the back end to allow more flexible and robust voice interaction.
    • Provided visual feedback to indicate voice transcription status (e.g., "listening", "thinking", "done").
  • Raycast Interaction
    • Added in-app tutorial to explain the interaction to users.
    • Added raycast visual feedback to indicate locations of interest.
    • Colour-coded menu buttons by brain component colours for a quicker visual search.
  • Traditional Menu
    • Adopted easier wording and iconography for better readability.
    • Adjusted button size and spacing to improve poke accuracy.

Highlight 2: Annotating On A 3D Model By Hand Interaction

The existing production app on HoloLens includes a Landmark tool for precisely anchoring a pointer onto the brain surface, and a Pen tool for drawing (and erasing) surgical paths on it. Keeping those functionalities, I redesigned and prototyped the tools' interactions:

Landmark Demo

Landmark

I played around with its form factor. Using a lever metaphor, I mapped the landmark's top part (the lever's far end) to a "speedy movement" mode, and its lower part (the lever's near end) to an "accurate movement" mode.

I also borrowed the "throw-to-delete" idea from ShapesXR.
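The lever metaphor boils down to choosing a movement gain from where the user grabs the landmark. A minimal sketch, with assumed gain values and a simplified 1D signature (the real prototype applies this per axis in 3D, and the names are illustrative):

```typescript
// Hypothetical sketch of the two-speed lever metaphor: grabbing the top
// half moves the landmark 1:1 ("speedy"), grabbing the lower half damps
// the motion for precise placement ("accurate"). Gains are assumptions.
const SPEEDY_GAIN = 1.0;
const ACCURATE_GAIN = 0.2;

function movementGain(grabHeightNormalized: number): number {
  // grabHeightNormalized: 0 = bottom of the landmark, 1 = top
  return grabHeightNormalized >= 0.5 ? SPEEDY_GAIN : ACCURATE_GAIN;
}

function applyHandDelta(position: number, handDelta: number, grabHeight: number): number {
  return position + handDelta * movementGain(grabHeight);
}
```

A damped "accurate" mode lets the surgeon's relatively large hand motions resolve to millimetre-scale landmark adjustments on the brain surface.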

Drawing Pen Demo

Drawing Pen

Mirroring the landmark's two-part design, grabbing the pen's lower part draws, while grabbing the upper part (the pen auto-flips) erases.


User Test

User Test Results

Sooo, I iterated my design and prototype again:

  • Landmark tool
    • Made the audio feedback fade out while the Landmark is moving continuously.
  • Drawing Pen tool
    • Added a Low Pass Filter to smooth the pen's movement.
    • Made the pen's nib/tip stick to the brain surface to ease drawing and erasing.
    • Added visual hints explaining the drawing and erasing modes when the user's hand hovers over the pen.
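The Low Pass Filter mentioned above can be as small as a one-pole exponential smoother. A minimal sketch, assuming a 1D signal and an illustrative smoothing factor (the actual prototype would filter each component of the pen tip's 3D position per frame):

```typescript
// Hypothetical one-pole (exponential) low-pass filter for smoothing the
// pen tip position. `alpha` in (0, 1]: smaller = smoother but laggier.
class LowPassFilter {
  private last: number | null = null;
  constructor(private alpha: number) {}

  next(value: number): number {
    // First sample passes through; afterwards, blend toward the new value.
    this.last = this.last === null ? value : this.last + this.alpha * (value - this.last);
    return this.last;
  }
}
```

Choosing `alpha` is the usual trade-off: enough smoothing to hide hand tremor, but not so much that the drawn path visibly lags the fingertip.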

Development

Time to throw in some development bits! I programmed the prototype in Lens Studio with TypeScript. Using a SceneManager singleton, I separated the responsibilities of the UI view controllers from the hologram data models for better maintainability and scalability.
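As a rough illustration of that architecture (names, state fields, and methods are my assumptions, not the actual project code): UI view controllers never own hologram state; they read and mutate it only through the single SceneManager instance.

```typescript
// Hypothetical sketch of a SceneManager singleton owning the hologram data
// model, so UI view controllers stay free of state-keeping responsibilities.
interface BrainComponentState {
  visible: boolean;
  labeled: boolean;
  opacity: number; // 0 = fully see-through, 1 = opaque
}

class SceneManager {
  private static instance: SceneManager | null = null;
  private components = new Map<string, BrainComponentState>();

  static get(): SceneManager {
    if (!SceneManager.instance) SceneManager.instance = new SceneManager();
    return SceneManager.instance;
  }

  register(id: string): void {
    this.components.set(id, { visible: true, labeled: false, opacity: 1 });
  }

  setVisible(id: string, visible: boolean): void {
    const state = this.components.get(id);
    if (state) state.visible = visible;
  }

  isVisible(id: string): boolean {
    return this.components.get(id)?.visible ?? false;
  }
}
```

With this split, the voice, raycast, and menu interactions can all drive the same show/hide logic without duplicating state in three places.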

System Architecture
Words on My Work

Video Demo

Putting everything together ... 🎉



Last (oh yeah, there's more! 🤯), I have some further thoughts from a Human-Computer Interaction perspective to share on general interaction design for smart glasses.

Words on My Work

Work presented by me independently | Nov. 2025 - Mar. 2026
Project supervised by Joost van Schaik and Sophie Chen.
Design used Figma. Development used Lens Studio, TypeScript, and Git.


This page represents my personal insights, independent of Augmedit's official views.

Next Project