When I'm not immersed in the world of design, I find joy in a range of creative activities. From painting and pottery to engaging in epic D&D adventures, these outlets fuel my imagination and allow me to express myself. Nature serves as a constant source of inspiration for me, reminding me of the vastness of the world and our place within it. A nature walk is my favorite pastime. Coming from a Fine Art background, I also have a deep appreciation for museums, where I get to travel back in time and explore the wonders of art and culture.

If you want to chat or work together, let’s connect :)!




Augmenting Seattle Asian Art Museum







Project Background


The Seattle Asian Art Museum, built in 1933 and located in Volunteer Park, was renovated and reopened in February 2020 and now offers 8 different tours.

Augmented Reality (AR) and Mixed Reality (MR) are interactive experiences that enhance the real-world environment with computer-generated objects, sometimes across multiple sensory modalities.

Mixed Reality technology can introduce visitors to an immersive cultural and educational experience.


Goals


Harness AR/MR technology similar to the Microsoft HoloLens or North Focals to create an enhanced museum experience, providing unobtrusive services that assist and inspire visitors through augmented and mixed reality.

MY ROLE
User Experience Design
User Interface Design
Voice User Interface Design

TOOLS
Sketch
Adobe XD
Adobe





Challenge


Because of its convenient location and the nature of its exhibits, the Seattle Asian Art Museum is usually crowded and packed with tourists, students, and locals, which can be very disruptive. A survey we sent to 40 people told us that constant distractions and long, dense descriptions can make a museum visit less exciting.

Solution


The demographic is mostly tourists and foreigners, and to make the experience immersive, fun, interactive, and easy to use, we created lightweight mixed-reality glasses that help the user feel comfortable and natural during the experience. The glasses implement eye gaze, hand gestures, and voice to enhance accessibility for everyone, based on different needs.




What makes our glasses different?



  1. The glasses include 8 different museum tours.
  2. Users can use eye gaze, hand gestures, and voice based on their needs.
  3. The glasses provide wayfinding.
  4. Users can find out information about certain art pieces and interact with them.
  5. An assist button on the side of the glasses makes it easier to ask for help.
  6. A voice button lets the user issue voice commands while pressing it. Compared to using a "wake word" to communicate with the glasses, a button minimizes the chance of the glasses picking up the wrong cue when multiple people are using voice commands at the same time.



Gestures and symbols


Gaze: To interact with something, the user needs to look at it. The gaze cursor is a dot that follows the user's eyes but is small enough not to be disruptive.
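How the gaze dot decides it has "selected" something can be sketched as a dwell check: the dot must rest on the same target for a short moment before it counts as an interaction. This is a minimal illustration, not the actual implementation; the 0.5-second threshold and the (timestamp, target) sample format are assumptions.

```python
# Hypothetical sketch of dwell-based gaze selection.
# The 0.5 s dwell threshold is an assumption for illustration.
DWELL_SECONDS = 0.5

def gaze_target(samples, threshold=DWELL_SECONDS):
    """Return the target id once the gaze dot has rested on the same
    target for `threshold` seconds; None until then.
    `samples` is a chronological list of (timestamp, target_id) readings."""
    if not samples:
        return None
    last_target = samples[-1][1]
    dwell_start = samples[-1][0]
    # Walk backwards while the gaze stays on the same target.
    for ts, target in reversed(samples):
        if target != last_target:
            break
        dwell_start = ts
    if last_target is not None and samples[-1][0] - dwell_start >= threshold:
        return last_target
    return None
```

A glance that merely passes over a piece never reaches the threshold, so the dot stays unobtrusive until the user deliberately looks.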




Gestures: Referencing Microsoft’s HoloLens, we integrated 2 gestures called Air Tap and Bloom. The user uses “Air Tap” to scroll, tap, and interact with objects, and “Bloom” to open the menu. The gestures are taught during the onboarding process and are also shown in the lower corner of the user’s view when an action is needed.


AIR TAP AND BLOOM






On-boarding


None of the steps should be omitted, because each one is crucial to making the process smooth.



01. Come to the front desk and rent the glasses.
02. The user presents their ID and proceeds to sign a waiver.
03. Staff show how to turn on the glasses, how to use the assist button and voice command button, and check that the glasses fit properly.
04. The user puts the glasses on; the first screen shows “Welcome to Seattle Asian Art Museum, please select a language.”
05. The user then selects the proper font size.
06. In-glasses eye-tracking calibration.
07. After the calibration, a short introduction teaches the user the gestures by showing gesture animations, then asks them to interact with a cube using the gestures they just learned.
08. Before finishing up, the voice in the glasses informs the user that the settings and the gesture list live in the menu in case there are any questions later, and that the menu can be opened with the “Bloom” gesture.
09. A quick introduction to the history of the building and Volunteer Park.
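Since no step may be skipped, the flow above behaves like a strict linear sequence. A minimal sketch of how the glasses might enforce that ordering (the step names are shorthand for the numbered steps above, not real identifiers from the project):

```python
# Minimal sketch of a strict, in-order onboarding flow.
# Step names abbreviate the numbered steps above.
ONBOARDING_STEPS = [
    "rent_glasses",
    "id_and_waiver",
    "staff_demo_and_fit",
    "language_selection",
    "font_size_selection",
    "eye_tracking_calibration",
    "gesture_tutorial",
    "menu_reminder",
    "building_history_intro",
]

class Onboarding:
    def __init__(self):
        self.index = 0

    def complete(self, step):
        """Advance only if `step` is the next required step."""
        if step != ONBOARDING_STEPS[self.index]:
            raise ValueError(
                f"Cannot skip ahead: next step is {ONBOARDING_STEPS[self.index]!r}")
        self.index += 1

    @property
    def done(self):
        return self.index == len(ONBOARDING_STEPS)
```

Trying to jump straight to the gesture tutorial, for example, raises an error until every earlier step is complete.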


Navigation


The glasses offer tours for the user to choose from, as well as a Self Explore mode that allows the user to roam around and learn about random art pieces.
There are 8 themed tours to choose from, and each tour contains 5 art pieces located at different spots in the museum. The tours are: Conservation, Highlights, Families, Religion, Collecting, Music, Contemporary Art, and Low Vision Tour.
After the user chooses a tour, the screen shows a brief overview and a map of the tour.
The route of the tour is highlighted on the floor, letting the user know which direction to go, and the closest art pieces glow when the user gazes at them.
When the user approaches a piece and prepares to interact with it, the tap symbol appears next to the art, indicating that the user can now tap and interact with the object.
After tapping “read summary”, the user can scroll and read the summary, or play the audio to have the glasses read it for them.


Other Screens





The assistance button is on the side of the glasses. After the user presses it, a notification appears at the bottom of the screen letting the user know “Assistance is on the way; press the button again to cancel the request.”

The user can open the menu with the Bloom gesture, or end the tour from the menu.





What Happens When the User Leaves the Tour?


Our device will not interrupt the user. Instead of yelling at the user to come back to the tour, the glasses show the newly updated route on the floor, so when the user is ready to return they can simply follow the new arrows. They can also cancel the tour at any time from the menu.
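The silent re-routing above boils down to recomputing the highlighted floor path from wherever the wearer has wandered to the next tour stop. A rough sketch, assuming the floor is modeled as a grid of walkable cells with 4-direction movement (a simplification; the real system would use the museum's actual floor plan):

```python
# Illustrative re-routing: recompute the highlighted floor path from the
# user's current position to the next tour stop, without interrupting them.
# Grid cells and 4-direction movement are simplifying assumptions.
from collections import deque

def reroute(current, next_stop, walkable):
    """Breadth-first search for the shortest walkable path on a grid.
    `walkable` is a set of (x, y) cells; returns a list of cells from
    `current` to `next_stop`, or None if the stop is unreachable."""
    queue = deque([[current]])
    seen = {current}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if (x, y) == next_stop:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in walkable and nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Each time the wearer's position changes, the arrows on the floor would simply be redrawn along the newly returned path.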

Animation vs Distraction


When designing, I wanted to make sure the animations would only help the user focus more on the art. The wayfinding animations are subtle and won’t distract the user from their visit; they only enhance the experience.



VUI User Commands


To minimize disruption to our users as well as other visitors, gaze and gestures are the main input methods. However, to make the system accessible to everyone, including anyone who cannot interact with the screen by gesture, a limited set of voice commands is necessary. The glasses only listen while the user presses the physical button on the glasses, so they won’t pick up the wrong cue from other people and provide inaccurate results.
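The push-to-talk gating described here amounts to a simple state check: audio only reaches the recognizer while the physical button is held. A minimal sketch (the class and method names are illustrative, not a real device API):

```python
# Illustrative push-to-talk gating: phrases only reach the recognizer
# while the physical voice button is held down.
# Class and method names are assumptions for illustration.
class VoiceButtonGate:
    def __init__(self):
        self.pressed = False
        self.heard = []  # phrases actually forwarded to the recognizer

    def press(self):
        self.pressed = True

    def release(self):
        self.pressed = False

    def on_audio(self, phrase):
        """Forward a phrase only while the button is held; everything
        else (e.g. nearby visitors talking) is silently ignored."""
        if self.pressed:
            self.heard.append(phrase)
```

Because nothing is processed while the button is up, chatter from other visitors using their own glasses nearby never produces a false trigger.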








Summary


Personally I think this project is successful because it’s the first time I worked with AR and voice together. I recognized that this project is future-facing, and therefore has some technology dependencies that are speculative at this point. I have learned that as a designer, it’s important to adapt, keep myself educated, and realize the limitations and how much my decisions would impact users’ experience.

Next Steps


The next step is expanding the VUI command list and thinking through what the flow would look like for a different persona: someone who is vision impaired and not technology savvy.