Mia (Mood-Intelligent Assistant) is a wearable point-of-view camera concept that uses mood sensors to capture memories and automatically create immersive experiences based on emotion. A gestural interface allows the wearer to navigate memories and tune the experience to their mood. The concept was designed for a final project in an interaction design course by a team of 5 MHCI+D (Masters of Human Computer Interaction and Design) graduate students. We produced a video prototype to present the concept to the class.

In this project, my major contributions were ideation, interaction design, and the flow design of the camera.


Our team started out by conducting market research on current point-of-view cameras and identifying common issues in their designs. We examined a number of existing products (e.g. GoPro, Google Glass) and framed our research questions in terms of future outlook and effect on society. We also discussed the kind of relationships we have with technology, using our own experiences as well as specific examples from cinema. Finally, we looked at the different ways in which we currently browse recorded photos and videos.

Key insights

  • Most point-of-view cameras are unwieldy, especially when mounted on our bodies
  • People generally feel uncomfortable when they think they are being recorded
  • Controlling a point-of-view camera can be unintuitive and inconsistent across similar products
  • Managing and retrieving recorded experiences can be a monumental task


Functional Decomposition

Ideation began with a one-hour functional decomposition session as a process of breaking down concrete ideas in order to innovate.  As a team, we deconstructed the concept of a camera step-by-step into abstraction, then rebuilt the concept back into something completely new.  The stages of decomposition in the abstraction hierarchy are physical form, physical function, generalized function, abstracted function, and functional purpose.  The result of our decomposition was the functional purpose of the camera: to capture sensory information from the environment.  Using this as a baseline, we built a new camera concept: a wearable camera that could recreate the mood of an experience based on the mood of the wearer.

The next step was to develop the camera design and the method of retrieval. We wanted to create something that could improve the well-being of the wearer by altering mood through past memories. To do this, we incorporated a mood-sensing mechanism based on EEG technology that would automatically retrieve and play back recorded experiences. For example, the device would know when the wearer is feeling stressed out and project a calming experience to help them relax.


(Rough storyboard sketch by James Pai)


The scenario starts with a man frustrated by his work. The device senses his frustration and links with another device somewhere else in the world that is sensing the opposite: a feeling of calmness. The second device then transmits sensory data from its environment back to the initial device. Using light projectors built into the device, an immersive, relaxing sunset ambience is recreated in the man's room. Both parties now share the same feeling of relaxation, and the man is given a break from his former frustration.


(Final storyboard by James Pai)


After the design was developed, we set out to produce a video prototype of the experience. As a team, we decided on a husband-and-wife scenario and, since the prototype needed to show how people interact with the interface, developed an interface concept as well.

We wanted the logo to convey a sense of personality for Mia and settled on a script typeface that gives it a more human-like / emotional quality.  The colored circle is a visualization of the mood spectrum from our interface. We spent an entire evening shooting the video. The last part of production involved animating the interface in After Effects and compositing it onto the footage.


Result and reflection

We received highly positive feedback from the instructor and our peers. This project was a great opportunity to explore interaction design and gesture-controlled interfaces. We went through the process of bringing something concrete (a point-of-view camera) to an abstract vision, then gradually working back from that vision to functional detail. This approach allowed us to explore more opportunities and to focus on experience instead of technology.

