Overview
“Dots can have a real impact on the disabled community globally, as it is the first time an AR product has considered their unique conditions and helped them get into the future digital world.”

-- Alex Lewis, founder of the Alex Lewis Trust
Problem
The rise of ubiquitous technologies such as Mixed Reality (AR/VR) and the Internet of Things (IoT) is causing user interfaces to shift from 2D to 3D. Traditional inclusive interaction design no longer applies, and a new approach to accessibility is needed.

Solution
Through experiments and research, we discovered that the relative spatial movement between two points can describe any spatial interaction. Based on this insight, we proposed a customizable interface that enables people with physical disabilities to interact freely with AR/VR technology.

My Role
Led a team of three (two designers and one engineer); conducted problem definition, experiment design, qualitative and quantitative research, ideation, lo-fi prototyping, and video design.
Future Signal
Over the last few years, the development of Mixed Reality and the Internet of Things has revealed the possibility of a screen-free future, signaling that we are entering an era of ubiquitous computing. The user interface is shifting from touch screens to the surrounding environment, human-machine interaction will change dramatically, and far more spatial interaction will happen.
Identify Problem
Problem 1
Spatial interaction requires a wider range of body movement. However, most existing technologies rely on only a few body parts, mainly the hands, for spatial interaction, which essentially reduces the technology's accessibility.

Problem 2
The traditional and dominant gesture recognition methods are based on supervised machine learning, which must be trained on massive amounts of similar data. They may fail for inclusive interfaces because body conditions vary widely among disabled people.
Reframe Question
How might we involve everyone in the exciting future by designing an inclusive AR/VR control interface?
In this context, traditional inclusive interaction design methods no longer apply, and a new approach to accessible interface design is needed.
Inclusive Design vs Customizable Design
Instead of making different types of users adapt to one system, how might we create a flexible, customizable approach that adjusts itself to fit different users and scenarios?
User Research
Hypothesis
We first hypothesized specific interaction patterns for how people use body gestures to control digital products or convey their intentions.

1st Round of ‘Wizard of Oz’ Experiment
We recruited 20 participants, including 3 disabled people, and set four tasks for them. We allowed them to use their bodies freely to perform 3D object manipulation: selecting, scaling, rotating, and moving a cube on a computer interface. Since we discovered that people still tended to use their fingers or hands, we iterated on the experiment and added a list of constraints.

2nd Round of ‘Wizard of Oz’ Experiment
To better explore whether people would create their own unique ways of interacting with body gestures, we randomly assigned every participant two body parts, such as the head and an elbow, and asked them to perform the same manipulation tasks as in the first round.
Key Insights - Two-Point System
Insight 1
Any 3D interaction can be described as the relative movement of two points in 3D space, so we can infer people's intentions simply by tracking the relative motion of two points. This result showed no difference between the disabled and non-disabled participants.

Insight 2
To allow disabled people with various body conditions to benefit from our design, the system should be controlled through combinations of different body parts, so that everybody can find the way that works best for them to interact with spatial technologies.
Diagrams: the four basic manipulations expressed as two-point relative motion (Selection, Moving, Scaling, Rotation).
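To make the mapping concrete, here is a minimal sketch in Python of how frame-to-frame changes of the two tracked points could be classified into the manipulations above. This is illustrative only, not the actual Dots algorithm: the function name and thresholds are hypothetical, and selection is assumed to be triggered separately (e.g. by a dwell).

```python
import numpy as np

def classify_motion(p1_prev, p2_prev, p1_curr, p2_curr,
                    move_eps=0.02, scale_eps=0.02, angle_eps=0.1):
    """Classify the frame-to-frame change of two tracked 3D points.

    Inputs are length-3 numpy arrays in metres; thresholds are illustrative.
    Returns 'scaling', 'rotation', 'moving', or None (no significant motion).
    """
    # Vector between the two points before and after the motion.
    v_prev = p2_prev - p1_prev
    v_curr = p2_curr - p1_curr

    # Change in distance between the points -> scaling.
    dist_change = abs(np.linalg.norm(v_curr) - np.linalg.norm(v_prev))

    # Change in direction of the inter-point vector -> rotation.
    cos_angle = np.dot(v_prev, v_curr) / (
        np.linalg.norm(v_prev) * np.linalg.norm(v_curr))
    angle_change = np.arccos(np.clip(cos_angle, -1.0, 1.0))

    # Translation of the midpoint with the shape preserved -> moving.
    mid_shift = np.linalg.norm(
        (p1_curr + p2_curr) / 2 - (p1_prev + p2_prev) / 2)

    if dist_change > scale_eps:
        return 'scaling'
    if angle_change > angle_eps:
        return 'rotation'
    if mid_shift > move_eps:
        return 'moving'
    return None  # selection would be handled separately, e.g. by a dwell
```

For example, moving the two dots apart while keeping their midpoint fixed registers as scaling, while sweeping one dot around the other registers as rotation.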
Meet Dots
Dots is a two-point body-gesture recognition system composed of two attachable sensor pieces and one wireless charger. Each piece contains an IMU sensor, a Bluetooth module, and a battery.
After an initial calibration during onboarding, an inertial-navigation method detects the relative motion between the two pieces. Once connected to MR and IoT devices, users gain complete control over any spatial interaction.
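As a rough illustration of the inertial-navigation idea (a sketch of my own, not the shipped firmware), the snippet below dead-reckons the displacement of one sensor relative to the other by double-integrating the difference of their gravity-compensated, world-frame accelerations. A real system would also fuse gyroscope and magnetometer data and correct for drift.

```python
import numpy as np

def relative_position(accel_a, accel_b, dt):
    """Dead-reckon sensor B's displacement relative to sensor A.

    accel_a, accel_b: (N, 3) arrays of world-frame accelerations in m/s^2
    (gravity already removed), sampled every dt seconds.
    Returns an (N, 3) array: the relative position at each sample.
    """
    # Subtracting the two streams cancels motion common to both sensors
    # (e.g. whole-body movement), leaving only their relative acceleration.
    rel_accel = np.asarray(accel_b) - np.asarray(accel_a)

    # Integrate twice: acceleration -> velocity -> position.
    rel_vel = np.cumsum(rel_accel, axis=0) * dt
    rel_pos = np.cumsum(rel_vel, axis=0) * dt
    return rel_pos
```

The relative-position track produced here is exactly the input the two-point classification above needs, which is why common-mode drift cancelling between the two sensors matters more than absolute accuracy.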
Affix Dots Anywhere
Users can affix two “dots”, or sensors, to any parts of the body they feel comfortable moving, depending on their unique body conditions and the task they wish to perform. The relative motion between those points is captured and calculated via the IMU sensors, allowing the person to control the software. Users can also incorporate the surrounding environment, for example attaching one dot to a table and another to the arm to perform AR drawing.
Start Designing Your Spatial Interaction
By connecting Dots with Mixed Reality facilities and IoT devices, users can employ their body gestures to accomplish multiple tasks under the guidance of the two-point system. Dots empowers everyone, especially disabled people, to interact with future technologies by providing a customizable system based on their body conditions and specific situations.

Using Dots to connect and control Internet of Things devices such as smart lighting.

Using Dots to build 3D interior design models with the HoloLens AR glasses.

Using Dots to type with the HoloLens AR glasses.
