Introduction
“Dots can bring real impact to disabled people globally, as it is the first time an AR product considers their unique conditions and helps them get into the future digital world.”
-- Alex Lewis, founder of the Alex Lewis Trust
The rise of ubiquitous technologies such as Mixed Reality (AR/VR) and the Internet of Things (IoT) is shifting the user interface from touch screens to the surrounding environment. In this context, traditional inclusive interaction design methods may no longer be applicable, and a new approach to designing accessible interfaces is needed.
Through a qualitative analysis of experiments and user research, we provide novel insights into customizable interaction design and present an inclusive natural user interface that enables people with physical disabilities to interact with spatial computing environments.
Future Signal
Over the last few years, the development of Mixed Reality and the Internet of Things has revealed the possibility of a screen-free future, marking our entry into an era of ubiquitous computing. The user interface is starting to shift from touch screens to the surrounding environment; interaction between humans and machines will change profoundly, and far more of it will be spatial.
Identify Problem
Spatial interaction demands a greater range of body movement. Yet most existing technologies rely on only a few body parts, mainly the hands, for spatial interaction, which reduces their accessibility. The dominant approach to gesture recognition, supervised machine learning, may also fail because body conditions vary so widely from one disabled person to another.
Reframe Question
How might we involve everyone in the exciting future by designing an inclusive AR/VR control interface?
Given this shift, conventional inclusive design methods may no longer apply, so we set out to propose a new approach to accessible interface design.
Inclusive Design vs Customizable Design
Limitations of Inclusive Design
Disabilities are often highly individual, which makes generality hard to achieve in inclusive design. To work well, most inclusive design projects target narrow groups of people, so covering users with diverse conditions requires many separate systems. Conversely, when a single system with a fixed form or interaction model is stretched to accommodate people with very different characteristics, it struggles to maintain the same efficiency and usability for every user.

Customizable Design
Instead of making different users adapt to one system, can we create a flexible, customizable approach that adapts to different users and scenarios?
User Research
Hypothesis
We first hypothesized specific interaction patterns describing how people use body gestures to control digital products or convey their intentions.

1st Round of ‘Wizard of Oz’ Experiment
We recruited 20 participants, including 3 disabled people, and set them four tasks: freely using their bodies to manipulate a 3D object, namely selecting, scaling, rotating, and moving a cube on a computer interface. Because we found that people still tended to default to their fingers or hands, we iterated on the experiment and added a list of constraints.

2nd Round of ‘Wizard of Oz’ Experiment
To better explore whether people would invent their own ways of interacting through body gestures, we randomly assigned each participant two body parts, such as the head and the elbow, and asked them to perform the same manipulation tasks as in the 1st round.
Key Insights - Two-Point System
Insight 1
Any 3D interaction can be described as the relative movement of two points in 3D space, so we can infer a person’s intention simply by tracking the relative motion of two points. This finding held equally for disabled and non-disabled participants.
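To make this insight concrete, here is a minimal Python sketch of how the relative motion of two tracked points can be decomposed into scale, rotate, and move commands. The mapping is a hypothetical illustration we chose for clarity, not the shipped algorithm:

```python
import numpy as np

def two_point_gesture(p1_prev, p2_prev, p1_curr, p2_curr):
    """Infer a 3D manipulation from the relative motion of two tracked points."""
    v_prev = p2_prev - p1_prev  # vector joining the points, before
    v_curr = p2_curr - p1_curr  # vector joining the points, after

    # Scale: change in the distance between the two points.
    scale = np.linalg.norm(v_curr) / max(np.linalg.norm(v_prev), 1e-9)

    # Rotate: angle between the previous and current joining vectors.
    denom = max(np.linalg.norm(v_prev) * np.linalg.norm(v_curr), 1e-9)
    cos_angle = np.clip(np.dot(v_prev, v_curr) / denom, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    axis = np.cross(v_prev, v_curr)  # rotation axis (unnormalized)

    # Move: displacement of the midpoint between the two points.
    move = ((p1_curr + p2_curr) - (p1_prev + p2_prev)) / 2.0

    return {"scale": scale, "rotate_deg": angle_deg, "axis": axis, "move": move}

# Example: the two points move apart, which reads mainly as "scale up".
p1_prev, p2_prev = np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])
p1_curr, p2_curr = np.array([-0.05, 0.0, 0.0]), np.array([0.15, 0.0, 0.0])
print(two_point_gesture(p1_prev, p2_prev, p1_curr, p2_curr))
```

Moving the two points apart reads as scaling up, twisting the line between them reads as rotation, and shifting both together reads as translation; which body parts supply the two points is entirely up to the user.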

Insight 2
For disabled people with widely varying body conditions to benefit from our design, the system should be controllable through combinations of different body parts, so that everyone can find the way of using and interacting with spatial technologies that suits them best.
Meet Dots
Dots is a two-point body-gesture recognition system composed of two attachable pieces and one wireless charger. Each piece contains an IMU sensor, a Bluetooth radio, and a battery. After an initial calibration onboarding, inertial navigation detects the relative motion between the two pieces; once connected to MR and IoT devices, users gain full control of any spatial interaction.
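The tracking pipeline is not detailed here, but as a rough sketch of the inertial-navigation idea, the following Python snippet dead-reckons each piece’s displacement by double-integrating its acceleration and takes the difference as their relative motion. The 100 Hz sample rate and gravity-compensated, world-frame readings are assumptions; a real implementation would fuse gyroscope and accelerometer data and correct for drift:

```python
import numpy as np

def integrate_imu(accel_samples, dt):
    """Dead-reckon one Dot's displacement from accelerometer samples.

    Simplified: assumes gravity-compensated, world-frame acceleration.
    """
    velocity = np.zeros(3)
    position = np.zeros(3)
    for a in accel_samples:
        velocity += a * dt         # integrate acceleration -> velocity
        position += velocity * dt  # integrate velocity -> position
    return position

# Relative motion between the two Dots is the difference of their tracks.
dt = 0.01  # 100 Hz sample rate (assumed)
dot_a = integrate_imu([np.array([0.0, 0.0, 1.0])] * 50, dt)  # accelerating up
dot_b = integrate_imu([np.zeros(3)] * 50, dt)                # held still
print(dot_a - dot_b)  # relative displacement fed to the two-point system
```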
Attach Dots Anywhere
Users can attach the two dots to any body parts, depending on their unique body conditions and the task they wish to perform. The surrounding environment can also be used, for example attaching one dot to a table and the other to the arm to perform AR drawing.
Start Designing Your Spatial Interaction
By connecting Dots to Mixed Reality and IoT devices, users can employ their body gestures to accomplish a wide range of tasks under the guidance of the two-point system. Dots empowers everyone, especially disabled people, to interact with future technologies by providing a customizable system built around their body conditions and specific situations.
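As an illustration of what this customization could look like in software, here is a hypothetical binding layer (all names invented for this sketch) in which each user maps their own Dot placements and gestures to MR or IoT commands, rather than adapting to a fixed control scheme:

```python
# Hypothetical per-user bindings: Dot placement + gesture -> device command.
bindings = {
    ("head", "elbow"): {          # where this user wears the two Dots
        "move_apart": "light.dim_up",
        "move_together": "light.dim_down",
        "rotate": "ar_canvas.rotate_object",
    },
}

def dispatch(placement, gesture_name):
    """Look up and send the command this user mapped to a gesture, if any."""
    command = bindings.get(placement, {}).get(gesture_name)
    if command:
        print(f"Sending {command} to the connected device")
    else:
        print("No binding for this gesture; prompt the user to configure one")

dispatch(("head", "elbow"), "move_apart")  # -> light.dim_up
```

The point of the design is that the binding table, not the user, is what changes: a different body, or a different task, simply means a different set of placements and mappings.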