Interfaces Everywhere

Breaking the boundary of screens.

Type

School Project

Role

UI, Motion

Focus

Input & Output

Date

Jan 2024

When AI becomes the core of smart home systems, interaction can happen anywhere — shifting users toward an intent-driven mental model beyond screens. In this context, the challenge shifts to designing when and how interfaces should appear.

I worked with other interaction designers from ArtCenter College of Design to explore future interaction models across voice, emotion-aware systems, and interface design. My work focused on interfaces that emerge in space and can be naturally invoked through gestures.

Insight

People feel tone before reading information.

In a future where our home systems are responsive and intelligent, interaction can happen anywhere. Interfaces are no longer confined to screens — they can be projected onto walls, floors, or objects, emerging only when needed as an augmentation of the physical space.

The image envisions a morning routine: while the user exercises, AI proactively organizes their schedule, and guidance content seamlessly appears on surrounding surfaces. Image made with Adobe Illustrator.

Idea

Design gesture input for touch-free interaction.

As interfaces move into space, touch-based interaction becomes less practical. Users are often in motion, interacting with distributed, transient interfaces that are not physically reachable. I designed a non-touch interaction model that combines voice and gesture to invoke interfaces. Simple actions such as revealing interfaces, pointing, and mid-air grabbing let users interact seamlessly without interrupting their ongoing activities.

Team

Daniel Dian Gu, Munchy Wu, Colin Feng