Co-Creating the Ride

Imagining future rides where passengers create their own experiences with AI.

Type

Personal Study

Focus

UX, AI Workflow

Date

Feb - Mar 2026

As generative AI enables experiences to be created and adapted in real time, it is reshaping how we interact with digital systems. This project envisions autonomous ride experiences in which passengers create personal apps and spaces with AI.

I designed the core experience, the storytelling, and the interaction framework, and used existing AI tools to simulate generative workflows.

The future ride is fully shaped by what you choose to create.

Insight

Digital controllability defines the next era of comfort

As intelligent cabins evolve, the vehicle becomes a space where people choose how to spend their time: relaxing, exploring, or staying productive. With the advancement of AI, experiences can now be generated and adjusted in real time, allowing each passenger to shape their own activities and environment during the ride.

In this context, the ability to control and customize digital experiences defines the next era of comfort.

Solution

Passenger-Centric Display Architecture

To fit a fully autonomous scenario, I used AI to rapidly prototype alternative interior layouts. The interface takes the shape of a capsule, establishing a clear but soft boundary for interaction.

Experiences

Experience: Mini Apps

Create mini apps for personal commute needs.

Every passenger uses travel time differently: some focus on productivity with podcasts or news, while others, like me, might even count trees outside the window.

Within a 20–40 minute ride, passengers expect their specific needs to be translated into something they can interact with or capture. In-car app creation must therefore be both fast and precise.

The Mini Apps

In my imagination, apps become flexible canvases that passengers can shape based on their needs during the ride. This paradigm shifts apps from static software to highly controllable, generative experiences tailored to each journey.
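As a purely hypothetical sketch of this paradigm (not part of the original project), a mini app could be represented as data rather than fixed software, with a toy keyword matcher standing in for the AI model that turns a passenger's request into an app. All names here (`MiniAppSpec`, `Widget`, the keyword rules) are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Widget:
    kind: str   # e.g. "counter", "note", "timer" (assumed widget types)
    label: str

@dataclass
class MiniAppSpec:
    """An app described as a flexible spec the system can render on demand."""
    title: str
    widgets: list[Widget] = field(default_factory=list)

def app_from_request(request: str) -> MiniAppSpec:
    """Toy keyword-based stand-in for a model that maps a passenger's
    request to an app specification during the ride."""
    spec = MiniAppSpec(title=request.capitalize())
    if "count" in request.lower():
        spec.widgets.append(Widget(kind="counter", label="Tap to count"))
    if "note" in request.lower():
        spec.widgets.append(Widget(kind="note", label="Quick notes"))
    return spec

app = app_from_request("count trees outside the window")
print(app.title)                       # Count trees outside the window
print([w.kind for w in app.widgets])   # ['counter']
```

The point of the sketch is the shift it mirrors: the "app" exists only as a generated spec, so it can be created in seconds and discarded when the ride ends.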

Experience: Immersive Spaces

Generate spaces that fit the mood.

As generative media becomes increasingly realistic and seamless over time, people will grow accustomed to creating experiences with AI. Within a 20–40 minute commute, this opens an opportunity for passengers to shape a private atmosphere that fits the mood of their journey.

By using AI-generated visuals and sound across the panoramic display and ambient lighting, the vehicle becomes a space where passengers and the car co-create immersive environments for each ride.

The interface is designed to feel minimal yet highly controllable, allowing users to shape the space through two clear dimensions: elements and motion.

Freedom of creation on generated scenes

In the future, interacting with video will become simple: passengers use natural language to add elements, change styles, or transform the mood of a scene.
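To make the "elements and motion" control model concrete, here is a minimal hypothetical sketch of how a spoken instruction could be parsed into a structured scene edit along those two dimensions. The function name and keyword rules are assumptions for illustration, not the project's actual system.

```python
def parse_scene_edit(instruction: str) -> dict:
    """Map a natural-language instruction to a structured edit along
    the two assumed control dimensions: elements and motion."""
    edit = {"elements": [], "motion": None}
    words = instruction.lower()
    # Elements: which visual objects the passenger wants in the scene.
    for element in ("rain", "fireflies", "mountains", "ocean"):
        if element in words:
            edit["elements"].append(element)
    # Motion: the overall pace of the generated scene.
    if "slow" in words:
        edit["motion"] = "slow"
    elif "fast" in words:
        edit["motion"] = "fast"
    return edit

print(parse_scene_edit("add fireflies and slow the rain down"))
# {'elements': ['rain', 'fireflies'], 'motion': 'slow'}
```

Splitting the instruction into two small dimensions is what keeps the interface minimal yet controllable: each utterance only ever changes what is in the scene or how it moves.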

Process

With 20k+ AI credits used, I created an AI workflow.

The goal of the Immersive Space experience was to simulate how a person could interact with video content and seamlessly switch between scenes. I attempted to simulate this with current AI tools, but it proved more challenging than expected: simple natural-language prompts alone could not produce videos with consistent visual quality from a large model.

I developed a workflow that simulates how a person gives simple instructions and the system translates them into executable video actions. This workflow illustrates how generative video creation can become controllable enough to support a human-like editing experience.

With the structured workflow in place, video generation became highly controllable. Simple user adjustments could be translated into model-ready prompts, enabling the system to reliably produce the intended visual outcome.
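A minimal sketch of that translation step, under my own assumptions about how such a workflow could be wired: a fixed "style contract" string is appended to every prompt to keep visual quality consistent, while the user's adjustments vary only the scene content. The field names and the style text are illustrative, not taken from the project.

```python
# Assumed shared style constraints that enforce consistency across generations.
BASE_STYLE = (
    "cinematic, soft ambient light, consistent color grading, "
    "seamless loop, no camera shake"
)

def build_video_prompt(scene: str, adjustments: dict) -> str:
    """Compose a model-ready prompt from a base scene plus simple user
    adjustments, always appending the shared style constraints."""
    parts = [scene]
    if adjustments.get("elements"):
        parts.append("featuring " + ", ".join(adjustments["elements"]))
    if adjustments.get("motion"):
        parts.append(f"{adjustments['motion']} motion")
    parts.append(BASE_STYLE)
    return "; ".join(parts)

prompt = build_video_prompt(
    "a forest road at dusk",
    {"elements": ["fireflies"], "motion": "slow"},
)
print(prompt)
# a forest road at dusk; featuring fireflies; slow motion; cinematic, ...
```

The design choice worth noting is the separation of concerns: users only touch the small, structured adjustments, while the workflow owns the style constraints, which is what makes the output reliable across many generations.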