Imagining future rides where passengers create their own experiences with AI.
Type
Personal Study
Focus
UX, AI Workflow
Date
Feb. - March 2026
As generative AI enables experiences to be created and adapted in real time, it is reshaping how we interact with digital systems. This project applies this shift to autonomous mobility, envisioning ride experiences that passengers can actively shape through AI.
I designed the core experience, storytelling, and interaction framework, and used existing AI tools to simulate generative workflows that translate user intent into on-demand, controllable ride experiences.
The future ride is fully shaped by what you choose to create.
Insight
Digital controllability defines the next era of comfort
I observed that AI is rapidly gaining the ability to generate images, videos, and even functional apps within seconds. Meanwhile, ride time is evolving into a new kind of personal time: a space for relaxation, exploration, or planning ahead. This shift revealed an opportunity for experiences to be generated on demand, while passengers remain free to customize what they use during the ride.
Solution
Passenger-Centric Display Architecture
I led the design of the cabin display surfaces, including the main screen and the integrated ambient lighting behind it. Together, the wide display and the illuminated surface behind it create a panoramic visual field that enables richer, more immersive in-ride experiences.
The Control Hub
On the right side, typically the primary seating position in taxis, the control capsule is designed for apps and ride interactions, keeping the passenger's controls within easy reach of the panoramic display.
Experience: Mini Apps
Create mini apps for specific use cases.
The Mini Apps
In this vision, apps become flexible canvases that passengers can shape based on their needs during the ride. This paradigm shifts apps from static software to highly controllable, generative experiences tailored to each journey.
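To make the idea tangible, one could imagine each mini app as a small declarative spec that the cabin system generates, renders, and discards at ride end. The sketch below is purely illustrative: every type name and field is my assumption, not an existing API.

```typescript
// Hypothetical spec for a passenger-generated mini app.
// All names and fields are illustrative assumptions, not a real API.
interface MiniAppSpec {
  title: string;                       // short label shown on the display
  intent: string;                      // the passenger's original request, kept for regeneration
  layout: "card" | "list" | "map";     // coarse layout hint for the cabin renderer
  widgets: Widget[];                   // generated building blocks
  expiresAtRideEnd: boolean;           // mini apps are ephemeral by default
}

type Widget =
  | { kind: "text"; content: string }
  | { kind: "image"; prompt: string }                          // generated on demand
  | { kind: "action"; label: string; verb: "navigate" | "play" | "remind" };

// Example: a spec the system might produce from
// "remind me to grab my umbrella when we arrive"
const umbrellaReminder: MiniAppSpec = {
  title: "Arrival reminder",
  intent: "remind me to grab my umbrella when we arrive",
  layout: "card",
  widgets: [
    { kind: "text", content: "Umbrella on the back seat" },
    { kind: "action", label: "Remind me at arrival", verb: "remind" },
  ],
  expiresAtRideEnd: true,
};
```

Describing the app as data rather than code is what makes it regenerable: the same intent can be re-rendered differently for the next ride.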
Experience: Immersive Spaces
Generate spaces that fit the mood.
As generative media becomes increasingly realistic and seamless over time, people will grow accustomed to creating experiences with AI. In a 20–40 minute commute, this opens an opportunity for passengers to shape a private atmosphere that fits the mood of their journey.
By using AI-generated visuals and sound across the panoramic display and ambient lighting, the vehicle becomes a space where passengers and the car co-create immersive environments for each ride.
The interface is designed to feel minimal yet highly controllable, allowing users to shape the space through two clear dimensions: elements and motion.
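A minimal sketch of how those two dimensions could sit behind the interface as editable state; all names and value ranges here are illustrative assumptions, not an implemented system.

```typescript
// Hypothetical scene state behind the "elements" and "motion" controls.
// Names and value ranges are illustrative assumptions.
interface SceneState {
  elements: SceneElement[];   // what appears in the generated environment
  motion: MotionSettings;     // how the environment moves over time
}

interface SceneElement {
  id: string;
  description: string;        // natural-language description fed to the generator
  intensity: number;          // 0..1, how prominent the element is
}

interface MotionSettings {
  speed: number;              // 0 = still image, 1 = full motion
  direction: "forward" | "drift" | "orbit";
  syncToRide: boolean;        // couple scene motion to the vehicle's movement
}

// Example: a calm evening scene, mostly still, drifting gently.
const eveningScene: SceneState = {
  elements: [
    { id: "sky", description: "soft sunset clouds", intensity: 0.8 },
    { id: "water", description: "calm lake surface", intensity: 0.5 },
  ],
  motion: { speed: 0.2, direction: "drift", syncToRide: true },
};
```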
Freedom to create within generated scenes
In the future, interacting with video will become simple. Instead of editing clips or adjusting parameters, passengers will speak in natural language to add elements, change styles, or transform the mood of a scene.
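Under that interaction model, a spoken request would first be translated into a structured edit before it reaches the generator. The sketch below stubs that translation with a canned rule; the `SceneEdit` shape and `interpretRequest` function are hypothetical stand-ins for a language-model call with a constrained output schema.

```typescript
// Hypothetical structured edit produced from a natural-language request.
type SceneEdit =
  | { op: "add"; element: string }                  // "add fireflies"
  | { op: "restyle"; style: string }                // "make it look hand-painted"
  | { op: "mood"; mood: "calm" | "festive" | "focus" };

// Assumed: a language model translates free-form speech into one of the
// edits above. The parsing is out of scope; this stub shows the contract only.
async function interpretRequest(utterance: string): Promise<SceneEdit> {
  // A real system would call an LLM with a constrained output schema;
  // here a canned rule stands in for illustration.
  if (utterance.toLowerCase().startsWith("add ")) {
    return { op: "add", element: utterance.replace(/^add\s+/i, "") };
  }
  return { op: "mood", mood: "calm" };
}

// Usage: "add fireflies" becomes { op: "add", element: "fireflies" },
// which the scene generator can apply without the passenger editing anything.
interpretRequest("add fireflies").then((edit) => console.log(edit));
```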
Process
20k+ AI credits later, I found a workflow for AI-generated video
In the early exploration of the immersive ride, AI videos were used as placeholders. Through extensive experimentation with 200+ generated videos, however, I came to picture generative content becoming immediately responsive and interactive once the technology is more common and mature.
Through repeated experimentation, I established a creation workflow that simulates the journey from user input to generated output. By using GPT or Gemini to translate vague ideas into structured prompts, and Google Flow to generate and refine videos, I mapped how generative creation can become more intuitive and accessible.
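As a rough code sketch of that workflow: a language model expands a vague idea into a structured prompt, and a video generator renders it. Both calls are stubbed placeholders, since Google Flow exposes no public API I can assume here; only the pipeline shape mirrors the study.

```typescript
// Sketch of the experimental workflow: vague idea -> structured prompt -> video.
// Both service calls are placeholders mirroring the process, not real APIs.

interface StructuredPrompt {
  scene: string;        // what the shot contains
  style: string;        // visual treatment
  camera: string;       // framing and movement
  durationSec: number;  // clip length
}

// Step 1: a language model (GPT or Gemini in the study) turns a vague idea
// into a structured prompt. The call is stubbed; only the shape matters.
async function structurePrompt(idea: string): Promise<StructuredPrompt> {
  // A real call would ask the model to fill this schema from `idea`.
  return {
    scene: idea,
    style: "cinematic, soft light",
    camera: "slow forward dolly",
    durationSec: 8,
  };
}

// Step 2: the structured prompt goes to a video generator (Google Flow in
// the study). `generateVideo` is hypothetical; it stands in for that step.
async function generateVideo(prompt: StructuredPrompt): Promise<string> {
  return `video rendered for: ${prompt.scene} (${prompt.style})`;
}

// The full loop the study simulated: refine by re-running with tweaks.
async function createClip(idea: string): Promise<string> {
  const prompt = await structurePrompt(idea);
  return generateVideo(prompt);
}

createClip("rainy neon street at night").then(console.log);
```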