Moonvalley

Dec 2024 – Present

Project overview

What is it?

Moonvalley is a generative video creation and editing tool built for professional filmmakers and studios. It enables creators to generate, edit, and iterate on cinematic shots using AI, while maintaining creative control through visual inputs and production-ready workflows.

Motion transfer

Trajectory control

Camera control

Pose control

Role

As sole product designer, I led end-to-end product design across UX direction, information architecture, interaction design, and visual design. I collaborated closely with the project manager, CEO, AI team, and engineers.

Design goal

The goal was to design a generative video tool that filmmakers can trust and control, integrating AI into real production workflows through visual inputs, granular editing, and professional-grade usability.

Outcome

Launched Moonvalley, a production-ready generative video platform, across desktop and mobile after three major design iterations, supporting paid users and a strategic integration with Adobe.

Gallery + Generation flow

Design challenge

Generative video tools were powerful but impractical for professional filmmaking. Most products focused on single-prompt outputs with limited control, poor predictability, and workflows that didn’t reflect how films are actually made.
The challenge was to translate advanced generative models into usable, production-ready software, enabling filmmakers to explore ideas, iterate on shots, and organize assets in a way that fits real creative workflows.

Design process

Early research and exploration

In the earliest phase, product requirements were still undefined and model capabilities were evolving. The AI team was exploring what the technology could support, and there were few concrete constraints.
To work effectively within this uncertainty, I designed a simple, familiar UX flow that could flex and scale as capabilities became clearer. This allowed the team to align quickly, test ideas early, and iterate in parallel with model development.

V1 design exploration

Team collaboration and feedback

When Asteria, a Hollywood-based filmmaking team, joined the project, we gathered ongoing feedback to align the product with real production workflows.
I worked closely with the AI team to understand model capabilities and constraints, ensuring designs were both ambitious and realistic. To ground the experience in familiar patterns, I studied professional tools such as Adobe Premiere Pro and Final Cut Pro, and reviewed other AI generation products to evaluate common UX approaches.
This combination of filmmaker feedback, technical collaboration, and competitive research informed early design decisions.

V2 design exploration

I introduced an editor that allowed users to generate and edit their generations. I also added early concepts for multi-layer prompting, motion/pose control, and other AI editing tools.
Working with engineers, we built functional versions of these features so users could try them in real scenarios. Internal testing revealed several limitations. The experience lacked flexibility for exploring multiple creative directions for a single shot, and it was difficult to organize outputs across larger projects. Combining every desired feature on a single screen increased complexity and slowed iteration.

These insights highlighted the need for a more focused approach.
Design details can’t be publicly shared due to confidentiality.

V3 design direction

In V3, the CEO and the team decided to change the app's main focus and layout. Based on this new goal, I explored a new design direction.
To support larger projects, I redesigned the information architecture around projects, scenes, and shots, and adapted the AI control features to the new system. I rebuilt the experience around the filmmaking workflow, enabling users to generate, edit, import, and arrange media freely.
This shift simplified the product, improved iteration speed, and better aligned the tool with early-stage creative exploration in professional filmmaking.
Design details can’t be publicly shared due to confidentiality.

Usability testing

Our filmmaker team Asteria continuously tested the product during development, generating real content and providing ongoing feedback. This input guided UX improvements and informed future feature planning.

Visual design & Design system

Working with the CEO and the team, we chose professional, cinematic, and retro as our three visual principles. I designed and maintained the product’s design system, including typography, color, and core UI components. This ensured consistency and scalability across the application, while the same visual language extended to the marketing website and brand.

Design system

Final designs

As launch approached, it became clear that additional features could be built, but time constraints required focus. Rather than delay, the team chose to ship a simplified, production-ready version to gather real market feedback.
We returned to a V1-style UX flow for clarity, while incorporating improved AI controls and a refined visual system. This approach balanced simplicity with professional capability and allowed the product to launch publicly with paid pricing tiers.

Desktop version

Try Moonvalley for yourself here.

Generate with multiple keyframes

Camera control, allowing users to adjust the camera in 3D space

Trajectory control

Mobile version

We also launched an integrated version in partnership with Adobe, bringing Marey to Firefly and embedding Adobe Firefly image generation directly into Moonvalley’s professional filmmaking workflows.

Moonvalley in Adobe

Outcome and impact

Moonvalley launched as a production-ready generative video platform across desktop and mobile, validating an exploration-first workflow for professional filmmakers.
The final design established a clear product direction by separating creative exploration from traditional editing, improving iteration speed and clarity. Continuous feedback from professional users informed future feature planning, while the platform’s scalability enabled a strategic integration with Adobe.

Iteration & evolution

The launch clarified Moonvalley’s core users and next challenges. Professional filmmakers and studios asked for more control, including the detailed adjustments their workflows require. As AI capabilities expand, building a scalable, control-first design system has become the primary focus for the product’s next phase.
This project is still in progress, so some details aren’t shown yet.

Key learnings

Through multiple iterations, I learned how to structure and ship an end-to-end AI creative product, balancing emerging model capabilities with the needs of professional workflows. This project deepened my understanding of what matters most in AI tools for professional users: control, predictability, and scalability.
Designing Moonvalley also strengthened my ability to work under uncertainty, collaborating closely with AI and filmmaking teams while delivering real, paid software to market.