
Video to UI

Convert screen recordings into design data, code edits, or runnable React scaffolds with motion and micro-interactions intact.

What it does

Video to UI transforms screen recordings into actionable design assets and code. Unlike static screenshots, it extracts every frame from videos, analyzes them in parallel, and synthesizes the full design system—including transitions, micro-animations, hover states, and scroll-triggered reveals that vanish in still images.

How it works

Point the skill at any screen recording (Figma prototypes, app demos, marketing videos, Loom clips) and choose your output format. The skill extracts frames, reads them in parallel using disposable subagents, and produces either a design analysis report, concrete code edits for existing files, or a complete runnable React scaffold.
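The fan-out step is essentially a concurrent map over frames. A minimal TypeScript sketch of the shape, where `Finding` and `analyzeFrame` are hypothetical stand-ins for the skill's subagent call, not its real API:

```typescript
// Illustrative sketch of the extract → parallel-analyze → synthesize flow.
// `Finding` and `analyzeFrame` are hypothetical stand-ins for the skill's
// disposable-subagent call, not its actual interface.
interface Finding {
  frame: number;
  notes: string;
}

// One "subagent": reads a single frame and reports what it observed.
async function analyzeFrame(frame: number): Promise<Finding> {
  return { frame, notes: `observations for frame ${frame}` };
}

// All frames are analyzed concurrently; the results are then
// synthesized into a single design report.
async function analyzeRecording(frameCount: number): Promise<Finding[]> {
  const frames = Array.from({ length: frameCount }, (_, i) => i);
  return Promise.all(frames.map(analyzeFrame));
}
```

In the real skill each `analyzeFrame` call is a separate, disposable Claude instance; here it is an ordinary async function so the fan-out shape stays visible.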

Use cases

  • Design audit: Get a markdown report of the design system—palette in hex, type scale, spacing rhythm, component styles, iconography
  • Code alignment: Compare recordings against your codebase and get a diff list of edits (e.g., change `--button-radius` from 4px to 8px in Button.tsx)
  • Rapid prototyping: Generate a Vite + React + TypeScript + Tailwind + Framer Motion scaffold in seconds, complete with mock API timing that mimics the video
  • Motion documentation: Capture timing, pacing, and animation sequences that survive in video but disappear in screenshots

Who benefits

Product designers, UX engineers, and design systems maintainers who need to move faster—especially when auditing external designs, rebuilding from video references, or documenting motion and interaction timing.

Frequently asked questions

How do I install video-to-ui?
Run `npx skills add mmohajer9/video-to-ui --skill video-to-ui --global` via the skills.sh CLI, restart Claude Code, then invoke it with `/video-to-ui ~/path/to/recording.mp4`. Requires ffmpeg installed locally.
What video formats does it support?
Any format ffmpeg can read—MP4, MOV, WebM, AVI, etc. Works with Figma prototype videos, app demos, marketing recordings, Loom clips, and screen captures from any tool.
What are the four output modes?
Mode 1, Extract frames: raw PNGs only. Mode 2, Analyze design: a markdown report of the design system (palette, type scale, components). Mode 3, Compare against code: everything in mode 2 plus concrete diffs for your files. Mode 4, Scaffold React app: everything in mode 2 plus a runnable Vite + React + Tailwind + Framer Motion project.
Does it capture micro-animations and hover states?
Yes—every frame is extracted and analyzed. The design report and scaffolds include timing, transitions, scroll-triggered reveals, and state changes that static screenshots miss entirely.
Can it edit my existing codebase?
Yes, in mode 3. Point it at target files and it analyzes the video, then shows you a diff list of edits with frame references (`frame 014`) before making any changes. Always waits for your approval.
What's included in the React scaffold?
A complete Vite + React + TypeScript + Tailwind + Framer Motion app with one component per screen, a mock API that mimics video timing, Tailwind tokens auto-populated from the analysis, and ready to run with `npm install && npm run dev`.
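The auto-populated Tailwind tokens might look roughly like this; every value below is an illustrative placeholder, not real skill output for any particular video:

```typescript
// Hypothetical tailwind.config.ts after analysis. All token values are
// illustrative placeholders standing in for what the skill would derive
// from the recording.
export default {
  content: ["./src/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: { brand: "#4f46e5", surface: "#f8fafc" },
      spacing: { gutter: "24px" },
      borderRadius: { button: "8px" },
      transitionDuration: { reveal: "350ms" }, // from observed motion timing
    },
  },
};
```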
How long does analysis take?
Depends on video length and complexity. Frames are processed in parallel by disposable subagents. A 2-minute recording typically completes in 2–5 minutes. Longer videos may take proportionally longer.
Can I use this for mobile app recordings?
Absolutely. Works equally well with mobile, web, and desktop recordings. The output adapts—React scaffold mode includes responsive design; code-edit mode knows how to handle mobile-first patterns.

Glossary

Frame extraction
Breaking a video file into individual image frames (PNGs). Video to UI extracts every frame to ensure no animation or state change is missed during analysis.
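Under the hood this is a standard ffmpeg invocation. A sketch of the argument list in TypeScript (`frameExtractionArgs` is a hypothetical helper, and the skill's actual flags may differ):

```typescript
// Build an ffmpeg argument list that dumps every source frame as a
// numbered PNG. "-vsync 0" keeps one output image per input frame.
// Paths are placeholders; the skill's real invocation may differ.
function frameExtractionArgs(video: string, outDir: string): string[] {
  return ["-i", video, "-vsync", "0", `${outDir}/frame_%04d.png`];
}

// You would then spawn it, e.g.:
//   execFileSync("ffmpeg", frameExtractionArgs("demo.mp4", "frames"));
```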
Disposable subagents
Temporary Claude instances spawned to analyze video frames in parallel. Each reads one or more frames, reports findings, then closes. Faster and cheaper than sequential processing.
Design synthesis
Combining observations from many frames into a single cohesive design system report—reconciling colors, typography, spacing, and components seen across the entire recording.
Micro-animation
Small, quick motion effects (typically under 500ms) that enhance perceived performance and polish—button bounces, opacity fades, icon rotations. Invisible in static screenshots.
Mock API
A fake backend included in the React scaffold that simulates API response timing observed in the video, letting the scaffold animate and transition at the same pace as the original.
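A minimal sketch of the idea, assuming the video showed data arriving about 800 ms after load (`mockFetch` is a hypothetical name, not the scaffold's real export):

```typescript
// Hypothetical mock-API helper: resolves with canned data after the
// delay observed in the recording, so loading states and transitions
// play out at the original pace.
export function mockFetch<T>(data: T, delayMs: number): Promise<T> {
  return new Promise((resolve) => setTimeout(() => resolve(data), delayMs));
}
```

A screen component would then await something like `mockFetch(items, 800)` wherever the video showed its content appearing roughly 800 ms after navigation.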
