video-creator
Use this skill when creating complete videos from scratch - product demos, explainers, social clips, or announcements. Orchestrates the full workflow: deep interview, script generation, visual verification, Remotion project build, audio design, narration, and 4K rendering. Triggers on "make me a video", "create a video about", video production, and end-to-end video creation requests.
video-creator is a production-ready AI agent skill for claude-code, gemini-cli, and openai-codex. It creates complete videos from scratch - product demos, explainers, social clips, or announcements - orchestrating the full workflow: deep interview, script generation, visual verification, Remotion project build, audio design, narration, and 4K rendering.
Quick Facts
| Field | Value |
|---|---|
| Category | video |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill video-creator
- The video-creator skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
This is the orchestrator skill for end-to-end video creation. It coordinates a complete 7-step workflow that takes a user from "I need a video" to a finished 4K render. Each step delegates to companion skills (remotion-video, video-scriptwriting, video-audio-design, video-analyzer) while this skill manages sequencing, approval gates, and handoffs between stages.
You do NOT need to know Remotion internals or audio engineering - those are handled by companion skills. This skill's job is to run the process, ask the right questions, and make sure nothing gets skipped.
Tags
video-creation orchestrator remotion programmatic-video 4k production
Platforms
- claude-code
- gemini-cli
- openai-codex
Related Skills
Pair video-creator with its companion skills: remotion-video, video-scriptwriting, video-audio-design, and video-analyzer.
Frequently Asked Questions
What is video-creator?
Use this skill when creating complete videos from scratch - product demos, explainers, social clips, or announcements. Orchestrates the full workflow: deep interview, script generation, visual verification, Remotion project build, audio design, narration, and 4K rendering. Triggers on "make me a video", "create a video about", video production, and end-to-end video creation requests.
How do I install video-creator?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill video-creator in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support video-creator?
This skill works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
Video Creator
This is the orchestrator skill for end-to-end video creation. It coordinates a complete 7-step workflow that takes a user from "I need a video" to a finished 4K render. Each step delegates to companion skills (remotion-video, video-scriptwriting, video-audio-design, video-analyzer) while this skill manages sequencing, approval gates, and handoffs between stages.
You do NOT need to know Remotion internals or audio engineering - those are handled by companion skills. This skill's job is to run the process, ask the right questions, and make sure nothing gets skipped.
When to use this skill
Trigger this skill when the user:
- Says "make me a video", "create a video about", or "I need a video for"
- Wants a product demo, explainer, social clip, or announcement video
- Asks for end-to-end video production from concept to render
- Needs help turning an idea into a finished video file
- Mentions wanting a programmatic video with Remotion
- Has a reference video and wants to create something similar
Do NOT trigger this skill for:
- Editing an existing video file (use video-analyzer or ffmpeg skills)
- Writing a script only without producing a video (use video-scriptwriting)
- Audio-only production like podcasts (use video-audio-design for music/SFX)
- Remotion coding questions without a production context (use remotion-video)
- Analyzing or reviewing a video (use video-analyzer)
Key principles
Visual-first: Get visuals approved before spending money on audio. Narration costs real dollars via ElevenLabs - never generate it until the visual layer is locked.
Interview exhaustively: Ask up to 30 questions to capture all context about the product, audience, goals, tone, assets, and visual preferences. Incomplete context leads to expensive re-work.
Structured handoff: The video-script.yaml file is the single source of truth between all steps. Every scene, frame count, narration line, and SFX cue lives in this file.
User approval gates: Explicit approval is required at each major step. Never auto-advance. The user must say "approved" or "looks good" before the next step begins.
4K default: Always render at 3840x2160 unless the user specifies otherwise. Downscaling is easy; upscaling is not.
The 7-Step Workflow
Step 0: Ensure Remotion Project Exists (prerequisite)
Before anything else, the user must have a Remotion project set up. This is the
workspace where all video code will be written. Check if you are already inside
a Remotion project (look for remotion.config.ts or @remotion/cli in
package.json). If not, scaffold one:
npx create-video@latest
This creates a new folder with the starter project. Then install dependencies:
cd <project-name>
npm install
Do NOT proceed to Step 1 until a Remotion project exists and dependencies are installed. All subsequent steps write code into this project.
If the user already has a Remotion project, cd into it and continue.
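A minimal sketch of the Step 0 detection, assuming package.json has already been parsed (the helper name is illustrative; checking for remotion.config.ts on disk is the other half of the test):

```typescript
// Heuristic from Step 0: a project counts as a Remotion project if
// @remotion/cli appears in its dependencies or devDependencies.
// (The on-disk check for remotion.config.ts is done separately.)
export function hasRemotionCli(pkg: {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}): boolean {
  return Boolean(
    pkg.dependencies?.["@remotion/cli"] ?? pkg.devDependencies?.["@remotion/cli"]
  );
}
```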
Step 1: Deep Interview
Gather all context needed to write a complete script. Ask questions one at a time using an interactive conversational approach.
Question categories (aim for 15-30 questions total):
| Category | Example Questions |
|---|---|
| Product/subject | What does the product do? What is the core value prop? |
| Audience | Who is the target viewer? Technical level? |
| Video goals | What should the viewer do after watching? |
| Tone/style | Playful or professional? Fast or slow-paced? |
| Assets | Do you have logos, screenshots, brand colors, fonts? |
| Content | What are the key features or messages to cover? |
| Visual preferences | Any reference videos? Preferred animation style? |
| Duration | How long should the video be? Where will it be published? |
Rules for Step 1:
- Ask one question at a time - do not dump a questionnaire
- If the user provides a reference video, analyze it FIRST using the video-analyzer skill before asking questions
- Summarize what you know after every 5-8 questions
- Do not proceed until you have enough context for every scene
- End with: "I have enough context to write the script. Ready to proceed?"
Exit criteria: User confirms you have enough context.
Step 2: Generate Script (YAML)
Use patterns from the video-scriptwriting skill to produce a structured YAML script.
Frame count formula:
frames = duration_seconds * 30
All Remotion compositions use 30fps by default.
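The formula above can be written as a tiny helper (function name is illustrative); rounding guards against fractional durations:

```typescript
// frames = duration_seconds * fps, rounded to a whole frame (30fps default)
export function secondsToFrames(durationSeconds: number, fps = 30): number {
  return Math.round(durationSeconds * fps);
}
```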
Generate a video-script.yaml file with this structure:
title: "Product Demo - Acme Widget"
fps: 30
resolution: { width: 3840, height: 2160 }
total_duration_seconds: 60
scenes:
- id: scene-01
title: "Hook"
duration_seconds: 5
frames: 150
narration: "Ever wished your widgets could think for themselves?"
visuals: "Dark background, glowing Acme logo fades in, particles"
animation_notes: "Logo scale 0->1 with spring, particles emit from center"
music_cue: "Ambient synth pad, low energy"
sfx: "Subtle whoosh on logo reveal"
transition_out: "crossfade"
- id: scene-02
# ... continue for all scenes
Rules for Step 2:
- Every scene must have duration_seconds, frames, narration, visuals, animation_notes, music_cue, and sfx fields
- Frame counts must equal duration_seconds * 30
- Total scene durations must sum to total_duration_seconds
- Present the full script to the user for review
- Iterate on feedback until the user explicitly approves
Exit criteria: User approves the script.
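The two Step 2 invariants can be checked mechanically before presenting the script. A minimal sketch (the Scene type and function name are illustrative, not part of the skill):

```typescript
type Scene = { id: string; duration_seconds: number; frames: number };

// Returns a list of violations of the Step 2 rules:
// frames must equal duration_seconds * fps, and scene durations
// must sum to the script's total duration.
export function validateScript(
  scenes: Scene[],
  totalDurationSeconds: number,
  fps = 30
): string[] {
  const errors: string[] = [];
  for (const s of scenes) {
    const expected = Math.round(s.duration_seconds * fps);
    if (s.frames !== expected) {
      errors.push(`${s.id}: frames should be ${expected}, got ${s.frames}`);
    }
  }
  const sum = scenes.reduce((acc, s) => acc + s.duration_seconds, 0);
  if (sum !== totalDurationSeconds) {
    errors.push(`scene durations sum to ${sum}, expected ${totalDurationSeconds}`);
  }
  return errors;
}
```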
Step 3: Visual Verification
Build a minimal Remotion project with visuals only - no audio layer yet.
What to build:
- Remotion compositions for each scene (visuals, animations, typography, colors)
- Scene transitions matching the script
- Correct frame counts per scene
- Basic layout and spacing at 4K resolution
What NOT to build yet:
- Audio integration
- Narration sync
- Volume ducking
- Final render pipeline
Process:
- Verify the Remotion project is set up (done in Step 0)
- Build visual compositions for each scene inside the project
- Launch Remotion Studio:
npx remotion studio
- Tell the user to preview at http://localhost:3000
- Iterate on visual feedback (colors, timing, animations, transitions)
- Get EXPLICIT visual approval before proceeding
Exit criteria: User explicitly approves the visuals.
Step 4: Build Full Remotion Project
Now that visuals are approved, flesh out the complete project.
Tasks:
- Finalize all compositions with polished animations
- Wire up scene-to-scene transitions
- Add Zod schemas for parametrization (titles, colors, durations)
- Ensure frame counts match the script exactly
- Organize project structure cleanly:
src/
compositions/
Scene01Hook.tsx
Scene02Feature.tsx
...
components/
AnimatedLogo.tsx
TransitionWipe.tsx
...
Root.tsx
index.ts
video-script.yaml
Rules for Step 4:
- Each scene should be its own composition file
- Shared components go in components/
- All magic numbers should be replaced with Zod schema props
- Frame counts must still match video-script.yaml
Exit criteria: Full project builds without errors and matches approved visuals.
Step 5: Add Background Audio + SFX
Use patterns from the video-audio-design skill.
Tasks:
- Source or select background music (user-provided or from documented sources)
- Place SFX at trigger points from the script (clicks, typing, whooshes, etc.)
- Set base volume levels for music and SFX
- Implement ducking infrastructure - prepare volume curves that will lower music during narration segments in Step 6
- Preview audio-visual sync in Remotion Studio
Volume guidelines:
| Layer | Base Volume | During Narration |
|---|---|---|
| Background music | 0.3-0.5 | 0.1-0.15 |
| SFX | 0.5-0.8 | 0.4-0.6 |
| Narration | N/A (Step 6) | 1.0 |
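The volume table can be encoded as a small helper for use in volume callbacks (the values are midpoints of the recommended ranges; the type and function names are illustrative):

```typescript
type Layer = "music" | "sfx" | "narration";

// Midpoints of the recommended volume ranges from the table above.
// Music ducks hard during narration; SFX ducks slightly; narration is full.
export function layerVolume(layer: Layer, narrationActive: boolean): number {
  switch (layer) {
    case "music":
      return narrationActive ? 0.12 : 0.4;
    case "sfx":
      return narrationActive ? 0.5 : 0.65;
    case "narration":
      return 1.0;
  }
}
```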
Exit criteria: User approves audio-visual sync.
Step 6: Add Narration (Deferred - costs money)
This step involves paid API calls to ElevenLabs. Always confirm before proceeding.
Setup:
- Check if user has an ElevenLabs API key
- If not, guide them through signup at https://elevenlabs.io
- Ask voice preference questions:
- Gender preference?
- Age range (young, middle, mature)?
- Accent preference?
- Energy level (calm, moderate, energetic)?
- Warmth (warm and friendly, neutral, authoritative)?
Process:
- Generate narration audio for each scene via ElevenLabs API
- Calculate exact audio durations from generated files
- Adjust frame counts if narration is longer/shorter than planned
- Update video-script.yaml with actual audio durations
- Sync narration with visual timing
- Activate volume ducking on background music during narration segments
- Preview complete audio mix in Remotion Studio
Rules for Step 6:
- Always confirm costs before making API calls
- If user wants to skip narration, that is fine - proceed to Step 7 without it
- If user prefers a different TTS provider, support that
- Re-sync all timing if audio durations differ from script estimates
Exit criteria: User approves narration sync (or explicitly skips this step).
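When measured narration durations come back from the TTS provider, frame counts can be recomputed with a sketch like this (types and names are illustrative; it keeps at least the planned scene length so visuals are never shortened unexpectedly):

```typescript
type SceneTiming = { id: string; frames: number };

// Recompute scene frame counts from measured narration durations (seconds).
// Scenes without narration are left unchanged; scenes with longer-than-planned
// audio are extended to the rounded audio length.
export function resyncFrames(
  scenes: SceneTiming[],
  measuredSeconds: Record<string, number>,
  fps = 30
): SceneTiming[] {
  return scenes.map((s) => {
    const audio = measuredSeconds[s.id];
    if (audio === undefined) return s; // no narration in this scene
    return { ...s, frames: Math.max(s.frames, Math.round(audio * fps)) };
  });
}
```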
Step 7: Final Preview + 4K Render
Process:
- Launch full preview with all layers in Remotion Studio
- Tell user to review the complete video (visuals + music + SFX + narration)
- Get final approval
- Render at 4K:
npx remotion render src/index.ts Main out/video.mp4 \
  --width 3840 --height 2160
Format guidance:
| Format | Use Case |
|---|---|
| MP4 (H.264) | Web, social media, general sharing |
| MP4 (H.265) | Smaller file size, modern device playback |
| ProRes | Further editing in Premiere, Final Cut, DaVinci |
| WebM (VP9) | Web embedding with transparency support |
Exit criteria: Rendered video file delivered to user.
Orchestration rules
- Each step requires explicit user approval before advancing to the next step
- Explain what you are doing and why at each step
- If the user provides a reference video, run video-analyzer FIRST before starting Step 1
- Always create a video-script.yaml as the single source of truth
- The Remotion project structure must be clean and well-organized
- If the user wants to skip narration (Step 6), proceed directly to Step 7
- If the user wants a different TTS provider, adapt Step 6 accordingly
- Never generate narration audio before visual approval (Step 3 complete)
- Never skip visual verification - it prevents expensive re-work
- Always default to 4K (3840x2160) unless the user specifies otherwise
Supported video types
| Type | Duration | Scenes | Per Scene |
|---|---|---|---|
| Product demo | 30-120s | 6-15 | 5-10s |
| Explainer | 60-180s | 8-20 | 5-12s |
| Social clip | 15-60s | 3-8 | 3-8s |
| Announcement | 15-45s | 3-6 | 4-8s |
Anti-patterns / common mistakes
| Anti-pattern | Why it fails | Correct approach |
|---|---|---|
| Generating narration before visual approval | Wastes money if visuals change | Complete Steps 1-4 first, then narration |
| Skipping the interview | Script lacks context, scenes feel generic | Ask 15-30 targeted questions |
| No YAML script file | No source of truth, timing drifts | Always generate video-script.yaml |
| Auto-advancing without approval | User loses control, re-work is expensive | Wait for explicit "approved" at each gate |
| Hardcoding values in Remotion | Cannot parametrize or reuse | Use Zod schemas for all configurable values |
| Rendering at 1080p by default | Cannot upscale later | Always default to 4K (3840x2160) |
| Building audio and visuals simultaneously | Makes iteration painful | Visual-first, audio after approval |
Gotchas
Frame count math matters. 60 seconds at 30fps = 1800 frames exactly. If scene durations do not sum correctly, the video will be too short or too long. Always verify: sum(scene.duration_seconds) == total_duration_seconds.
Remotion Studio needs a running dev server. After building compositions, you must run npx remotion studio and tell the user to open http://localhost:3000. Do not assume they can preview without this.
ElevenLabs narration durations are unpredictable. A 5-second script line might produce 4.2s or 6.1s of audio. Always measure actual durations after generation and adjust frame counts accordingly.
Volume ducking requires knowing narration timing. The ducking curves for background music depend on knowing exactly when narration starts and stops in each scene. This is why Step 5 sets up ducking infrastructure but Step 6 activates it.
4K rendering is slow. A 60-second video at 3840x2160 can take 10-30 minutes depending on complexity. Warn the user about render time before starting, and suggest a 1080p test render first if they want a quick check.
References
references/workflow-checklist.md - Detailed checklist for each of the 7 steps with sub-tasks, expected outputs, and approval criteria
references/project-templates.md - Starter Remotion project structures for each video type with file trees and component scaffolds
references/troubleshooting.md - Common issues and fixes for Remotion rendering, audio sync, FFmpeg, ElevenLabs API, and performance
project-templates.md
Video Creator - Project Templates
Starter Remotion project structures for each video type. Use these as scaffolding when building the Remotion project in Steps 3-4.
Shared Setup (All Video Types)
package.json dependencies
{
"dependencies": {
"@remotion/cli": "^4.0.0",
"@remotion/player": "^4.0.0",
"@remotion/zod-types": "^4.0.0",
"react": "^18.0.0",
"react-dom": "^18.0.0",
"remotion": "^4.0.0",
"zod": "^3.22.0"
},
"devDependencies": {
"@remotion/eslint-config": "^4.0.0",
"typescript": "^5.0.0"
}
}
remotion.config.ts
import { Config } from "@remotion/cli/config";
Config.setVideoImageFormat("jpeg");
Config.setOverwriteOutput(true);
Base schema pattern
import { z } from "zod";
export const baseSchema = z.object({
title: z.string(),
primaryColor: z.string().default("#3B82F6"),
secondaryColor: z.string().default("#1E293B"),
backgroundColor: z.string().default("#0F172A"),
fontFamily: z.string().default("Inter"),
fps: z.number().default(30),
});
Product Demo (30-120s, 6-15 scenes)
File tree
product-demo/
src/
compositions/
Scene01Hook.tsx
Scene02Problem.tsx
Scene03Solution.tsx
Scene04Feature1.tsx
Scene05Feature2.tsx
Scene06Feature3.tsx
Scene07SocialProof.tsx
Scene08CTA.tsx
components/
AnimatedLogo.tsx
ScreenRecording.tsx
FeatureCard.tsx
TransitionWipe.tsx
CallToAction.tsx
PriceTag.tsx
lib/
schema.ts
colors.ts
animations.ts
Root.tsx
index.ts
public/
logo.svg
screenshots/
recordings/
video-script.yaml
package.json
remotion.config.ts
tsconfig.json
Scene scaffold - Hook
import { AbsoluteFill, useCurrentFrame, interpolate, spring, useVideoConfig } from "remotion";
export const Scene01Hook: React.FC<{ title: string; primaryColor: string }> = ({
title,
primaryColor,
}) => {
const frame = useCurrentFrame();
const { fps } = useVideoConfig();
const logoScale = spring({ frame, fps, config: { damping: 12 } });
const titleOpacity = interpolate(frame, [20, 40], [0, 1], {
extrapolateRight: "clamp",
});
return (
<AbsoluteFill style={{ backgroundColor: "#0F172A", justifyContent: "center", alignItems: "center" }}>
<div style={{ transform: `scale(${logoScale})` }}>
{/* Logo component */}
</div>
<h1 style={{ opacity: titleOpacity, color: primaryColor, fontSize: 80 }}>
{title}
</h1>
</AbsoluteFill>
);
};
Typical scene flow
- Hook - grab attention with bold statement or question (5s)
- Problem - show the pain point (8s)
- Solution intro - reveal the product (5s)
- Feature 1-3 - demonstrate key features with screen recordings (8s each)
- Social proof - testimonials, stats, logos (5s)
- CTA - clear next step with URL/button (5s)
Explainer (60-180s, 8-20 scenes)
File tree
explainer/
src/
compositions/
Scene01Intro.tsx
Scene02Context.tsx
Scene03Problem.tsx
Scene04HowItWorks1.tsx
Scene05HowItWorks2.tsx
Scene06HowItWorks3.tsx
Scene07Benefits.tsx
Scene08Comparison.tsx
Scene09UseCases.tsx
Scene10CTA.tsx
components/
AnimatedDiagram.tsx
StepIndicator.tsx
ComparisonTable.tsx
IconGrid.tsx
ProgressBar.tsx
AnimatedArrow.tsx
lib/
schema.ts
colors.ts
animations.ts
Root.tsx
index.ts
public/
icons/
diagrams/
video-script.yaml
package.json
remotion.config.ts
tsconfig.json
Typical scene flow
- Intro - topic and why it matters (8s)
- Context - background information (10s)
- Problem - what goes wrong without this (8s)
- How it works 1-3 - step-by-step walkthrough (10s each)
- Benefits - key advantages (8s)
- Comparison - before/after or vs competitors (10s)
- Use cases - real-world applications (8s)
- CTA - next steps (5s)
Social Clip (15-60s, 3-8 scenes)
File tree
social-clip/
src/
compositions/
Scene01Hook.tsx
Scene02KeyMessage.tsx
Scene03CTA.tsx
components/
BoldText.tsx
AnimatedEmoji.tsx
GradientBackground.tsx
SwipeTransition.tsx
lib/
schema.ts
colors.ts
Root.tsx
index.ts
public/
brand/
video-script.yaml
package.json
remotion.config.ts
tsconfig.json
Key differences from other types
- Much faster pacing (3-8 seconds per scene)
- Bolder typography (120px+ headlines)
- High contrast colors for mobile viewing
- Vertical format option (1080x1920 for Stories/Reels)
- No complex diagrams - text and motion graphics only
- Hook must grab attention in first 2 seconds
Vertical format schema override
export const socialClipSchema = baseSchema.extend({
orientation: z.enum(["landscape", "portrait"]).default("landscape"),
width: z.number().default(3840), // landscape
height: z.number().default(2160), // landscape
// For portrait: width=2160, height=3840
});
Announcement (15-45s, 3-6 scenes)
File tree
announcement/
src/
compositions/
Scene01Reveal.tsx
Scene02Details.tsx
Scene03Availability.tsx
Scene04CTA.tsx
components/
CountdownReveal.tsx
ConfettiExplosion.tsx
DateBadge.tsx
PricingCard.tsx
lib/
schema.ts
colors.ts
Root.tsx
index.ts
public/
brand/
product-images/
video-script.yaml
package.json
remotion.config.ts
tsconfig.json
Typical scene flow
- Reveal - dramatic product/feature reveal with animation (6s)
- Details - what's new, key highlights (8s)
- Availability - when, where, pricing (6s)
- CTA - how to get it (4s)
Root.tsx Pattern (All Types)
import { Composition } from "remotion";
import { z } from "zod";
import { baseSchema } from "./lib/schema";
import { Scene01Hook } from "./compositions/Scene01Hook";
// ... import all scenes
export const RemotionRoot: React.FC = () => {
return (
<>
{/* Individual scene compositions for preview */}
<Composition
id="Scene01"
component={Scene01Hook}
durationInFrames={150} // 5s * 30fps
fps={30}
width={3840}
height={2160}
schema={baseSchema}
defaultProps={{
title: "Your Product",
primaryColor: "#3B82F6",
secondaryColor: "#1E293B",
backgroundColor: "#0F172A",
fontFamily: "Inter",
fps: 30,
}}
/>
{/* Main composition: all scenes stitched together */}
<Composition
id="Main"
component={MainVideo}
durationInFrames={1800} // total frames from script
fps={30}
width={3840}
height={2160}
schema={baseSchema}
defaultProps={{
title: "Your Product",
primaryColor: "#3B82F6",
secondaryColor: "#1E293B",
backgroundColor: "#0F172A",
fontFamily: "Inter",
fps: 30,
}}
/>
</>
);
};
Transition Patterns
Crossfade
<Sequence from={0} durationInFrames={150}>
<Scene01 />
</Sequence>
<Sequence from={135} durationInFrames={165}>
{/* 15 frame overlap = 0.5s crossfade */}
<Scene02 />
</Sequence>
Hard cut
<Sequence from={0} durationInFrames={150}>
<Scene01 />
</Sequence>
<Sequence from={150} durationInFrames={150}>
<Scene02 />
</Sequence>
Slide transition component
const SlideTransition: React.FC<{ direction: "left" | "right"; children: React.ReactNode }> = ({
direction,
children,
}) => {
const frame = useCurrentFrame();
const x = interpolate(frame, [0, 15], [direction === "left" ? 3840 : -3840, 0], {
extrapolateRight: "clamp",
});
return <div style={{ transform: `translateX(${x}px)` }}>{children}</div>;
};
troubleshooting.md
Video Creator - Troubleshooting
Common issues and fixes for Remotion rendering, audio sync, FFmpeg, ElevenLabs API, and performance problems.
Remotion Rendering Errors
"Cannot find module" when running npx remotion render
Cause: Missing dependencies or incorrect import paths.
Fix:
# Install all dependencies
npm install
# Verify the entry point exists
ls src/index.ts
# Check for typos in imports
npx tsc --noEmit
"Composition not found: Main"
Cause: The Main composition is not registered in Root.tsx or the id does
not match.
Fix:
- Verify Root.tsx exports a <Composition id="Main" ... /> element
- Ensure RemotionRoot is registered in src/index.ts:
import { registerRoot } from "remotion";
import { RemotionRoot } from "./Root";
registerRoot(RemotionRoot);
"durationInFrames must be a positive integer"
Cause: Frame count is zero, negative, or a float.
Fix:
- Check video-script.yaml - every scene must have frames as a positive integer
- Verify: Math.round(duration_seconds * 30) for each scene
- Total frames must equal sum of all scene frames
White/blank frames at the end of the video
Cause: Total durationInFrames on the Main composition is larger than the
sum of all scene Sequences.
Fix:
- Calculate: sum(scene.frames) and set Main durationInFrames to that value
- Account for transition overlaps (crossfades reduce total frames)
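The overlap accounting can be expressed as a small helper (function name is illustrative): each crossfade removes its overlap frames from the Main composition's total.

```typescript
// Total Main duration: sum of scene frames minus each crossfade overlap.
// overlapFrames has one entry per transition (length = scenes - 1);
// use 0 for hard cuts.
export function mainDurationInFrames(
  sceneFrames: number[],
  overlapFrames: number[]
): number {
  const total = sceneFrames.reduce((a, b) => a + b, 0);
  const overlap = overlapFrames.reduce((a, b) => a + b, 0);
  return total - overlap;
}
```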
Render crashes with "JavaScript heap out of memory"
Cause: Large assets or complex compositions exceeding Node.js memory limit.
Fix:
# Increase memory limit
NODE_OPTIONS=--max-old-space-size=8192 npx remotion render \
src/index.ts Main out/video.mp4 --width 3840 --height 2160
# Or reduce concurrency
npx remotion render src/index.ts Main out/video.mp4 \
  --width 3840 --height 2160 --concurrency 2
"Could not find a browser" error
Cause: Remotion needs Chromium for rendering.
Fix:
npx remotion browser ensure
Audio Sync Problems
Music starts at wrong time
Cause: Audio <Sequence> from value does not match the intended start
frame.
Fix:
// Music should start at frame 0 and span the entire video
<Sequence from={0} durationInFrames={totalFrames}>
<Audio src={backgroundMusic} volume={0.3} />
</Sequence>
SFX plays out of sync with visual event
Cause: The SFX from frame does not match the visual trigger frame.
Fix:
- Identify the exact frame where the visual event occurs
- Set the SFX Sequence's from prop to that frame number
- Preview in Remotion Studio to verify sync
// Button click happens at frame 45 of Scene03 (frame 420 globally)
<Sequence from={420} durationInFrames={30}>
<Audio src={clickSFX} volume={0.7} />
</Sequence>
Audio continues after video ends
Cause: Audio file is longer than the remaining frames.
Fix:
- Set durationInFrames on the Audio Sequence to clip it
- Or use the endAt prop on the <Audio> component
Volume ducking is too aggressive / not enough
Cause: Ducking volume levels are wrong.
Fix - recommended volume levels:
const musicVolume = (frame: number) => {
// Check if narration is active at this frame
const isNarrationActive = narrationSegments.some(
(seg) => frame >= seg.startFrame && frame <= seg.endFrame
);
if (isNarrationActive) {
return 0.12; // Duck to 12% during narration
}
return 0.4; // Normal level 40%
};
<Audio src={backgroundMusic} volume={musicVolume} />
Audio pops/clicks at segment boundaries
Cause: Abrupt volume changes create audio artifacts.
Fix: Add a short fade (5-10 frames) at volume transition points:
const musicVolume = (frame: number) => {
  const fadeFrames = 8; // ~0.27s fade at 30fps
  // Ramp 0.4 -> 0.12 -> 0.4 smoothly around each narration segment
  return narrationSegments.reduce((vol, seg) => Math.min(vol,
    interpolate(frame,
      [seg.startFrame - fadeFrames, seg.startFrame, seg.endFrame, seg.endFrame + fadeFrames],
      [0.4, 0.12, 0.12, 0.4],
      { extrapolateLeft: "clamp", extrapolateRight: "clamp" })), 0.4);
};
FFmpeg Issues
"FFmpeg not found" on render
Cause: FFmpeg is not installed or not in PATH.
Fix:
# macOS
brew install ffmpeg
# Ubuntu/Debian
sudo apt install ffmpeg
# Verify
ffmpeg -version
Render produces corrupt MP4
Cause: Interrupted render or incompatible codec settings.
Fix:
# Re-render with explicit codec
npx remotion render src/index.ts Main out/video.mp4 \
--width 3840 --height 2160 --codec h264
# For ProRes output
npx remotion render src/index.ts Main out/video.mov \
  --width 3840 --height 2160 --codec prores --prores-profile 4444
Output file is very large
Cause: No CRF (quality) setting or using ProRes unintentionally.
Fix:
# Set CRF for H.264 (lower = better quality, larger file)
# 18 = visually lossless, 23 = default, 28 = smaller file
npx remotion render src/index.ts Main out/video.mp4 \
  --width 3840 --height 2160 --crf 23
ElevenLabs API Errors
401 Unauthorized
Cause: Invalid or missing API key.
Fix:
- Verify key at https://elevenlabs.io/app/settings/api-keys
- Set environment variable:
export ELEVEN_API_KEY=sk_...
- Check key has not expired or been revoked
429 Too Many Requests
Cause: Rate limit exceeded.
Fix:
- Wait 60 seconds and retry
- Process scenes sequentially, not in parallel
- Check your plan's rate limits at https://elevenlabs.io/app/subscription
400 Bad Request on text-to-speech
Cause: Text contains unsupported characters or exceeds length limit.
Fix:
- Remove special characters (emojis, unusual Unicode)
- Keep narration text under 5000 characters per request
- Split long narration into multiple API calls
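A sketch of sentence-boundary chunking for the split (the regex and helper name are illustrative assumptions; a single sentence longer than the limit would still need manual splitting):

```typescript
// Split narration text on sentence boundaries so each chunk stays under
// the per-request character limit (5000 for ElevenLabs, per the note above).
export function splitNarration(text: string, maxChars = 5000): string[] {
  // Greedy split: match runs ending in ., !, or ? plus trailing whitespace.
  const sentences = text.match(/[^.!?]+[.!?]*\s*/g) ?? [text];
  const chunks: string[] = [];
  let current = "";
  for (const sentence of sentences) {
    if (current.length + sentence.length > maxChars && current.length > 0) {
      chunks.push(current.trim());
      current = "";
    }
    current += sentence;
  }
  if (current.trim().length > 0) chunks.push(current.trim());
  return chunks;
}
```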
Generated audio has wrong voice
Cause: Using wrong voice_id.
Fix:
# List available voices
curl -H "xi-api-key: $ELEVEN_API_KEY" \
https://api.elevenlabs.io/v1/voices
# Use the correct voice_id from the response
Audio quality is poor / robotic
Cause: Using the wrong model or stability settings.
Fix:
- Use the eleven_multilingual_v2 model for best quality
- Adjust stability (0.3-0.5 for natural, 0.7-0.9 for consistent)
- Adjust similarity_boost (0.5-0.75 recommended)
{
"model_id": "eleven_multilingual_v2",
"voice_settings": {
"stability": 0.4,
"similarity_boost": 0.65
}
}
Performance Problems
Remotion Studio is very slow / laggy
Cause: 4K preview is too demanding for the machine.
Fix:
- Use the scale slider in Remotion Studio to preview at 50% or 25%
- Close other resource-heavy applications
- Preview individual scenes instead of the full Main composition
Render takes extremely long (>30 min for 60s)
Cause: Complex compositions, large images, or low concurrency.
Fix:
# Increase concurrency (default is half your CPU cores)
npx remotion render src/index.ts Main out/video.mp4 \
--width 3840 --height 2160 --concurrency 8
# Do a quick 1080p test render first
npx remotion render src/index.ts Main out/test.mp4 \
  --width 1920 --height 1080
Images/screenshots appear blurry at 4K
Cause: Source images are too small for 4K resolution.
Fix:
- Use images at least 3840px wide for full-width backgrounds
- Screenshots should be taken at 2x or 3x Retina resolution
- Use SVG for logos and icons (scales infinitely)
Spring animations look janky
Cause: Wrong spring config or fps mismatch.
Fix:
// Use appropriate damping for smooth animation
const scale = spring({
frame,
fps: 30, // Must match composition fps
config: {
damping: 12, // Higher = less bouncy
stiffness: 100, // Higher = faster
mass: 0.5, // Lower = more responsive
},
});
Project Structure Issues
Compositions don't appear in Remotion Studio
Cause: Root.tsx is not properly exporting compositions or not registered.
Fix checklist:
- src/index.ts calls registerRoot(RemotionRoot)
- Root.tsx exports a component that returns <Composition> elements
- Each <Composition> has a unique id
- No runtime errors in any imported component
TypeScript errors block the build
Cause: Type mismatches in Remotion components.
Fix:
# Check for type errors
npx tsc --noEmit
# Common fix: ensure props match the Zod schema
# The schema type and component props must align
type Props = z.infer<typeof mySchema>;
const MyScene: React.FC<Props> = (props) => { ... };
Hot reload not working in Remotion Studio
Cause: File watcher limit reached or wrong file being edited.
Fix:
# macOS: increase file watcher limit
echo kern.maxfiles=524288 | sudo tee -a /etc/sysctl.conf
echo kern.maxfilesperproc=524288 | sudo tee -a /etc/sysctl.conf
# Restart Remotion Studio after changes
# Ctrl+C then npx remotion studio
workflow-checklist.md
Video Creator - Workflow Checklist
Detailed checklist for each of the 7 steps with sub-tasks, expected outputs, and approval criteria.
Step 1: Deep Interview
Sub-tasks
- Check if user provided a reference video
- If reference video exists, run video-analyzer skill first
- Ask about the product/subject (what it does, core value prop, key features)
- Ask about the target audience (who, technical level, pain points)
- Ask about video goals (CTA, where it will be published, success metrics)
- Ask about tone and style (playful vs professional, pace, energy)
- Ask about existing assets (logos, screenshots, brand colors, fonts)
- Ask about content priorities (must-include features, messaging hierarchy)
- Ask about visual preferences (animation style, reference videos, layout)
- Ask about duration and format (target length, platform constraints)
- Summarize context after every 5-8 questions
- Confirm with user that you have enough context
Expected output
- Complete understanding of product, audience, goals, tone, assets, and visual direction documented in conversation history
Approval criteria
- User explicitly confirms: "You have enough context" or equivalent
Common mistakes at this step
- Dumping all questions at once instead of asking one at a time
- Not analyzing a reference video when the user provides one
- Proceeding with only 3-4 questions and guessing the rest
- Not summarizing periodically to confirm understanding
Step 2: Generate Script (YAML)
Sub-tasks
- Determine video type (product demo, explainer, social clip, announcement)
- Calculate total duration and scene count based on type
- Write scene-by-scene breakdown with narration text
- Calculate frame counts: duration_seconds * 30 = frames
- Verify total duration: sum(scene.duration_seconds) == total_duration_seconds
- Add visual descriptions for every scene
- Add animation notes for every scene
- Add music cues and SFX markers
- Add transition types between scenes
- Write video-script.yaml file
- Present full script to user for review
- Iterate on feedback until approved
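The frame math in the sub-tasks above can be sketched as a small helper. This is an illustrative sketch, not part of the skill itself: the `Scene` shape mirrors the YAML fields below, and `framesFor`/`validateScript` are hypothetical names assuming the 30 fps rate used throughout this workflow.

```typescript
const FPS = 30;

// Mirrors the per-scene YAML fields relevant to timing
interface Scene {
  id: string;
  duration_seconds: number;
  frames: number;
}

// frames must equal duration_seconds * 30 for every scene
function framesFor(durationSeconds: number): number {
  return durationSeconds * FPS;
}

// Every scene's frame count must be correct, and scene durations
// must sum exactly to the declared total
function validateScript(scenes: Scene[], totalDurationSeconds: number): boolean {
  const framesOk = scenes.every((s) => s.frames === framesFor(s.duration_seconds));
  const sum = scenes.reduce((acc, s) => acc + s.duration_seconds, 0);
  return framesOk && sum === totalDurationSeconds;
}

const scenes: Scene[] = [
  { id: "scene-01", duration_seconds: 4, frames: 120 },
  { id: "scene-02", duration_seconds: 6, frames: 180 },
];
console.log(validateScript(scenes, 10)); // true
```

Running this check before presenting the script to the user satisfies both approval criteria below mechanically.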
Expected output
- video-script.yaml file with complete scene breakdown
Required YAML fields per scene
- id: scene-XX
title: string
duration_seconds: number
frames: number # duration_seconds * 30
narration: string
visuals: string
animation_notes: string
music_cue: string
sfx: string
transition_out: string
Approval criteria
- User explicitly approves the script
- All frame counts are mathematically correct
- Scene durations sum to total duration
Step 3: Visual Verification
Sub-tasks
- Initialize Remotion project if not already created
- Create composition files for each scene (visuals only)
- Implement animations described in script
- Set correct frame counts per composition
- Implement scene transitions
- Set resolution to 3840x2160
- Run npx remotion studio
- Tell user to preview at http://localhost:3000
- Collect visual feedback
- Iterate on feedback (colors, timing, animations, transitions)
- Get explicit visual approval
Expected output
- Working Remotion project with visual-only compositions
- All scenes previewable in Remotion Studio
Approval criteria
- User explicitly approves visuals: "Looks good" or equivalent
- No audio layer present yet (visual-only verification)
What NOT to do at this step
- Do not add audio files or audio components
- Do not implement volume ducking
- Do not generate narration
- Do not render the final video
Step 4: Build Full Remotion Project
Sub-tasks
- Finalize all compositions with polished animations
- Extract shared components to components/ directory
- Wire up scene-to-scene transitions in Root.tsx
- Add Zod schemas for parametrization
- Replace all magic numbers with schema props
- Verify frame counts match video-script.yaml
- Ensure project builds without errors: npx remotion studio
- Organize file structure cleanly
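The "replace all magic numbers with schema props" sub-task can be sketched as follows. This is a minimal illustration with hypothetical names: in the real project `SceneProps` would be inferred from a Zod schema via `z.infer<typeof sceneSchema>`; it is written as a plain type here so the sketch stands alone.

```typescript
// Stand-in for a Zod-inferred props type
interface SceneProps {
  accentColor: string;
  titleDurationInFrames: number;
  fadeInFrames: number;
}

// Defaults gathered in one place instead of literals scattered in JSX
const defaultSceneProps: SceneProps = {
  accentColor: "#6c5ce7",
  titleDurationInFrames: 90,
  fadeInFrames: 15,
};

// A component reads props.fadeInFrames rather than a hard-coded 15
function fadeOpacity(frame: number, props: SceneProps): number {
  return Math.min(1, frame / props.fadeInFrames);
}

console.log(fadeOpacity(0, defaultSceneProps));  // 0
console.log(fadeOpacity(15, defaultSceneProps)); // 1
```

Centralizing values this way is what makes the Step 3 approved visuals reproducible and tweakable without hunting through composition files.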
Expected output
- Complete, well-organized Remotion project
- Clean component hierarchy
- Zod schemas for all configurable values
File structure check
src/
compositions/ # One file per scene
components/ # Shared UI components
Root.tsx # Main composition wiring
index.ts # Entry point
video-script.yaml # Source of truth
Approval criteria
- Project builds without errors
- Visual output matches Step 3 approved visuals
- Code is clean and well-organized
Step 5: Add Background Audio + SFX
Sub-tasks
- Source background music (user-provided or from documented sources)
- Import music into Remotion project
- Set base music volume (0.3-0.5)
- Place SFX at trigger points from the script
- Set SFX volumes (0.5-0.8)
- Implement ducking infrastructure (volume curves)
- Configure ducking to lower music to 0.1-0.15 during narration segments
- Preview audio-visual sync in Remotion Studio
- Collect feedback on audio mix
- Iterate until approved
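The ducking behaviour described in the sub-tasks above can be sketched as a pure volume function. Names and the `Segment` shape are illustrative; in the real project a function like this would feed a frame-based volume prop on the music track.

```typescript
// Narration segments expressed as frame ranges (hypothetical data shape)
interface Segment {
  startFrame: number;
  endFrame: number;
}

const BASE_MUSIC_VOLUME = 0.4;    // within the 0.3-0.5 base range above
const DUCKED_MUSIC_VOLUME = 0.12; // within the 0.1-0.15 ducked range above

// Music volume at a given frame: ducked while any narration segment is active
function musicVolumeAt(frame: number, narration: Segment[]): number {
  const ducked = narration.some((s) => frame >= s.startFrame && frame < s.endFrame);
  return ducked ? DUCKED_MUSIC_VOLUME : BASE_MUSIC_VOLUME;
}

const narration: Segment[] = [{ startFrame: 30, endFrame: 150 }];
console.log(musicVolumeAt(0, narration));  // 0.4
console.log(musicVolumeAt(60, narration)); // 0.12
```

Building this as a frame-to-volume function in Step 5 is what makes it "ducking infrastructure": in Step 6 the narration segments are simply filled in with real timestamps.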
Expected output
- Background music integrated with correct volume levels
- SFX placed at correct timestamps
- Ducking infrastructure ready for narration layer
Approval criteria
- User approves audio-visual sync
- Music and SFX enhance rather than distract from visuals
Step 6: Add Narration (Optional - costs money)
Sub-tasks
- Confirm user wants narration (this step is skippable)
- Check for ElevenLabs API key
- If no key, guide through setup at https://elevenlabs.io
- Ask voice preference questions (gender, age, accent, energy, warmth)
- Confirm costs before making API calls
- Generate narration audio for each scene
- Measure actual audio durations
- Compare actual durations to script estimates
- Adjust frame counts if durations differ
- Update video-script.yaml with actual durations
- Sync narration with visual timing
- Activate volume ducking during narration segments
- Preview complete audio mix
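Reconciling the script's estimated durations with the measured narration audio can be sketched as follows (hypothetical helper names, assuming the 30 fps rate used throughout):

```typescript
const FPS = 30;

// Recompute a scene's frame count from the measured narration length,
// rounding up so narration is never cut off mid-word
function adjustedFrames(actualDurationSeconds: number): number {
  return Math.ceil(actualDurationSeconds * FPS);
}

// Drift between the script's estimate and the measured audio, in frames
function frameDrift(estimatedFrames: number, actualDurationSeconds: number): number {
  return adjustedFrames(actualDurationSeconds) - estimatedFrames;
}

// Script estimated 120 frames (4 s); generated audio came back at 4.5 s
console.log(adjustedFrames(4.5));  // 135
console.log(frameDrift(120, 4.5)); // 15
```

Any non-zero drift means both the composition's frame count and video-script.yaml need updating before the mix is previewed.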
Expected output
- Narration audio files for each scene
- Updated video-script.yaml with actual durations
- Complete audio mix (music + SFX + narration with ducking)
Approval criteria
- User approves narration quality and sync
- OR user explicitly skips this step
Step 7: Final Preview + 4K Render
Sub-tasks
- Launch full preview in Remotion Studio
- Tell user to review complete video
- Get final approval
- Ask about output format preference (MP4/H.264, ProRes, WebM)
- Run 4K render command
- Verify output file exists and has expected duration
- Report file size and location to user
Render command
npx remotion render src/index.ts Main out/video.mp4 \
  --width 3840 --height 2160
Expected output
- Rendered video file at 3840x2160
Approval criteria
- User confirms final video is acceptable
- Output file plays correctly
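The "expected duration" check in the Step 7 sub-tasks can be sketched as below. The helper names are hypothetical; a real check would measure the rendered file's duration with a probe tool and compare it against the script's frame count.

```typescript
const FPS = 30;

// Expected output duration derived from the script's total frame count
function expectedDurationSeconds(totalFrames: number): number {
  return totalFrames / FPS;
}

// Accept the render if the measured duration is within a small tolerance
function durationMatches(
  measuredSeconds: number,
  totalFrames: number,
  toleranceSeconds = 0.1
): boolean {
  return Math.abs(measuredSeconds - expectedDurationSeconds(totalFrames)) <= toleranceSeconds;
}

// A 1800-frame script should yield a 60 s file
console.log(expectedDurationSeconds(1800)); // 60
console.log(durationMatches(60.03, 1800));  // true
```

A tolerance is used because container muxing can shift the reported duration by a frame or two; an exact equality check would reject perfectly good renders.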
Quick Reference: Step Dependencies
Step 1 (Interview)
|
v
Step 2 (Script YAML)
|
v
Step 3 (Visual Verification) <-- APPROVAL GATE: visuals locked
|
v
Step 4 (Full Remotion Build)
|
v
Step 5 (Audio + SFX)
|
v
Step 6 (Narration) [OPTIONAL] <-- APPROVAL GATE: costs money
|
v
Step 7 (Final Render)

Frequently Asked Questions
What is video-creator?
Use this skill when creating complete videos from scratch - product demos, explainers, social clips, or announcements. Orchestrates the full workflow: deep interview, script generation, visual verification, Remotion project build, audio design, narration, and 4K rendering. Triggers on "make me a video", "create a video about", video production, and end-to-end video creation requests.
How do I install video-creator?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill video-creator in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support video-creator?
video-creator works with claude-code, gemini-cli, openai-codex. Install it once and use it across any supported AI coding agent.