How to maintain character consistency, style consistency, and more in AI video. Prosumers can use Google Veo 3's "High-Quality Chaining" for fast social media content; indie filmmakers can achieve narrative consistency by combining Midjourney V7 for style, Kling for lip-synced dialogue, and Runway Gen-4 for camera control; and professional studios gain full control with a layered ComfyUI pipeline that outputs multi-layer EXR files for standard VFX compositing.
Links
Notes and resources at ocdevel.com/mlg/mla-27
Try a walking desk - stay healthy & sharp while you learn & code
Generate a podcast - use my voice to listen to any AI-generated content you want
AI Audio Tool Selection
Music: Use Suno for complete songs or Udio for high-quality components for professional editing.
Sound Effects: Use ElevenLabs' SFX for integrated podcast production or SFX Engine for large, licensed asset libraries for games and film.
Voice: ElevenLabs gives the most realistic voice output. Murf.ai offers an all-in-one studio for marketing, and Play.ht has a low-latency API for developers.
Open-Source TTS: For local use, StyleTTS 2 generates human-level speech, Coqui's XTTS-v2 is best for voice cloning from minimal input, and Piper TTS is a fast, CPU-friendly option.
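For the open-source route, Coqui's TTS package exposes XTTS-v2 through a small Python API. A minimal voice-cloning sketch, assuming the public `TTS` package and a short clean reference clip (file names are placeholders):

```python
# Voice cloning with Coqui XTTS-v2. Assumes `pip install TTS`.
from TTS.api import TTS

# Download/load the multilingual XTTS-v2 checkpoint.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the voice in speaker.wav (placeholder) and synthesize a line to disk.
tts.tts_to_file(
    text="Welcome back to the show.",
    speaker_wav="speaker.wav",   # short reference audio for cloning
    language="en",
    file_path="narration.wav",
)
```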
I. Prosumer Workflow: Viral Video
Goal: Rapidly produce branded, short-form video for social media. This method bypasses Veo 3's weaker native "Extend" feature.
Toolchain
Image Concept: GPT-4o (API: GPT-Image-1) for its strong prompt adherence, text rendering, and conversational refinement.
Video Generation: Google Veo 3 for high single-shot quality and integrated ambient audio.
Soundtrack: Udio for creating unique, "viral-style" music.
Assembly: CapCut for its standard short-form editing features.
Workflow
Create Character Sheet (GPT-4o): Generate a primary character image with a detailed "locking" prompt, then use conversational follow-ups to create variations (poses, expressions) for visual consistency.
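A minimal sketch of this step via the OpenAI Images API; the prompt wording and file names are illustrative, not prescriptive:

```python
# Generate a "locked" character image with the OpenAI Images API (gpt-image-1).
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
import base64
from openai import OpenAI

client = OpenAI()

# A detailed "locking" prompt: fix name, age, wardrobe, and palette so
# follow-up generations can reference the same description verbatim.
prompt = (
    "Character sheet: 'Juno', a woman in her 30s, short silver hair, "
    "teal bomber jacket, front view, neutral expression, flat studio light"
)

result = client.images.generate(model="gpt-image-1", prompt=prompt, size="1024x1024")
with open("juno_sheet.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```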
Generate Video (Veo 3): Use "High-Quality Chaining":
Clip 1: Generate an 8-second clip from a character-sheet image.
Extract Final Frame: Save the last frame of Clip 1 (a short script for this follows the list).
Clip 2: Use the extracted frame as the image input for the next clip, with a "this then that" prompt to continue the action. Repeat as needed.
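The frame extraction is ordinary video handling. A sketch with OpenCV (file names assumed; some codecs misreport frame counts, so verify the output):

```python
# Save the last frame of clip_01.mp4 so it can seed the next Veo 3 generation.
# Assumes `pip install opencv-python`.
import cv2

cap = cv2.VideoCapture("clip_01.mp4")
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Seek to the final frame and decode it.
cap.set(cv2.CAP_PROP_POS_FRAMES, frame_count - 1)
ok, frame = cap.read()
cap.release()

if ok:
    cv2.imwrite("clip_01_last_frame.png", frame)  # image input for Clip 2
```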
Create Music (Udio): Use Manual Mode with structured prompts ([Genre: ...], [Mood: ...]) to generate and extend a music track.
Final Edit (CapCut): Assemble clips, layer the Udio track over Veo's ambient audio, add text, and use "Auto Captions." Export in 9:16.
II. Indie Filmmaker Workflow: Narrative Shorts
Goal: Create cinematic short films with consistent characters and a storytelling focus, using a hybrid of specialized tools.
Toolchain
Visual Foundation: Midjourney V7 to establish character and style with --cref and --sref parameters.
Dialogue Scenes: Kling for its superior lip-sync and character realism.
B-Roll/Action: Runway Gen-4 for its Director Mode camera controls and Multi-Motion Brush.
Voice Generation: ElevenLabs for emotive, high-fidelity voices.
Edit & Color: DaVinci Resolve for its integrated edit, color, and VFX suite and favorable cost model.
Workflow
Create Visual Foundation (Midjourney V7): Generate a "hero" character image. Use its URL with --cref --cw 100 to create consistent character poses, and with --sref to replicate the visual style in other shots. Assemble a reference set.
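For example, a matching follow-up shot might be prompted roughly like this, where both URLs are placeholders for images from the reference set: `cinematic medium shot of the hero walking through rain at night --cref <hero-image-url> --cw 100 --sref <style-frame-url> --ar 16:9`. The --cw 100 setting keeps face, hair, and clothing locked to the reference.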
Create Dialogue Scenes (ElevenLabs -> Kling): Generate the dialogue track in ElevenLabs and download the audio.
In Kling, generate a video of the character from a reference image with their mouth closed.
Use Kling's "Lip Sync" feature to apply the ElevenLabs audio to the neutral video, aligning mouth movement to the recorded dialogue.
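A minimal sketch of the dialogue-audio step against ElevenLabs' REST text-to-speech endpoint; the voice ID and model ID are placeholders to swap for values from your account:

```python
# Generate a dialogue line with the ElevenLabs text-to-speech REST endpoint.
# Assumes `pip install requests`; VOICE_ID is a placeholder.
import os
import requests

VOICE_ID = "YOUR_VOICE_ID"  # placeholder: pick a voice in the ElevenLabs dashboard
url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

resp = requests.post(
    url,
    headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
    json={
        "text": "We can't stay here. Not after what we saw.",
        "model_id": "eleven_multilingual_v2",  # assumption: a current TTS model ID
    },
)
resp.raise_for_status()

# The response body is audio (MP3 by default); feed this file to Kling's Lip Sync.
with open("dialogue_line.mp3", "wb") as f:
    f.write(resp.content)
```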
Create B-Roll (Runway Gen-4): Use reference images from Midjourney. Apply precise camera moves with Director Mode, or add localized, layered motion to static scenes with the Multi-Motion Brush.
Assemble & Grade (DaVinci Resolve): Edit clips and audio on the Edit page. On the Color page, use node-based tools to match shots from Kling and Runway, then apply a final creative look.
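Resolve also exposes a Python scripting API (the DaVinciResolveScript module ships with Resolve, not via pip). A sketch that batch-imports the Kling and Runway renders into the media pool; the paths are placeholders:

```python
# Batch-import generated clips into the current DaVinci Resolve project.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

# Placeholder paths: wherever the Kling and Runway exports were saved.
clips = [
    "/renders/kling/dialogue_scene_01.mp4",
    "/renders/runway/broll_courtyard.mp4",
]
media_pool.ImportMedia(clips)  # returns a list of MediaPoolItem objects
```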
III. Professional Studio Workflow: Full Control
Goal:聽Achieve absolute pixel-level control, actor likeness, and integration into standard VFX pipelines using an open-source, modular approach.
Toolchain
Core Engine: ComfyUI with Stable Diffusion models (e.g., SD3, FLUX).
VFX Compositing: DaVinci Resolve (Fusion page) for node-based, multi-layer EXR compositing.
Control Stack & Workflow
Train Character LoRA: Train a custom LoRA on a 15-30 image dataset of the actor in ComfyUI to ensure true likeness.
Build ComfyUI Node Graph: Construct a generation pipeline in this order (a conceptual sketch follows the node list):
Loaders: Load the base model, the custom character LoRA, and text prompts (with the LoRA trigger word).
ControlNet Stack: Chain multiple ControlNets to define structure (e.g., OpenPose for skeleton, Depth map for 3D layout).
IPAdapter-FaceID: Use the Plus v2 model as a final reinforcement layer to lock facial identity before animation.
AnimateDiff: Apply deterministic camera motion using Motion LoRAs (e.g., v2_lora_PanLeft.ckpt).
KSampler -> VAE Decode: Generate the image sequence.
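A ComfyUI graph doesn't reproduce well in text, but the same layering can be sketched with Hugging Face diffusers to show the order of operations. Model IDs, file paths, and the trigger word are assumptions; IPAdapter-FaceID and AnimateDiff are noted in comments rather than wired in, to keep the sketch short:

```python
# Conceptual sketch of the control stack (base model + character LoRA +
# ControlNet chain) using diffusers; ComfyUI expresses the same pipeline
# as a node graph. Assumes `pip install diffusers transformers accelerate`.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# ControlNet stack: pose skeleton + depth map define the structure.
controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
]

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

# Character LoRA trained on the actor dataset; "junochar" is the trigger word.
pipe.load_lora_weights("loras/junochar_v1.safetensors")  # placeholder path

pose = load_image("conditioning/pose_0001.png")    # OpenPose skeleton
depth = load_image("conditioning/depth_0001.png")  # depth map

# IPAdapter-FaceID and AnimateDiff motion LoRAs layer on top of this in the
# ComfyUI graph; they are omitted here for brevity.
frame = pipe(
    prompt="junochar walking through a neon alley, cinematic lighting",
    image=[pose, depth],
    controlnet_conditioning_scale=[1.0, 0.6],
    num_inference_steps=30,
).images[0]
frame.save("frame_0001.png")
```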
Export Multi-Layer EXR: Use a node like mrv2SaveEXRImage to save the output as an EXR sequence (.exr). Configure for a professional pipeline: 32-bit float, linear color space, and PIZ/ZIP lossless compression. This preserves render passes (diffuse, specular, mattes) in a single file; the sketch below shows the equivalent settings in code.
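A sketch of a conforming single-part EXR write using OpenCV's EXR codec, useful for verifying bit depth and compression settings (multi-layer pass packing stays with the ComfyUI node above; the frame array here is a placeholder):

```python
# Write a 32-bit float, linear-light EXR frame. OpenCV's EXR codec must be
# enabled via this environment variable BEFORE cv2 is imported.
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"

import cv2
import numpy as np

# Placeholder: a decoded frame in linear color, float32; values may exceed 1.0.
frame = np.random.rand(1080, 1920, 3).astype(np.float32)

# PIZ is lossless wavelet compression; ZIP is the other common lossless
# choice in VFX pipelines (cv2.IMWRITE_EXR_COMPRESSION_ZIP).
cv2.imwrite(
    "shot010_0001.exr",
    frame,
    [cv2.IMWRITE_EXR_COMPRESSION, cv2.IMWRITE_EXR_COMPRESSION_PIZ],
)
```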
Composite in Fusion: In DaVinci Resolve, import the EXR sequence. Use Fusion's node graph to access individual layers, allowing separate adjustments to elements like color, highlights, and masks before integrating the AI asset into a final shot with a background plate.