Edit your videos

Edit, restyle, extend, and remix your videos with AI.

Models we recommend

Style transfer and restyling

Luma Modify Video transforms the visual style of a video while preserving its structure. Three modes: adhere for subtle tweaks, flex for stylistic changes, and reimagine for dramatic transformations. Supports videos up to 30 seconds.
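As a rough sketch of how a call might be assembled with Replicate's Python client, here is a hypothetical input builder. The field names (`video`, `prompt`, `mode`) are assumptions; check the model page for the actual schema.

```python
# Hypothetical payload builder for luma/modify-video.
# Field names (video, prompt, mode) are assumptions -- see the model page.

VALID_MODES = ("adhere", "flex", "reimagine")  # subtle -> dramatic

def modify_video_input(video_url: str, prompt: str, mode: str = "flex") -> dict:
    """Validate the mode and assemble the model input."""
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {VALID_MODES}, got {mode!r}")
    return {"video": video_url, "prompt": prompt, "mode": mode}

# With REPLICATE_API_TOKEN set, you would then run something like:
#   import replicate
#   output = replicate.run("luma/modify-video",
#       input=modify_video_input(url, "watercolor style", mode="reimagine"))
```

Start with adhere when you want the edit to stay close to the source, and move toward reimagine as you want more dramatic changes.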

Text-based video editing

Kling 3.0 Omni edits videos using natural language instructions — swap backgrounds, change lighting, modify clothing, or apply style transfers while keeping the original motion. Also handles reference-based editing with up to 7 reference images.
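A similar hypothetical builder for reference-based editing; the 7-image limit comes from the description above, but the input field names are assumptions.

```python
# Hypothetical input builder for Kling 3.0 Omni reference-based edits.
# Field names are assumptions; the 7-reference-image limit is from the text.

def kling_edit_input(video_url: str, instruction: str, reference_images=()) -> dict:
    """Reject requests that exceed the 7-reference-image limit."""
    refs = list(reference_images)
    if len(refs) > 7:
        raise ValueError(f"at most 7 reference images supported, got {len(refs)}")
    return {"video": video_url, "prompt": instruction, "reference_images": refs}
```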

Wan 2.7 VideoEdit edits videos with text instructions while preserving the original motion and structure. Open-source, supports optional reference images, 720p and 1080p output.

Grok Imagine Video from xAI includes a video editing mode alongside its generation capabilities.

Extend and continue videos

Grok Imagine Video Extension seamlessly extends any video by 2-10 seconds. Describe what happens next and it generates a continuation from the last frame — maintaining visual style, motion, and consistency. Great for lengthening clips or building longer sequences iteratively.
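Because each pass is capped at 10 seconds, longer extensions are built iteratively. The sketch below clamps a single request to the 2-10 second range and splits a larger target into sequential passes; the input field names are assumptions.

```python
# Sketch of iterative extension: each pass continues from the previous
# result's last frame. Input field names are assumptions.

def extension_input(video_url: str, prompt: str, extend_seconds: int) -> dict:
    """Clamp one request to the model's 2-10 second range."""
    seconds = max(2, min(10, extend_seconds))
    return {"video": video_url, "prompt": prompt, "duration": seconds}

def plan_extensions(total_extra_seconds: int) -> list:
    """Split a long extension into sequential passes of at most 10 seconds."""
    plan, remaining = [], total_extra_seconds
    while remaining > 0:
        step = min(10, max(2, remaining))
        plan.append(step)
        remaining -= step
    return plan

print(plan_extensions(25))  # prints [10, 10, 5]
```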

Reframe and resize

Luma Reframe Video changes the aspect ratio of any video (e.g. landscape to portrait for social media) with AI-generated content filling the expanded areas. Outputs 720p, supports videos up to 30 seconds.
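A minimal sketch of mapping a target format to an aspect-ratio string for a reframing call; the accepted ratio strings and field names are assumptions, so confirm them on the model page.

```python
# Sketch for luma/reframe-video: map a target format to an aspect-ratio
# string. The accepted ratio strings are assumptions.

ASPECT_RATIOS = {"landscape": "16:9", "portrait": "9:16", "square": "1:1"}

def reframe_input(video_url: str, target: str) -> dict:
    """Build a reframe request for a named target format."""
    if target not in ASPECT_RATIOS:
        raise ValueError(f"target must be one of {sorted(ASPECT_RATOS) if False else sorted(ASPECT_RATIOS)}")
    return {"video": video_url, "aspect_ratio": ASPECT_RATIOS[target]}
```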

Add audio to video

MMAudio generates contextual audio — background music, sound effects, and ambient sounds — that matches the motion and mood of your video.

ThinkSound is another option for synchronized audio generation.

Basic video tools

Frequently asked questions

Which models are the fastest?

For quick edits and smaller clips, luma/reframe-video is one of the fastest options—it can reformat short videos (up to 30 seconds) in 720p almost instantly.

lucataco/trim-video and lucataco/video-merge are also lightweight tools designed for snappy turnaround when cutting or combining short clips.

Which models offer the best balance of cost and quality?

If you want strong quality with minimal compute, luma/modify-video is a great middle ground. It supports style transfer and prompt-based edits without requiring long render times.

For workflows that combine enhancement and output-ready results, lucataco/video-utils provides versatile functions (trim, merge, reframe) in one package.

What works best for stylizing or transforming videos?

For creative transformations, luma/modify-video lets you apply visual style changes directly from a text prompt. You can make your clip look painted, cinematic, or stylized without manual editing.

If you want to go beyond visuals and add synchronized sound effects, try zsxkib/mmaudio—it generates contextual audio that matches motion and mood in the video.

What works best for reframing or resizing clips?

luma/reframe-video specializes in changing aspect ratios while keeping subjects centered. It’s ideal for adapting horizontal footage for vertical formats like TikTok or Reels.

It outputs in 720p and supports videos up to 30 seconds long, making it well-suited for social content workflows.

What’s the difference between key subtypes or approaches in this collection?

There are three main editing categories:

  • Restyling and text-based editing (e.g. luma/modify-video, Kling 3.0 Omni), which changes how a video looks while preserving its motion.
  • Extension (e.g. Grok Imagine Video Extension), which continues a clip from its last frame.
  • Reframing (e.g. luma/reframe-video), which changes a clip's aspect ratio and fills the expanded areas with generated content.

Each category can be used separately or chained together for more complex edits.

What kinds of outputs can I expect from these models?

Most models return enhanced or edited MP4 videos, though some (like lucataco/extract-audio or lucataco/frame-extractor) produce separate audio files or image frames.

Visual edits preserve motion while changing tone, style, or composition; audio models create synchronized, AI-generated soundtracks.

How can I self-host or push a model to Replicate?

Some of these video-editing models are open source (for example, Wan 2.7 VideoEdit). You can fork an open-source model and customize it using Cog or Docker.

To publish your own model, define its environment in a cog.yaml file and its inputs and outputs in a predictor class, then push it to your account with cog push; it will run on Replicate's managed GPUs.
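As a sketch, a minimal cog.yaml for a GPU video model might look like this (the package versions are placeholders, not recommendations):

```yaml
build:
  gpu: true
  python_version: "3.11"
  python_packages:
    - "torch==2.1.0"   # placeholder version
predict: "predict.py:Predictor"
```

The inputs and outputs themselves are declared on the Predictor class in predict.py, and `cog push r8.im/<your-username>/<model-name>` uploads the built image to your account.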

Can I use these models for commercial work?

Yes, many of the tools listed are licensed for commercial use, but always confirm on each model’s page.

If a model includes third-party data (like pretrained style references), check for attribution or redistribution requirements before using outputs in published media.

How do I use or run these models?

Upload your source video, choose the desired transformation or effect, and click Run.

For example, you can reframe a landscape clip into portrait format with luma/reframe-video, or apply a “cartoon” style prompt in luma/modify-video.

You can also chain tasks—extract frames, enhance visuals, and then merge the results into one final file.
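The chaining idea can be illustrated with a toy pipeline in which each step's output video feeds the next model. Here `fake_run` stands in for `replicate.run` (which needs an API token and network access); the model names are from this collection, but the input fields are assumptions.

```python
# Toy illustration of chaining edits: each step's output feeds the next.
# fake_run stands in for replicate.run; input field names are assumptions.

def fake_run(model: str, inputs: dict) -> str:
    """Pretend to run a model and return an output 'URL'."""
    return f"{inputs['video']}|{model}"

def run_chain(video_url: str, steps) -> str:
    """Thread the current video URL through each (model, build_input) step."""
    current = video_url
    for model, build_input in steps:
        current = fake_run(model, build_input(current))
    return current

steps = [
    ("luma/reframe-video", lambda v: {"video": v, "aspect_ratio": "9:16"}),
    ("luma/modify-video",  lambda v: {"video": v, "prompt": "cartoon style"}),
]
print(run_chain("clip.mp4", steps))  # prints clip.mp4|luma/reframe-video|luma/modify-video
```

In a real pipeline you would replace `fake_run` with `replicate.run(model, input=inputs)` and pass each step's output URL forward in the same way.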

What should I know before running a job in this collection?

  • Shorter clips (under 30 seconds) process faster and more reliably.
  • Keep input resolution moderate if you’re applying heavy style transfer.
  • For videos with dialogue or music, run lucataco/extract-audio first, edit visuals separately, and re-merge using lucataco/video-audio-merge.
  • Some models produce fixed-resolution outputs (e.g., 720p for luma/reframe-video), so check before scaling.
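The checklist above can be folded into a small pre-flight check before submitting a job. The thresholds here are rough guidelines drawn from the list, not hard limits enforced by any model.

```python
# Hypothetical pre-flight check for a video-editing job; thresholds are
# rough guidelines from the checklist, not hard model limits.

def preflight_warnings(duration_s: float, height_px: int, heavy_style: bool) -> list:
    """Return human-readable warnings for risky job settings."""
    warnings = []
    if duration_s > 30:
        warnings.append("clip over 30 seconds: slower and less reliable")
    if heavy_style and height_px > 1080:
        warnings.append("heavy style transfer on high-res input: consider downscaling")
    return warnings
```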

Any other collection-specific tips or considerations?