Model Capabilities
Video Editing
Edit an existing video by providing a source video along with your prompt. The model understands the video content and applies your requested changes.
- The input video must have the .mp4 extension and be encoded with an .mp4-supported codec such as H.265, H.264, or AV1.
- The maximum length of the input video provided via the video_url parameter is 8.7 seconds.
- The duration, aspect_ratio, and resolution parameters are not supported for video editing; the output retains the duration and aspect ratio of the input, and matches its resolution, capped at 720p.
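The input limits above can be checked client-side before making a request. A minimal sketch, assuming the source duration is already known (for example, from a tool such as ffprobe); the helper and its constants simply restate the documented constraints and are not part of xai_sdk:

```python
# Documented limit for video editing inputs (not an SDK constant).
MAX_INPUT_SECONDS = 8.7

def validate_edit_input(video_url: str, duration_seconds: float) -> None:
    """Raise ValueError if the source video violates the documented limits."""
    if not video_url.lower().endswith(".mp4"):
        raise ValueError("input video must have the .mp4 extension")
    if duration_seconds > MAX_INPUT_SECONDS:
        raise ValueError(
            f"input video is {duration_seconds:.1f}s; "
            f"the maximum is {MAX_INPUT_SECONDS}s"
        )

# Passes: .mp4 extension and under 8.7 seconds.
validate_edit_input(
    "https://data.x.ai/docs/video-generation/portrait-wave.mp4", 5.0
)
```

Running the check locally avoids a round trip to the API for inputs that would be rejected anyway.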
The demo below shows video editing in action. grok-imagine-video delivers high-fidelity edits with strong scene preservation, modifying only what you ask for while keeping the rest of the video intact:

```python
import os

import xai_sdk

client = xai_sdk.Client(api_key=os.getenv("XAI_API_KEY"))

response = client.video.generate(
    prompt="Give the woman a silver necklace",
    model="grok-imagine-video",
    video_url="https://data.x.ai/docs/video-generation/portrait-wave.mp4",
)

print(response.url)
```

In the Vercel AI SDK, video editing is triggered by setting providerOptions.xai.mode to "edit-video" and passing providerOptions.xai.videoUrl with a source video URL. The prompt describes the desired modifications; duration, aspectRatio, and resolution are ignored because the output inherits these properties from the input video, capped at 720p.
Concurrent Requests
When you need to apply several independent edits to the same source video, run the requests concurrently. This is also useful for branching multiple edits from the same intermediate result.
```python
import os
import asyncio

import xai_sdk

async def edit_concurrently():
    client = xai_sdk.AsyncClient(api_key=os.getenv("XAI_API_KEY"))
    source_video = "https://data.x.ai/docs/video-generation/portrait-wave.mp4"

    prompts = [
        "Give the woman a silver necklace",
        "Change the color of the woman's outfit to red",
        "Give the woman a wide-brimmed black hat",
    ]

    tasks = [
        client.video.generate(
            prompt=prompt,
            model="grok-imagine-video",
            video_url=source_video,
        )
        for prompt in prompts
    ]

    results = await asyncio.gather(*tasks)
    for prompt, result in zip(prompts, results):
        print(f"{prompt}: {result.url}")

asyncio.run(edit_concurrently())
```
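The branching pattern mentioned above — one shared edit, then a concurrent fan-out from its result — can be sketched end to end. In this outline, generate_edit is a hypothetical stand-in for client.video.generate that merely tags the URL so the flow is visible; it also assumes the URL returned by one edit can be passed back as video_url for the next:

```python
import asyncio

async def generate_edit(prompt: str, video_url: str) -> str:
    """Stand-in for client.video.generate(...); returns the edited video's URL.
    Replace the body with the real API call in practice."""
    return f"{video_url}#edited:{prompt}"

async def branch_edits(
    source_url: str, base_prompt: str, branch_prompts: list[str]
) -> list[str]:
    # First pass: produce one shared intermediate edit.
    intermediate_url = await generate_edit(base_prompt, source_url)
    # Second pass: fan out concurrent edits from that intermediate result.
    return list(await asyncio.gather(
        *(generate_edit(p, intermediate_url) for p in branch_prompts)
    ))

urls = asyncio.run(branch_edits(
    "https://data.x.ai/docs/video-generation/portrait-wave.mp4",
    "Give the woman a silver necklace",
    ["Change the color of the woman's outfit to red",
     "Give the woman a wide-brimmed black hat"],
))
```

Because the branches share a single intermediate edit, the base change is applied once rather than repeated in every prompt.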
Related
- Video Generation — Generate videos from text prompts
- Image-to-Video — Animate a still image
- Video Extension — Extend existing videos
- API Reference — Full endpoint documentation
- Imagine API Landing Page — Showcase of the Imagine API in action
Last updated: April 2, 2026