English | 简体中文 | 繁體中文 | Español | Deutsch | Français | 日本語 | 한국어 | Türkçe | Русский
# Seedance 2.0

> **New:** human face generation is now available.
Seedance 2.0 Gateway Service pricing, models, text-to-video, image-to-video, and reference-to-video in one unified guide.
View Seedance 2.0 Pricing · Get API Key · Read API Docs
Seedance 2.0 Gateway Service is a video generation gateway service for creating AI videos from text prompts, images, and multimodal references. Through EvoLink.ai, developers can access the full Seedance 2.0 model family with one consistent API workflow:
- create a generation task
- receive a task ID immediately
- poll task status or receive a callback
- download the generated video result
This repository is designed for developers who want to:
- understand Seedance 2.0 Gateway Service pricing and model differences
- rank for searches like "Seedance 2.0 API", "Seedance 2 API price", and "Seedance API pricing"
- compare text-to-video, image-to-video, and reference-to-video modes
- understand the difference between standard and fast models
- copy production-ready request examples
- estimate pricing before shipping
- discover related Seedance resources across the EvoLinkAI GitHub ecosystem
Standard models: `seedance-2.0-text-to-video` · `seedance-2.0-image-to-video` · `seedance-2.0-reference-to-video`

Fast models: `seedance-2.0-fast-text-to-video` · `seedance-2.0-fast-image-to-video` · `seedance-2.0-fast-reference-to-video`
Create a Seedance 2.0 video task with a single API call:
```bash
curl --request POST \
  --url https://api.evolink.ai/v1/videos/generations \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "seedance-2.0-text-to-video",
    "prompt": "A cinematic aerial shot of a futuristic city at sunrise, soft clouds, reflective skyscrapers, smooth camera motion",
    "duration": 5,
    "quality": "720p",
    "aspect_ratio": "16:9",
    "generate_audio": true
  }'
```

Example response:
```json
{
  "id": "task-unified-1774857405-abc123",
  "model": "seedance-2.0-text-to-video",
  "object": "video.generation.task",
  "status": "pending",
  "progress": 0,
  "type": "video"
}
```

All Seedance 2.0 models use the same task-based workflow.
Create a task:

```
POST https://api.evolink.ai/v1/videos/generations
```

Query task status:

```
GET https://api.evolink.ai/v1/tasks/{task_id}
```

When the task is completed, the response returns generated video URLs in the result payload.
You can pass `callback_url` in the create request if you want asynchronous notifications instead of polling.
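As a minimal sketch of the callback path, the receiver below accepts a POST and reads the task's `id` and `status`. The payload shape is an assumption based on the task object shown in the example response above; check the API docs for the exact schema, and note EvoLink requires the callback URL to be HTTPS.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def parse_callback(body: bytes) -> dict:
    """Extract the fields we care about from a callback payload.

    Assumes the callback body mirrors the task object ("id", "status");
    adjust to the real schema if it differs.
    """
    task = json.loads(body)
    return {"id": task.get("id"), "status": task.get("status")}


class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        info = parse_callback(self.rfile.read(length))
        print(f"task {info['id']} is now {info['status']}")
        self.send_response(200)  # acknowledge so the gateway stops retrying
        self.end_headers()


def run_server(port: int = 8080) -> None:
    # Expose this server behind an HTTPS reverse proxy and pass its
    # public URL as callback_url when creating the task.
    HTTPServer(("", port), CallbackHandler).serve_forever()
```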
| Model | Input Type | Best For | Notes |
|---|---|---|---|
| `seedance-2.0-text-to-video` | text only | prompt-first video generation | supports optional web search |
| `seedance-2.0-image-to-video` | 1-2 images | first-frame or first/last-frame animation | ideal for image animation workflows |
| `seedance-2.0-reference-to-video` | images, videos, audio, text | advanced multimodal generation and editing | best for editing, extension, and guided generation |
| `seedance-2.0-fast-text-to-video` | text only | faster prompt-based generation | same basic workflow as the standard text model |
| `seedance-2.0-fast-image-to-video` | 1-2 images | faster image animation | supports more image formats than the standard image model |
| `seedance-2.0-fast-reference-to-video` | images, videos, audio, text | faster multimodal generation | strong default for quick iteration |
| Parameter | Type | Description |
|---|---|---|
| `model` | string | selects the Seedance 2.0 model |
| `prompt` | string | generation prompt, supported across all model families |
| `duration` | integer | output video duration, 4-15 seconds, or `-1` for smart duration |
| `quality` | string | `480p` or `720p` |
| `aspect_ratio` | string | `16:9`, `9:16`, `1:1`, `4:3`, `3:4`, `21:9`, or `adaptive` |
| `generate_audio` | boolean | whether synchronized audio should be generated |
| `callback_url` | string | optional HTTPS callback URL |
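The constraints in this table can be enforced client-side before a request is sent. The helper below is a hypothetical sketch (not part of any SDK) that builds a request body and rejects out-of-range values:

```python
VALID_QUALITY = {"480p", "720p"}
VALID_ASPECT = {"16:9", "9:16", "1:1", "4:3", "3:4", "21:9", "adaptive"}


def build_request(model, prompt, duration=5, quality="720p",
                  aspect_ratio="16:9", generate_audio=False,
                  callback_url=None):
    """Build a generation request body, validating against the
    parameter table above. Defaults here are illustrative."""
    if not (duration == -1 or 4 <= duration <= 15):
        raise ValueError("duration must be 4-15 seconds, or -1 for smart duration")
    if quality not in VALID_QUALITY:
        raise ValueError(f"quality must be one of {sorted(VALID_QUALITY)}")
    if aspect_ratio not in VALID_ASPECT:
        raise ValueError(f"aspect_ratio must be one of {sorted(VALID_ASPECT)}")
    body = {
        "model": model,
        "prompt": prompt,
        "duration": duration,
        "quality": quality,
        "aspect_ratio": aspect_ratio,
        "generate_audio": generate_audio,
    }
    if callback_url:  # optional: only include when notifications are wanted
        body["callback_url"] = callback_url
    return body
```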
Use `seedance-2.0-text-to-video` or `seedance-2.0-fast-text-to-video` when you want to generate a video from prompt text only.
Key points:
- no image, video, or audio inputs
- supports optional `model_params.web_search`
- good for concept generation and trend-aware prompts
Docs: `docs/text-to-video.md` · `docs/fast-models.md`
Use `seedance-2.0-image-to-video` or `seedance-2.0-fast-image-to-video` when you want to animate one or two images.
Key points:
- `image_urls` accepts 1-2 images
- 1 image = first-frame animation
- 2 images = first-frame to last-frame transition
- useful for marketing, product visuals, social posts, and stylized motion clips
Docs: `docs/image-to-video.md` · `docs/fast-models.md`
Use `seedance-2.0-reference-to-video` or `seedance-2.0-fast-reference-to-video` for the most control.
Key points:
- supports `image_urls`, `video_urls`, and `audio_urls`
- can create new outputs from multimodal references
- can extend, edit, or recompose videos
- input reference videos affect billing
Docs: `docs/reference-to-video.md` · `docs/fast-models.md`
For text-to-video and image-to-video tasks, pricing is based on output duration:
cost = output video duration in seconds × resolution price
| Resolution | Price |
|---|---|
| 480p | 4.63 credits / second |
| 720p | 10.00 credits / second |
For reference-to-video tasks, input reference video duration is also billed:
cost = (input reference video duration + output video duration) × resolution price
- audio generation has no extra charge
- `web_search` costs 0.04 credits per actual search call
- smart duration (`-1`) reserves 10 seconds first, then settles by actual output length
- 1 credit = 10,000 UC = ¥0.10
Detailed breakdown: `docs/pricing.md`
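The two pricing formulas above can be folded into one estimator, since text-to-video and image-to-video are just the special case of zero input reference seconds. This is a sketch based on the published rates; the function name and signature are illustrative, not part of any SDK:

```python
# Credits per second of video, from the pricing table above.
RATES = {"480p": 4.63, "720p": 10.00}


def estimate_cost(output_seconds, quality="720p",
                  input_reference_seconds=0.0, web_search_calls=0):
    """Estimate task cost in credits.

    input_reference_seconds only applies to reference-to-video tasks;
    for smart duration (-1), 10 seconds are reserved up front and the
    charge settles by actual output length, so estimate with your
    expected output duration.
    """
    cost = (input_reference_seconds + output_seconds) * RATES[quality]
    cost += 0.04 * web_search_calls  # billed per actual search call
    return round(cost, 2)


# 8-second 720p text-to-video: 8 × 10.00 = 80 credits
print(estimate_cost(8, "720p"))
# reference-to-video: 6s input reference + 10s output at 480p
print(estimate_cost(10, "480p", input_reference_seconds=6))
```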
Text-to-video:

```bash
curl --request POST \
  --url https://api.evolink.ai/v1/videos/generations \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "seedance-2.0-text-to-video",
    "prompt": "A macro shot of a glass frog on a green leaf, focus shifting to its transparent body and visible beating heart",
    "duration": 8,
    "quality": "720p",
    "aspect_ratio": "16:9",
    "generate_audio": true
  }'
```

Image-to-video:

```bash
curl --request POST \
  --url https://api.evolink.ai/v1/videos/generations \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "seedance-2.0-image-to-video",
    "prompt": "The camera slowly pushes in as the still image comes alive",
    "image_urls": ["https://example.com/first-frame.jpg"],
    "duration": 5,
    "aspect_ratio": "adaptive"
  }'
```

Reference-to-video:

```bash
curl --request POST \
  --url https://api.evolink.ai/v1/videos/generations \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "seedance-2.0-reference-to-video",
    "prompt": "Use video 1 camera movement throughout the clip and audio 1 as background music",
    "image_urls": ["https://example.com/ref1.jpg"],
    "video_urls": ["https://example.com/reference.mp4"],
    "audio_urls": ["https://example.com/bgm.mp3"],
    "duration": 10,
    "quality": "720p",
    "aspect_ratio": "16:9",
    "generate_audio": true
  }'
```

More code:
Python:

```python
import time

import requests

headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}

# Create the generation task.
create_resp = requests.post(
    "https://api.evolink.ai/v1/videos/generations",
    headers=headers,
    json={
        "model": "seedance-2.0-text-to-video",
        "prompt": "A cinematic drone shot above snowy mountains at sunrise",
        "duration": 5,
        "quality": "720p",
        "aspect_ratio": "16:9",
    },
).json()

task_id = create_resp["id"]

# Poll until the task finishes.
while True:
    task = requests.get(
        f"https://api.evolink.ai/v1/tasks/{task_id}",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
    ).json()
    if task.get("status") in ("completed", "failed"):
        print(task)
        break
    time.sleep(3)
```

Node.js:

```javascript
const createResp = await fetch("https://api.evolink.ai/v1/videos/generations", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.EVOLINK_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    model: "seedance-2.0-fast-text-to-video",
    prompt: "A neon-lit cyberpunk alley in the rain, cinematic camera movement",
    duration: 5,
    quality: "720p",
    aspect_ratio: "16:9",
    generate_audio: true
  })
});

const task = await createResp.json();
console.log(task);
```

Seedance 2.0 Gateway Service is well suited for:
- AI video generation apps
- creative tooling and editor workflows
- image animation pipelines
- video ad generation
- social media content creation
- product demo generation
- multimodal video editing workflows
- prototype filmmaking and concept visualization
**Is video generation synchronous?**
No. Seedance 2.0 generation is asynchronous. You create a task first, then query task status later or use a callback URL.

**How do fast models differ from standard models?**
Fast models follow the same request pattern but are positioned for quicker iteration. Compare them by workflow needs, latency expectations, and output preference.

**Can I generate a video from text only?**
Yes. Use `seedance-2.0-text-to-video` or `seedance-2.0-fast-text-to-video`.

**Can I animate my own images?**
Yes. Use image-to-video. One image becomes the first frame; two images become the first and last frames.

**Can I edit or extend an existing video?**
Yes. Use reference-to-video with `video_urls` and a guiding prompt.

**Are input reference videos billed?**
Yes. Reference video duration is included in billing for reference-to-video tasks.

**How long are generated video URLs valid?**
Generated video URLs are valid for 24 hours. Save them promptly.
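Because URLs expire after 24 hours, it is worth downloading results as soon as a task completes. A minimal sketch follows; the result payload shape (`task["result"]["videos"][0]["url"]`) is an assumption, so check the API docs for the exact field names:

```python
import urllib.request


def first_video_url(task: dict) -> str:
    """Pull the first video URL out of a completed task payload.
    The nesting here is assumed, not confirmed by the API docs."""
    return task["result"]["videos"][0]["url"]


def save_video(task: dict, path: str = "output.mp4") -> str:
    """Download the generated video to a local file before the URL expires."""
    with urllib.request.urlopen(first_video_url(task), timeout=120) as resp, \
            open(path, "wb") as f:
        f.write(resp.read())
    return path
```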
```text
seedance-2-api/
├── README.md
├── assets/
│   └── banner.jpg
├── docs/
│   ├── text-to-video.md
│   ├── image-to-video.md
│   ├── reference-to-video.md
│   ├── fast-models.md
│   └── pricing.md
└── examples/
    ├── curl/
    ├── nodejs/
    └── python/
```
These docs now cover both standard and fast variants for text-to-video, image-to-video, and reference-to-video.
- Seedance 2.0 Gateway Service: Price and Guide — this repository
- Seedance 2 Video Gen Skill for OpenClaw — OpenClaw skill integration for Seedance workflows
- Awesome Seedance 2 Guide — broader Seedance ecosystem guide and discovery entry
Please review Regional Availability before integration.
MIT
Status: you can integrate against the docs today. Once the Seedance 2.0 Gateway Service opens up, we'll notify early-access users.