Seedance 2.0 — Access Channels and Face Upload Policy
Where you can actually access ByteDance's Seedance 2.0 video model (Volcengine, Jimeng, fal.ai, Replicate, and more), plus the model-level real-face upload block enforced after Disney/MPA pressure and the digital-avatar workaround.
A Full Breakdown of Seedance 2.0's Access Channels and Face Policy
First, the three things that matter most: (1) ByteDance has not released a formal product named "Seedance 2.0 Pro" — the official lineup includes only Seedance 2.0 (the main version, model ID doubao-seedance-2-0-260128) and Seedance 2.0 Fast (an accelerated version); some overseas aggregator sites call it "Pro 2.0," which is a marketing mix-up. (2) Seedance 2.0 launched on February 12, 2026 on Jimeng, Doubao, and the Volcengine Ark experience center, with full API availability on Volcengine / BytePlus opening on April 14, 2026. (3) All of ByteDance's official channels and the vast majority of third-party channels now block real-face uploads at the model layer — a restriction tightened after the February 9 "Tim from Film Storm got deepfaked" incident and Disney/MPA copyright pressure. This is a policy decision, not a technical limitation. The rest of this article breaks down the access channels, the face policy, and practical channel selection in turn.
1. Overview of Access Channels
Seedance 2.0's footprint falls into roughly three rings: ByteDance's own consumer products (Jimeng, Doubao, Xiaoyunque), ByteDance's enterprise products (Volcengine Ark / BytePlus ModelArk), and a dozen-plus overseas aggregators and creator platforms. The three major public clouds (AWS Bedrock, Google Vertex AI, Azure AI Foundry) have yet to ship Seedance, with their video model slots occupied by Veo, Luma, and Runway.
Channel Quick Reference (grouped by audience)
| Channel | Owner | Form | Region | Pricing (original figures) | Model ID / invocation |
|---|---|---|---|---|---|
| Volcengine Ark (volcengine) | ByteDance enterprise | HTTP API + SDK + console | Mainland China | Pure generation 46 RMB per million tokens; with video input 28 RMB per million tokens; a 15-second video ≈ 15 RMB ≈ 1 RMB per second | doubao-seedance-2-0-260128 |
| Jimeng AI (jimeng.jianying.com) | ByteDance consumer | Web + iOS/Android | Mainland China | Basic membership 69 RMB/month / 725 credits; a 15-second video consumes 90–195 credits; after the April 8 price hike, credits were broadly reduced | Model selector "Seedance 2.0" / "Fast" |
| Doubao App/Web | ByteDance consumer | Conversational + video generation entry | Mainland China | ~10 free uses per day, no top-up option | In-app "AI Creation → Video Generation" |
| Xiaoyunque (xyq.jianying.com) | ByteDance internal | Web | Mainland China | Requires a 1-RMB trial membership to unlock 2.0 | — |
| CapCut International (Video Studio / AI Video) | ByteDance consumer | Embedded in editor | Southeast Asia and Latin America first (Indonesia, Philippines, Thailand, Vietnam, Malaysia, Brazil, Mexico) | Included in CapCut Pro subscription with a fixed monthly quota | — |
| Dreamina International | ByteDance consumer | Web | Overseas | Basic ≈ $9.60/month; top tier $70–$84/month | — |
| BytePlus ModelArk | ByteDance enterprise | API + console | Overseas (ap-southeast-1) | Resource packs Light/Production/Premium, roughly 28/40/52 480p videos each; token-based | dreamina-seedance-2-0-260128 |
| Replicate | Third-party | API + Playground | Global | Billed by token, same formula as fal | bytedance/seedance-2.0, bytedance/seedance-2.0-fast |
| fal.ai | Third-party | API | Global | 720p standard $0.3034/sec, Fast $0.2419/sec; reference video input ×0.6 | Six endpoints (T2V/I2V/Reference, all with Fast variants) |
| PiAPI | Third-party | API | Global | $0.13/sec (standard) / $0.10/sec (Fast) | seedance-2, seedance-2-fast |
| Higgsfield AI | Third-party | Web | Global (excluding US and Japan in the early window) | 15-second 720p ≈ 90 credits; Plus plan $34/month ≈ 11–12 videos | — |
| Krea AI | Third-party | Web | Global | Available on all paid plans; a launch promotion offered "a week of unlimited + 50% off" | — |
| Freepik | Third-party | Web | Global paid tier | Credit-based | Supports 480p/720p/1080p, 2–12 seconds, multi-shot |
| Pollo AI | Third-party | Web | Global | Subscription credit pool, "Seedance 2.0 Free" available for trial | — |
| Runway | Third-party | Web | Non-US launch first (April 7), Japan shortly after | Standard $12/month ≈ 6–12 videos; Unlimited about $76–$95/month | T2V / References / Start-End frame, three modes |
| OpenRouter | Third-party | API aggregator | Global | From $7/M tokens | bytedance/seedance-2.0 (added April 15) |
| ComfyUI | Community | Nodes | Global | Requires a third-party API (muapi/Sjinn) for forwarding | Anil-matcha/seedance2-comfyui and others |
| Runware, WaveSpeed, Atlas Cloud, MuAPI, Magic Hour, GlobalGPT, etc. | Third-party | API | Global | Varies by vendor | Most offer standard + Fast tiers |
| AWS Bedrock / Google Vertex AI / Azure AI Foundry | Public cloud | — | — | Not available | — |
A few key notes on pricing: fal.ai's token formula (W×H×duration×24)/1024 has become the de facto industry standard, and Replicate, OpenRouter, and PiAPI all follow it. The most transparent per-second pricing comes from fal.ai ($0.3034/sec) and PiAPI ($0.13/sec) — a gap of roughly 2.3×, so budget-sensitive bulk users should compare these two directly. Volcengine Ark works out to roughly 1 RMB/sec for enterprise users, sitting between the two, but offers higher specs: 2K resolution, 15-second single segments, and 10 concurrent requests (the BytePlus overseas version is still limited to 720p).
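The billing formula is simple enough to transcribe directly. The function below is a minimal sketch of the quoted (W×H×duration×24)/1024 formula, with the 15-second 720p case worked out; it assumes integer-exact division, which holds for the standard resolutions.

```python
def seedance_tokens(width: int, height: int, seconds: int, fps: int = 24) -> int:
    """Video token count per the fal.ai-style formula: (W * H * duration * 24) / 1024."""
    return width * height * seconds * fps // 1024

# A 15-second 720p (1280x720) clip:
tokens = seedance_tokens(1280, 720, 15)
print(tokens)  # 324000
```

At OpenRouter's quoted floor of $7 per million tokens, that 15-second clip works out to about $2.27, whereas per-second billers like fal.ai and PiAPI skip the token arithmetic entirely.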
2. A Detailed Breakdown of Face Upload Policy
The most important conclusion: Seedance 2.0 enforces face detection at the model layer, and any uploaded image or video containing a photorealistic human face is blocked outright — including your own face, photos of strangers, faces partly obscured by glasses/helmets/sunglasses, and even highly realistic AI-generated faces. This restriction applies uniformly across all of ByteDance's consumer channels (Jimeng, Doubao, Xiaoyunque, CapCut, Dreamina) and enterprise channels (Volcengine Ark, BytePlus ModelArk), and it propagates through the API to third-party platforms like Replicate, fal.ai, Runway, and Morphic — third-party frontends typically do not apply face filtering themselves, but any request that reaches the Seedance 2.0 model comes back with a "blocked by Seedance 2.0" error.
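Because the block happens at the model layer, client code talking to any third-party frontend should treat it as an in-band error rather than a transport failure. The sketch below assumes a hypothetical error-body shape for illustration — real providers wrap the model's rejection in their own response schemas — the point is that an HTTP 200 from an aggregator can still carry a Seedance-side rejection.

```python
def is_face_blocked(response_body: dict) -> bool:
    """Heuristic check for Seedance 2.0's model-layer face block.
    The field names and wording below are assumptions for illustration;
    consult your provider's docs for the actual error schema."""
    err = str(response_body.get("error") or response_body.get("detail") or "")
    return "blocked by seedance" in err.lower()

# A response shape a third-party frontend might relay (hypothetical):
resp = {"error": "Request blocked by Seedance 2.0: reference contains a real face"}
print(is_face_blocked(resp))  # True
```

On a hit, the sensible fallbacks are the ones this article describes: swap in an AI-generated portrait, or route the job to a model without the restriction.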
Policy Evolution: Three Months from Lenient to Strict
Seedance 1.0 / 1.0 Pro / 1.5 Pro (from June 2025) applied only basic content filtering and generally allowed real-face uploads, with restrictions concentrated at the prompt layer. The Seedance 2.0 closed beta (early February 2026) also allowed it, which is what enabled the viral "Jia Zhangke New Year film," "Feng Ji's late-night post," and "Tim from Film Storm got deepfaked" materials — a single facial photo was enough to replicate the person's voice and likeness. On February 9, 2026, Jimeng's operations community issued an urgent notice suspending real-person references; from February 13 to 16 Disney/Paramount Skydance/MPA sent ByteDance cease-and-desist letters, and ByteDance responded that it would "strengthen protections"; on March 24, when the global relaunch happened, "no real-person uploads + C2PA watermarking + IP filtering" was officially bundled in as compliance infrastructure; on April 2, The Paper disclosed the detailed rules for Volcengine Ark's enterprise API opening, writing "does not support real-person face generation or custom virtual avatar features" into the default base configuration. MindStudio's assessment (April 13) described it as "significantly more conservative than early internal versions," supporting the view that this is a policy decision rather than a technical inability.
Cross-Channel Policy Comparison
| Channel | Real-person photo reference | Real-person video reference | Legal alternatives |
|---|---|---|---|
| Jimeng Web / Dreamina International / Xiaoyunque | ❌ Message: "real-person faces are not currently supported" | ❌ | 10,000+ virtual avatar library / AI-generated portraits |
| Jimeng App / Doubao App | ❌ Disallowed by default | ❌ | "Digital avatar" after liveness verification, for yourself only |
| CapCut International (Video Studio / AI Video) | ❌ Official press release explicitly states "restricting…make videos from images or videos that contain real faces" | ❌ | — |
| Volcengine Ark API | ❌ Official docs state verbatim "to protect personal privacy…does not support direct uploads of reference images/videos containing real-person faces" | ❌ | Virtual avatar library / console face verification + portrait authorization process |
| BytePlus ModelArk | ❌ Same as above + Deepfake Defense + watermarking | ❌ | Virtual avatar library |
| Replicate / fal.ai / WaveSpeed / Runway | ❌ The model layer uniformly returns a blocked error | ❌ | Switch to Kling / Veo / Sora, etc. |
| Higgsfield AI | ⚠️ Offers a "Face Eligibility" feature; restrictions relaxed after verification (the most permissive third-party entry) | ⚠️ Limited allowance | — |
| High-tier enterprise partnership (Volcengine Ark only) | ✅ Unlocked | ✅ Unlocked | Requires a minimum-commitment agreement + 10% prepayment + 1 million RMB deposit |
Volcengine Ark official text (volcengine.com/docs/82379/2223965): "To protect personal privacy, the Seedance 2.0 series of models does not support direct uploads of reference images/videos containing real-person faces."
Jimeng Seedance 2.0 user manual Q&A: "The Seedance 2.0 model currently does not support materials containing real human faces. We recommend using a different image or a different model for generation."
Compliant Alternatives: Five Paths
For the vast majority of individual creators, only two paths are actually practical: Jimeng/Doubao App's "digital avatar" (record your own image and voice, complete liveness detection, then use your own face in AI videos afterward) and using image models like Seedream to first generate a photorealistic AI portrait and then feed it into Seedance — the latter is the "safe workflow" that ByteDance itself recommends. Enterprises can go through Volcengine Ark's portrait authorization process to complete face verification for owned or authorized models; teams needing bulk commercial real-person material can only unlock real-person rights by signing a minimum-commitment agreement (1 million RMB deposit + 10% prepayment). Among third-party channels, Higgsfield's Face Eligibility is currently the lowest-threshold I2V face reference entry, requiring enterprise email verification.
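The "AI portrait first, then animate" workflow amounts to two chained generation calls. The sketch below only assembles the request payloads in order; every field name is an illustrative assumption rather than a documented schema — only the Seedance model ID comes from this article.

```python
def build_safe_workflow(portrait_prompt: str, motion_prompt: str) -> list[dict]:
    """Two-step pipeline: text-to-image with an image model (e.g. Seedream),
    then image-to-video with Seedance using the synthetic portrait.
    Payload field names here are hypothetical, not a real API schema."""
    return [
        {"step": "t2i", "model": "seedream", "prompt": portrait_prompt},
        {"step": "i2v", "model": "doubao-seedance-2-0-260128",
         "image": "<url returned by step 1>", "prompt": motion_prompt},
    ]

steps = build_safe_workflow(
    "photorealistic portrait of a fictional female astronaut, studio lighting",
    "she removes her helmet and smiles, slow push-in, 35mm")
print([s["step"] for s in steps])  # ['t2i', 'i2v']
```

Note that this route is compliant but not guaranteed to pass: as reported elsewhere in this article, highly realistic synthetic faces can still trip the detector.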
A Realistic Assessment of "Workarounds"
The workarounds circulating in Chinese and English communities — converting real-person photos into cartoon/oil-painting/3D styles before uploading, overlaying fully opaque grid lines to lower face-detection confidence, shooting the back/side or a small, distant figure, placing the image in the "first-frame slot" rather than the "global reference slot" — all violate ByteDance's ToS and can stop working at any time as the guardrails update. Zhihu and Sohu users report that "after Chinese New Year the firewall was raised to the highest level, and even AI-sculpted photorealistic faces get blocked," evidence that ByteDance is tightening continuously. SEO sites like Atlas Cloud and seedance2.ai that claim "real-face support / no watermark" are essentially marketing talk — it contradicts the fact that CapCut/Dreamina explicitly add visible watermarks plus C2PA content credentials — and should not be trusted.
3. Practical Channel Selection (from a Chinese user's perspective)
If you mainly generate videos with people in them, here are the priorities by use case:
- Videos with yourself on camera (bloggers, personal IP): use the "digital avatar" feature in the Jimeng App or Doubao App. After you record a video of yourself and your voice to pass liveness verification, your digital avatar can perform any action and speak any line, fully within policy; Jimeng's 69-RMB/month basic membership is enough. This is currently the only zero-threshold path that lets an individual legally use a real human face.
- Fictional characters / film shorts not depicting specific real people: combine Jimeng Web or the Volcengine Ark API with AI-generated portraits. First generate a fictional-character portrait with Seedream or Midjourney, then feed it into Seedance 2.0 for I2V or reference-based video generation. Jimeng Web's "omni-reference" supports up to 9 images + 3 videos + 3 audios, more complete than the App side.
- Enterprise clients needing real-person endorsements, digital humans, or celebrity likeness authorization: go directly to Volcengine Ark's enterprise partnership and run the portrait authorization process; for commercial real-person material at scale, prepare a 1-million-RMB deposit and sign a minimum-commitment agreement. Do not pick third-party aggregators — they get blocked at the model layer all the same, and the extra money buys nothing.
- Pure API development / building tool products: fal.ai (most complete endpoints, best docs, transparent $0.3034/sec pricing, six endpoints covering T2V/I2V/Reference plus Fast variants) is the overall first pick; PiAPI at $0.13/sec has a clear edge for budget-sensitive projects; Replicate suits fast prototyping when you want a playground to try the model. For 2K resolution and the 15-second maximum duration, you have to go back to the native Volcengine Ark API.
- Video content published overseas: Dreamina International or CapCut Video Studio, with C2PA watermarking and a copyright compliance layer built in, plus lip-sync support for 8+ languages; but the face restrictions are just as strict.
- Paths to avoid: AWS Bedrock / Google Vertex AI / Azure AI Foundry don't carry Seedance, so don't waste time looking there; niche aggregators claiming "no watermark + supports real faces" are essentially SEO bait; relying on grid lines, stylized conversions, or other workarounds for commercial delivery can fail at any time and carries compliance risk.
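For the API budget comparison above, a back-of-envelope helper is enough. The per-second prices below are the figures quoted in this article (720p standard/Fast tiers); treat them as a snapshot, not live pricing.

```python
# Per-second prices as quoted in this article (snapshot, not live pricing)
PRICE_PER_SEC = {
    "fal.ai standard": 0.3034,
    "fal.ai Fast":     0.2419,
    "PiAPI standard":  0.13,
    "PiAPI Fast":      0.10,
}

def monthly_cost(clips: int, seconds_each: int) -> dict[str, float]:
    """Total cost per channel for a monthly batch of equal-length clips."""
    return {ch: round(p * seconds_each * clips, 2) for ch, p in PRICE_PER_SEC.items()}

# 100 ten-second clips per month: PiAPI standard comes to $130.00
# versus $303.40 on fal.ai standard
print(monthly_cost(100, 10))
```

The spread matters mostly at volume; at a handful of clips per month, endpoint coverage and documentation (fal.ai's strengths) outweigh the per-second delta.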
Conclusion: Guardrails Are the Product Design
The Seedance 2.0 story can be summed up in one sentence: model capabilities jumped to the industry's top tier (2K, 15 seconds, multimodal 9+3+3 input), while "real human faces" were fully locked inside an authorization system. This is not a technical compromise — it is ByteDance's proactive compliance choice under the dual squeeze of Disney/MPA legal pressure and China's "Provisions on the Administration of Deep Synthesis" — pushing deepfake risk from the prompt layer down to the model layer, where ordinary users cannot work around it. For individual creators, "build a digital avatar first" is becoming the only legitimate path to using one's own face; for enterprises, a 1-million-RMB deposit is the price tag for using someone else's face; for developers, fal.ai and Volcengine Ark have already formed a de facto dual-center API ecosystem, while the three major public clouds remain absent. Two possible variables going forward: first, whether ByteDance will lower the enterprise minimum-commitment threshold (an overly high 1-million-RMB deposit could push small and medium enterprises toward Kling, Hailuo, and Veo); second, whether the "digital avatar" workflow will move from the App to the open API — if the latter happens, the compliance space for individual developers will expand significantly.