
State of AI Video — April 2026
Monthly market report: HappyHorse debuts at #1, four major model launches in March, and open source takes the lead. Rankings, trends, and what's next.
The AI video leaderboard just had its biggest shakeup since Runway Gen-4.5 debuted in December. HappyHorse 1.0, an open-source model from ATH-AI, entered the Artificial Analysis Video Arena at #1 with an ELO of 1,347 — 103 points ahead of former leader SkyReels V4. Meanwhile, four major model launches made March the busiest month in AI video history.
This is the first issue of VidScore’s monthly State of AI Video report. Each month we’ll cover Arena rankings, new launches, pricing shifts, and the trends shaping this market.
Data as of April 10, 2026.
Key Numbers This Month
- New #1 model: HappyHorse 1.0 (ELO 1,347 T2V / 1,406 I2V)
- 4 major launches in March alone (Seedance 2.0, PixVerse V6, Wan 2.7, SkyReels V4)
- 27 models now tracked on VidScore with verified API pricing
- $0.02-$0.60/sec API pricing range — a 30x spread
- 3 of the top 10 Arena models are open source or open-weight
Arena Rankings: April 2026
The Artificial Analysis Video Arena ranks models via blind A/B comparisons where humans pick the better video. Here are the current top 10 for text-to-video quality:
| # | Model | ELO | Developer | $/sec | Open Source |
|---|---|---|---|---|---|
| 1 | HappyHorse 1.0 | 1,347 | ATH-AI | No API yet | Yes |
| 2 | SkyReels V4 | 1,244 | Skywork AI | $0.12 | Yes |
| 3 | Grok Imagine Video | 1,229 | xAI | $0.05 | No |
| 4 | Vidu Q3 Pro | 1,223 | Shengshu | $0.07 | No |
| 5 | Runway Gen-4.5 | 1,223 | Runway | $0.25 | No |
| 6 | Kling 2.5 Turbo | 1,213 | Kuaishou | $0.042 | No |
| 7 | Veo 3 Fast | 1,210 | Google | $0.10 | No |
| 8 | PixVerse V6 | 1,209 | PixVerse | $0.025 | No |
| 9 | Luma Ray 3 | 1,204 | Luma AI | $0.20/gen | No |
| 10 | Wan 2.6 | 1,186 | Alibaba | $0.10 | Yes |
Notable: Runway Gen-4.5 peaked at #1 with an ELO of 1,247 when it launched in December 2025. Four months later, it sits at #5. The pace of displacement is accelerating — no model has held #1 for more than 8 weeks in 2026.
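For readers unfamiliar with how arena rankings move, the pairwise ELO system behind these numbers can be sketched in a few lines. This is a minimal sketch, not the Arena's actual code: the K-factor of 32 is a common chess default and an assumption here, since Artificial Analysis hasn't published its exact update rule.

```python
# Minimal sketch of an ELO-style arena rating update after one blind A/B vote.
# K=32 is an assumed update rate, not the Arena's published parameter.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the logistic ELO model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return new (rating_a, rating_b) after a single pairwise vote."""
    e_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Example: the ~103-point gap between #1 (1,347) and #2 (1,244) implies
# the leader is expected to win roughly 64% of blind head-to-head matchups.
print(round(expected_score(1347, 1244), 2))
```

This is also why a single debut can't vault a model to the top: a high rating only emerges after many votes, and an upset win against a higher-rated model moves both ratings more than an expected win does.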
What Changed This Month
HappyHorse 1.0: Open Source Takes #1
The biggest story this month is HappyHorse 1.0 from ATH-AI, the independent team led by former Kuaishou VP Zhang Di (previously under Alibaba’s Taotian Group). It debuted at #1 on the Arena with ELO 1,347 in text-to-video and a record-setting 1,406 in image-to-video — 74 points ahead of #2 Seedance 2.0.
What makes this remarkable: it’s open source with commercial licensing. The 15B-parameter model generates 1080p video in ~38 seconds on a single H100, with native 7-language lip-sync (Chinese, English, Japanese, Korean, German, French). Model weights are announced but not yet publicly released as of April 10 — the GitHub repo shows “coming soon.”
No API providers have listed it yet. When weights ship, expect FAL.ai and Replicate to offer hosted access within days.
Seedance 2.0: ByteDance Goes Multimodal
Seedance 2.0 launched in March as ByteDance’s most ambitious video model. Its defining feature: a unified architecture that accepts text, up to 9 reference images, 3 reference videos, and 3 audio files simultaneously. No other model accepts this many input types.
At $0.302/sec on FAL.ai, it’s not cheap — but the multi-modal input support gives directors and editors unprecedented control. It generates up to 15 seconds with native lip-sync and beat-synchronized audio.
PixVerse V6: Cinematographer-Grade Controls
PixVerse V6 (March 30) entered the Arena at #16 with ELO 1,209 for text-to-video, but jumped to #4 for image-to-video (ELO 1,313). Its standout feature: 20+ cinematic lens controls including focal length, aperture, depth of field, and chromatic aberration — all adjustable via prompts.
Pricing is aggressive: $0.025/sec at 360p up to $0.115/sec at 1080p with audio — the widest resolution-based pricing ladder of any model.
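To put these per-second rates in concrete terms, here's a quick sketch of per-clip cost arithmetic using rates quoted in this report. It assumes flat per-second billing; real invoices may add resolution tiers, audio surcharges, or minimum billing units.

```python
# Per-clip cost arithmetic from the $/sec rates in this report.
# Assumes flat per-second billing with no minimums or surcharges.

RATES_PER_SEC = {
    "PixVerse V6 (360p)":  0.025,
    "Kling 2.5 Turbo":     0.042,
    "Grok Imagine Video":  0.05,
    "Wan 2.7":             0.10,
    "PixVerse V6 (1080p)": 0.115,
    "Runway Gen-4.5":      0.25,
}

def clip_cost(model: str, seconds: float) -> float:
    """Dollar cost of one clip of the given duration."""
    return RATES_PER_SEC[model] * seconds

# A 10-second clip ranges from $0.25 (PixVerse 360p) to $2.50 (Runway Gen-4.5),
# a 10x spread within this subset alone.
for name in RATES_PER_SEC:
    print(f"{name}: ${clip_cost(name, 10):.2f} per 10s clip")
```

At scale the spread compounds: a 1,000-clip batch of 10-second videos runs about $250 on the cheapest tier here versus $2,500 on the most expensive.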
Wan 2.7: Four Modes, One Architecture
Alibaba’s Wan 2.7 shipped in March as a 27B-parameter Mixture-of-Experts model (14B active per pass) under Apache 2.0. It consolidates four generation modes into one architecture: text-to-video, image-to-video with first/last-frame control, reference-to-video with voice cloning, and instruction-based video editing.
At $0.10/sec on FAL.ai, it offers the widest feature set per dollar of any current model. Open weights are expected mid-Q2 2026.
Winners and Losers
Winners
- Open source. HappyHorse at #1, Wan 2.7 under Apache 2.0, LTX-2 Pro as cheapest 1080p+audio option. The quality gap between open and commercial models has effectively closed. Self-hosting is now viable for production workflows.
- Native audio. Every major launch in 2026 includes audio generation. Silent-only models (Pika, Hailuo, CogVideoX) are increasingly at a feature disadvantage. Joint audio-visual generation is the new baseline.
- Budget creators. Grok Imagine Video delivers Arena #3 quality at $0.05/sec with audio included. Kling 2.5 Turbo at $0.042/sec sits at Arena #6. Premium quality at budget prices is now real.
- Chinese AI labs. 4 of the top 6 Arena models are from Chinese teams: ATH-AI (HappyHorse), Skywork AI (SkyReels), Shengshu (Vidu), and Kuaishou (Kling).
Losers
- Runway’s lead. Gen-4.5 held #1 for about 8 weeks after its December launch. It’s now #5 with four models ahead of it — three of which cost less. At $0.25/sec, it’s the most expensive model in the top 10.
- Closed-source moats. When an open-source model takes #1, the argument for closed-source premium pricing weakens. Providers charging $0.25+/sec need to justify the gap with unique features, not just quality.
- Audio-free models. Models without native audio — Pika 2.0/2.5, Minimax Hailuo, Luma Ray 2, CogVideoX — are missing an increasingly mandatory feature. Post-production audio adds cost and friction that competitors eliminate.
- Sora’s consumer app. OpenAI shut down the Sora consumer web app in March, pivoting to an API-first strategy. The move signals that consumer-facing video generation tools are hard to monetize — the value is in API infrastructure, not front-end apps.
What’s Coming
- HappyHorse weights release: Expected within weeks. Once weights ship, self-hosting the #1 model becomes possible. API providers will race to list it.
- Wan 2.7 open weights: Alibaba targets mid-Q2 2026 for the full 27B model weights. Will be the most capable open-source model available for self-deployment.
- Runway response: With Gen-4.5 sliding from #1 to #5, expect Runway to accelerate its next release. The company hasn’t announced Gen-5 timing but competitive pressure is mounting.
- Google Veo 4: Veo 3.1 remains competitive but hasn’t topped the Arena. Google typically responds to competitive pressure with aggressive updates.
- Price compression continues: The floor has dropped from ~$0.05/sec a year ago to $0.02/sec today. Expect sub-$0.01/sec options by Q3 as open-source models proliferate.
Next month’s report will cover HappyHorse’s weight release impact, Wan 2.7 benchmarks on self-hosted infrastructure, and any new Arena entries. Follow our live leaderboard for real-time ranking updates between reports. For pricing across all models mentioned, see the AI Video Pricing Guide 2026. For detailed reviews of the top models: Kling v3 Review, Veo 3 Review, Sora 2 Review.
FAQ
What is the best AI video model in April 2026?
HappyHorse 1.0 from ATH-AI holds the #1 position on the Artificial Analysis Video Arena with an ELO of 1,347 for text-to-video and a record 1,406 for image-to-video. It debuted in April 2026 as an open-source model with 7-language lip-sync.
Which AI video models launched in March 2026?
March 2026 saw four major launches: Seedance 2.0 from ByteDance (unified multimodal architecture), PixVerse V6 (20+ lens controls), Wan 2.7 from Alibaba (27B MoE, Apache 2.0), and SkyReels V4 from Skywork AI (first unified multi-modal video foundation model).
Is open-source AI video catching up to commercial models?
Open source has already caught up. HappyHorse 1.0, an open-source model, debuted at #1 on the Arena in April 2026, beating every commercial model. Wan 2.7 (Apache 2.0) offers four generation modes at $0.10/sec. The quality gap between open and commercial models has effectively closed.
How often is the AI video leaderboard updated?
The Artificial Analysis Video Arena updates continuously as users submit blind comparisons. Rankings can shift daily with new model submissions. VidScore publishes this State of AI Video report monthly to track meaningful changes.
Sources
- Artificial Analysis Video Arena — ELO-based quality rankings from blind human evaluations
- HappyHorse 1.0 Official Site — ATH-AI launch page and demo access
- Seedance 2.0 by ByteDance — Official product page and architecture details
- PixVerse V6 Launch — V6 feature overview and capabilities
- Wan 2.7 on FAL.ai — API documentation and pricing
- SkyReels V4 Features — Official site with API access and documentation