xAI launches Grok 4.3 and Custom Voices

Strategic Overview

  • 01.
    xAI completed the full Grok 4.3 API rollout on April 30, 2026, after a quiet beta that surfaced in the grok.com model selector on April 17, 2026.
  • 02.
    Grok 4.3 ships with a 1 million token context window, native video input, and always-on reasoning, meaning every request passes through a reasoning step before the model answers, along with improved tool calling and instruction following.
  • 03.
    API pricing lands at $1.25 per 1M input tokens and $2.50 per 1M output tokens, roughly 40% lower input and 60% lower output cost than Grok 4.20, with cached input at $0.20 per 1M.
  • 04.
    The model is exposed on Vercel's AI Gateway under model id xai/grok-4.3 with no markup, supporting vision input, tool calling, extended reasoning, and prompt caching.
  • 05.
    Custom Voices lets developers clone a voice from up to 120 seconds of reference audio in under two minutes, or pick from a Voice Library of 80+ preset voices spanning 28 languages.
  • 06.
    Custom Voices uses two-stage verification: the speaker reads a real-time passphrase, then xAI matches speaker embeddings against the full clip, designed to block cloning of pre-existing recordings or someone else's voice.
  • 07.
    Custom Voices is initially restricted to the United States (excluding Illinois), with each developer able to create up to 30 custom voices for free for use across xAI voice APIs.
  • 08.
    Independent benchmarks show Grok 4.3 ranking #1 on Vals AI's CaseLaw v2 (79.3%) and #1 on CorpFin v2 (68.5%), while landing #13 on the broader Vals Index, framing it as a domain specialist.
  • 09.
    Artificial Analysis scored Grok 4.3 at 53 on its Intelligence Index and reported a 321-point ELO jump on the GDPval-AA agentic benchmark vs. Grok 4.20.
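
The Gateway listing above can be exercised with a plain chat-completion call. The sketch below only builds the request payload; the `xai/grok-4.3` model id comes from the overview, while the endpoint URL and the OpenAI-style payload shape are assumptions — check Vercel's AI Gateway docs for the exact surface before wiring this into anything.

```python
import json

# Model id from the overview; the endpoint and OpenAI-style payload shape
# are assumptions -- verify against Vercel's AI Gateway documentation.
GATEWAY_URL = "https://ai-gateway.vercel.sh/v1/chat/completions"  # assumed

def build_grok_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Construct a chat-completion payload targeting Grok 4.3."""
    return {
        "model": "xai/grok-4.3",  # id listed on Vercel's AI Gateway
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_grok_request("Summarize the attached 10-K in five bullets.")
print(json.dumps(payload, indent=2))
```

Because the Gateway is exposed with no markup, the same payload priced at xAI's own rates is what a cost model should assume.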

The verticalization bet: always-on reasoning aimed at lawyers and CFOs, not LeetCode

Grok 4.3's headline architectural choice is that every request now runs through a reasoning pass before answering, with no toggle to skip it. Paired with a 1 million token context window, that's a deliberate posture: xAI wants the model judged on how well it digests entire case files, 10-Ks, and contracts rather than how cleanly it solves a one-shot coding prompt.

The benchmark scoreboard ratifies that bet. Vals AI ranks Grok 4.3 #1 on CaseLaw v2 (79.3%) and #1 on CorpFin v2 (68.5%), while landing it only #13 on the broader Vals Index, with Vals itself flagging that 'it struggles on general coding benchmarks.' Artificial Analysis adds a second confirmation: a 321-point GDPval-AA ELO jump from 1,179 to 1,500, the largest agentic uplift in xAI's lineup. Translated, xAI is conceding the general-purpose crown for now and competing where long-document reasoning, citations, and tool-use accuracy actually monetize.

The cost-per-intelligence collapse: $395 to run the full Intelligence Index

Cost in USD to evaluate the Artificial Analysis Intelligence Index, by model (April 2026).

The pricing tells the rest of the story. At $1.25 per million input tokens and $2.50 per million output, Grok 4.3 is roughly 40% cheaper on input and 60% cheaper on output than Grok 4.20, with cached input dropping to $0.20 per million. Artificial Analysis pegs the cost to evaluate its entire Intelligence Index at $395 for Grok 4.3 versus $3,959 for GPT-5.5 and $4,811 for Claude Opus 4.7, roughly an order of magnitude cheaper.

That gap reframes what 'frontier' means for procurement. A team building a long-document agent that ingests, summarizes, and cross-references multi-hundred-page filings now pays single-digit cents per query on Grok where Opus charges dollars. The catch is that Grok 4.3 generated about 88M tokens evaluating the Intelligence Index versus an average of 36M for peers, meaning it reasons longer to get there. Reddit's r/singularity readout captured this trade-off bluntly: roughly Sonnet 4.6-class intelligence at about 5x cheaper per token, but with reasoning chains roughly 1.8x longer, leaving total spend closer to break-even on shorter tasks while widening Grok's lead on long ones.
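
The arithmetic behind the cents-versus-dollars claim is easy to reproduce at the published rates. A minimal sketch; the query sizes (a ~300-page filing at ~300k input tokens) are illustrative assumptions, not figures from xAI:

```python
# Per-million-token prices from the launch: $1.25 input, $2.50 output,
# $0.20 cached input. Query sizes below are illustrative, not from xAI.
PRICE_IN, PRICE_OUT, PRICE_CACHED = 1.25, 2.50, 0.20  # USD per 1M tokens

def query_cost(input_tokens: int, output_tokens: int, cached: bool = False) -> float:
    """Cost in USD of one Grok 4.3 API call at the published rates."""
    in_rate = PRICE_CACHED if cached else PRICE_IN
    return (input_tokens * in_rate + output_tokens * PRICE_OUT) / 1_000_000

# A ~300-page filing (~300k tokens in, 2k tokens out):
print(f"cold:   ${query_cost(300_000, 2_000):.4f}")              # ~$0.38
print(f"cached: ${query_cost(300_000, 2_000, cached=True):.4f}")  # ~$0.065
```

The cached path is what makes repeated queries over the same filing land in single-digit cents; a cold read of the full document still costs tens of cents.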

Custom Voices and the two-stage liveness check that draws a US-only border

Custom Voices is the second half of the launch and arguably the harder design problem. Anyone can clone a voice from 120 seconds of audio, but xAI's wager is that it can do so without becoming the default tool for impersonation fraud. The system requires the speaker to read a real-time passphrase generated at enrollment, then matches speaker embeddings against the full clip; The Decoder's Matthias Bastian summarizes xAI's claim that 'the setup makes it impossible to clone existing recordings or someone else's voice.'
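
The described mechanism can be sketched in two stages: issue a fresh passphrase (so a pre-existing recording cannot contain it), then compare speaker embeddings of the passphrase reading against the reference clip. Everything in this sketch — the wordlist, the embedding vectors, the 0.85 threshold — is a hypothetical stand-in; xAI has not published its implementation.

```python
import math
import secrets

# Hypothetical wordlist for live passphrases; xAI's actual scheme is unpublished.
WORDS = ["amber", "delta", "orchid", "quartz", "raven", "tundra"]

def issue_passphrase(n: int = 4) -> str:
    """Stage 1: a fresh passphrase the speaker must read in real time,
    so a pre-existing recording cannot contain it."""
    return " ".join(secrets.choice(WORDS) for _ in range(n))

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def same_speaker(passphrase_emb: list[float], clip_emb: list[float],
                 threshold: float = 0.85) -> bool:
    """Stage 2: match the speaker embedding of the passphrase reading
    against the full reference clip; reject enrollment on mismatch."""
    return cosine(passphrase_emb, clip_emb) >= threshold

# Near-identical embeddings pass; a different "voice" fails:
print(same_speaker([0.9, 0.1, 0.4], [0.88, 0.12, 0.41]))  # True
print(same_speaker([0.9, 0.1, 0.4], [0.1, 0.9, -0.2]))    # False
```

The design intent is that stage 1 defeats replayed recordings while stage 2 defeats a live reader enrolling someone else's clip; the claim's strength rests entirely on the robustness of the speaker-embedding match.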

The legal geometry of the launch is just as load-bearing as the technology. Custom Voices is restricted to the United States and explicitly excludes Illinois, the home of the Biometric Information Privacy Act, which has produced the country's most aggressive voiceprint litigation. Each developer gets up to 30 free custom voices usable across xAI's voice APIs, plus a Voice Library of 80+ presets in 28 languages for builders who don't want the consent-management overhead at all. The shape of that carve-out is itself a signal about where voice cloning's compliance perimeter is settling.

The developer split: Sonnet-class budget pick vs. transparency holdouts

Developer reception has bifurcated along familiar lines. On the pragmatic side, the r/singularity thread on the API release coalesced around a 'Sonnet 4.6-class intelligence at roughly 5x cheaper, around 209 tokens/sec output' read, with practical commenters reporting they're swapping Grok 4.3 in for GPT-5.4 mini in production pipelines, particularly for finance and long-context summarization. Hands-on YouTube reviews echo the strengths: BridgeMind ranks Grok 4.3 #1 globally for low hallucination on its BridgeBench leaderboard, even while panning the model's UI generation as 'AI slop.'

The skeptical camp clusters around two concerns. First is transparency: the most analytical YouTube breakdown notes there's still no formal model card and no published first-party benchmarks, leaving independent evaluators to fill the gap. Second is the texture of Grok's reasoning traces themselves, which Reddit users observed contain repetition loops and occasional Chinese tokens, fueling unverified speculation that the model is distilled from Kimi. Layer on the quiet April 17 beta rollout, with no press release until the full API went live, and a pattern emerges: xAI is shipping aggressively but communicating selectively, leaving the benchmark community and the developer subreddits to do most of the framing for them.

Historical Context

2023-03-01
Elon Musk founded xAI; the company was publicly announced on July 12, 2023.
2023-11-04
xAI launched Grok in beta for X Premium users; Grok-1 was later open-sourced under Apache-2.0 in March 2024.
2024-03-29
Grok-1.5 announced with improved reasoning and a 128k token context window; rolled out to X Premium users on May 15, 2024.
2025-02-17
xAI released Grok 3 as its flagship model alongside other Grok updates.
2025-07-09
Grok 4 launched with abstract reasoning, native tool calling, and real-time search; Grok 4 Heavy added multi-agent collaboration.
2025-11-17
Grok 4.1 shipped as an incremental update improving reasoning, multimodal understanding, and personality while reducing hallucinations.
2026-02-17
Public beta of a 'rapid learning' model designed to improve weekly from public usage.
2026-04-17
Grok 4.3 quietly appeared in the grok.com model selector and iOS/Android apps for the SuperGrok Heavy ($300/mo) tier with no press release.
2026-04-30
Full Grok 4.3 API rollout completed alongside Custom Voices and the 80+ voice Voice Library going live.

Power Map

Key Players

xAI

Model developer; ships Grok 4.3 and Custom Voices and sets aggressive API pricing to undercut OpenAI/Anthropic and accelerate agentic adoption.

Vercel (AI Gateway)

Distribution channel; exposes Grok 4.3 to developers via a unified API with retries, failover, BYOK, and observability at no markup.

Vals AI

Independent benchmarker; ranks Grok 4.3 #1 on CaseLaw v2 and #1 on CorpFin v2 while placing it #13 on its broader Vals Index, framing the model as domain-specialized.

Artificial Analysis

Independent intelligence index provider; scored Grok 4.3 at 53 on its Intelligence Index and tracked the 321 ELO jump on agentic GDPval-AA.

Developers and agentic-app builders

Primary target market; gain a long-context, low-cost reasoning model with strong tool calling and free voice cloning for voice agents, audiobooks, and game characters.

OpenAI, Anthropic, Google

Direct competitors; Grok 4.3's $1.25/$2.50 price card materially undercuts GPT-5.2, Claude Opus 4.6, and Gemini 3.1 Pro on output tokens.

Source Articles


THE SIGNAL.

Analysts

"Grok 4.3 has launched at #13 on the Vals Index. It ranks #1 on CaseLaw and #1 on CorpFin but it struggles on general coding benchmarks."

Vals AI
Independent AI benchmarking firm

"Grok 4.3 scores an ELO of 1500, up 321 points from Grok 4.20 0309 v2's score of 1179, the largest agentic uplift in xAI's lineup at roughly 20% lower cost to run the full Intelligence Index."

Artificial Analysis
AI evaluation firm

"According to xAI, the setup makes it impossible to clone existing recordings or someone else's voice."

Matthias Bastian
Editor, The Decoder

"Grok 4.3 is suited for agentic workflows, instruction-following tasks, and applications requiring high factual accuracy."

Vercel AI Gateway team
Distribution platform
The Crowd

"xAI has launched Grok 4.3, achieving 53 on the Artificial Analysis Intelligence Index with improved agentic performance, ~40% lower input price, and ~60% lower output price than Grok 4.20. The release of Grok 4.3 places @xAI just above Muse Spark and Claude Sonnet 4.6 on the [Artificial Analysis Intelligence Index]."

@ArtificialAnlys

"xAI's new Grok 4.3 model achieved a score of 53 on the Artificial Analysis Intelligence Index. This places the model just above Muse Spark and Claude Sonnet 4.6, marking a 4-point improvement over the latest version of Grok 4.20. On the GDPval-AA benchmark, it scored an ELO of [...]"

@WesRoth

"Grok 4.3 Beta is the latest pre-trained model from xAI, quietly rolled out on April 17, 2026 (just yesterday as of today). It's now available as 'Early Access' on grok.com and the apps for SuperGrok and Premium+ subscribers (you should be able to select it if [eligible])."

@LTSmash420

"Grok 4.3 achieves higher overall intelligence over 4.20 with less of a cost, at the price of slightly higher hallucination rate."

u/Profanion121
Broadcast
Grok 4.3 Beta First Test – Is THIS a Frontier Model Competitor?

Vibe Coding With Grok 4.3 in a Full Self Driving Tesla

Grok 4.3 Just Changed Everything — And Nobody Noticed