Key Information

Models and Pricing

An overview of our models' capabilities and their associated pricing.

Grok 4.20

Grok 4.20 is our newest flagship model with industry-leading speed and agentic tool calling capabilities. It combines the lowest hallucination rate on the market with strict prompt adherence, delivering consistently precise and truthful responses.

Context window: 2,000,000 tokens

Features:

  • Function calling
  • Structured outputs
  • Reasoning
  • Lightning fast

Grok 4 Information for Grok 3 Users
When moving from grok-3/grok-3-mini to grok-4, please note the following differences:

  • Grok 4 is a reasoning model. There is no non-reasoning mode when using Grok 4.
  • The presencePenalty, frequencyPenalty, and stop parameters are not supported by reasoning models. Including them in a request will result in an error.
  • Grok 4 does not have a reasoning_effort parameter. If a reasoning_effort is provided, the request will return an error.
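A client migrating from grok-3 can strip these parameters defensively before sending a request. A minimal sketch based on the bullet list above (the request dict and function name are illustrative, not part of the API):

```python
# Parameters the migration notes above say grok-4 rejects outright.
UNSUPPORTED_FOR_GROK_4 = {"presencePenalty", "frequencyPenalty", "stop", "reasoning_effort"}

def sanitize_for_grok_4(params: dict) -> dict:
    """Return a copy of the request parameters with fields that
    grok-4 rejects removed, so the request does not error out."""
    return {k: v for k, v in params.items() if k not in UNSUPPORTED_FOR_GROK_4}

request = {"model": "grok-4", "stop": ["\n"], "reasoning_effort": "low", "temperature": 0.7}
clean = sanitize_for_grok_4(request)
# "model" and "temperature" survive; the unsupported fields are dropped.
```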

Grok 4.20 Information
Grok 4.20 models do not support the logprobs field. If you specify logprobs in your request, it will be ignored.


Tools Pricing

Requests which make use of xAI provided server-side tools are priced based on two components: token usage and server-side tool invocations. Since the agent autonomously decides how many tools to call, costs scale with query complexity.

Token Costs

All standard token types are billed at the rate for the model used in the request:

  • Input tokens: Your query and conversation history
  • Reasoning tokens: Agent's internal thinking and planning
  • Completion tokens: The final response
  • Image tokens: Visual content analysis (when applicable)
  • Cached prompt tokens: Prompt tokens that were served from cache rather than recomputed
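The total token cost of a request is the sum over these token types at the model's rates. A sketch with hypothetical per-million-token prices (the real rates for each model are on its detail page, not in this document):

```python
# Hypothetical per-1M-token rates in USD; look up real rates on the
# model's detail page in the xAI console.
RATES = {
    "input": 3.00,
    "cached_input": 0.75,
    "reasoning": 15.00,
    "completion": 15.00,
}

def token_cost(usage: dict) -> float:
    """Sum the cost of each billed token type for one request.
    `usage` maps token type -> token count."""
    return sum(RATES[kind] * count / 1_000_000 for kind, count in usage.items())

cost = token_cost({"input": 10_000, "reasoning": 2_000, "completion": 1_000})
```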

Tool Invocation Costs

  • Web Search ($5 / 1k calls): Search the internet and browse web pages. Tool name: web_search
  • X Search ($5 / 1k calls): Search X posts, user profiles, and threads. Tool name: x_search
  • Code Execution ($5 / 1k calls): Run Python code in a sandboxed environment. Tool names: code_execution, code_interpreter
  • File Attachments ($10 / 1k calls): Search through files attached to messages. Tool name: attachment_search
  • Collections Search ($2.50 / 1k calls): Query your uploaded document collections (RAG). Tool names: collections_search, file_search
  • View Image (token-based): Analyze images found during Web Search and X Search*. Tool name: view_image
  • View X Video (token-based): Analyze videos found during X Search*. Tool name: view_x_video
  • Remote MCP Tools (token-based): Connect and use custom MCP tool servers. Tool name is set by each MCP server.

All tool names work in the Responses API. In the gRPC API (Python xAI SDK), code_interpreter and file_search are not supported.

* Only applies to images and videos found by search tools — not to images passed directly in messages.

For the view image and view x video tools, you will not be charged for the tool invocation itself but will be charged for the image tokens used to process the image or video.

For Remote MCP tools, you will not be charged for the tool invocation but will be charged for any tokens used.
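Per-call tool charges add on top of token costs. A sketch totaling one request's invocations at the rates listed above (view_image and view_x_video appear at $0 per call here because they bill tokens only, as noted above):

```python
# Per-1k-call rates from the tool list above; token-billed tools cost
# nothing per invocation (their token usage is billed separately).
PER_1K_CALLS = {
    "web_search": 5.00,
    "x_search": 5.00,
    "code_execution": 5.00,
    "attachment_search": 10.00,
    "collections_search": 2.50,
    "view_image": 0.00,
    "view_x_video": 0.00,
}

def tool_cost(calls: dict) -> float:
    """Cost of server-side tool invocations for one request.
    `calls` maps tool name -> number of invocations."""
    return sum(PER_1K_CALLS.get(tool, 0.0) * n / 1000 for tool, n in calls.items())

# An agent that ran 3 web searches and 1 collections search:
cost = tool_cost({"web_search": 3, "collections_search": 1})
```

Because the agent decides how many tools to call, this component of the bill varies per request even for identical prompts.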

For more information on using Tools, please visit our guide on Tools.


Batch API Pricing

The Batch API lets you process large volumes of requests asynchronously at 50% of standard pricing — effectively cutting your token costs in half. Batch requests are queued and processed in the background, with most completing within 24 hours.

                 Real-time API              Batch API
Token pricing    Standard rates             50% off standard rates
Response time    Immediate (seconds)        Typically within 24 hours
Rate limits      Per-minute limits apply    Requests don't count towards rate limits

The 50% discount applies to all token types — input tokens, output tokens, cached tokens, and reasoning tokens. To see batch pricing for a specific model, visit the model's detail page and toggle "Show batch API pricing".

The 50% batch discount applies to text and language models only. Image and video generation are supported in the Batch API but are billed at standard rates. See Batch API documentation for full details.
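For text and language models, batch pricing is a straight halving of the standard token rate. A small sketch (the standard rate passed in is a placeholder, not a real price):

```python
def batch_token_price(standard_per_1m: float, tokens: int, batch: bool = True) -> float:
    """Price for `tokens` tokens: 50% off the standard per-1M rate when
    the request goes through the Batch API (text/language models only;
    image and video generation stay at standard rates)."""
    rate = standard_per_1m * (0.5 if batch else 1.0)
    return rate * tokens / 1_000_000

realtime = batch_token_price(10.0, 1_000_000, batch=False)  # full rate
batched = batch_token_price(10.0, 1_000_000)                # half rate
```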


Voice API Pricing

Voice Agent API (Realtime)

The Voice Agent API enables real-time voice conversations over WebSocket, billed at a flat rate per minute of audio duration.

  • Pricing: $0.05 / minute ($3.00 / hour)
  • Concurrent sessions: 100 per team
  • Max session duration: 30 minutes
  • Capabilities: function calling (web search, X search, collections, MCP, custom functions)

When using the Voice Agent API with tools such as function calling, web search, X search, collections, or MCP, you will be charged for the tool invocations in addition to the per-minute voice session cost. See Tool Invocation Costs above for tool pricing details.

Usage is billed by audio duration. If you send 1 hour of audio data to the API, it will be billed as 1 hour of usage, even if the WebSocket connection time is less than 1 hour.
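Since billing follows audio duration rather than connection time, the session cost is simply minutes of audio times the flat rate. A sketch:

```python
VOICE_RATE_PER_MIN = 0.05  # $0.05 / minute of audio ($3.00 / hour)

def voice_session_cost(audio_seconds: float) -> float:
    """Cost of a Voice Agent session, billed on audio duration,
    not on how long the WebSocket stayed open."""
    return VOICE_RATE_PER_MIN * audio_seconds / 60

# One hour of audio bills as one hour, even over a shorter connection:
cost = voice_session_cost(3600)  # about $3.00
```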

For more details on how to get started, see the Voice Agent API documentation.

Text to Speech API

The Text to Speech API converts text into natural speech, billed per input character.

  • Pricing: $4.20 / 1M characters
  • Rate limits: 3,000 RPM, 50 RPS, 100 concurrent sessions per team
  • Capabilities: multiple voices, streaming and batch output, MP3 / WAV / PCM / μ-law / A-law formats
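Because Text to Speech bills per input character, cost is just the character count scaled by the per-million rate. A sketch:

```python
TTS_RATE_PER_1M_CHARS = 4.20  # $4.20 / 1M input characters

def tts_cost(text: str) -> float:
    """Text to Speech cost, billed on the number of input characters."""
    return TTS_RATE_PER_1M_CHARS * len(text) / 1_000_000

cost = tts_cost("Don't panic.")  # 12 characters
```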

Speech to Text API

The Speech to Text API transcribes audio into text, available as both a REST endpoint for batch transcription and a streaming endpoint for real-time transcription.

              REST                                                 Streaming
Pricing       $0.10 / hour                                         $0.20 / hour
Rate limits   600 RPM, 10 RPS                                      100 concurrent sessions per team
Capabilities  File upload, multiple formats, multiple languages    Real-time transcription, interim results, low latency

Files and Collections Pricing

Files and collections stored on the xAI platform are billed based on the amount of storage used. These charges will take effect starting on April 20th, 2026.

Resource              Rate
File storage          $0.025 / GiB / day
Collection storage    $0.10 / GiB / day

Download Costs

Downloading data from files and collections is charged at a flat rate based on the amount of data transferred:

Resource                Rate
File downloads          $0.20 / GiB downloaded
Collection downloads    $0.20 / GiB downloaded
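Storage accrues per GiB per day while downloads are a flat per-GiB charge, so an estimate needs both terms. A sketch with the rates above (the 30-day month and the usage figures are assumptions for illustration):

```python
FILE_STORAGE_PER_GIB_DAY = 0.025
COLLECTION_STORAGE_PER_GIB_DAY = 0.10
DOWNLOAD_PER_GIB = 0.20

def monthly_storage_cost(file_gib: float, collection_gib: float,
                         download_gib: float, days: int = 30) -> float:
    """Estimated bill over `days`: daily storage accrual plus flat
    per-GiB download charges."""
    storage = (file_gib * FILE_STORAGE_PER_GIB_DAY +
               collection_gib * COLLECTION_STORAGE_PER_GIB_DAY) * days
    return storage + download_gib * DOWNLOAD_PER_GIB

# 10 GiB of files, 2 GiB of collections, 5 GiB downloaded in a month:
cost = monthly_storage_cost(10, 2, 5)
```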

You can view and manage your files and collections through the xAI console or the xAI API.


Usage Guidelines Violation Fee

When our system deems your request to be in violation of our usage guidelines, we will still charge for the tokens generated for that request.

For violations that are caught before generation in the Responses API, we will charge a $0.05 usage guideline violation fee per request.


Additional Information Regarding Models

  • No access to realtime events without search tools enabled
    • Grok has no knowledge of current events or data beyond what was present in its training data.
    • To incorporate realtime data with your request, enable server-side search tools (Web Search / X Search). See Web Search and X Search.
  • Chat models
    • No role order limitation: You can mix system, user, or assistant roles in any sequence for your conversation context.
  • Image input models
    • Maximum image size: 20 MiB
    • Maximum number of images: No limit
    • Supported image file types: jpg/jpeg or png.
    • Any image/text input order is accepted (e.g. text prompt can precede image prompt)
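The image-input limits above can be checked client-side before spending a request. A minimal sketch (the function name is illustrative):

```python
MAX_IMAGE_BYTES = 20 * 1024 * 1024  # 20 MiB limit from the list above
ALLOWED_TYPES = {"image/jpeg", "image/png"}  # jpg/jpeg or png only

def validate_image(data: bytes, mime_type: str) -> None:
    """Raise ValueError for images the API documents as unsupported."""
    if mime_type not in ALLOWED_TYPES:
        raise ValueError(f"unsupported image type: {mime_type}")
    if len(data) > MAX_IMAGE_BYTES:
        raise ValueError("image exceeds the 20 MiB limit")
```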

The knowledge cut-off date of Grok 3 and Grok 4 is November 2024.


Model Aliases

Some models have aliases to help users automatically migrate to the next version of the same model. In general:

  • <modelname> is aliased to the latest stable version.
  • <modelname>-latest is aliased to the latest version. This is suitable for users who want to access the latest features.
  • <modelname>-<date> refers directly to a specific model release. This will not be updated and is for workflows that demand consistency.

For most users, the aliased <modelname> or <modelname>-latest is recommended, as these receive the latest features automatically.


Billing and Availability

Your model access may vary depending on factors such as geographic location and account limitations.

For details on how charges are billed, see Manage Billing.

For the most up-to-date information on your team's model availability, visit Models Page on xAI Console.


Model Input and Output

Each model can have one or multiple input and output capabilities. The input capabilities refer to which type(s) of prompt the model can accept in the request message body. The output capabilities refer to which type(s) of completion the model will generate in the response message body.

This is a prompt example for models with text input capability:

JSON

[
  {
    "role": "system",
    "content": "You are Grok, a chatbot inspired by the Hitchhiker's Guide to the Galaxy."
  },
  {
    "role": "user",
    "content": "What is the meaning of life, the universe, and everything?"
  }
]
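The messages above form the body of a standard chat request. A sketch of assembling that payload in Python (the endpoint URL and header layout in the comment follow the usual OpenAI-compatible REST convention; check the API reference for your account before relying on them):

```python
import json

# The same messages as the JSON example above.
messages = [
    {"role": "system",
     "content": "You are Grok, a chatbot inspired by the Hitchhiker's Guide to the Galaxy."},
    {"role": "user",
     "content": "What is the meaning of life, the universe, and everything?"},
]
payload = {"model": "grok-4", "messages": messages}
body = json.dumps(payload)

# To actually send it (requires an API key, omitted here):
# import urllib.request
# req = urllib.request.Request(
#     "https://api.x.ai/v1/chat/completions", data=body.encode(),
#     headers={"Authorization": "Bearer <XAI_API_KEY>",
#              "Content-Type": "application/json"})
```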

This is a prompt example for models with text and image input capabilities:

JSON

[
  {
    "role": "user",
    "content": [
      {
        "type": "image_url",
        "image_url": {
          "url": "data:image/jpeg;base64,<base64_image_string>",
          "detail": "high"
        }
      },
      {
        "type": "text",
        "text": "Describe what's in this image."
      }
    ]
  }
]
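The base64 data URL in the image_url part above can be built with the standard library. A sketch (the image bytes here are placeholder values, not a real JPEG):

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Build the data: URL used in the image_url content part above."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"

# Placeholder bytes standing in for a real image file:
part = {"type": "image_url",
        "image_url": {"url": to_data_url(b"\xff\xd8\xff"), "detail": "high"}}
```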

This is a prompt example for models with text input and image output capabilities:

JSON

// The entire request body
{
  "model": "grok-2-image",
  "prompt": "A cat in a tree",
  "n": 4
}

Context Window

The context window determines the maximum number of tokens the model accepts in the prompt.

For more information on how tokens are counted, visit Consumption and Rate Limits.

If you send the entire conversation history in the prompt, as in a chat assistant use case, the combined token count of all prompts in the history must not exceed the context window.
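One common way to keep a growing conversation inside the window is to drop the oldest turns first. A sketch (the tokenizer callback is an assumption; substitute whatever token counter you use):

```python
def trim_to_context(messages: list, max_tokens: int, count_tokens) -> list:
    """Drop the oldest messages until the conversation fits the
    model's context window. `count_tokens` is a caller-supplied
    tokenizer callback (message dict -> token count)."""
    msgs = list(messages)
    while msgs and sum(count_tokens(m) for m in msgs) > max_tokens:
        msgs.pop(0)  # discard the oldest turn first
    return msgs

# Toy counter for illustration: one token per whitespace-separated word.
count = lambda m: len(m["content"].split())
history = [{"content": "a b c"}, {"content": "d e"}, {"content": "f"}]
trimmed = trim_to_context(history, 4, count)  # drops the first message
```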


Cached prompt tokens

Repeated prompts can be served from cache: prompt tokens that match a previously processed prefix are billed at the lower cached-token rate, reducing the cost of identical or near-identical requests.

Caching is enabled automatically for all requests; no action is required on your part. You can view cached prompt token consumption in the "usage" object of the response.

For details on the pricing, please refer to the pricing table above, or on xAI Console.
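A sketch of reading cache effectiveness out of a response's usage object. The field names below are assumptions for illustration; check the "usage" object in an actual response for the exact keys:

```python
# Hypothetical "usage" object shape; verify the real field names
# against an actual API response.
usage = {"prompt_tokens": 1200, "cached_prompt_tokens": 1000,
         "completion_tokens": 50}

def cache_hit_rate(usage: dict) -> float:
    """Fraction of prompt tokens that were served from cache."""
    prompt = usage.get("prompt_tokens", 0)
    return usage.get("cached_prompt_tokens", 0) / prompt if prompt else 0.0

rate = cache_hit_rate(usage)
```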