API Documentation

Function calling

Connect the xAI models to external tools and systems to build AI assistants and various integrations.


Function calling enables language models to use external tools, connecting them to digital and physical systems.

This powerful capability enables a wide range of use cases:

  • Calling public APIs for actions ranging from looking up football game results to getting real-time satellite positioning data
  • Analyzing internal databases
  • Browsing web pages
  • Executing code
  • Interacting with the physical world (e.g. booking a flight ticket, opening your Tesla's door, controlling robot arms)

The request/response flow for function calling is illustrated below.

Function call request/response flow example

You can think of it as the LLM initiating RPCs (Remote Procedure Calls) to the user's system. From the LLM's perspective, the "2. Response" is an RPC request from the LLM to the user's system, and the "3. Request" is an RPC response carrying the information the LLM needs.

The whole process looks like this in pseudocode:
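As a rough sketch (not the real API surface), the loop looks like this in runnable Python pseudocode, where `fake_model` stands in for the LLM endpoint and `get_current_temperature` is a stub tool of our own invention:

```python
# A runnable sketch of the function-calling loop. `fake_model` stands in
# for the LLM endpoint; real code would call the chat completions API.

def get_current_temperature(location: str) -> dict:
    # Stub tool: a real implementation would query a weather service.
    return {"location": location, "temperature": 59, "unit": "fahrenheit"}

TOOLS = {"get_current_temperature": get_current_temperature}

def fake_model(messages):
    # First turn: the model asks for a tool call; second turn: it answers.
    if messages[-1]["role"] == "user":
        return {
            "role": "assistant",
            "tool_call": {
                "name": "get_current_temperature",
                "arguments": {"location": "Paris"},
            },
        }
    return {"role": "assistant", "content": "It is 59F in Paris."}

messages = [{"role": "user", "content": "What's the temperature in Paris?"}]
response = fake_model(messages)  # 2. Response: the model requests a tool call
while response.get("tool_call"):
    call = response["tool_call"]
    result = TOOLS[call["name"]](**call["arguments"])  # run the tool locally
    messages.append(response)
    messages.append({"role": "tool", "content": str(result)})
    response = fake_model(messages)  # 3. Request: send the tool result back
print(response["content"])
```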

We will demonstrate function calling in the following Python script. First, let's create an API client:


Define the tool functions as callbacks to be invoked when the model requests them in its response.

Normally, these functions would retrieve data from a database, call another API endpoint, or perform some action. For demonstration purposes, we hardcode them to return 59° Fahrenheit/15° Celsius as the temperature and 15,000 feet as the cloud ceiling.
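A minimal sketch of the two hardcoded tools; the function names and return shapes are our own choice:

```python
import json

def get_current_temperature(location: str, unit: str = "fahrenheit") -> str:
    # Hardcoded for demonstration; a real tool would query a weather service.
    temperature = 59 if unit == "fahrenheit" else 15
    return json.dumps({"location": location, "temperature": temperature, "unit": unit})

def get_current_ceiling(location: str) -> str:
    # Hardcoded cloud ceiling of 15,000 feet.
    return json.dumps({"location": location, "ceiling": 15000, "unit": "ft"})
```

Returning a JSON string keeps the result easy to drop into the `content` field of a tool message later.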

The parameters definition will be sent in the initial request to Grok, so Grok knows what tools and parameters are available to be called.

To reduce human error, you can define the tools partially using Pydantic.

Function definition using Pydantic:
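A sketch using Pydantic (v2) to generate the JSON Schema for the parameters; the field names and descriptions are illustrative:

```python
from typing import Literal
from pydantic import BaseModel, Field

class TemperatureRequest(BaseModel):
    """Parameters for the get_current_temperature tool."""
    location: str = Field(description="The city and state, e.g. San Francisco, CA")
    unit: Literal["celsius", "fahrenheit"] = "fahrenheit"

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_temperature",
            "description": "Get the current temperature in a given location",
            # Pydantic v2 emits a JSON Schema for the model's fields,
            # so the parameter spec stays in sync with the Python types.
            "parameters": TemperatureRequest.model_json_schema(),
        },
    }
]
```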

Function definition using raw dictionary:
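The equivalent definition written out as a raw JSON Schema dictionary (same illustrative names):

```python
# The same tool definition, hand-written as a raw dictionary.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_temperature",
            "description": "Get the current temperature in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "default": "fahrenheit",
                    },
                },
                "required": ["location"],
            },
        },
    }
]
```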

Create a string -> function mapping so we can call the function when the model sends its name. For example:
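A sketch of the mapping, with the stub tools repeated here so the snippet stands alone:

```python
import json

def get_current_temperature(location: str, unit: str = "fahrenheit") -> str:
    return json.dumps({"location": location, "temperature": 59, "unit": unit})

def get_current_ceiling(location: str) -> str:
    return json.dumps({"location": location, "ceiling": 15000, "unit": "ft"})

# Map tool names (as the model returns them) to the local callables.
tools_map = {
    "get_current_temperature": get_current_temperature,
    "get_current_ceiling": get_current_ceiling,
}

# Dispatch by name, e.g. when the model asks for "get_current_ceiling":
result = tools_map["get_current_ceiling"](location="San Francisco, CA")
```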


With all the functions defined, it's time to send our API request to Grok!

Before we send it over, let's look at what the generic request body for a new task looks like.

Note how the tool call is referenced three times:

  • By id and name in "previous assistant response to get tool call"
  • By tool_call_id in "tool call return"
  • In the tools field of the request body
Function call request body
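Assembled as a Python dict, the body might look like the sketch below; the model name and tool-call id are made up for illustration:

```python
# Illustrative request body; model name and tool-call id are invented.
request_body = {
    "model": "grok-4",
    "messages": [
        {"role": "user", "content": "What is the temperature in San Francisco?"},
        # (1) previous assistant response to get tool call, referenced by id and name
        {
            "role": "assistant",
            "tool_calls": [
                {
                    "id": "call_123",
                    "type": "function",
                    "function": {
                        "name": "get_current_temperature",
                        "arguments": '{"location": "San Francisco, CA"}',
                    },
                }
            ],
        },
        # (2) tool call return, referenced by tool_call_id
        {
            "role": "tool",
            "content": '{"location": "San Francisco, CA", "temperature": 59, "unit": "fahrenheit"}',
            "tool_call_id": "call_123",
        },
    ],
    # (3) the tools field of the request body
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_temperature",
                "description": "Get the current temperature in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            },
        }
    ],
}
```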

Now we compose the request messages in the request body and send it over to Grok. Grok should return a response that requests a tool call.


We retrieve the tool function names and arguments that Grok wants to call, run the functions, and add the result to messages.

At this point, you can choose to respond only with the tool call results, or also add a new user message to the request.

The tool message would contain the following: {"role": "tool", "content": <JSON string of the tool function's return value>, "tool_call_id": <the tool_call.id included in Grok's tool call response>}

The request body that we assemble and send back to Grok looks slightly different from the new-task request body:

Request body after processing tool call

The corresponding code to append messages:
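A self-contained sketch of that step; the assistant message and tool-call id are simulated here, where real code would read them from the API response:

```python
import json

def get_current_temperature(location: str, unit: str = "fahrenheit") -> str:
    return json.dumps({"location": location, "temperature": 59, "unit": unit})

tools_map = {"get_current_temperature": get_current_temperature}

messages = [{"role": "user", "content": "What's the temperature in San Francisco?"}]

# Simulated assistant message containing a tool call; real code would take
# this from the model's response (e.g. response.choices[0].message).
assistant_message = {
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_123",
            "type": "function",
            "function": {
                "name": "get_current_temperature",
                "arguments": '{"location": "San Francisco, CA"}',
            },
        }
    ],
}

# Append the assistant message that contains the tool call...
messages.append(assistant_message)

# ...then run each requested tool and append its result as a tool message,
# echoing the tool_call.id back as tool_call_id.
for tool_call in assistant_message["tool_calls"]:
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    result = tools_map[name](**args)
    messages.append({"role": "tool", "content": result, "tool_call_id": tool_call["id"]})
```

The extended `messages` list is then sent back to Grok in the next request.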



You can continue the conversation following Step 2, or terminate here.


By default, the model will automatically decide whether a function call is necessary and select which functions to call, as determined by the tool_choice: "auto" setting.

We offer three ways to customize the default behavior:

  1. To force the model to always call one or more functions, you can set tool_choice: "required". The model will then always call a function. Note this could force the model to hallucinate parameters.
  2. To force the model to call a specific function, you can set tool_choice: {"type": "function", "function": {"name": "my_function"}}.
  3. To disable function calling and force the model to only generate a user-facing message, you can either provide no tools, or set tool_choice: "none".
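As a sketch, the three overrides are just different values merged into the request body (model name and function name are illustrative):

```python
# The three ways to override the default tool_choice="auto".
force_any = {"tool_choice": "required"}   # 1. always call some function
force_one = {                             # 2. always call this specific function
    "tool_choice": {"type": "function", "function": {"name": "get_current_temperature"}}
}
force_none = {"tool_choice": "none"}      # 3. never call a function

# Merged into a request body, e.g. forcing one specific function:
request_body = {"model": "grok-4", "messages": [], "tools": [], **force_one}
```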