Chat
Chat is one of the most fundamental capabilities of Large Language Models (LLMs) and serves as an excellent starting point for beginners and seasoned developers alike when exploring the functionalities offered by these models.
Prerequisites
- xAI Account: You need an xAI account to access the API.
- API Key: Ensure that your API key has access to the chat endpoint and the chat model is enabled.
If you don't have these yet and are unsure how to create them, follow the Hitchhiker's Guide to Grok.
You can create an API key on the xAI Console API Keys Page.
Set your API key in your environment:
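For example, in a POSIX shell (the variable name `XAI_API_KEY` is the common convention; substitute your actual key):

```shell
export XAI_API_KEY="your-api-key-here"
```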
Getting batch responses
The easiest way to get familiar with the chat capability is to request a batch (non-streaming) response. Batch responses are ideal when you want the complete answer at once. You can also stream the response, which is covered in Streaming Response.
The user sends a request to the xAI API endpoint. The API processes this and returns a complete response.
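A minimal sketch of such a request using only Python's standard library. The endpoint path follows the API's OpenAI-compatible convention, and the model name `grok-3` is an assumption — substitute a model your key has access to:

```python
import json
import os
import urllib.request

XAI_CHAT_URL = "https://api.x.ai/v1/chat/completions"

def build_chat_request(messages, model="grok-3", **params):
    """Assemble the JSON body for a batch (non-streaming) chat request."""
    return {"model": model, "messages": messages, **params}

def send_chat_request(payload):
    """POST the payload to the chat endpoint, authenticating with XAI_API_KEY."""
    req = urllib.request.Request(
        XAI_CHAT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['XAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the meaning of life?"},
])
# response = send_chat_request(payload)  # requires a valid API key and network access
```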
Response:
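The completion object follows the OpenAI-compatible schema; the fields below sketch its shape, with all values purely illustrative:

```python
# Illustrative shape of a batch chat response (all values are made up).
sample_response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "grok-3",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "42."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 24, "completion_tokens": 3, "total_tokens": 27},
}

# The generated text lives on the first choice's message:
answer = sample_response["choices"][0]["message"]["content"]
```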
Conversations
The xAI API is stateless and does not process a new request with the context of your previous request history.
However, you can provide previous chat generation prompts and results to a new chat generation request to let the model process your new request with the context in mind.
An example message:
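A sketch of carrying context forward: append the assistant's reply from the previous completion before the new user turn (message contents are illustrative):

```python
messages = [
    {"role": "user", "content": "Which planet is closest to the sun?"},
    # Reply from the previous chat completion, appended verbatim:
    {"role": "assistant", "content": "Mercury is the closest planet to the sun."},
    # The new turn; the model answers with the earlier exchange in context:
    {"role": "user", "content": "How long does it take to orbit?"},
]
```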
This strategy is also used in function calling: the model's response invokes a tool call, and the user's program responds to the tool call, then continues the conversation by appending the tool call result to the messages. For more details, check out our guide on Function Calling.
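The tool-result message below follows the OpenAI-compatible convention; the tool name, call id, and result are illustrative assumptions:

```python
import json

messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    # Assistant turn that requested a tool call (id and function name illustrative):
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": json.dumps({"city": "Paris"}),
                },
            }
        ],
    },
]

# Your program runs the tool, then appends its result so the model can continue:
tool_result = {"temperature_c": 18, "conditions": "cloudy"}
messages.append(
    {
        "role": "tool",
        "tool_call_id": "call_1",
        "content": json.dumps(tool_result),
    }
)
```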
Message role flexibility
Unlike some models from other providers, one unique aspect of the xAI API is its flexibility with message roles:
- No Order Limitation: You can mix system, user, or assistant roles in any sequence for your conversation context.
Example 1 - Multiple System Messages:
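A sketch with two consecutive system messages (contents illustrative):

```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "system", "content": "Always answer in one sentence."},
    {"role": "user", "content": "What is a transformer?"},
]
```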
Example 2 - User Messages First:
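A sketch where user messages precede the system message (contents illustrative):

```python
messages = [
    {"role": "user", "content": "Summarize the text below."},
    {"role": "system", "content": "Respond in bullet points."},
    {"role": "user", "content": "LLMs are neural networks trained on text."},
]
```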
Parameters
You can customize the following parameters in the request to achieve different generation results.
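As a sketch, these are common sampling parameters in OpenAI-compatible chat APIs; consult the API reference for the authoritative list and the model name your key supports:

```python
payload = {
    "model": "grok-3",   # model name is an assumption; use one your key can access
    "messages": [{"role": "user", "content": "Write a haiku about the sea."}],
    "temperature": 0.7,  # sampling randomness (lower = more deterministic)
    "max_tokens": 128,   # upper bound on generated tokens
    "top_p": 0.9,        # nucleus sampling cutoff
}
```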