grok-3-mini-beta and grok-3-mini-fast-beta are reasoning models; grok-3-beta and grok-3-fast-beta do not support reasoning.

The model's reasoning is returned in the reasoning_content field of the response completion object (see example below), and is accessible via message.reasoning_content of the chat completion response.

The reasoning_effort parameter controls how much time the model spends thinking before responding. It must be set to one of these values:

low: Minimal thinking time, using fewer tokens for quick responses.
high: Maximum thinking time, leveraging more tokens for complex problems.

Use low for simple queries that should complete quickly, and high for harder problems where response latency is less important.
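To see how these pieces fit together, here is a minimal sketch that sets reasoning_effort on a request and then reads the reasoning trace and the final answer back out of the saved response. The XAI_API_KEY environment variable, the response.json file, and the use of jq are assumptions made for illustration; the endpoint, model name, and field names are the ones described above.

curl -s https://api.x.ai/v1/chat/completions \
  -H "Authorization: Bearer $XAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "What is 101*3?"}],
        "model": "grok-3-mini-latest",
        "reasoning_effort": "high"
      }' > response.json

# The reasoning trace and the final answer both live on the message object:
jq -r '.choices[0].message.reasoning_content' response.json
jq -r '.choices[0].message.content' response.json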
Example output:

Reasoning Content:
Let me calculate 101 multiplied by 3:
101 * 3 = 303.
I can double-check that: 100 * 3 is 300, and 1 * 3 is 3, so 300 + 3 = 303. Yes, that's correct.
Final Response:
The result of 101 multiplied by 3 is 303.
Number of completion tokens: 14
Number of reasoning tokens: 310
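These two counts can be read from the usage object of the same saved response. The completion_tokens_details.reasoning_tokens path below is an assumption based on an OpenAI-style usage block; check it against an actual response.

# Completion (output) tokens:
jq '.usage.completion_tokens' response.json
# Tokens spent on reasoning (assumed path, OpenAI-style usage block):
jq '.usage.completion_tokens_details.reasoning_tokens' response.json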
When choosing between the two model families:

grok-3-mini-beta or grok-3-mini-fast-beta: For tasks that can benefit from logical reasoning (such as meeting scheduling or math problems). Also great for tasks that don't require deep domain knowledge about a specific subject (e.g. a basic customer support bot).
grok-3-beta or grok-3-fast-beta: For queries requiring deep domain expertise or world knowledge (e.g. healthcare, legal, finance).

Example request:

curl --location --request POST 'https://api.x.ai/v1/chat/completions' \
--header 'Authorization: Bearer <your xAI API key>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "messages": [
    {
      "role": "system",
      "content": "You are a highly intelligent AI assistant."
    },
    {
      "role": "user",
      "content": "What is 101*3?"
    }
  ],
  "model": "grok-3-mini-latest",
  "stream": false,
  "temperature": 0.7,
  "reasoning_effort": "high"
}'