docs supports reasoning

Ishaan Jaff 2025-04-11 16:55:19 -07:00
parent 8a40fa0f56
commit 45a5ee9cb4


@@ -18,7 +18,7 @@ Supported Providers:
LiteLLM will standardize the `reasoning_content` in the response and `thinking_blocks` in the assistant message.
-```python
+```python title="Example response from litellm"
"message": {
    ...
    "reasoning_content": "The capital of France is Paris.",
@@ -37,7 +37,7 @@ LiteLLM will standardize the `reasoning_content` in the response and `thinking_b
<Tabs>
<TabItem value="sdk" label="SDK">
-```python
+```python showLineNumbers
from litellm import completion
import os
@@ -111,7 +111,7 @@ Here's how to use `thinking` blocks by Anthropic with tool calling.
<Tabs>
<TabItem value="sdk" label="SDK">
-```python
+```python showLineNumbers
litellm._turn_on_debug()
litellm.modify_params = True
model = "anthropic/claude-3-7-sonnet-20250219" # works across Anthropic, Bedrock, Vertex AI
@@ -210,7 +210,7 @@ if tool_calls:
1. Setup config.yaml
-```yaml
+```yaml showLineNumbers
model_list:
  - model_name: claude-3-7-sonnet-thinking
    litellm_params:
@@ -224,7 +224,7 @@ model_list:
2. Run proxy
-```bash
+```bash showLineNumbers
litellm --config config.yaml
# RUNNING on http://0.0.0.0:4000
@@ -332,7 +332,7 @@ curl http://0.0.0.0:4000/v1/chat/completions \
Set `drop_params=True` to drop the 'thinking' blocks when swapping from Anthropic to Deepseek models. Suggest improvements to this approach [here](https://github.com/BerriAI/litellm/discussions/8927).
-```python
+```python showLineNumbers
litellm.drop_params = True # 👈 EITHER GLOBALLY or per request
# or per request
@@ -373,7 +373,7 @@ You can also pass the `thinking` parameter to Anthropic models.
<Tabs>
<TabItem value="sdk" label="SDK">
-```python
+```python showLineNumbers
response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
@@ -395,5 +395,92 @@ curl http://0.0.0.0:4000/v1/chat/completions \
}'
```
</TabItem>
</Tabs>
## Checking if a model supports reasoning
<Tabs>
<TabItem label="LiteLLM Python SDK" value="Python">
Use `litellm.supports_reasoning(model="")` -> returns `True` if the model supports reasoning and `False` if not.
```python showLineNumbers title="litellm.supports_reasoning() usage"
import litellm
# Example models that support reasoning
assert litellm.supports_reasoning(model="anthropic/claude-3-7-sonnet-20250219") == True
assert litellm.supports_reasoning(model="deepseek/deepseek-chat") == True
# Example models that do not support reasoning
assert litellm.supports_reasoning(model="openai/gpt-3.5-turbo") == False
```
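For example, here is a minimal sketch that filters a candidate list down to the models that report reasoning support (the model names are just the examples from above; swap in your own):
```python showLineNumbers title="Filtering models by reasoning support (sketch)"
import litellm

# Candidate models to check -- replace with the models you actually use
candidate_models = [
    "anthropic/claude-3-7-sonnet-20250219",
    "deepseek/deepseek-chat",
    "openai/gpt-3.5-turbo",
]

# Keep only the models LiteLLM reports as supporting reasoning
reasoning_models = [m for m in candidate_models if litellm.supports_reasoning(model=m)]
print(reasoning_models)
```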
</TabItem>
<TabItem label="LiteLLM Proxy Server" value="proxy">
1. Define models that support reasoning in your `config.yaml`. You can optionally add `supports_reasoning: True` to the `model_info` if LiteLLM does not automatically detect it for your custom model.
```yaml showLineNumbers title="litellm proxy config.yaml"
model_list:
  - model_name: claude-3-sonnet-reasoning
    litellm_params:
      model: anthropic/claude-3-7-sonnet-20250219
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: deepseek-reasoning
    litellm_params:
      model: deepseek/deepseek-chat
      api_key: os.environ/DEEPSEEK_API_KEY
  # Example for a custom model where detection might be needed
  - model_name: my-custom-reasoning-model
    litellm_params:
      model: openai/my-custom-model # Assuming it's OpenAI compatible
      api_base: http://localhost:8000
      api_key: fake-key
    model_info:
      supports_reasoning: True # Explicitly mark as supporting reasoning
```
2. Run the proxy server:
```bash showLineNumbers title="litellm --config config.yaml"
litellm --config config.yaml
```
3. Call `/model_group/info` to check if your model supports `reasoning`
```shell showLineNumbers title="curl /model_group/info"
curl -X 'GET' \
  'http://localhost:4000/model_group/info' \
  -H 'accept: application/json' \
  -H 'x-api-key: sk-1234'
```
Expected Response
```json showLineNumbers title="response from /model_group/info"
{
  "data": [
    {
      "model_group": "claude-3-sonnet-reasoning",
      "providers": ["anthropic"],
      "mode": "chat",
      "supports_reasoning": true
    },
    {
      "model_group": "deepseek-reasoning",
      "providers": ["deepseek"],
      "supports_reasoning": true
    },
    {
      "model_group": "my-custom-reasoning-model",
      "providers": ["openai"],
      "supports_reasoning": true
    }
  ]
}
```
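If you want to run this check programmatically rather than with `curl`, here is a rough sketch that queries `/model_group/info` and picks out the reasoning-capable model groups. It assumes the same proxy URL and `sk-1234` key as the example above and uses the `requests` library:
```python showLineNumbers title="Filtering /model_group/info for reasoning support (sketch)"
import requests

# Same endpoint and headers as the curl example above
resp = requests.get(
    "http://localhost:4000/model_group/info",
    headers={"accept": "application/json", "x-api-key": "sk-1234"},
)
resp.raise_for_status()

# Collect the model groups that report supports_reasoning: true
reasoning_groups = [
    group["model_group"]
    for group in resp.json()["data"]
    if group.get("supports_reasoning")
]
print(reasoning_groups)
```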
</TabItem>
</Tabs>