## Use LLM API endpoints in Anthropic Interface

Note: This is called `anthropic_interface` because `anthropic` is already a known Python package, and reusing that name was failing mypy type checking.
## Usage

---
### LiteLLM Python SDK

#### Non-streaming example
```python showLineNumbers title="Example using LiteLLM Python SDK"
import litellm

# Async, non-streaming call to Anthropic's /v1/messages via LiteLLM.
# Assumes `api_key` holds your Anthropic API key.
response = await litellm.anthropic.messages.acreate(
    messages=[{"role": "user", "content": "Hello, can you tell me a short joke?"}],
    api_key=api_key,
    model="anthropic/claude-3-haiku-20240307",
    max_tokens=100,
)
```

Example response:
```json
{
  "content": [
    {
      "text": "Hi! this is a very short joke",
      "type": "text"
    }
  ],
  "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
  "model": "claude-3-7-sonnet-20250219",
  "role": "assistant",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "type": "message",
  "usage": {
    "input_tokens": 2095,
    "output_tokens": 503,
    "cache_creation_input_tokens": 2095,
    "cache_read_input_tokens": 0
  }
}
```
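To pull out just the generated text or token counts, index into the response. A minimal sketch, assuming the response is a dict shaped like the Anthropic Messages payload above:

```python
# Minimal sketch: read fields from a response shaped like the example above.
text = response["content"][0]["text"]  # the assistant's reply
usage = response["usage"]              # token accounting
print(text)
print(f"tokens in/out: {usage['input_tokens']}/{usage['output_tokens']}")
```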
#### Streaming example
```python showLineNumbers title="Example using LiteLLM Python SDK"
import litellm

# Same call as above; with stream=True the awaited result is an async
# iterator of incremental chunks rather than a single response.
# Assumes `api_key` holds your Anthropic API key.
response = await litellm.anthropic.messages.acreate(
    messages=[{"role": "user", "content": "Hello, can you tell me a short joke?"}],
    api_key=api_key,
    model="anthropic/claude-3-haiku-20240307",
    max_tokens=100,
    stream=True,
)

async for chunk in response:
    print(chunk)
```
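The chunks mirror Anthropic's server-sent streaming events. A minimal sketch of stitching the reply text back together, assuming each chunk is a dict in that event shape (the exact chunk format is an assumption, not something this readme specifies):

```python
# Minimal sketch, assuming dict chunks shaped like Anthropic streaming
# events; "content_block_delta" events carry the incremental text.
full_text = ""
async for chunk in response:
    if chunk.get("type") == "content_block_delta":
        full_text += chunk["delta"].get("text", "")
print(full_text)
```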
### LiteLLM Proxy Server

1. Setup config.yaml

```yaml
model_list:
  - model_name: anthropic-claude
    litellm_params:
      model: claude-3-7-sonnet-latest
```
2. Start proxy

```bash
litellm --config /path/to/config.yaml
```
3. Test it!

<Tabs>
<TabItem label="Anthropic Python SDK" value="python">
```python showLineNumbers title="Example using LiteLLM Proxy Server"
import anthropic

# Point the Anthropic SDK at the LiteLLM proxy.
client = anthropic.Anthropic(
    base_url="http://0.0.0.0:4000",
    api_key="sk-1234",  # your LiteLLM proxy key
)

response = client.messages.create(
    messages=[{"role": "user", "content": "Hello, can you tell me a short joke?"}],
    model="anthropic-claude",  # the model_name registered in config.yaml
    max_tokens=100,
)
```
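Streaming also works through the proxy. A minimal sketch using the Anthropic SDK's `messages.stream` helper with the same client (the helper is part of the official `anthropic` package; routing it through the LiteLLM proxy is assumed to behave like a direct Anthropic call):

```python
# Minimal sketch: stream a reply through the proxy with the same client.
with client.messages.stream(
    messages=[{"role": "user", "content": "Hello, can you tell me a short joke?"}],
    model="anthropic-claude",  # model_name registered in config.yaml
    max_tokens=100,
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```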
</TabItem>
<TabItem label="curl" value="curl">
```bash showLineNumbers title="Example using LiteLLM Proxy Server"
curl -L -X POST 'http://0.0.0.0:4000/v1/messages' \
-H 'content-type: application/json' \
-H "x-api-key: $LITELLM_API_KEY" \
-H 'anthropic-version: 2023-06-01' \
-d '{
  "model": "anthropic-claude",
  "messages": [
    {
      "role": "user",
      "content": "Hello, can you tell me a short joke?"
    }
  ],
  "max_tokens": 100
}'
```

</TabItem>
</Tabs>