Adds tests and updates docs for Claude "pre-fill"

spdustin@gmail.com 2024-01-05 22:58:41 +00:00
parent 53e5e1df07
commit 5d074f5b56
3 changed files with 171 additions and 28 deletions

# Anthropic
LiteLLM supports
- `claude-2.1`
- `claude-2`
- `claude-instant-1`
- `claude-instant-1.2`
## API Keys
```python
import os
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"
```
## Usage
```python
import os
from litellm import completion
# set env - [OPTIONAL] replace with your anthropic key
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"
messages = [{"role": "user", "content": "Hey! how's it going?"}]
response = completion(model="claude-instant-1", messages=messages)
print(response)
```
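
LiteLLM returns the response in the OpenAI chat-completion format, so (as a minimal sketch, assuming that standard response shape) the reply text can be pulled out directly:

```python
# Grab just the assistant's reply text from the OpenAI-format response
print(response.choices[0].message.content)
```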
## Usage - "Assistant Pre-fill"
You can "put words in Claude's mouth" by including an `assistant` role message as the last item in the `messages` array.
> [!IMPORTANT]
> The returned completion will _not_ include your "pre-fill" text, since it is part of the prompt itself. Make sure to prefix Claude's completion with your pre-fill.
```python
import os
from litellm import completion
# set env - [OPTIONAL] replace with your anthropic key
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"
messages = [
{"role": "user", "content": "How do you say 'Hello' in German? Return your answer as a JSON object, like this:\n\n{ \"Hello\": \"Hallo\"}"},
{"role": "assistant", "content": "{"},
]
response = completion(model="claude-2.1", messages=messages)
print(response)
```
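
Because the pre-fill text is consumed as part of the prompt, reconstructing the full JSON means gluing it back on yourself. A minimal sketch, assuming the OpenAI-format response shape:

```python
# The leading "{" pre-fill is not echoed back, so prepend it to the completion
full_json = "{" + response.choices[0].message.content
print(full_json)  # should resemble '{ "Hello": "Hallo"}'
```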
## Streaming
Just set `stream=True` when calling completion.
```python
import os
from litellm import completion
# set env
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"
messages = [{"role": "user", "content": "Hey! how's it going?"}]
response = completion(model="claude-instant-1", messages=messages, stream=True)


# AWS Bedrock
```python
import os
from litellm import completion
os.environ["AWS_ACCESS_KEY_ID"] = ""
os.environ["AWS_SECRET_ACCESS_KEY"] = ""
os.environ["AWS_REGION_NAME"] = ""
response = completion(
model="anthropic.claude-instant-v1",
model="anthropic.claude-instant-v1",
messages=[{ "content": "Hello, how are you?","role": "user"}]
)
```
## Usage - "Assistant Pre-fill"
If you're using Anthropic's Claude with Bedrock, you can "put words in Claude's mouth" by including an `assistant` role message as the last item in the `messages` array.
> [!IMPORTANT]
> The returned completion will _**not**_ include your "pre-fill" text, since it is part of the prompt itself. Make sure to prefix Claude's completion with your pre-fill.
```python
import os
from litellm import completion
os.environ["AWS_ACCESS_KEY_ID"] = ""
os.environ["AWS_SECRET_ACCESS_KEY"] = ""
os.environ["AWS_REGION_NAME"] = ""
messages = [
{"role": "user", "content": "How do you say 'Hello' in German? Return your answer as a JSON object, like this:\n\n{ \"Hello\": \"Hallo\"}"},
{"role": "assistant", "content": "{"},
]
response = completion(model="anthropic.claude-v2", messages=messages)
```
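
As with the direct Anthropic example above, the pre-fill has to be re-attached before the JSON can be parsed. A minimal sketch, assuming the OpenAI-format response shape:

```python
import json

# Prepend the "{" pre-fill, then parse the reconstructed JSON answer
answer = json.loads("{" + response.choices[0].message.content)
print(answer)
```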
## Usage - "System" messages
If you're using Anthropic's Claude 2.1 with Bedrock, `system` role messages are properly formatted for you.
```python
import os
from litellm import completion
os.environ["AWS_ACCESS_KEY_ID"] = ""
os.environ["AWS_SECRET_ACCESS_KEY"] = ""
os.environ["AWS_REGION_NAME"] = ""
messages = [
{"role": "system", "content": "You are a snarky assistant."},
{"role": "user", "content": "How do I boil water?"},
]
response = completion(model="anthropic.claude-v2:1", messages=messages)
```
### Example prompt sent to Claude
```
You are a snarky assistant.
Human: How do I boil water?
Assistant:
```
## Usage - Streaming
```python
import os
from litellm import completion
os.environ["AWS_ACCESS_KEY_ID"] = ""
os.environ["AWS_SECRET_ACCESS_KEY"] = ""
os.environ["AWS_REGION_NAME"] = ""
response = completion(
model="anthropic.claude-instant-v1",
model="anthropic.claude-instant-v1",
messages=[{ "content": "Hello, how are you?","role": "user"}],
stream=True
)
for chunk in response:
    print(chunk)
```
### Passing credentials as parameters - Completion()
Pass AWS credentials as parameters to litellm.completion
```python
import os
from litellm import completion
response = completion(
model="anthropic.claude-instant-v1",
model="anthropic.claude-instant-v1",
messages=[{ "content": "Hello, how are you?","role": "user"}],
  aws_access_key_id="",
  aws_secret_access_key="",
  aws_region_name="",
)
```
## Supported AWS Bedrock Models
Here's an example of using a bedrock model with LiteLLM
| Model Name | Command |
|--------------------------|------------------------------------------------------------------|