(feat) add bedrock:llama2-70b-chat

This commit is contained in:
ishaan-jaff 2023-11-30 14:41:19 -08:00
parent 95634bde76
commit 8c1a9f1c4e
2 changed files with 8 additions and 0 deletions


@@ -146,6 +146,7 @@ Here's an example of using a bedrock model with LiteLLM
| AI21 J2-Mid | `completion(model='ai21.j2-mid-v1', messages=messages)` | `os.environ['AWS_ACCESS_KEY_ID']`, `os.environ['AWS_SECRET_ACCESS_KEY']`, `os.environ['AWS_REGION_NAME']` |
| AI21 J2-Ultra | `completion(model='ai21.j2-ultra-v1', messages=messages)` | `os.environ['AWS_ACCESS_KEY_ID']`, `os.environ['AWS_SECRET_ACCESS_KEY']`, `os.environ['AWS_REGION_NAME']` |
| Meta Llama 2 Chat 13b | `completion(model='meta.llama2-13b-chat-v1', messages=messages)` | `os.environ['AWS_ACCESS_KEY_ID']`, `os.environ['AWS_SECRET_ACCESS_KEY']`, `os.environ['AWS_REGION_NAME']` |
| Meta Llama 2 Chat 70b | `completion(model='meta.llama2-70b-chat-v1', messages=messages)` | `os.environ['AWS_ACCESS_KEY_ID']`, `os.environ['AWS_SECRET_ACCESS_KEY']`, `os.environ['AWS_REGION_NAME']` |
## Bedrock Embedding
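
For context, a minimal usage sketch (not part of this diff) of the newly added model, following the same `completion(...)` pattern shown in the table above; the region value and credential placeholders are assumptions:

```python
# Minimal sketch: call the newly added Bedrock Llama 2 70B Chat model via LiteLLM.
# Assumes the three environment variables from the table are set; values here are placeholders.
import os
from litellm import completion

os.environ["AWS_ACCESS_KEY_ID"] = "<your-access-key-id>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<your-secret-access-key>"
os.environ["AWS_REGION_NAME"] = "us-west-2"  # assumed region

messages = [{"role": "user", "content": "Hello from Llama 2 70B Chat on Bedrock"}]
response = completion(model="meta.llama2-70b-chat-v1", messages=messages)
print(response)
```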


@@ -652,6 +652,13 @@
"litellm_provider": "bedrock",
"mode": "chat"
},
"meta.llama2-70b-chat-v1": {
"max_tokens": 4096,
"input_cost_per_token": 0.00000195,
"output_cost_per_token": 0.00000256,
"litellm_provider": "bedrock",
"mode": "chat"
},
"sagemaker/meta-textgeneration-llama-2-7b": {
"max_tokens": 4096,
"input_cost_per_token": 0.000,