(docs) litellm server deploy on aws

ishaan-jaff 2023-10-25 19:04:37 -07:00
parent 0b0556cfd7
commit 63fc276a2b


@@ -15,7 +15,7 @@ A simple, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM AP
[![Deploy](../img/deploy-to-aws.png)](https://docs.litellm.ai/docs/simple_proxy#deploy-on-aws-apprunner)
:::info
-We want to learn how we can make the proxy better! Meet the [founders](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version) or
+We want to learn how we can make the server better! Meet the [founders](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version) or
join our [discord](https://discord.gg/wuPM9dRgDw)
:::
@@ -26,7 +26,7 @@ join our [discord](https://discord.gg/wuPM9dRgDw)
$ git clone https://github.com/BerriAI/litellm.git
```
```shell
-$ cd ./litellm/openai-proxy
+$ cd ./litellm/litellm_server
```
```shell
@@ -95,10 +95,10 @@ This server allows two ways of passing API keys to litellm
On a successful deploy, your Cloud Run shell will have this output
<Image img={require('../img/cloud_run0.png')} />
-### Testing your deployed proxy
+### Testing your deployed server
**Assuming the required keys are set as Environment Variables**
-https://litellm-7yjrj3ha2q-uc.a.run.app is our example proxy, substitute it with your deployed cloud run app
+https://litellm-7yjrj3ha2q-uc.a.run.app is our example server, substitute it with your deployed Cloud Run app
<Tabs>
<TabItem value="openai" label="OpenAI">
@@ -173,20 +173,57 @@ On a successful deploy https://dashboard.render.com/ should display the followi
## Deploy on AWS Apprunner
1. Fork LiteLLM https://github.com/BerriAI/litellm
2. Navigate to App Runner on the AWS Console: https://console.aws.amazon.com/apprunner/home#/services
3. Follow the steps in the video below
<iframe width="800" height="450" src="https://www.loom.com/embed/5fccced4dde8461a8caeee97addb2231?sid=eac60660-073e-455e-a737-b3d05a5a756a" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
4. Testing your deployed endpoint
**Assuming the required keys are set as Environment Variables**, e.g. `OPENAI_API_KEY`
https://b2w6emmkzp.us-east-1.awsapprunner.com is our example server, substitute it with your deployed App Runner endpoint
<Tabs>
<TabItem value="openai" label="OpenAI">
```shell
curl https://b2w6emmkzp.us-east-1.awsapprunner.com/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "Say this is a test!"}],
"temperature": 0.7
}'
```
</TabItem>
<TabItem value="azure" label="Azure">
```shell
curl https://b2w6emmkzp.us-east-1.awsapprunner.com/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "azure/<your-deployment-name>",
"messages": [{"role": "user", "content": "Say this is a test!"}],
"temperature": 0.7
}'
```
</TabItem>
<TabItem value="anthropic" label="Anthropic">
```shell
curl https://b2w6emmkzp.us-east-1.awsapprunner.com/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "claude-2",
"messages": [{"role": "user", "content": "Say this is a test!"}],
"temperature": 0.7,
}'
```
</TabItem>
</Tabs>
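The curl examples above can also be driven from Python. A minimal sketch using only the standard library — `build_chat_request` is a hypothetical helper (not part of litellm), and the base URL is the docs' placeholder endpoint, so substitute your own deployment before actually sending the request:

```python
import json
import urllib.request

# Placeholder endpoint from the docs -- replace with your deployed server URL.
BASE_URL = "https://b2w6emmkzp.us-east-1.awsapprunner.com"

def build_chat_request(model, messages, temperature=0.7):
    """Build the same POST request the curl examples send."""
    payload = json.dumps({
        "model": model,
        "messages": messages,
        "temperature": temperature,
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request(
    "gpt-3.5-turbo",
    [{"role": "user", "content": "Say this is a test!"}],
)
# urllib.request.urlopen(req) would send it; left out here because the
# example endpoint above is a placeholder.
```

The same request shape works for the Azure (`azure/<your-deployment-name>`) and Anthropic (`claude-2`) model names shown in the tabs above, since the server exposes one OpenAI-compatible route for all providers.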
## Advanced