(docs) use port 4000

This commit is contained in:
ishaan-jaff 2024-03-08 21:59:00 -08:00
parent 22e9d1073f
commit ea6f42216c
33 changed files with 179 additions and 179 deletions


@@ -143,13 +143,13 @@ pip install 'litellm[proxy]'
```shell
$ litellm --model huggingface/bigcode/starcoder
-#INFO: Proxy running on http://0.0.0.0:8000
+#INFO: Proxy running on http://0.0.0.0:4000
```
### Step 2: Make ChatCompletions Request to Proxy
```python
import openai # openai v1.0.0+
-client = openai.OpenAI(api_key="anything",base_url="http://0.0.0.0:8000") # set proxy to base_url
+client = openai.OpenAI(api_key="anything",base_url="http://0.0.0.0:4000") # set proxy to base_url
# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
{
@@ -170,7 +170,7 @@ Set budgets and rate limits across multiple projects
### Request
```shell
-curl 'http://0.0.0.0:8000/key/generate' \
+curl 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m","metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}}'
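For context on the updated port, a minimal sketch of how the key returned by the `/key/generate` request above would be used against the proxy on port 4000. The `"key"` field name in the response JSON and the placeholder key value are assumptions for illustration, not taken from this commit:

```python
# Hypothetical sketch: consuming a /key/generate response and building an
# authenticated request for the proxy now running on port 4000.
import json

# Placeholder response body; a real call returns a generated key, assumed
# here to live under a "key" field.
generate_response = '{"key": "sk-generated-example"}'
key = json.loads(generate_response)["key"]

headers = {
    "Authorization": f"Bearer {key}",
    "Content-Type": "application/json",
}
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hello"}],
}

# A client would POST `payload` with `headers` to
# http://0.0.0.0:4000/chat/completions (the port this commit updates).
print(headers["Authorization"])
```

This only illustrates where the generated key and the new port fit together; no request is actually sent.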