docs(openai-proxy-docs): cleanup

Krrish Dholakia 2023-10-21 18:28:30 -07:00
parent 05490b7b75
commit f3c00a0e37


@@ -1,47 +1,45 @@
# litellm-proxy
# openai-proxy
A local, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM APIs.
A simple, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM APIs.
## usage
```shell
$ pip install litellm
$ git clone https://github.com/BerriAI/litellm.git
```
```shell
$ litellm --model ollama/codellama
$ cd ./litellm/openai-proxy
```
#INFO: Ollama running on http://0.0.0.0:8000
```shell
$ uvicorn main:app --host 0.0.0.0 --port 8000
```
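Once uvicorn is up, the server speaks the OpenAI wire format, so you can smoke-test it with a plain HTTP request. A minimal sketch using `requests`, reusing the Cohere model/key combination from the example below (the port matches the uvicorn command above):
```python
import requests

# Assumes the proxy is running locally on port 8000 (see the uvicorn command above)
response = requests.post(
    "http://0.0.0.0:8000/chat/completions",
    # the provider API key travels as a bearer token, just as the openai client sends it
    headers={"Authorization": "Bearer my-cohere-key"},
    json={
        "model": "command-nightly",
        "messages": [{"role": "user", "content": "Hey!"}],
    },
)
print(response.json())
```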
## replace openai base
```python
import openai
openai.api_base = "http://0.0.0.0:8000"
openai.api_key = "anything"  # the openai client errors if no key is set; placeholder for the local proxy
print(openai.ChatCompletion.create(model="test", messages=[{"role":"user", "content":"Hey!"}]))

# call cohere
openai.api_key = "my-cohere-key"  # this gets passed as a header to the provider
response = openai.ChatCompletion.create(model="command-nightly", messages=[{"role":"user", "content":"Hey!"}])

# call bedrock
response = openai.ChatCompletion.create(
    model="bedrock/anthropic.claude-instant-v1",
    messages=[
        {
            "role": "user",
            "content": "Hey!"
        }
    ],
    aws_access_key_id="",
    aws_secret_access_key="",
    aws_region_name="us-west-2",
)
print(response)
```
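litellm itself supports streaming responses; assuming the proxy forwards `stream=True` upstream (an assumption, not something this README documents), the same call can be consumed chunk by chunk:
```python
import openai

openai.api_base = "http://0.0.0.0:8000"
openai.api_key = "my-cohere-key"

# Sketch only: assumes the proxy passes `stream=True` through to the provider
response = openai.ChatCompletion.create(
    model="command-nightly",
    messages=[{"role": "user", "content": "Hey!"}],
    stream=True,
)
for chunk in response:
    print(chunk)  # each chunk is an OpenAI-format delta
```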
[**See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.**](https://docs.litellm.ai/docs/proxy_server)
## configure proxy
To save API keys, change the model prompt, etc., you'll need to create a local instance of the proxy:
```shell
$ litellm --create-proxy
```
This will create a local project called `litellm-proxy` in your current directory, which contains:
* **proxy_cli.py**: Runs the proxy
* **proxy_server.py**: Contains the API-calling logic
  - `/chat/completions`: receives `openai.ChatCompletion.create` calls
  - `/completions`: receives `openai.Completion.create` calls
  - `/models`: receives `openai.Model.list()` calls
* **secrets.toml**: Stores your API keys, model configs, etc.

Run it:
```shell
$ cd litellm-proxy
```
```shell
$ python proxy_cli.py --model ollama/llama # replace with your model name
```
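Once `proxy_cli.py` is running, the routes listed above accept standard OpenAI client calls. A minimal sketch, assuming the local proxy also listens on port 8000 (check the CLI output for the actual address):
```python
import openai

openai.api_base = "http://0.0.0.0:8000"  # address printed by proxy_cli.py (port assumed here)
openai.api_key = "anything"              # placeholder; only needed when calling hosted providers

# exercises the /models route
print(openai.Model.list())

# exercises the /chat/completions route
response = openai.ChatCompletion.create(
    model="ollama/llama",  # the model passed to proxy_cli.py above
    messages=[{"role": "user", "content": "Hey!"}],
)
print(response)
```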