update docs

ishaan-jaff 2023-09-09 16:11:11 -07:00
parent 8ed85b0523
commit c548d8ad37
2 changed files with 46 additions and 2 deletions

@@ -0,0 +1,44 @@
# OpenAI Proxy Servers (ChatCompletion)
LiteLLM allows you to call your own OpenAI ChatCompletion proxy server.
### API KEYS
No API keys are required.
### Example Usage
#### Pre-Requisites
Ensure your proxy server exposes the following route:
```python
@app.route('/chat/completions', methods=["POST"])
def chat_completion():
print("got request for chat completion")
```
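For illustration, here is a minimal sketch of such a proxy using Flask. Flask, the route body, and the hard-coded reply are assumptions for this example, not something LiteLLM prescribes; the point is that the route should answer in the OpenAI ChatCompletion format so the client can parse the reply.
```python
# Illustrative Flask sketch of an OpenAI-compatible proxy route (not part of LiteLLM)
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/chat/completions', methods=["POST"])
def chat_completion():
    print("got request for chat completion")
    data = request.get_json()  # OpenAI-style payload: model, messages, temperature, ...

    # Reply in the OpenAI ChatCompletion shape; a real proxy would forward
    # `data` to an actual model instead of returning a hard-coded message.
    return jsonify({
        "id": "chatcmpl-proxy-example",
        "object": "chat.completion",
        "model": data.get("model", "unknown"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": "Hello from the proxy!"},
            "finish_reason": "stop",
        }],
    })

if __name__ == "__main__":
    app.run(port=8080)
```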
To use your custom OpenAI Chat Completion proxy with LiteLLM, set:
* `api_base` to your proxy URL, for example "https://openai-proxy.berriai.repl.co"
* `custom_llm_provider` to `openai`, so that litellm routes the request through `openai.ChatCompletion` to your `api_base`
```python
import os
from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "set it, but it's not used"

messages = [{"content": "Hello, how are you?", "role": "user"}]

response = completion(
    model="command-nightly",
    messages=messages,
    api_base="https://openai-proxy.berriai.repl.co",
    custom_llm_provider="openai",
    temperature=0.2,
    max_tokens=80,
)
print(response)
```
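Because the call goes through the OpenAI ChatCompletion code path, the returned `response` should follow the usual ChatCompletion shape; a quick sketch of reading the generated text back (assuming the proxy populated `choices` as in the sketch above):
```python
# the response mirrors the OpenAI ChatCompletion schema
print(response["choices"][0]["message"]["content"])
```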

@@ -24,7 +24,7 @@ print(response)
In order to use litellm to call a hosted vllm server, add the following to your completion call
* `custom_llm_provider = "openai"`
- * `api_base = "your-hosted-vllm-server/v1"`
+ * `api_base = "your-hosted-vllm-server"`
```python
import litellm
@@ -32,7 +32,7 @@ import litellm
response = completion(
    model="facebook/opt-125m", # pass the vllm model name
    messages=messages,
-   api_base="https://hosted-vllm-api.co/v1",
+   api_base="https://hosted-vllm-api.co",
    custom_llm_provider="openai",
    temperature=0.2,
    max_tokens=80)