(docs) proxy server

ishaan-jaff 2023-11-07 15:30:29 -08:00
parent adf7539be2
commit c972e4acf2

@@ -185,6 +185,19 @@ $ litellm --model command-nightly
## Usage
#### Replace openai base
```python
import openai

# Point the pre-v1.0 OpenAI SDK at the local LiteLLM proxy
openai.api_base = "http://0.0.0.0:8000"
print(openai.ChatCompletion.create(model="test", messages=[{"role": "user", "content": "Hey!"}]))
```
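
Once `openai.api_base` points at the proxy, you can also sanity-check it over raw HTTP. Below is a minimal sketch using `requests`, assuming the proxy exposes the OpenAI-style `/chat/completions` route:

```python
import requests

# POST an OpenAI-style chat request straight to the proxy
resp = requests.post(
    "http://0.0.0.0:8000/chat/completions",
    json={"model": "test", "messages": [{"role": "user", "content": "Hey!"}]},
)
print(resp.json())
```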
### Using with OpenAI compatible projects
LiteLLM lets you set `openai.api_base` to the proxy server, so you can use all LiteLLM-supported LLMs in any OpenAI-compatible project.
<Tabs>
<TabItem value="lm-harness" label="LM-Harness Evals">
This tutorial assumes you're using the `big-refactor` branch of LM Harness: https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor
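
One hedged way to wire the harness up is to override the OpenAI base URL so its OpenAI-backed model classes talk to the proxy instead. The flag names (`--model openai-completions`, `--model_args`, `--tasks`) and the `engine=test` value below are assumptions about the `big-refactor` CLI, not confirmed by this commit; verify them against your checkout.

```python
# Hedged sketch: run LM Harness against the LiteLLM proxy by overriding
# the OpenAI base URL before invoking the harness CLI.
import os
import subprocess

os.environ["OPENAI_API_BASE"] = "http://0.0.0.0:8000"  # the LiteLLM proxy
os.environ["OPENAI_API_KEY"] = "anything"  # placeholder, assumed unused by the proxy

subprocess.run(
    ["python", "-m", "lm_eval",
     "--model", "openai-completions",
     "--model_args", "engine=test",  # hypothetical engine name
     "--tasks", "hellaswag"],
    check=True,
)
```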
@@ -323,10 +336,6 @@ print(result)
</TabItem>
</Tabs>
### [TUTORIAL] LM-Evaluation Harness with TGI
## Advanced