diff --git a/docs/my-website/docs/simple_proxy.md b/docs/my-website/docs/simple_proxy.md
index bbbf54d0f..3f877f660 100644
--- a/docs/my-website/docs/simple_proxy.md
+++ b/docs/my-website/docs/simple_proxy.md
@@ -185,6 +185,19 @@ $ litellm --model command-nightly
 ## Usage
+#### Replace openai base
+
+```python
+import openai
+
+openai.api_base = "http://0.0.0.0:8000"
+
+print(openai.ChatCompletion.create(model="test", messages=[{"role":"user", "content":"Hey!"}]))
+```
+
+### Using with OpenAI compatible projects
+LiteLLM allows you to set `openai.api_base` to the proxy server and use all LiteLLM-supported LLMs in any OpenAI-compatible project.
+
 This tutorial assumes you're using the `big-refactor` branch of LM Harness https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor
@@ -323,10 +336,6 @@ print(result)
-
-### [TUTORIAL] LM-Evaluation Harness with TGI
-
-
 ## Advanced