From c972e4acf28c62df86d887434fc6f73004c82af4 Mon Sep 17 00:00:00 2001
From: ishaan-jaff
Date: Tue, 7 Nov 2023 15:30:29 -0800
Subject: [PATCH] (docs) proxy server

---
 docs/my-website/docs/simple_proxy.md | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/docs/my-website/docs/simple_proxy.md b/docs/my-website/docs/simple_proxy.md
index bbbf54d0f6..3f877f660c 100644
--- a/docs/my-website/docs/simple_proxy.md
+++ b/docs/my-website/docs/simple_proxy.md
@@ -185,6 +185,19 @@ $ litellm --model command-nightly
 
 ## Usage
 
+#### Replace openai base
+
+```python
+import openai
+
+openai.api_base = "http://0.0.0.0:8000"
+
+print(openai.ChatCompletion.create(model="test", messages=[{"role":"user", "content":"Hey!"}]))
+```
+
+### Using with OpenAI compatible projects
+LiteLLM allows you to set `openai.api_base` to the proxy server and use all LiteLLM supported LLMs in any OpenAI-compatible project.
+
 This tutorial assumes you're using the `big-refactor` branch of LM Harness https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor
 
 
@@ -323,10 +336,6 @@ print(result)
 
 
 
-
-### [TUTORIAL] LM-Evaluation Harness with TGI
-
-
 ## Advanced
 