diff --git a/README.md b/README.md
index 1546558f0..e973a3f69 100644
--- a/README.md
+++ b/README.md
@@ -94,7 +94,7 @@ response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content
 ```

 ## OpenAI Proxy - ([Docs](https://docs.litellm.ai/docs/simple_proxy))
-Use LiteLLM in any OpenAI API compatible project. Call 100+ LLMs Huggingface/Bedrock/TogetherAI/etc in the OpenAI ChatCompletions & Completions format
+**If you don't want to make code changes to add the litellm package to your codebase**, you can use the litellm proxy. It creates a server to call 100+ LLMs (Huggingface/Bedrock/TogetherAI/etc.) in the OpenAI ChatCompletions & Completions format.

 ### Step 1: Start litellm proxy
 ```shell