From 8e4a167629395b9d809c32052ddd7eaa6f7d141a Mon Sep 17 00:00:00 2001
From: Krish Dholakia
Date: Mon, 23 Oct 2023 08:38:01 -0700
Subject: [PATCH] Update README.md

---
 README.md | 36 ++----------------------------------
 1 file changed, 2 insertions(+), 34 deletions(-)

diff --git a/README.md b/README.md
index 05ff1bc74..67351eb5e 100644
--- a/README.md
+++ b/README.md
@@ -14,8 +14,8 @@
-
-        1-click OpenAI Proxy
+
+        OpenAI Proxy Server
@@ -85,38 +85,6 @@ for chunk in result:
     print(chunk['choices'][0]['delta'])
 ```
 
-## OpenAI Proxy Server ([Docs](https://docs.litellm.ai/docs/proxy_server))
-Create an OpenAI API compatible server to call any non-openai model (e.g. Huggingface, TogetherAI, Ollama, etc.)
-
-This works for async + streaming as well.
-```python
-litellm --model
-
-#INFO: litellm proxy running on http://0.0.0.0:8000
-```
-Running your model locally or on a custom endpoint ? Set the `--api-base` parameter [see how](https://docs.litellm.ai/docs/proxy_server)
-
-### Self-host server ([Docs](https://docs.litellm.ai/docs/proxy_server#deploy-proxy))
-
-1. Clone the repo
-```shell
-git clone https://github.com/BerriAI/litellm.git
-```
-
-2. Modify `template_secrets.toml`
-```shell
-[keys]
-OPENAI_API_KEY="sk-..."
-
-[general]
-default_model = "gpt-3.5-turbo"
-```
-
-3. Deploy
-```shell
-docker build -t litellm . && docker run -p 8000:8000 litellm
-```
-
 ## Supported Provider ([Docs](https://docs.litellm.ai/docs/providers))
 | Provider | [Completion](https://docs.litellm.ai/docs/#basic-usage) | [Streaming](https://docs.litellm.ai/docs/completion/stream#streaming-responses) | [Async Completion](https://docs.litellm.ai/docs/completion/stream#async-completion) | [Async Streaming](https://docs.litellm.ai/docs/completion/stream#async-streaming) |
 | ------------- | ------------- | ------------- | ------------- | ------------- |
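
The context lines at the top of the second hunk show only the tail of the README's streaming example. For readers of the patch, here is a minimal self-contained sketch of that call; the model name and prompt are illustrative placeholders, not part of the patch.

```python
# Minimal sketch of the streaming completion whose last lines appear as
# context in the hunk above; model name and prompt are illustrative.
from litellm import completion

messages = [{"role": "user", "content": "Hey, how's it going?"}]

# stream=True returns an iterator of incremental response chunks
result = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])
```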
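The removed section describes an OpenAI-API-compatible proxy listening on http://0.0.0.0:8000. As a rough sketch of what "OpenAI API compatible" means in practice, the request below posts a chat completion to that address; the route, port, and model name are assumptions about a locally running proxy, not details taken from this patch.

```python
# Hypothetical client for the proxy described in the removed section.
# Assumes the proxy is listening on http://0.0.0.0:8000 (per the removed
# "#INFO" line) and serves the standard OpenAI chat completions route.
import requests

response = requests.post(
    "http://0.0.0.0:8000/chat/completions",
    json={
        "model": "gpt-3.5-turbo",  # illustrative model name
        "messages": [{"role": "user", "content": "Hey, how's it going?"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```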