From 0726d8f58c481d441e8d6d9ff225fd26cde84153 Mon Sep 17 00:00:00 2001
From: Ishaan Jaff
Date: Tue, 17 Oct 2023 13:54:41 -0700
Subject: [PATCH] Update README.md

---
 README.md | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 1989f6e8d..8bd621de4 100644
--- a/README.md
+++ b/README.md
@@ -87,15 +87,10 @@ Create an OpenAI API compatible server to call any non-openai model (e.g. Huggin
 This works for async + streaming as well.
 ```python
 litellm --model
-```
-Running your model locally or on a custom endpoint ? Set the `--api-base` parameter [see how](https://docs.litellm.ai/docs/proxy_server)
-
-### Multiple LLMs ([Docs](https://docs.litellm.ai/docs/proxy_server#multiple-llms))
-```shell
-$ litellm
 
 #INFO: litellm proxy running on http://0.0.0.0:8000
 ```
+Running your model locally or on a custom endpoint ? Set the `--api-base` parameter [see how](https://docs.litellm.ai/docs/proxy_server)
 
 ### Self-host server ([Docs](https://docs.litellm.ai/docs/proxy_server#deploy-proxy))
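For context on the line this patch adds: the README directs users running a model locally or on a custom endpoint to the `--api-base` flag. A minimal sketch of such an invocation is below; the model name and endpoint URL are illustrative placeholders, not values taken from the patch.

```shell
# Start the LiteLLM proxy against a self-hosted, OpenAI-compatible endpoint.
# The model identifier and URL are placeholder values for illustration.
litellm --model huggingface/bigcode/starcoder --api-base http://localhost:8080
```

Once running, the proxy listens on http://0.0.0.0:8000 (per the README's #INFO line) and forwards OpenAI-format requests to the endpoint given via `--api-base`.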