mirror of https://github.com/BerriAI/litellm.git
synced 2025-04-24 18:24:20 +00:00

Update README.md

parent c6c6d4396a
commit 5230a683e0

1 changed file with 15 additions and 1 deletion
@@ -35,7 +35,7 @@ curl http://0.0.0.0:8000/v1/chat/completions \
 - `/router/completions` - for multiple deployments of the same model (e.g. Azure OpenAI), uses the least used deployment. [Learn more](https://docs.litellm.ai/docs/routing)
 - `/models` - available models on server
 
-### Running Locally
+## Running Locally
 ```shell
 $ git clone https://github.com/BerriAI/litellm.git
 ```
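The "least used deployment" behavior described for `/router/completions` in the hunk above can be sketched as follows. This is a minimal illustration of the idea, not litellm's actual `Router` implementation (which also handles cooldowns and rate limits); the class and deployment names are hypothetical.

```python
from collections import Counter

class LeastUsedRouter:
    """Toy router: always picks the deployment that has served the fewest requests."""

    def __init__(self, deployments):
        # Track how many requests each deployment has handled so far.
        self.usage = Counter({d: 0 for d in deployments})

    def pick(self):
        # min() over the usage table; ties go to the first-registered deployment.
        deployment = min(self.usage, key=self.usage.get)
        self.usage[deployment] += 1
        return deployment

router = LeastUsedRouter(["azure/gpt-35-eu", "azure/gpt-35-us"])
picks = [router.pick() for _ in range(4)]
```

With two equally loaded deployments, successive picks alternate between them, spreading traffic evenly.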
@@ -46,3 +46,17 @@ $ cd ./litellm/litellm_server
 ```shell
 $ uvicorn main:app --host 0.0.0.0 --port 8000
 ```
+
+### Custom Config
+1. Create + Modify router_config.yaml (save your azure/openai/etc. deployment info)
+```shell
+cp ./router_config_template.yaml ./router_config.yaml
+```
+2. Build Docker Image
+```shell
+docker build -t litellm_server . --build-arg CONFIG_FILE=./router_config.yaml
+```
+3. Run Docker Image
+```shell
+docker run --name litellm-proxy -e PORT=8000 -p 8000:8000 litellm_server
+```
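Once the container from step 3 is up, the server speaks the OpenAI-style chat-completions format shown in the curl example at the top of the first hunk. A request body for it can be built client-side like this; the helper and model name are illustrative only, a sketch assuming the `/v1/chat/completions` shape from the README.

```python
import json

def build_chat_request(model, user_message):
    # OpenAI-style payload: a model name plus a list of role/content messages,
    # matching the curl example against /v1/chat/completions above.
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })

body = build_chat_request("gpt-3.5-turbo", "Hello!")
```

The resulting JSON string is what would be POSTed to `http://0.0.0.0:8000/v1/chat/completions` on the running container.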