docs(deploy.md): add docker instructions to deploy docs

Krrish Dholakia 2023-12-07 09:22:01 -08:00
parent ed07012874
commit c7aaa4adf8
2 changed files with 71 additions and 22 deletions


@@ -1,5 +1,75 @@
# Deploying LiteLLM Proxy
### Deploy on Render https://render.com/
## Quick Start Docker Image: Github Container Registry
### Pull the litellm ghcr docker image
See the latest available ghcr docker image here:
https://github.com/berriai/litellm/pkgs/container/litellm
```shell
docker pull ghcr.io/berriai/litellm:main-v1.10.1
```
### Run the Docker Image
```shell
docker run ghcr.io/berriai/litellm:main-v1.10.1
```
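The proxy runs inside the container, so to reach it from your host you need to publish the port with `-p`. A minimal sketch, assuming the default port `8000` (the same port used in the compose example below):
```shell
# Publish container port 8000 on the host (assumed default port)
docker run -p 8000:8000 ghcr.io/berriai/litellm:main-v1.10.1
```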
#### Run the Docker Image with LiteLLM CLI args
See all supported CLI args [here](https://docs.litellm.ai/docs/proxy/cli).
Here's how you can run the docker image and pass your config to `litellm`:
```shell
docker run ghcr.io/berriai/litellm:main-v1.10.1 --config your_config.yaml
```
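Note that the container can only read `your_config.yaml` if the file exists inside the container. A minimal sketch that mounts the local file first (the `/app/config.yaml` target path is an assumption, mirroring the compose example below):
```shell
# Mount the local config into the container, then point litellm at it
docker run -v $(pwd)/your_config.yaml:/app/config.yaml \
  ghcr.io/berriai/litellm:main-v1.10.1 --config /app/config.yaml
```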
Here's how you can run the docker image and start litellm on port 8002 with `num_workers=8`:
```shell
docker run ghcr.io/berriai/litellm:main-v1.10.1 --port 8002 --num_workers 8
```
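If you change `--port`, the published host mapping should match. A sketch for port 8002:
```shell
docker run -p 8002:8002 ghcr.io/berriai/litellm:main-v1.10.1 --port 8002 --num_workers 8
```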
#### Run the Docker Image using docker compose
**Step 1**
- (Recommended) Use the example file `docker-compose.example.yml` provided in the project root, e.g. https://github.com/BerriAI/litellm/blob/main/docker-compose.example.yml
- Rename `docker-compose.example.yml` to `docker-compose.yml`.
Here's an example `docker-compose.yml` file:
```yaml
version: "3.9"
services:
  litellm:
    image: ghcr.io/berriai/litellm:main
    ports:
      - "8000:8000" # Map the container port to the host; change the host port if necessary
    volumes:
      - ./litellm-config.yaml:/app/config.yaml # Mount the local configuration file
    # You can change the port or number of workers as per your requirements or pass any other supported CLI argument. Make sure the port passed here matches the container port defined above under `ports`.
    command: [ "--config", "/app/config.yaml", "--port", "8000", "--num_workers", "8" ]

# ...rest of your docker-compose config, if any
```
**Step 2**
Create a `litellm-config.yaml` file with your LiteLLM config, placed relative to your `docker-compose.yml` file (the volume mount above assumes it sits next to `docker-compose.yml`).
See the config docs [here](https://docs.litellm.ai/docs/proxy/configs).
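As a minimal sketch of what that file can look like (the model entry and environment-variable key reference here are illustrative; see the config docs above for the full schema):
```yaml
# litellm-config.yaml - minimal illustrative example
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY # read the key from the environment
```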
**Step 3**
Run `docker-compose up` or `docker compose up`, depending on your docker installation.
> Use the `-d` flag to run the container in detached mode (in the background), e.g. `docker compose up -d`

Your LiteLLM container should now be running on the defined port, e.g. `8000`.
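To sanity-check the running proxy, you can send it an OpenAI-style request, e.g. with `curl` (assuming a `gpt-3.5-turbo` entry in your config):
```shell
curl http://localhost:8000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```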
## Deploy on Render https://render.com/
<iframe width="840" height="500" src="https://www.loom.com/embed/805964b3c8384b41be180a61442389a3" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>


@@ -97,27 +97,6 @@ class PrismaClient:
        await self.db.disconnect()
# ### CUSTOM FILE ###
# def get_instance_fn(value: str, config_file_path: Optional[str]=None):
#     try:
#         # Split the path by dots to separate module from instance
#         parts = value.split(".")
#         # The module path is all but the last part, and the instance is the last part
#         module_path = ".".join(parts[:-1])
#         instance_name = parts[-1]
#         if config_file_path is not None:
#             directory = os.path.dirname(config_file_path)
#             module_path = os.path.join(directory, module_path)
#         # Dynamically import the module
#         module = importlib.import_module(module_path)
#         # Get the instance from the module
#         instance = getattr(module, instance_name)
#         return instance
#     except ImportError as e:
#         print(e)
#         raise ImportError(f"Could not import file at {value}")
def get_instance_fn(value: str, config_file_path: Optional[str] = None) -> Any:
    try: