
# litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

## usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

#INFO: Ollama running on http://0.0.0.0:8000
```

## replace openai base

```python
import openai

openai.api_base = "http://0.0.0.0:8000"

print(openai.ChatCompletion.create(model="test", messages=[{"role": "user", "content": "Hey!"}]))
```
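
The snippet above targets the pre-1.0 `openai` Python SDK, where `openai.api_base` is a module-level setting. If you're on `openai>=1.0`, the equivalent looks roughly like the sketch below (not from the original docs; it assumes the local proxy accepts any placeholder API key):

```python
# Sketch for openai>=1.0: point the client at the local proxy instead of api.openai.com.
from openai import OpenAI

# base_url replaces the old openai.api_base; the api_key value is a placeholder,
# assuming the local proxy does not validate it.
client = OpenAI(base_url="http://0.0.0.0:8000", api_key="anything")

response = client.chat.completions.create(
    model="test",
    messages=[{"role": "user", "content": "Hey!"}],
)
print(response.choices[0].message.content)
```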

See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.

## configure proxy

To save API keys, change the model prompt, etc., you'll need to create a local instance of the proxy:

```shell
$ litellm --create-proxy
```

This will create a local project called `litellm-proxy` in your current directory, which contains:

* `proxy_cli.py`: Runs the proxy
* `proxy_server.py`: Contains the API calling logic (see the sketch after this list)
    * `/chat/completions`: receives `openai.ChatCompletion.create` calls
    * `/completions`: receives `openai.Completion.create` calls
    * `/models`: receives `openai.Model.list()` calls
* `secrets.toml`: Stores your API keys, model configs, etc.
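
Because these routes mirror the OpenAI REST API, you can also call them directly over HTTP. Here's a minimal sketch using `requests`, assuming the proxy is running on the default `http://0.0.0.0:8000` and does not require authentication:

```python
# Minimal sketch: call the proxy's OpenAI-compatible routes directly over HTTP.
# Assumes the proxy is running locally on port 8000 and does not require auth.
import requests

BASE_URL = "http://0.0.0.0:8000"

# /models - list the models the proxy is serving
print(requests.get(f"{BASE_URL}/models").json())

# /chat/completions - same request body shape as openai.ChatCompletion.create
payload = {
    "model": "test",
    "messages": [{"role": "user", "content": "Hey!"}],
}
print(requests.post(f"{BASE_URL}/chat/completions", json=payload).json())
```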

Run it with:

```shell
$ cd litellm-proxy
$ python proxy_cli.py --model ollama/llama  # replace with your model name
```