# litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

## usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

#INFO: Ollama running on http://0.0.0.0:8000
```
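Once the server is up, you can send it a sample request from a second shell. A quick sketch, assuming the proxy is running on the default `http://0.0.0.0:8000`; the CLI's `--test` flag sends a test chat completion request to the local proxy:

```shell
# in a new shell: sends a test chat.completions request to the running proxy
$ litellm --test
```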

## replace openai base

```python
import openai  # openai v1.0.0+

# point the OpenAI client at the litellm proxy via base_url
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# request is sent to the model set on the litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])

print(response)
```
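Because the proxy exposes the OpenAI-format `/chat/completions` route, a plain `curl` works as well. A minimal sketch, assuming the proxy from the step above is still running on `http://0.0.0.0:8000`:

```shell
$ curl http://0.0.0.0:8000/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "gpt-3.5-turbo",
      "messages": [{"role": "user", "content": "this is a test request, write a short poem"}]
    }'
```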

See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.
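The same `--model` flag switches the backend: pass a provider-prefixed model name when starting the proxy. A hedged sketch (model identifiers are illustrative, and each provider expects its own credentials in the environment):

```shell
# Hugging Face (illustrative model name, needs HUGGINGFACE_API_KEY)
$ litellm --model huggingface/bigcode/starcoder

# AWS Bedrock (needs AWS credentials in the environment)
$ litellm --model bedrock/anthropic.claude-v2

# TogetherAI (needs TOGETHERAI_API_KEY)
$ litellm --model together_ai/togethercomputer/llama-2-70b-chat

# Anthropic (needs ANTHROPIC_API_KEY)
$ litellm --model claude-instant-1
```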