litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

usage

$ pip install litellm
$ litellm --model ollama/codellama 

#INFO: Ollama running on http://0.0.0.0:8000
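
Once the proxy is up, you can sanity-check it with a raw HTTP request from any client. A minimal sketch, assuming the proxy is listening on the default http://0.0.0.0:8000 shown above and serving the OpenAI-style /chat/completions route:

import requests  # pip install requests

# send an OpenAI-style chat completion request straight to the proxy;
# the proxy routes it to whichever model it was started with
resp = requests.post(
    "http://0.0.0.0:8000/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "hello from a raw HTTP client"}],
    },
)
print(resp.json())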

replace openai base

import openai  # openai v1.0.0+

# point the client at the litellm proxy instead of api.openai.com
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# the request is routed to whichever model the proxy was started with (`litellm --model`)
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ]
)

print(response)
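
The same client can stream tokens as they are generated. A sketch under the same assumptions, provided the model behind the proxy supports streaming: pass stream=True and iterate over the chunks.

# stream the reply incrementally instead of waiting for the full response
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
    stream=True,
)
for chunk in stream:
    # each chunk carries an incremental piece of the reply; the final chunk may be empty
    if chunk.choices and chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")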

See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.
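
Switching providers only changes the `litellm --model` argument; the client code above stays the same. For example (the model name and environment variable here are illustrative, check the litellm docs for your provider's exact credentials):

$ export HUGGINGFACE_API_KEY=my-api-key    # illustrative; each provider expects its own credentials
$ litellm --model huggingface/bigcode/starcoder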