# litellm-proxy

A local, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM APIs.

## usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

#INFO: Ollama running on http://0.0.0.0:8000
```
### replace openai base

```python
import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")  # set proxy to base_url

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem",
        }
    ],
)

print(response)
```
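Because the proxy is OpenAI-compatible, streaming also works through the standard client with no extra setup. A minimal sketch, reusing the `client` from above (the prompt is just an illustration):

```python
# stream the response incrementally instead of waiting for the full completion
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a short poem"}],
    stream=True,
)

for chunk in stream:
    # each chunk carries an incremental delta; guard against empty deltas
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```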
See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.
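For example, serving a different provider is just a different `--model` argument. The commands below are illustrative of LiteLLM's `provider/model` naming; check the docs for the exact model ids available to you:

```shell
$ litellm --model huggingface/bigcode/starcoder
$ litellm --model bedrock/anthropic.claude-v2
$ litellm --model claude-3-5-haiku-20241022
```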
## Folder Structure

### Routes

- `proxy_server.py` - all openai-compatible routes - `/v1/chat/completion`, `/v1/embedding` + model info routes - `/v1/models`, `/v1/model/info`, `/v1/model_group_info` routes.
- `health_endpoints/` - `/health`, `/health/liveliness`, `/health/readiness` routes
- `management_endpoints/key_management_endpoints.py` - all `/key/*` routes
- `management_endpoints/team_endpoints.py` - all `/team/*` routes
- `management_endpoints/internal_user_endpoints.py` - all `/user/*` routes
- `management_endpoints/ui_sso.py` - all `/sso/*` routes
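To sanity-check those routes against a running proxy, here is a small sketch using `requests`. It assumes the proxy from the usage section is listening on port 8000 with no master key configured; if you set one, pass it as a `Bearer` token in the `Authorization` header:

```python
import requests

BASE_URL = "http://0.0.0.0:8000"  # proxy started via `litellm --model ...`

# liveness/readiness probes served by health_endpoints/
print(requests.get(f"{BASE_URL}/health/liveliness").json())
print(requests.get(f"{BASE_URL}/health/readiness").json())

# OpenAI-compatible model listing served by proxy_server.py
print(requests.get(f"{BASE_URL}/v1/models").json())
```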