# litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

## usage

```shell
$ pip install litellm
```

```shell
$ litellm --model ollama/codellama

#INFO: Ollama running on http://0.0.0.0:8000
```
## replace openai base

```python
import openai  # openai v1.0.0+

# point the client at the LiteLLM proxy instead of api.openai.com
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")  # set proxy to base_url

# request is sent to the model set on the litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
)

print(response)
```
See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.
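If the proxy points at a different provider, the client code above stays the same; only the `--model` string (and the provider's credentials) change. A minimal sketch, assuming the usual `provider/model` naming and that the relevant API keys are exported as environment variables (the exact model names and variable names below are illustrative, not taken from this README):

```shell
# illustrative model strings -- check the provider docs for the exact names and required keys
$ export ANTHROPIC_API_KEY=sk-ant-...            # assumed env var for Anthropic
$ litellm --model anthropic/claude-3-haiku-20240307

$ export HUGGINGFACE_API_KEY=hf_...              # assumed env var for Hugging Face
$ litellm --model huggingface/bigcode/starcoder
```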
## Folder Structure

Routes

- `proxy_server.py` - all openai-compatible routes - `/v1/chat/completions`, `/v1/embeddings` + model info routes - `/v1/models`, `/v1/model/info`, `/v1/model_group_info` routes.
- `health_endpoints/` - `/health`, `/health/liveliness`, `/health/readiness` routes
- `management_endpoints/key_management_endpoints.py` - all `/key/*` routes
- `management_endpoints/team_endpoints.py` - all `/team/*` routes
- `management_endpoints/internal_user_endpoints.py` - all `/user/*` routes
- `management_endpoints/ui_sso.py` - all `/sso/*` routes
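A quick way to see these route groups in action is to hit them directly on a running proxy. A minimal sketch, assuming the proxy from the usage section is listening on `http://0.0.0.0:8000` and no master key is configured (the `/key/*`, `/team/*`, `/user/*`, and `/sso/*` management routes generally require authentication, so they are omitted here):

```shell
# health probes served by health_endpoints/
$ curl http://0.0.0.0:8000/health/liveliness
$ curl http://0.0.0.0:8000/health/readiness

# model info routes served by proxy_server.py
$ curl http://0.0.0.0:8000/v1/models
$ curl http://0.0.0.0:8000/v1/model/info
```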