# litellm-proxy

A local, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM APIs.
## Usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

#INFO: Ollama running on http://0.0.0.0:8000
```
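The same CLI can front hosted providers, not just local models. A minimal sketch, assuming a valid OpenAI API key is exported in your shell (the key and model name here are illustrative; other supported providers follow the same pattern):

```shell
# assumption: OPENAI_API_KEY holds a real key; swap in another provider's key/model as needed
$ export OPENAI_API_KEY=sk-...
$ litellm --model gpt-3.5-turbo
```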
## Replace the OpenAI base URL
```python
import openai  # openai v1.0.0+

# point the client at the proxy by overriding base_url;
# the api_key can be anything, since the proxy holds the real credentials
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# the request is routed to whichever model the proxy was started with (`litellm --model`)
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem",
        }
    ],
)
print(response)
```
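Because the server speaks the OpenAI wire format, any OpenAI-compatible client can talk to it. For example, the same request over plain `curl`, assuming the proxy is listening on its default `0.0.0.0:8000`:

```shell
curl http://0.0.0.0:8000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "this is a test request, write a short poem"}]
  }'
```

Streaming works through the same endpoint; add `"stream": true` to the request body to receive the response as server-sent event chunks, following the OpenAI streaming convention.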
See how to call Hugging Face, Bedrock, TogetherAI, Anthropic, and more in the LiteLLM docs: https://docs.litellm.ai