# litellm/litellm/proxy

Latest commit: LiteLLM minor fixes + improvements (31/08/2024) (#5464), by Krish Dholakia (e0d81434ed), 2024-09-01 13:31:42 -07:00

* fix(vertex_endpoints.py): fix vertex ai pass through endpoints
* test(test_streaming.py): skip model due to end of life
* feat(custom_logger.py): add special callback for model hitting tpm/rpm limits

Closes https://github.com/BerriAI/litellm/issues/4096
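The rate-limit callback mentioned above plugs into litellm's `CustomLogger` plugin interface. As a minimal sketch (not the #5464 implementation: it reuses the long-standing `async_log_failure_event` hook, and `RateLimitAlertLogger` is a hypothetical name), a handler that reacts when a model hits its tpm/rpm limit could look like:

```python
# Minimal sketch: reuses the existing async_log_failure_event hook;
# RateLimitAlertLogger is a hypothetical name, not the #5464 code.
import litellm
from litellm.integrations.custom_logger import CustomLogger


class RateLimitAlertLogger(CustomLogger):
    async def async_log_failure_event(self, kwargs, response_obj, start_time, end_time):
        # litellm passes the raised exception through the kwargs payload
        exception = kwargs.get("exception")
        if isinstance(exception, litellm.RateLimitError):
            model = kwargs.get("model", "unknown")
            print(f"model {model} hit a tpm/rpm limit: {exception}")


# register the handler; litellm invokes it whenever a call fails
litellm.callbacks = [RateLimitAlertLogger()]
```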
| Directory | Last commit | Last updated |
| --- | --- | --- |
| `_experimental` | feat(team_endpoints.py): return team member budgets in /team/info call | 2024-08-28 19:14:01 -07:00 |
| `analytics_endpoints` | show correct key aliases on ui | 2024-06-21 14:36:38 -07:00 |
| `auth` | allow pass through routes as LLM API routes | 2024-08-30 16:08:44 -07:00 |
| `common_utils` | add gcs bucket base | 2024-08-30 10:41:39 -07:00 |
| `config_management_endpoints` | feat(ui): for adding pass-through endpoints | 2024-08-15 21:58:11 -07:00 |
| `db` | (feat) stop eagerly evaluating fstring | 2024-03-25 09:01:42 -07:00 |
| `example_config_yaml` | call spend logs endpoint | 2024-08-30 16:35:07 -07:00 |
| `fine_tuning_endpoints` | use native endpoints | 2024-08-03 16:52:43 -07:00 |
| `guardrails` | Merge pull request #5392 from BerriAI/litellm_add_native_cohere_rerank | 2024-08-27 17:29:37 -07:00 |
| `health_endpoints` | fix email health checks | 2024-08-06 15:59:26 -07:00 |
| `hooks` | v0 add rerank on litellm proxy | 2024-08-27 17:28:39 -07:00 |
| `management_endpoints` | add set / update tags for a team | 2024-08-29 13:05:00 -07:00 |
| `management_helpers` | feat(team_endpoints.py): return team member budgets in /team/info call | 2024-08-28 19:14:01 -07:00 |
| `openai_files_endpoints` | fix(files_endpoints.py): fix multiple args error | 2024-08-22 16:42:44 -07:00 |
| `out` | bump: version 1.43.15 → 1.43.16 | 2024-08-15 23:04:30 -07:00 |
| `pass_through_endpoints` | use helper class for pass through success handler | 2024-08-30 15:52:47 -07:00 |
| `proxy_load_test` | (fix) locust load test use uuid | 2024-03-25 15:36:30 -07:00 |
| `queue` | docs(scheduler.md): add request prioritization to docs | 2024-05-31 19:35:47 -07:00 |
| `rerank_endpoints` | v0 add rerank on litellm proxy | 2024-08-27 17:28:39 -07:00 |
| `secret_managers` | fix(aws_secret_manager.py): fix litellm license check | 2024-07-03 22:07:48 -07:00 |
| `spend_tracking` | feat(proxy_server.py): support disabling storing master key hash in db, for spend tracking | 2024-08-21 12:35:37 -07:00 |
| `tests` | vertex forward all headers from vertex | 2024-08-30 11:05:23 -07:00 |
| `ui_crud_endpoints` | ui - add Create, get, delete endpoints for IP Addresses | 2024-07-09 15:12:08 -07:00 |
| `vertex_ai_endpoints` | LiteLLM minor fixes + improvements (31/08/2024) (#5464) | 2024-09-01 13:31:42 -07:00 |
| File | Last commit | Last updated |
| --- | --- | --- |
| `.gitignore` | | |
| `__init__.py` | | |
| `_logging.py` | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 15:20:21 -07:00 |
| `_new_secret_config.yaml` | LiteLLM minor fixes + improvements (31/08/2024) (#5464) | 2024-09-01 13:31:42 -07:00 |
| `_super_secret_config.yaml` | docs(enterprise.md): cleanup docs | 2024-07-15 14:52:08 -07:00 |
| `_types.py` | allow settings tags per team | 2024-08-29 13:03:49 -07:00 |
| `admin_ui.py` | | |
| `cached_logo.jpg` | (feat) use hosted images for custom branding | 2024-02-22 14:51:40 -08:00 |
| `caching_routes.py` | feat - refactor team endpoints | 2024-06-15 11:40:36 -07:00 |
| `custom_callbacks.py` | (feat) fix custom handler bug | 2024-02-28 14:48:55 -08:00 |
| `custom_callbacks1.py` | v0 add rerank on litellm proxy | 2024-08-27 17:28:39 -07:00 |
| `custom_guardrail.py` | v0 add rerank on litellm proxy | 2024-08-27 17:28:39 -07:00 |
| `custom_handler.py` | feat(proxy_server.py): support custom llm handler on proxy | 2024-07-25 19:35:52 -07:00 |
| `enterprise` | feat(llama_guard.py): add llama guard support for content moderation + new async_moderation_hook endpoint | 2024-02-17 19:13:04 -08:00 |
| `health_check.py` | fix(health_check.py): return 'missing mode' error message, if error with health check, and mode is missing | 2024-08-16 17:24:29 -07:00 |
| `lambda.py` | | |
| `litellm_pre_call_utils.py` | fix team based tag routing | 2024-08-29 14:37:44 -07:00 |
| `llamaguard_prompt.txt` | feat(llama_guard.py): allow user to define custom unsafe content categories | 2024-02-17 17:42:47 -08:00 |
| `logo.jpg` | (feat) admin ui custom branding | 2024-02-21 17:34:42 -08:00 |
| `openapi.json` | | |
| `otel_config.yaml` | | |
| `post_call_rules.py` | | |
| `prisma_migration.py` | fix entrypoint | 2024-08-26 20:32:23 -07:00 |
| `proxy_cli.py` | feat(proxy_server.py): support azure batch api endpoints | 2024-08-22 15:21:43 -07:00 |
| `proxy_config.yaml` | Merge pull request #5463 from BerriAI/litellm_track_error_per_model | 2024-08-31 16:36:04 -07:00 |
| `proxy_server.py` | Merge pull request #5457 from BerriAI/litellm_track_spend_logs_for_vertex_pass_through_endpoints | 2024-08-31 16:30:15 -07:00 |
| `README.md` | | |
| `route_llm_request.py` | v0 add rerank on litellm proxy | 2024-08-27 17:28:39 -07:00 |
| `schema.prisma` | fix created_at and updated_at not existing error | 2024-08-26 21:04:39 -07:00 |
| `start.sh` | | |
| `utils.py` | fix(proxy/utils.py): fix model dump to exclude none values | 2024-08-28 12:02:44 -07:00 |

# litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

## usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

# INFO: Ollama running on http://0.0.0.0:8000
```
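Once the proxy is up, you can sanity-check it before wiring in clients. A minimal sketch, assuming litellm's `/health/liveliness` probe is available on your version (the `requests` dependency is this example's assumption, not part of the proxy):

```python
import requests

# hit the proxy's liveliness probe; /health/liveliness is assumed
# to be exposed by this litellm version
resp = requests.get("http://0.0.0.0:8000/health/liveliness")
print(resp.status_code, resp.text)
```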

replace openai base

import openai # openai v1.0.0+
client = openai.OpenAI(api_key="anything",base_url="http://0.0.0.0:8000") # set proxy to base_url
# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])

print(response)
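Streaming goes through the same endpoint. The snippet below is the standard OpenAI v1 streaming pattern, reusing the `client` configured above:

```python
# stream the response token-by-token through the proxy
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta is not None:
        print(delta, end="", flush=True)
```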

See the litellm docs for how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.