litellm-mirror/litellm/proxy

| File / folder | Last commit | Date |
| --- | --- | --- |
| _experimental | refactor: move all testing to top-level of repo | 2024-09-28 |
| analytics_endpoints | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| auth | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| common_utils | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| config_management_endpoints | feat(ui): for adding pass-through endpoints | 2024-08-15 |
| db | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| example_config_yaml | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| fine_tuning_endpoints | use native endpoints | 2024-08-03 |
| guardrails | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| health_endpoints | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| hooks | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| management_endpoints | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| management_helpers | (feat proxy slack alerting) - allow opting in to getting key / internal user alerts (#5990) | 2024-10-01 |
| openai_files_endpoints | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| pass_through_endpoints | (fixes) gcs bucket key based logging (#6044) | 2024-10-04 |
| proxy_load_test | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| rerank_endpoints | LiteLLM Minor Fixes & Improvements (09/26/2024) (#5925) (#5937) | 2024-09-27 |
| spend_tracking | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| ui_crud_endpoints | ui - add Create, get, delete endpoints for IP Addresses | 2024-07-09 |
| vertex_ai_endpoints | Litellm Minor Fixes & Improvements (10/03/2024) (#6049) | 2024-10-03 |
| .gitignore | | |
| __init__.py | refactor: add black formatting | 2023-12-25 |
| _logging.py | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 |
| _new_secret_config.yaml | fix(utils.py): return openai streaming prompt caching tokens (#6051) | 2024-10-03 |
| _super_secret_config.yaml | docs(enterprise.md): cleanup docs | 2024-07-15 |
| _types.py | (fixes) gcs bucket key based logging (#6044) | 2024-10-04 |
| cached_logo.jpg | (feat) use hosted images for custom branding | 2024-02-22 |
| caching_routes.py | feat - refactor team endpoints | 2024-06-15 |
| custom_sso.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 |
| enterprise | feat(llama_guard.py): add llama guard support for content moderation + new async_moderation_hook endpoint | 2024-02-17 |
| health_check.py | LiteLLM Minor Fixes and Improvements (09/14/2024) (#5697) | 2024-09-14 |
| lambda.py | | |
| litellm_pre_call_utils.py | LiteLLM Minor Fixes & Improvements (09/27/2024) (#5938) | 2024-09-27 |
| llamaguard_prompt.txt | feat(llama_guard.py): allow user to define custom unsafe content categories | 2024-02-17 |
| logo.jpg | (feat) admin ui custom branding | 2024-02-21 |
| openapi.json | | |
| post_call_rules.py | (docs) add example post call rules to proxy | 2024-01-15 |
| prisma_migration.py | refactor secret managers | 2024-09-03 |
| proxy_cli.py | LiteLLM Minor Fixes & Improvements (10/02/2024) (#6023) | 2024-10-02 |
| proxy_config.yaml | (fixes) gcs bucket key based logging (#6044) | 2024-10-04 |
| proxy_server.py | Litellm Minor Fixes & Improvements (10/03/2024) (#6049) | 2024-10-03 |
| README.md | [Feat-Proxy] Allow using custom sso handler (#5809) | 2024-09-20 |
| route_llm_request.py | OpenAI /v1/realtime api support (#6047) | 2024-10-03 |
| schema.prisma | [Feat UI sso] store 'provider' in user metadata (#5856) | 2024-09-23 |
| start.sh | | |
| utils.py | Litellm Minor Fixes & Improvements (10/03/2024) (#6049) | 2024-10-03 |

litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

usage

$ pip install litellm
$ litellm --model ollama/codellama 

#INFO: Proxy running on http://0.0.0.0:8000
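
The proxy can also be started from a config file instead of a single --model flag; proxy_config.yaml in this folder is an example. A minimal sketch (the filename is illustrative):

$ litellm --config proxy_config.yaml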

replace openai base

import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")  # point the client at the proxy

# request is sent to the model set on the litellm proxy, via `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem",
        }
    ],
)

print(response)
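
Because the proxy is OpenAI-compatible, streaming should work through the same client. A minimal sketch, assuming the proxy from the usage step above is still running on http://0.0.0.0:8000:

import openai

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# stream=True asks the proxy to forward chunks as the model produces them
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
    stream=True,
)

for chunk in stream:
    # each chunk carries a delta; content is None on role/stop chunks
    print(chunk.choices[0].delta.content or "", end="")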

See the docs for how to call Hugging Face, Bedrock, TogetherAI, Anthropic, and more.


Folder Structure

Routes

  • proxy_server.py - all OpenAI-compatible routes - /v1/chat/completions, /v1/embeddings - plus the model info routes - /v1/models, /v1/model/info, /v1/model_group_info.
  • health_endpoints/ - /health, /health/liveliness, /health/readiness
  • management_endpoints/key_management_endpoints.py - all /key/* routes (see the curl sketch after this list)
  • management_endpoints/team_endpoints.py - all /team/* routes
  • management_endpoints/internal_user_endpoints.py - all /user/* routes
  • management_endpoints/ui_sso.py - all /sso/* routes
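
A quick sketch of how the health and key-management routes are called. This assumes the proxy is running on http://0.0.0.0:8000 with a master key configured; sk-1234 is a placeholder for your master key, and the request body fields are illustrative:

$ # liveliness probe
$ curl http://0.0.0.0:8000/health/liveliness

$ # create a virtual key scoped to one model (authenticated with the master key)
$ curl -X POST http://0.0.0.0:8000/key/generate \
    -H "Authorization: Bearer sk-1234" \
    -H "Content-Type: application/json" \
    -d '{"models": ["gpt-3.5-turbo"]}'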