# litellm-mirror/litellm/proxy
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| _experimental | fix(anthropic.py): fix anthropic tool calling + streaming | 2024-07-04 16:30:24 -07:00 |
| analytics_endpoints | show correct key aliases on ui | 2024-06-21 14:36:38 -07:00 |
| auth | fix(litellm_license.py): add better error logs | 2024-07-04 21:07:10 -07:00 |
| common_utils | feat - allow looking up model_id in model info | 2024-07-04 13:13:42 -07:00 |
| db | (feat) stop eagerly evaluating fstring | 2024-03-25 09:01:42 -07:00 |
| example_config_yaml | feat(bedrock_httpx.py): moves to using httpx client for bedrock cohere calls | 2024-05-11 13:43:08 -07:00 |
| guardrails | fix lakera ai testing | 2024-07-03 18:58:36 -07:00 |
| health_endpoints | add options for /health/readiness and liveliness | 2024-06-19 12:13:35 -07:00 |
| hooks | fix(dynamic_rate_limiter.py): add rpm allocation, priority + quota reservation to docs | 2024-07-01 23:35:42 -07:00 |
| management_endpoints | feat(internal_user_endpoints.py): expose /user/delete endpoint | 2024-07-04 17:01:16 -07:00 |
| management_helpers | feat - refactor team endpoints | 2024-06-15 11:40:36 -07:00 |
| pass_through_endpoints | feat - setting up auth on pass through endpoint | 2024-06-29 08:38:44 -07:00 |
| proxy_load_test | (fix) locust load test use uuid | 2024-03-25 15:36:30 -07:00 |
| queue | docs(scheduler.md): add request prioritization to docs | 2024-05-31 19:35:47 -07:00 |
| secret_managers | fix(aws_secret_manager.py): fix litellm license check | 2024-07-03 22:07:48 -07:00 |
| spend_tracking | add better debugging on /spend/report | 2024-06-29 18:01:25 -07:00 |
| tests | test - pass through langfuse requests | 2024-06-28 17:28:21 -07:00 |
| .gitignore | fix(gitmodules): remapping to new proxy | 2023-10-12 21:23:53 -07:00 |
| __init__.py | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| _logging.py | fix(_logging.py): fix timestamp format for json logs | 2024-06-20 15:20:21 -07:00 |
| _new_secret_config.yaml | fix(proxy_server.py): support langfuse logging for rejected requests on /v1/chat/completions | 2024-07-05 13:07:09 -07:00 |
| _super_secret_config.yaml | fix(anthropic.py): fix anthropic tool calling + streaming | 2024-07-04 16:30:24 -07:00 |
| _types.py | Merge pull request #4386 from BerriAI/litellm_user_delete_endpoint | 2024-07-04 16:38:31 -07:00 |
| admin_ui.py | (feat) use cli args to start streamlit | 2024-01-23 15:58:14 -08:00 |
| cached_logo.jpg | (feat) use hosted images for custom branding | 2024-02-22 14:51:40 -08:00 |
| caching_routes.py | feat - refactor team endpoints | 2024-06-15 11:40:36 -07:00 |
| custom_callbacks.py | (feat) fix custom handler bug | 2024-02-28 14:48:55 -08:00 |
| custom_callbacks1.py | feat - async_post_call_streaming_hook | 2024-05-23 09:30:53 -07:00 |
| enterprise | feat(llama_guard.py): add llama guard support for content moderation + new async_moderation_hook endpoint | 2024-02-17 19:13:04 -08:00 |
| health_check.py | test - /health endpoints | 2024-04-13 10:09:18 -07:00 |
| lambda.py | Add mangum. | 2023-11-23 00:04:47 -05:00 |
| litellm_pre_call_utils.py | Merge branch 'main' into litellm_fix_in_mem_usage | 2024-06-27 21:12:06 -07:00 |
| llamaguard_prompt.txt | feat(llama_guard.py): allow user to define custom unsafe content categories | 2024-02-17 17:42:47 -08:00 |
| logo.jpg | (feat) admin ui custom branding | 2024-02-21 17:34:42 -08:00 |
| openapi.json | (feat) add swagger.json for litellm proxy | 2023-10-13 20:41:04 -07:00 |
| otel_config.yaml | (feat) proxy: otel logging | 2023-12-01 21:04:08 -08:00 |
| post_call_rules.py | (docs) add example post call rules to proxy | 2024-01-15 20:58:50 -08:00 |
| prisma_migration.py | fix(prisma_migration.py): support decrypting variables in a python script | 2024-06-28 16:31:37 -07:00 |
| proxy_cli.py | fix(proxy_cli.py): run aws kms decrypt before starting proxy server | 2024-06-28 16:03:56 -07:00 |
| proxy_config.yaml | init guardrails on proxy | 2024-07-03 14:18:12 -07:00 |
| proxy_server.py | fix(proxy_server.py): fix callback check order | 2024-07-05 14:06:33 -07:00 |
| README.md | (docs) update readme proxy server | 2023-11-17 17:40:44 -08:00 |
| schema.prisma | Merge pull request #4084 from BerriAI/litellm_batch_add_team_members | 2024-06-10 20:26:35 -07:00 |
| start.sh | fix(factory.py): fixing llama-2 non-chat models prompt templating | 2023-11-07 21:33:54 -08:00 |
| utils.py | fix(utils.py): log failure to sync failure callbacks as well | 2024-07-05 14:49:34 -07:00 |

# litellm-proxy

A local, fast, and lightweight **OpenAI-compatible server** to call 100+ LLM APIs.

## usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

#INFO: Ollama running on http://0.0.0.0:8000
```
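
Once the proxy is running, you can check that it is up; per the `health_endpoints` entry in the directory listing above, the server exposes `/health/readiness` and `/health/liveliness` routes. A minimal sketch, assuming the proxy listens on the default `http://0.0.0.0:8000` shown above (the response body shape is an assumption, so it is printed rather than parsed):

```python
import requests  # assumption: any HTTP client works; requests is used for brevity

# Poll the readiness route mentioned in the directory listing above.
# The exact response shape is an assumption; inspect it rather than
# relying on specific fields.
resp = requests.get("http://0.0.0.0:8000/health/readiness")
print(resp.status_code, resp.text)
```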

## replace openai base

```python
import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")  # set proxy to base_url

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
)

print(response)
```
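
Because the proxy is OpenAI-compatible, standard client features should carry over. A minimal streaming sketch, assuming the proxy relays OpenAI-style streaming chunks when `stream=True` is set (reusing the `client` created above):

```python
# Stream tokens as they arrive instead of waiting for the full response.
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a short poem"}],
    stream=True,
)
for chunk in stream:
    # Some chunks carry no content (e.g. role-only or final chunks), so guard.
    delta = chunk.choices[0].delta.content if chunk.choices else None
    if delta:
        print(delta, end="", flush=True)
```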

See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.