# litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

## usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

# INFO: Ollama running on http://0.0.0.0:8000
```
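Once the proxy is running, any OpenAI-compatible client can talk to it. As a quick smoke test, here is a minimal sketch using `requests`, assuming the proxy exposes the OpenAI-style `/chat/completions` route on the port shown above:

```python
import requests

# POST an OpenAI-format chat completion request to the local proxy.
# The proxy forwards it to whatever model was set via `litellm --model`.
response = requests.post(
    "http://0.0.0.0:8000/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "hello, are you up?"}],
    },
)
print(response.json())
```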

## replace openai base

```python
import openai  # openai v1.0.0+

# set the proxy as base_url; the api_key is passed through, so any value works here
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# request is sent to the model set on the litellm proxy, via `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
)

print(response)
```
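Streaming works through the same client, since the proxy mirrors the OpenAI API; a minimal sketch reusing the `client` from above:

```python
# Request a streamed completion; chunks arrive as incremental message deltas.
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a short poem"}],
    stream=True,
)
for chunk in stream:
    # delta.content is None on some control chunks, so guard with `or ""`
    print(chunk.choices[0].delta.content or "", end="")
print()
```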

See how to call Huggingface, Bedrock, TogetherAI, Anthropic, etc.