litellm-mirror/litellm/proxy

| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `_experimental` | ui - new build | 2024-05-17 21:58:10 -07:00 |
| `auth` | fix(litellm_license.py): fix json handling | 2024-05-17 15:48:39 -07:00 |
| `db` | (feat) stop eagerly evaluating fstring | 2024-03-25 09:01:42 -07:00 |
| `example_config_yaml` | feat(bedrock_httpx.py): moves to using httpx client for bedrock cohere calls | 2024-05-11 13:43:08 -07:00 |
| `hooks` | fix(proxy_server.py): fixes for making rejected responses work with streaming | 2024-05-20 12:32:19 -07:00 |
| `proxy_load_test` | (fix) locust load test use uuid | 2024-03-25 15:36:30 -07:00 |
| `queue` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `secret_managers` | fix(utils.py): initial commit for aws secret manager support | 2024-03-16 14:37:46 -07:00 |
| `tests` | test -base64 cache hits | 2024-04-10 16:46:56 -07:00 |
| `.gitignore` | fix(gitmodules): remapping to new proxy | 2023-10-12 21:23:53 -07:00 |
| `__init__.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `_logging.py` | feat(proxy_cli.py): support json logs on proxy | 2024-05-20 09:18:12 -07:00 |
| `_new_secret_config.yaml` | fix(langfuse.py): fix langfuse environ check | 2024-04-24 13:21:00 -07:00 |
| `_super_secret_config.yaml` | fix(proxy_server.py): fixes for making rejected responses work with streaming | 2024-05-20 12:32:19 -07:00 |
| `_types.py` | feat(proxy_server.py): refactor returning rejected message, to work with error logging | 2024-05-20 11:14:36 -07:00 |
| `admin_ui.py` | (feat) use cli args to start streamlit | 2024-01-23 15:58:14 -08:00 |
| `cached_logo.jpg` | (feat) use hosted images for custom branding | 2024-02-22 14:51:40 -08:00 |
| `custom_callbacks.py` | (feat) fix custom handler bug | 2024-02-28 14:48:55 -08:00 |
| `enterprise` | feat(llama_guard.py): add llama guard support for content moderation + new async_moderation_hook endpoint | 2024-02-17 19:13:04 -08:00 |
| `health_check.py` | test - /health endpoints | 2024-04-13 10:09:18 -07:00 |
| `lambda.py` | Add mangum. | 2023-11-23 00:04:47 -05:00 |
| `llamaguard_prompt.txt` | feat(llama_guard.py): allow user to define custom unsafe content categories | 2024-02-17 17:42:47 -08:00 |
| `logo.jpg` | (feat) admin ui custom branding | 2024-02-21 17:34:42 -08:00 |
| `openapi.json` | (feat) add swagger.json for litellm proxy | 2023-10-13 20:41:04 -07:00 |
| `otel_config.yaml` | (feat) proxy: otel logging | 2023-12-01 21:04:08 -08:00 |
| `post_call_rules.py` | (docs) add example post call rules to proxy | 2024-01-15 20:58:50 -08:00 |
| `proxy_cli.py` | feat(proxy_cli.py): support json logs on proxy | 2024-05-20 09:18:12 -07:00 |
| `proxy_config.yaml` | edit dev config.yaml | 2024-05-11 13:24:59 -07:00 |
| `proxy_server.py` | Merge pull request #3740 from BerriAI/litellm_return_rejected_response | 2024-05-20 17:48:21 -07:00 |
| `README.md` | (docs) update readme proxy server | 2023-11-17 17:40:44 -08:00 |
| `schema.prisma` | feat(proxy_server.py): add CRUD endpoints for 'end_user' management | 2024-05-08 18:50:36 -07:00 |
| `start.sh` | fix(factory.py): fixing llama-2 non-chat models prompt templating | 2023-11-07 21:33:54 -08:00 |
| `utils.py` | Merge pull request #3740 from BerriAI/litellm_return_rejected_response | 2024-05-20 17:48:21 -07:00 |

# litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

## Usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

#INFO: Ollama running on http://0.0.0.0:8000
```
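
Because the proxy speaks the OpenAI API, any HTTP client can talk to it directly. As a quick sanity check, here is a minimal sketch using only the Python standard library; it assumes the proxy is listening on `http://0.0.0.0:8000` as in the output above, with no auth key configured:

```python
import json
import urllib.request

# Minimal OpenAI-compatible request against the local proxy, no SDK needed.
# Assumes the proxy started via `litellm --model ...` is listening on port 8000.
req = urllib.request.Request(
    "http://0.0.0.0:8000/chat/completions",
    data=json.dumps({
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "hello"}],
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))
```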

### Replace the OpenAI base URL

```python
import openai  # openai v1.0.0+

# Point the client at the proxy via base_url; the api_key can be any string
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

# The request is sent to the model set on the litellm proxy, via `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "this is a test request, write a short poem"}
    ],
)

print(response)
```
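
The proxy also forwards streaming responses in the OpenAI streaming format. A minimal sketch reusing the `client` from above (assuming the proxy is still running and the upstream model supports streaming):

```python
# Request a streamed response; the proxy relays chunks as they arrive
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a short poem"}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta; content may be None on some chunks
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
print()
```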

See the docs for how to call Huggingface, Bedrock, TogetherAI, Anthropic, and more.