The problem we were having is that non-admin users trying to use `/engines/{model}/chat/completions` were getting an HTTP 401 error.

```shell
$ curl -sSL 'http://localhost:4000/engines/gpt-35-turbo-0125/chat/completions' \
    --header "Authorization: Bearer ${LITELLM_KEY}" \
    --header 'Content-Type: application/json' \
    --data '
  {
    "model": "gpt-35-turbo-0125",
    "messages": [
      {
        "role": "user",
        "content": "Write a poem about LiteLLM"
      }
    ]
  }' \
  | jq '.'
{
  "error": {
    "message": "Authentication Error, Only proxy admin can be used to generate, delete, update info for new keys/users/teams. Route=/engines/gpt-35-turbo-0125/chat/completions. Your role=unknown. Your user_id=someone@company.com",
    "type": "auth_error",
    "param": "None",
    "code": 401
  }
}
```

This seems to be related to code in `user_api_key_auth` that checks whether the URL matches a list of routes allowed for non-admin users, where the list of routes is `LiteLLMRoutes.openai_routes.value`. The problem is that the route `/engines/{model}/chat/completions` is not in that list, and adding it would not help anyway, because the comparison is done against `request.url.path`, which contains the actual model name (e.g. `gpt-35-turbo-0125`) rather than the literal `{model}`. I added a new list, `LiteLLMRoutes.openai_route_names`, containing the route **names**, and then added a check in `user_api_key_auth` that looks up the matched route's name in that list.
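As a rough sketch of that approach (the endpoint, route names, and helper below are hypothetical illustrations, not LiteLLM's actual internals), the matched route object's name can be checked instead of comparing the raw path:

```python
# Rough sketch only -- identifiers here are hypothetical, not LiteLLM internals.
from fastapi import FastAPI, Request
from starlette.routing import Match

app = FastAPI()

@app.post("/engines/{model}/chat/completions", name="engines_chat_completions")
async def engines_chat_completions(model: str) -> dict:
    return {"model": model}

# Hypothetical allow-list of route *names* that non-admin keys may call.
ALLOWED_ROUTE_NAMES = {"chat_completions", "engines_chat_completions"}

def is_allowed_route(request: Request) -> bool:
    """True if the request resolves to an allow-listed route name.

    Comparing request.url.path against "/engines/{model}/chat/completions"
    fails because the real path contains e.g. "gpt-35-turbo-0125"; matching
    the route object and checking its name sidesteps that.
    """
    for route in request.app.routes:
        match, _ = route.matches(request.scope)
        if match == Match.FULL:
            return getattr(route, "name", None) in ALLOWED_ROUTE_NAMES
    return False
```

Because the lookup keys on the route's name rather than the request URL, the concrete model segment in the path no longer matters.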
# litellm-proxy

A local, fast, and lightweight OpenAI-compatible server to call 100+ LLM APIs.

## usage

```shell
$ pip install litellm
$ litellm --model ollama/codellama

#INFO: Ollama running on http://0.0.0.0:8000
```
## replace openai base

```python
import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")  # set proxy to base_url

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])
print(response)
```
See how to call Huggingface, Bedrock, TogetherAI, Anthropic, and other providers.