Eliminates these warnings when running tests:
```
$ cd litellm/tests
$ pytest test_key_generate_prisma.py -x -vv
...
====================================================================== warnings summary =======================================================================
...
test_key_generate_prisma.py::test_generate_and_call_with_expired_key
test_key_generate_prisma.py::test_key_with_no_permissions
/Users/abramowi/Code/OpenSource/litellm/litellm/proxy/proxy_server.py:2934: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
expires = datetime.utcnow() + timedelta(seconds=duration_s)
...
```
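The fix is the one the warning itself suggests: replace `datetime.utcnow()` with a timezone-aware call. A minimal sketch of the change (the `compute_expiry` helper name is mine for illustration; the actual code in `proxy_server.py` computes `expires` inline):

```python
from datetime import datetime, timedelta, timezone

def compute_expiry(duration_s: int) -> datetime:
    # Timezone-aware replacement for the deprecated:
    #   datetime.utcnow() + timedelta(seconds=duration_s)
    return datetime.now(timezone.utc) + timedelta(seconds=duration_s)
```

The resulting datetime carries `tzinfo=timezone.utc`, whereas `utcnow()` returned a naive datetime, which is why it is being deprecated.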
I don't think this helps with the issue I'm seeing, but it might be nice to
have this route listed in the openai_routes list so that it's documented as a
valid chat_completion route.
The problem we were having is that non-admin users trying to use
`/engines/{model}/chat/completions` were getting an HTTP 401 error.
```shell
$ curl -sSL 'http://localhost:4000/engines/gpt-35-turbo-0125/chat/completions' \
--header "Authorization: Bearer ${LITELLM_KEY}" \
--header 'Content-Type: application/json' \
--data ' {
"model": "gpt-35-turbo-0125",
"messages": [
{
"role": "user",
"content": "Write a poem about LiteLLM"
}
]
}' \
| jq '.'
{
"error": {
"message": "Authentication Error, Only proxy admin can be used to generate, delete, update info for new keys/users/teams. Route=/engines/gpt-35-turbo-0125/chat/completions. Your role=unknown. Your user_id=someone@company.com",
"type": "auth_error",
"param": "None",
"code": 401
}
}
```
This seems to be related to code in `user_api_key_auth` that checks whether the
URL matches a list of routes allowed for non-admin users, where the list of
routes is `LiteLLMRoutes.openai_routes.value`. The problem is that the route
`/engines/{model}/chat/completions` is not in that list, and furthermore,
adding it wouldn't even work, because the comparison is done against
`request.url.path`, which contains the actual model name (e.g.
`gpt-35-turbo-0125`) rather than the `{model}` placeholder.
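To make the mismatch concrete, here is a toy illustration (the list contents are hypothetical, not the actual `openai_routes` values): a plain membership test on the request path can never match a route template containing a placeholder.

```python
# Hypothetical allow-list containing a parameterized route template.
openai_routes = [
    "/chat/completions",
    "/engines/{model}/chat/completions",
]

# The actual request path has the concrete model name substituted in,
# so a literal string comparison against the template always fails.
path = "/engines/gpt-35-turbo-0125/chat/completions"
allowed = path in openai_routes  # never True for parameterized routes
```

This is why matching on route *names* (which the framework resolves after routing) works where matching on the raw path does not.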
I added a new list, `LiteLLMRoutes.openai_route_names`, containing the route
**names**, and then added a check in `user_api_key_auth` for whether the
matched route's name is in that list.
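The shape of that check can be sketched as follows. This is an illustration under the assumption that the framework (FastAPI/Starlette) exposes the matched route object via `request.scope["route"]`, whose `.name` defaults to the endpoint function's name; the specific names in the set below are examples, not necessarily the actual members of `LiteLLMRoutes.openai_route_names`:

```python
# Hypothetical route-name allow-list for non-admin users.
openai_route_names = {"chat_completion", "completion", "embeddings"}

def is_allowed(request) -> bool:
    # After routing, Starlette places the matched route in request.scope;
    # its name is stable regardless of which model appears in the path.
    route = request.scope.get("route")
    return route is not None and route.name in openai_route_names
```

Because the name check happens after the router has already resolved `/engines/gpt-35-turbo-0125/chat/completions` to the `{model}`-parameterized route, it is immune to the path-substitution problem described above.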