Commit graph

95 commits

Author SHA1 Message Date
Krrish Dholakia
3560f0ef2c refactor: move all testing to top-level of repo
Closes https://github.com/BerriAI/litellm/issues/486
2024-09-28 21:08:14 -07:00
Ishaan Jaff
8bf7573fd8
(fix proxy) model_group/info support rerank models (#5955)
* fix /model_group/info on rerank

* add test test_proxy_model_group_info_rerank
2024-09-28 10:54:43 -07:00
Ishaan Jaff
f6cdb4ca0d
[Perf improvement Proxy] Use Dual Cache for getting key and team objects (#5903)
* use dual cache - perf

* fix auth checks

* fix budget checks for keys

* fix get / set team tests
2024-09-25 19:56:17 -07:00
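The dual-cache change above layers a fast in-process cache over Redis so hot key/team lookups avoid a network round trip on every request. Below is a minimal sketch of that idea, not litellm's actual `DualCache`; the class name and method shapes are illustrative, and `redis.asyncio` (redis-py >= 4.2) is an assumed dependency.

```python
import json
from typing import Any, Optional

import redis.asyncio as redis  # assumed dependency


class SimpleDualCache:
    """Illustrative two-tier cache: in-memory dict first, Redis second."""

    def __init__(self, redis_url: str = "redis://localhost:6379") -> None:
        self._memory: dict[str, Any] = {}
        self._redis = redis.from_url(redis_url)

    async def get(self, key: str) -> Optional[Any]:
        # 1. Cheap in-process lookup, no network hop.
        if key in self._memory:
            return self._memory[key]
        # 2. Fall back to Redis and backfill the in-memory layer.
        raw = await self._redis.get(key)
        if raw is None:
            return None
        value = json.loads(raw)
        self._memory[key] = value
        return value

    async def set(self, key: str, value: Any, ttl: int = 60) -> None:
        # Write through both layers so other proxy instances see the update.
        self._memory[key] = value
        await self._redis.set(key, json.dumps(value), ex=ttl)
```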
Ishaan Jaff
7cbcf538c6
[Feat] Improve OTEL Tracking - Require all Redis Cache reads to be logged on OTEL (#5881)
* fix use previous internal usage caching logic

* fix test_dual_cache_uses_redis

* redis track event_metadata in service logging

* show otel error on _get_parent_otel_span_from_kwargs

* track parent otel span on internal usage cache

* update_request_status

* fix internal usage cache

* fix linting

* fix test internal usage cache

* fix linting error

* show event metadata in redis set

* fix test_get_team_redis

* fix test_get_team_redis

* test_proxy_logging_setup
2024-09-25 10:57:08 -07:00
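Making every Redis read visible in OTEL usually means starting a child span under the request's parent span around each cache call. The sketch below uses the standard `opentelemetry-api`; the span and attribute names are assumptions for illustration, not litellm's exact instrumentation.

```python
from typing import Optional

from opentelemetry import trace
from opentelemetry.trace import Span

tracer = trace.get_tracer("proxy.cache")


async def traced_redis_get(redis_client, key: str, parent_span: Optional[Span] = None):
    # Parent the cache span under the request span (if one was passed along in kwargs),
    # so the Redis read shows up nested inside the request trace.
    ctx = trace.set_span_in_context(parent_span) if parent_span else None
    with tracer.start_as_current_span("redis_get", context=ctx) as span:
        span.set_attribute("cache.key", key)
        value = await redis_client.get(key)
        span.set_attribute("cache.hit", value is not None)
        return value
```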
Krish Dholakia
60709a0753
LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689)
* refactor: cleanup unused variables + fix pyright errors

* feat(health_check.py): Closes https://github.com/BerriAI/litellm/issues/5686

* fix(o1_reasoning.py): add stricter check for o-1 reasoning model

* refactor(mistral/): make it easier to see mistral transformation logic

* fix(openai.py): fix openai o-1 model param mapping

Fixes https://github.com/BerriAI/litellm/issues/5685

* feat(main.py): infer finetuned gemini model from base model

Fixes https://github.com/BerriAI/litellm/issues/5678

* docs(vertex.md): update docs to call finetuned gemini models

* feat(proxy_server.py): allow admin to hide proxy model aliases

Closes https://github.com/BerriAI/litellm/issues/5692

* docs(load_balancing.md): add docs on hiding alias models from proxy config

* fix(base.py): don't raise notimplemented error

* fix(user_api_key_auth.py): fix model max budget check

* fix(router.py): fix elif

* fix(user_api_key_auth.py): don't set team_id to empty str

* fix(team_endpoints.py): fix response type

* test(test_completion.py): handle predibase error

* test(test_proxy_server.py): fix test

* fix(o1_transformation.py): fix max_completion_token mapping

* test(test_image_generation.py): mark flaky test
2024-09-14 10:02:55 -07:00
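The o-1 param-mapping fixes above revolve around OpenAI's o1 models taking `max_completion_tokens` instead of `max_tokens` and rejecting some common chat params. A rough sketch of that kind of translation; the function name and the dropped-param list are assumptions, not litellm's exact logic.

```python
def map_params_for_o1(params: dict) -> dict:
    """Translate generic chat params into what o1-style models accept (sketch)."""
    mapped = dict(params)
    # o1 models take max_completion_tokens instead of max_tokens.
    if "max_tokens" in mapped:
        mapped["max_completion_tokens"] = mapped.pop("max_tokens")
    # Params the o1 API rejects are simply dropped here for illustration.
    for unsupported in ("temperature", "top_p"):
        mapped.pop(unsupported, None)
    return mapped


print(map_params_for_o1({"max_tokens": 512, "temperature": 0.2}))
# -> {'max_completion_tokens': 512}
```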
Krish Dholakia
4657a40ef1
LiteLLM Minor Fixes and Improvements (09/12/2024) (#5658)
* fix(factory.py): handle tool call content as list

Fixes https://github.com/BerriAI/litellm/issues/5652

* fix(factory.py): enforce stronger typing

* fix(router.py): return model alias in /v1/model/info and /v1/model_group/info

* fix(user_api_key_auth.py): move noisy warning message to debug

cleanup logs

* fix(types.py): cleanup pydantic v2 deprecated param

Fixes https://github.com/BerriAI/litellm/issues/5649

* docs(gemini.md): show how to pass inline data to gemini api

Fixes https://github.com/BerriAI/litellm/issues/5674
2024-09-12 23:04:06 -07:00
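The factory.py fix concerns OpenAI-style messages whose `content` can be either a string or a list of content blocks (e.g. `{"type": "text", "text": ...}`); prompt factories that assume a string break on the list form. A small sketch of flattening such content; the helper name is illustrative.

```python
from typing import Union


def flatten_message_content(content: Union[str, list, None]) -> str:
    """Collapse string-or-list message content into plain text (sketch)."""
    if content is None:
        return ""
    if isinstance(content, str):
        return content
    # List form: concatenate the text blocks, ignore non-text parts.
    parts = []
    for block in content:
        if isinstance(block, dict) and block.get("type") == "text":
            parts.append(block.get("text", ""))
    return "".join(parts)


print(flatten_message_content([{"type": "text", "text": "tool result: 42"}]))
# -> "tool result: 42"
```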
Ishaan Jaff
57ebe4649e add test for using success and failure 2024-09-09 16:44:37 -07:00
Krrish Dholakia
64952ab044 fix: fix tests 2024-08-24 19:32:22 -07:00
Krish Dholakia
509ae0ca71
Merge pull request #5308 from BerriAI/litellm_team_admin_permissions
feat(user_api_key_auth.py): allow team admin to add new members to team
2024-08-21 14:21:22 -07:00
Krrish Dholakia
7aec6f0f2a fix(litellm_pre_call_utils.py): handle dynamic keys via api correctly 2024-08-21 13:37:21 -07:00
Krrish Dholakia
5ba517819c test(test_proxy_server.py): fix test to specify user role 2024-08-21 08:37:04 -07:00
Krrish Dholakia
a61f3e7656 refactor(team_endpoints.py): refactor auth checks for team member endpoints to allow team admin to manage them 2024-08-20 16:57:18 -07:00
Krrish Dholakia
19083a4d31 feat(_types.py): allow team admin to delete member from team 2024-08-20 16:25:13 -07:00
Krrish Dholakia
fa6c9bf42e feat(user_api_key_auth.py): allow team admin to add new members to team 2024-08-20 14:01:12 -07:00
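The team-admin feature above amounts to an authorization check: a caller may add members if they are a proxy-level admin or are marked as an admin of that specific team. A minimal sketch of such a check; the data shapes and role strings are assumptions for illustration.

```python
def can_add_team_member(caller: dict, team: dict) -> bool:
    """Return True if the caller may add members to the team (illustrative check)."""
    # Proxy-level admins can always manage team membership.
    if caller.get("user_role") == "proxy_admin":
        return True
    # Otherwise the caller must appear in the team with an admin role.
    for member in team.get("members_with_roles", []):
        if member.get("user_id") == caller.get("user_id") and member.get("role") == "admin":
            return True
    return False


team = {"members_with_roles": [{"user_id": "u1", "role": "admin"}]}
print(can_add_team_member({"user_id": "u1", "user_role": "internal_user"}, team))  # True
```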
Krrish Dholakia
bc0023a409 feat(google_ai_studio_endpoints.py): support pass-through endpoint for all google ai studio requests
New Feature
2024-08-17 10:46:59 -07:00
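A pass-through endpoint like the one added here generally accepts any path under a prefix and forwards the body, headers, and query string to the upstream Google AI Studio API, attaching the server-side key. A hedged sketch using FastAPI and httpx (both assumed dependencies); the route prefix and target URL are illustrative, not litellm's exact routing.

```python
import os

import httpx
from fastapi import FastAPI, Request, Response

app = FastAPI()
GOOGLE_AI_STUDIO_BASE = "https://generativelanguage.googleapis.com"  # assumed target


@app.api_route("/gemini/{endpoint:path}", methods=["GET", "POST"])
async def google_ai_studio_passthrough(endpoint: str, request: Request) -> Response:
    # Forward the request as-is, attaching the server-side API key.
    params = dict(request.query_params)
    params["key"] = os.environ["GEMINI_API_KEY"]
    async with httpx.AsyncClient() as client:
        upstream = await client.request(
            request.method,
            f"{GOOGLE_AI_STUDIO_BASE}/{endpoint}",
            params=params,
            content=await request.body(),
            headers={"Content-Type": request.headers.get("content-type", "application/json")},
        )
    return Response(content=upstream.content, status_code=upstream.status_code)
```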
Krrish Dholakia
2b7a64ee28 test(test_proxy_server.py): skip local test 2024-08-13 21:36:16 -07:00
Krrish Dholakia
0d0a793e20 test(test_proxy_server.py): refactor test to work on ci/cd 2024-08-13 21:27:59 -07:00
Krrish Dholakia
b1cf46faaa fix(langfuse.py): cleanup 2024-08-12 23:22:29 -07:00
Krish Dholakia
375dbecb86
Merge branch 'main' into litellm_key_logging 2024-08-12 23:17:21 -07:00
Krrish Dholakia
93a1335e46 fix(litellm_pre_call_utils.py): support routing to logging project by api key 2024-08-12 21:21:40 -07:00
Krrish Dholakia
f322ffc413 refactor(test_users.py): refactor test for user info to use mock endpoints 2024-08-12 19:35:07 -07:00
Krrish Dholakia
d1d28487f7 refactor(test_users.py): refactor test for user info to use mock endpoints 2024-08-12 18:48:43 -07:00
Krrish Dholakia
7b6db63d30 fix(router.py): fallback on 400-status code requests 2024-08-09 12:16:49 -07:00
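Falling back on a 400-status response means that instead of surfacing the client error immediately, the router retries the call against the next configured deployment. A simplified sketch of that control flow; the exception type and deployment list are assumptions, not litellm's router internals.

```python
import asyncio


class UpstreamBadRequest(Exception):
    """Stand-in for a provider 400 response (illustrative)."""


async def call_deployment(model: str, prompt: str) -> str:
    # Pretend the first deployment rejects the request with a 400.
    if model == "primary-model":
        raise UpstreamBadRequest("400: unsupported parameter")
    return f"{model} answered: {prompt!r}"


async def completion_with_fallbacks(prompt: str, deployments: list) -> str:
    last_error = None
    for model in deployments:
        try:
            return await call_deployment(model, prompt)
        except UpstreamBadRequest as err:
            # Treat 400s as fallback-able instead of failing the request outright.
            last_error = err
    raise last_error or RuntimeError("no deployments configured")


print(asyncio.run(completion_with_fallbacks("hi", ["primary-model", "fallback-model"])))
```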
Krrish Dholakia
1d39c0fb7d fix(management_helpers/utils.py): use user_default max_budget, budget duration on new user upsert during team member add
Fixes https://github.com/BerriAI/litellm/issues/5106
2024-08-08 19:14:43 -07:00
Krrish Dholakia
6af9d9d2b3 test(test_proxy_server.py): unit testing to make sure internal user params don't impact admin 2024-08-08 17:59:30 -07:00
Krish Dholakia
baf01b47d8
Merge branch 'main' into litellm_personal_user_budgets 2024-08-07 19:59:50 -07:00
Krrish Dholakia
ec0b511119 fix: use more descriptive flag 2024-08-07 18:59:46 -07:00
Krrish Dholakia
b7e31638fd fix(internal_user_endpoints.py): respect 'max_user_budget' for new internal users 2024-08-07 18:50:40 -07:00
Ishaan Jaff
2c3e068435 fix test_team_update_redis 2024-08-07 15:37:02 -07:00
Krrish Dholakia
6974b45c75 test: fix testing 2024-07-31 11:50:03 -07:00
Ishaan Jaff
a5ce084a2c fix test_team_disable_guardrails 2024-07-31 11:49:10 -07:00
Ishaan Jaff
bcc4adff46 fix test_team_disable_guardrails 2024-07-31 11:48:36 -07:00
Krrish Dholakia
fe0b55f2ca fix(utils.py): fix cache hits for streaming
Fixes https://github.com/BerriAI/litellm/issues/4109
2024-07-26 19:04:08 -07:00
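Caching a streamed completion is trickier than caching a regular one: chunks have to be accumulated as they are yielded, the assembled response written to the cache only once the stream finishes, and later hits replayed as a stream. A rough sketch of that pattern; the cache here is a plain dict, purely for illustration.

```python
import asyncio
from typing import AsyncIterator

_cache: dict = {}  # illustrative stand-in for the real cache


async def fake_llm_stream(prompt: str) -> AsyncIterator[str]:
    for chunk in ("Hello", ", ", "world"):
        await asyncio.sleep(0)  # simulate network latency
        yield chunk


async def cached_stream(prompt: str) -> AsyncIterator[str]:
    if prompt in _cache:
        # Cache hit: replay the stored chunks as a stream.
        for chunk in _cache[prompt]:
            yield chunk
        return
    # Cache miss: stream from the model while accumulating chunks,
    # and only write to the cache once the stream completes.
    collected = []
    async for chunk in fake_llm_stream(prompt):
        collected.append(chunk)
        yield chunk
    _cache[prompt] = collected


async def main() -> None:
    print("".join([c async for c in cached_stream("hi")]))  # first call streams from the model
    print("".join([c async for c in cached_stream("hi")]))  # second call is a cache hit


asyncio.run(main())
```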
Krrish Dholakia
6ab2527fdc feat(auth_check.py): support using redis cache for team objects
Allows team update / check logic to work across instances instantly
2024-07-24 18:14:49 -07:00
Krrish Dholakia
8b3c8102a7 feat(auth_checks.py): Allow admin to disable team from turning on/off guardrails. 2024-07-20 18:39:05 -07:00
Krrish Dholakia
ec03e675c9 fix(proxy/utils.py): fix failure logging for rejected requests. + unit tests 2024-07-16 17:15:20 -07:00
Ishaan Jaff
ae86722a8d test proxy server.py 2024-06-15 15:09:49 -07:00
Ishaan Jaff
7dcf8fc67e fix test litellm_parent_otel_span 2024-06-07 14:07:58 -07:00
Ishaan Jaff
d9dacc1f43
Merge pull request #4065 from BerriAI/litellm_use_common_func
[Refactor] - Refactor proxy_server.py to use common function for `add_litellm_data_to_request`
2024-06-07 14:02:17 -07:00
Ishaan Jaff
7ef7bc8a9a fix simplify - pass litellm_parent_otel_span 2024-06-07 13:48:21 -07:00
Ishaan Jaff
630bc803e2 fix proxy server test 2024-06-07 12:54:39 -07:00
Ishaan Jaff
923cbed6ab test fix - proxy server chat completion 2024-06-07 11:53:03 -07:00
yujonglee
2f81f8f2df simple test 2024-06-04 13:56:28 +09:00
yujonglee
0d8a7d5cf0 use inmemory exporter for testing 2024-06-04 09:04:19 +09:00
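An in-memory exporter lets a test assert on emitted OTEL spans without standing up a collector. A short sketch with the OpenTelemetry SDK's `InMemorySpanExporter`; the span name and assertion are illustrative.

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter

# Wire a tracer provider to an in-memory exporter so spans stay inspectable in-process.
exporter = InMemorySpanExporter()
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))
tracer = provider.get_tracer("test")

with tracer.start_as_current_span("proxy_request"):
    pass  # code under test would run here

spans = exporter.get_finished_spans()
assert [s.name for s in spans] == ["proxy_request"]
print("captured spans:", [s.name for s in spans])
```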
yujonglee
c5e9e89288 remove mocks 2024-06-02 19:49:34 +09:00
Marc Abramowitz
b1bf49f0a1 Make test_load_router_config pass
by mocking the necessary things in the test.

Now all the tests in `test_proxy_server.py` pass! 🎉

```shell
$ env -i PATH=$PATH poetry run pytest litellm/tests/test_proxy_server.py --disable-warnings
====================================== test session starts ======================================
platform darwin -- Python 3.12.3, pytest-7.4.4, pluggy-1.5.0
rootdir: /Users/abramowi/Code/OpenSource/litellm
plugins: anyio-4.3.0, asyncio-0.23.6, mock-3.14.0
asyncio: mode=Mode.STRICT
collected 12 items

litellm/tests/test_proxy_server.py s..........s                                           [100%]

========================== 10 passed, 2 skipped, 48 warnings in 10.70s ==========================
```
2024-05-11 16:55:57 -07:00
Marc Abramowitz
9167ff0d75 Set fake env vars for client_no_auth fixture
This allows all of the tests in `test_proxy_server.py` to pass, with the
exception of `test_load_router_config`, without needing to set up real
environment variables.

Before:

```shell
$ env -i PATH=$PATH poetry run pytest litellm/tests/test_proxy_server.py -k 'not test_load_router_config' --disable-warnings
...
========================================================== short test summary info ===========================================================
ERROR litellm/tests/test_proxy_server.py::test_bedrock_embedding - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_chat_completion - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_chat_completion_azure - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_chat_completion_optional_params - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_embedding - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_engines_model_chat_completions - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_health - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_img_gen - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_openai_deployments_model_chat_completions_azure - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
========================================== 2 skipped, 1 deselected, 39 warnings, 9 errors in 3.24s ===========================================
```

After:

```shell
$ env -i PATH=$PATH poetry run pytest litellm/tests/test_proxy_server.py -k 'not test_load_router_config' --disable-warnings
============================================================ test session starts =============================================================
platform darwin -- Python 3.12.3, pytest-7.4.4, pluggy-1.5.0
rootdir: /Users/abramowi/Code/OpenSource/litellm
plugins: anyio-4.3.0, asyncio-0.23.6, mock-3.14.0
asyncio: mode=Mode.STRICT
collected 12 items / 1 deselected / 11 selected

litellm/tests/test_proxy_server.py s.........s                                                                                         [100%]

========================================== 9 passed, 2 skipped, 1 deselected, 48 warnings in 8.42s ===========================================
```
2024-05-11 15:22:30 -07:00
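The fix above boils down to setting placeholder provider credentials before the proxy's OpenAI clients are constructed, so client creation no longer fails in a bare environment. A hedged sketch of that kind of pytest fixture using `monkeypatch`; the variable names beyond `OPENAI_API_KEY` are assumptions, not the exact set the fixture sets.

```python
import os

import pytest


@pytest.fixture
def fake_provider_env(monkeypatch):
    """Populate placeholder credentials so client construction never needs real keys."""
    monkeypatch.setenv("OPENAI_API_KEY", "sk-fake-key")
    monkeypatch.setenv("AZURE_API_KEY", "fake-azure-key")              # assumed variable name
    monkeypatch.setenv("AZURE_API_BASE", "https://fake.example.com")   # assumed variable name
    yield


def test_client_builds_without_real_keys(fake_provider_env):
    # The code under test can now read credentials without hitting a real provider.
    assert os.environ["OPENAI_API_KEY"].startswith("sk-")
```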
Marc Abramowitz
4ce4927c0c Add test_engines_model_chat_completions 2024-05-03 17:56:39 -07:00
Marc Abramowitz
14e7c9b01c Improve mocking in test_proxy_server
Mock the calls to the backend and assert that the correct parameters are passed
to the backend.
2024-05-02 13:36:23 -07:00
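Mocking the backend call and asserting on the forwarded parameters is the standard pattern described here: patch or inject the completion call, exercise the proxy path, then check exactly what was passed downstream. A sketch using `unittest.mock`; the handler and parameter names are placeholders, not litellm's real call signature.

```python
from unittest import mock


def real_backend_completion(**kwargs) -> dict:  # never called in the test
    raise RuntimeError("should be mocked out")


def proxy_chat_completion(model: str, messages: list, backend=None) -> dict:
    """Toy stand-in for a proxy route handler that forwards to a backend completion call."""
    backend = backend or real_backend_completion
    return backend(model=model, messages=messages, temperature=0.0)


def test_proxy_forwards_expected_params():
    backend = mock.Mock(return_value={"choices": []})
    proxy_chat_completion("gpt-3.5-turbo", [{"role": "user", "content": "hi"}], backend=backend)
    # Assert the backend saw exactly the parameters the proxy is supposed to send.
    backend.assert_called_once_with(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
        temperature=0.0,
    )


test_proxy_forwards_expected_params()
```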
Marc Abramowitz
a79fd772f4 Simplify mock_patch_acompletion 2024-05-02 12:47:27 -07:00