Commit graph

95 commits

Krrish Dholakia
ea96eebe85 refactor: move all testing to top-level of repo
Closes https://github.com/BerriAI/litellm/issues/486
2024-09-28 21:08:14 -07:00
Ishaan Jaff
1a6dde8779 (fix proxy) model_group/info support rerank models (#5955)
* fix /model_group/info on rerank

* add test test_proxy_model_group_info_rerank
2024-09-28 10:54:43 -07:00
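For context on the fix above: `/model_group/info` is the proxy endpoint that reports aggregated metadata per model group, and the change makes groups backed by rerank models show up there too. A minimal query sketch; the proxy URL, key, and response field names here are illustrative placeholders:

```python
import requests

resp = requests.get(
    "http://localhost:4000/model_group/info",         # assumed local litellm proxy
    headers={"Authorization": "Bearer sk-proxy-key"},  # placeholder proxy key
    timeout=10,
)
for group in resp.json().get("data", []):
    # After the fix, groups whose underlying models are rerank models
    # (mode == "rerank") should be listed alongside chat/embedding groups.
    print(group.get("model_group"), group.get("mode"))
```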
Ishaan Jaff
4ea2bf50b5 [Perf improvement Proxy] Use Dual Cache for getting key and team objects (#5903)
* use dual cache - perf

* fix auth checks

* fix budget checks for keys

* fix get / set team tests
2024-09-25 19:56:17 -07:00
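The dual-cache pattern referenced in PR #5903 above keeps a process-local in-memory cache in front of Redis, so hot key/team objects are served without a network hop while Redis remains the shared tier across proxy instances. A minimal read-through sketch with hypothetical class names (not litellm's actual DualCache implementation):

```python
import json
from typing import Any, Optional

class InMemoryCache:
    """Process-local cache; fastest tier, not shared across instances."""
    def __init__(self) -> None:
        self._store: dict[str, Any] = {}

    def get(self, key: str) -> Optional[Any]:
        return self._store.get(key)

    def set(self, key: str, value: Any) -> None:
        self._store[key] = value

class DualCacheSketch:
    """Read-through cache: in-memory first, Redis second (illustrative only)."""
    def __init__(self, memory: InMemoryCache, redis_client) -> None:
        self.memory = memory
        self.redis = redis_client  # e.g. a redis.asyncio.Redis client

    async def async_get(self, key: str) -> Optional[Any]:
        value = self.memory.get(key)      # tier 1: no network hop
        if value is not None:
            return value
        raw = await self.redis.get(key)   # tier 2: shared across proxy instances
        if raw is None:
            return None
        value = json.loads(raw)
        self.memory.set(key, value)       # backfill the fast tier for later reads
        return value
```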
Ishaan Jaff
4d253e473a [Feat] Improve OTEL Tracking - Require all Redis Cache reads to be logged on OTEL (#5881)
* fix use previous internal usage caching logic

* fix test_dual_cache_uses_redis

* redis track event_metadata in service logging

* show otel error on _get_parent_otel_span_from_kwargs

* track parent otel span on internal usage cache

* update_request_status

* fix internal usage cache

* fix linting

* fix test internal usage cache

* fix linting error

* show event metadata in redis set

* fix test_get_team_redis

* fix test_get_team_redis

* test_proxy_logging_setup
2024-09-25 10:57:08 -07:00
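The OTEL work in PR #5881 above threads a parent span through the cache layer so each Redis read appears as a child span of the request that triggered it. A minimal sketch of that idea using the opentelemetry SDK, with a hypothetical helper name (not litellm's service logger):

```python
from opentelemetry import trace

tracer = trace.get_tracer("proxy.cache")

async def traced_redis_get(redis_client, key: str, parent_span=None):
    """Record a Redis read as a child span of the caller's span (illustrative helper)."""
    # Attach to the parent span passed through kwargs, else start a fresh trace.
    ctx = trace.set_span_in_context(parent_span) if parent_span else None
    with tracer.start_as_current_span("redis.get", context=ctx) as span:
        span.set_attribute("cache.key", key)
        value = await redis_client.get(key)
        span.set_attribute("cache.hit", value is not None)
        return value
```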
Krish Dholakia
713d762411 LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689)
* refactor: cleanup unused variables + fix pyright errors

* feat(health_check.py): Closes https://github.com/BerriAI/litellm/issues/5686

* fix(o1_reasoning.py): add stricter check for o-1 reasoning model

* refactor(mistral/): make it easier to see mistral transformation logic

* fix(openai.py): fix openai o-1 model param mapping

Fixes https://github.com/BerriAI/litellm/issues/5685

* feat(main.py): infer finetuned gemini model from base model

Fixes https://github.com/BerriAI/litellm/issues/5678

* docs(vertex.md): update docs to call finetuned gemini models

* feat(proxy_server.py): allow admin to hide proxy model aliases

Closes https://github.com/BerriAI/litellm/issues/5692

* docs(load_balancing.md): add docs on hiding alias models from proxy config

* fix(base.py): don't raise notimplemented error

* fix(user_api_key_auth.py): fix model max budget check

* fix(router.py): fix elif

* fix(user_api_key_auth.py): don't set team_id to empty str

* fix(team_endpoints.py): fix response type

* test(test_completion.py): handle predibase error

* test(test_proxy_server.py): fix test

* fix(o1_transformation.py): fix max_completion_token mapping

* test(test_image_generation.py): mark flaky test
2024-09-14 10:02:55 -07:00
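One of the fixes above concerns OpenAI o1 parameter mapping: the o1 family expects `max_completion_tokens` rather than `max_tokens`, and rejects some sampling parameters, so generic request params have to be translated. A rough sketch of that kind of mapping (a hypothetical helper, not litellm's o1_transformation.py):

```python
def map_o1_params(params: dict) -> dict:
    """Translate generic OpenAI chat params into o1-compatible ones (illustrative only)."""
    mapped = dict(params)
    # o1 models take max_completion_tokens instead of max_tokens.
    if "max_tokens" in mapped:
        mapped["max_completion_tokens"] = mapped.pop("max_tokens")
    # Sampling params the o1 API rejects are simply dropped here;
    # a real implementation might warn or raise instead.
    for unsupported in ("temperature", "top_p"):
        mapped.pop(unsupported, None)
    return mapped
```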
Krish Dholakia
91c918fd70 LiteLLM Minor Fixes and Improvements (09/12/2024) (#5658)
* fix(factory.py): handle tool call content as list

Fixes https://github.com/BerriAI/litellm/issues/5652

* fix(factory.py): enforce stronger typing

* fix(router.py): return model alias in /v1/model/info and /v1/model_group/info

* fix(user_api_key_auth.py): move noisy warning message to debug

cleanup logs

* fix(types.py): cleanup pydantic v2 deprecated param

Fixes https://github.com/BerriAI/litellm/issues/5649

* docs(gemini.md): show how to pass inline data to gemini api

Fixes https://github.com/BerriAI/litellm/issues/5674
2024-09-12 23:04:06 -07:00
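The gemini.md doc change above covers passing inline data (base64) instead of a remote URL. A sketch in OpenAI-format messages as litellm accepts them; the model name and exact content fields are illustrative, and a `GEMINI_API_KEY` is assumed to be set in the environment:

```python
import base64
import litellm

# Read a local file and embed it as a base64 data URI (inline data),
# rather than pointing Gemini at a remote URL.
with open("chart.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

response = litellm.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this chart."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```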
Ishaan Jaff
a279f34fd4 add test for using success and failure 2024-09-09 16:44:37 -07:00
Krrish Dholakia
a3d403ec63 fix: fix tests 2024-08-24 19:32:22 -07:00
Krish Dholakia
a583b95d85 Merge pull request #5308 from BerriAI/litellm_team_admin_permissions
feat(user_api_key_auth.py): allow team admin to add new members to team
2024-08-21 14:21:22 -07:00
Krrish Dholakia
ac5c6c8751 fix(litellm_pre_call_utils.py): handle dynamic keys via api correctly 2024-08-21 13:37:21 -07:00
Krrish Dholakia
748943f910 test(test_proxy_server.py): fix test to specify user role 2024-08-21 08:37:04 -07:00
Krrish Dholakia
e32a68c94b refactor(team_endpoints.py): refactor auth checks for team member endpoints so the UI team admin can manage them 2024-08-20 16:57:18 -07:00
Krrish Dholakia
c305eb3321 feat(_types.py): allow team admin to delete member from team 2024-08-20 16:25:13 -07:00
Krrish Dholakia
64affd0d6b feat(user_api_key_auth.py): allow team admin to add new members to team 2024-08-20 14:01:12 -07:00
Krrish Dholakia
29bedae79f feat(google_ai_studio_endpoints.py): support pass-through endpoint for all google ai studio requests
New Feature
2024-08-17 10:46:59 -07:00
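The pass-through feature above lets clients send native Google AI Studio (Gemini) requests to the proxy, which forwards them upstream. A sketch of what such a call might look like; the route, header, and key below are assumptions that depend on the proxy configuration and litellm version:

```python
import requests

PROXY_BASE = "http://localhost:4000"  # assumed local litellm proxy
route = "/gemini/v1beta/models/gemini-1.5-flash:generateContent"  # illustrative path

resp = requests.post(
    f"{PROXY_BASE}{route}",
    headers={"x-goog-api-key": "sk-proxy-key"},  # proxy key used in place of a Google key
    json={"contents": [{"parts": [{"text": "Say hello"}]}]},  # native Gemini request body
    timeout=30,
)
print(resp.json())
```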
Krrish Dholakia
691e53c764 test(test_proxy_server.py): skip local test 2024-08-13 21:36:16 -07:00
Krrish Dholakia
72b6d37244 test(test_proxy_server.py): refactor test to work on ci/cd 2024-08-13 21:27:59 -07:00
Krrish Dholakia
46d8f694c1 fix(langfuse.py): cleanup 2024-08-12 23:22:29 -07:00
Krish Dholakia
a778b4317f Merge branch 'main' into litellm_key_logging 2024-08-12 23:17:21 -07:00
Krrish Dholakia
9fcb6f8f57 fix(litellm_pre_call_utils.py): support routing to logging project by api key 2024-08-12 21:21:40 -07:00
Krrish Dholakia
dd10896f32 refactor(test_users.py): refactor test for user info to use mock endpoints 2024-08-12 19:35:07 -07:00
Krrish Dholakia
4bbabb4039 refactor(test_users.py): refactor test for user info to use mock endpoints 2024-08-12 18:48:43 -07:00
Krrish Dholakia
482acc7ee1 fix(router.py): fallback on 400-status code requests 2024-08-09 12:16:49 -07:00
Krrish Dholakia
a70e9661fd fix(management_helpers/utils.py): use user_default max_budget, budget duration on new user upsert during team member add
Fixes https://github.com/BerriAI/litellm/issues/5106
2024-08-08 19:14:43 -07:00
Krrish Dholakia
856ede4a05 test(test_proxy_server.py): unit testing to make sure internal user params don't impact admin 2024-08-08 17:59:30 -07:00
Krish Dholakia
7d28b6ebc3 Merge branch 'main' into litellm_personal_user_budgets 2024-08-07 19:59:50 -07:00
Krrish Dholakia
182d63853b fix: use more descriptive flag 2024-08-07 18:59:46 -07:00
Krrish Dholakia
e60b2d9258 fix(internal_user_endpoints.py): respect 'max_user_budget' for new internal users 2024-08-07 18:50:40 -07:00
Ishaan Jaff
55feece2b5 fix test_team_update_redis 2024-08-07 15:37:02 -07:00
Krrish Dholakia
030092e555 test: fix testing 2024-07-31 11:50:03 -07:00
Ishaan Jaff
b2b5e32437 fix test_team_disable_guardrails 2024-07-31 11:49:10 -07:00
Ishaan Jaff
b97b213c99 fix test_team_disable_guardrails 2024-07-31 11:48:36 -07:00
Krrish Dholakia
1562cba823 fix(utils.py): fix cache hits for streaming
Fixes https://github.com/BerriAI/litellm/issues/4109
2024-07-26 19:04:08 -07:00
Krrish Dholakia
487035c970 feat(auth_check.py): support using redis cache for team objects
Allows team update / check logic to work across instances instantly
2024-07-24 18:14:49 -07:00
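The idea behind the commit above is that a team update made on one proxy instance should be visible to auth checks on every other instance. A small write-through sketch under assumed helper names and key namespace (not litellm's auth_checks.py):

```python
import json
from typing import Optional

TEAM_KEY_PREFIX = "team:"  # assumed key namespace

async def update_team(redis_client, team_id: str, team_obj: dict) -> None:
    # Write-through: persist the updated team object to the shared Redis cache
    # so every proxy instance sees the change on its next auth check.
    await redis_client.set(f"{TEAM_KEY_PREFIX}{team_id}", json.dumps(team_obj))

async def get_team_for_auth(redis_client, team_id: str) -> Optional[dict]:
    # Auth checks read the shared cache first; a database lookup would be the fallback.
    raw = await redis_client.get(f"{TEAM_KEY_PREFIX}{team_id}")
    return json.loads(raw) if raw is not None else None
```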
Krrish Dholakia
a351b7cc3e feat(auth_checks.py): Allow admin to prevent a team from turning guardrails on/off. 2024-07-20 18:39:05 -07:00
Krrish Dholakia
b022099712 fix(proxy/utils.py): fix failure logging for rejected requests. + unit tests 2024-07-16 17:15:20 -07:00
Ishaan Jaff
ad143bd350 test proxy server.py 2024-06-15 15:09:49 -07:00
Ishaan Jaff
6ce970e7cd fix test litellm_parent_otel_span 2024-06-07 14:07:58 -07:00
Ishaan Jaff
80def35a04 Merge pull request #4065 from BerriAI/litellm_use_common_func
[Refactor] - Refactor proxy_server.py to use common function for `add_litellm_data_to_request`
2024-06-07 14:02:17 -07:00
Ishaan Jaff
8106a6dc9b fix simplify - pass litellm_parent_otel_span 2024-06-07 13:48:21 -07:00
Ishaan Jaff
308c4b3b75 fix proxy server test 2024-06-07 12:54:39 -07:00
Ishaan Jaff
2eec379d92 test fix - proxy server chat completion 2024-06-07 11:53:03 -07:00
yujonglee
6652227c25 simple test 2024-06-04 13:56:28 +09:00
yujonglee
4eaa98076e use inmemory exporter for testing 2024-06-04 09:04:19 +09:00
yujonglee
3109c53a6a remove mocks 2024-06-02 19:49:34 +09:00
Marc Abramowitz
be20684413 Make test_load_router_config pass
by mocking the necessary things in the test.

Now all the tests in `test_proxy_server.py` pass! 🎉

```shell
$ env -i PATH=$PATH poetry run pytest litellm/tests/test_proxy_server.py --disable-warnings
====================================== test session starts ======================================
platform darwin -- Python 3.12.3, pytest-7.4.4, pluggy-1.5.0
rootdir: /Users/abramowi/Code/OpenSource/litellm
plugins: anyio-4.3.0, asyncio-0.23.6, mock-3.14.0
asyncio: mode=Mode.STRICT
collected 12 items

litellm/tests/test_proxy_server.py s..........s                                           [100%]

========================== 10 passed, 2 skipped, 48 warnings in 10.70s ==========================
```
2024-05-11 16:55:57 -07:00
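The approach described above is to patch out whatever the config-loading test touches so it runs hermetically. A generic pytest sketch of that pattern; the class and function names are stand-ins, not litellm's actual internals:

```python
from unittest import mock

class FakeBackend:
    """Stand-in for the dependency the real test patches (e.g. a model client)."""
    def fetch_models(self) -> list:
        raise RuntimeError("would hit the network")

def load_router_config(backend: FakeBackend) -> dict:
    # In the real code this reads a YAML config and resolves deployments;
    # here it just asks the backend so there is something to mock.
    return {"model_list": backend.fetch_models()}

def test_load_router_config_sketch():
    backend = FakeBackend()
    # Patch the network-touching method so the test needs no real credentials.
    with mock.patch.object(FakeBackend, "fetch_models", return_value=["gpt-3.5-turbo"]):
        config = load_router_config(backend)
    assert config["model_list"] == ["gpt-3.5-turbo"]
```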
Marc Abramowitz
fbd2fa2739 Set fake env vars for client_no_auth fixture
This allows all of the tests in `test_proxy_server.py` to pass, with the
exception of `test_load_router_config`, without needing to set up real
environment variables.

Before:

```shell
$ env -i PATH=$PATH poetry run pytest litellm/tests/test_proxy_server.py -k 'not test_load_router_config' --disable-warnings
...
========================================================== short test summary info ===========================================================
ERROR litellm/tests/test_proxy_server.py::test_bedrock_embedding - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_chat_completion - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_chat_completion_azure - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_chat_completion_optional_params - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_embedding - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_engines_model_chat_completions - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_health - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_img_gen - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
ERROR litellm/tests/test_proxy_server.py::test_openai_deployments_model_chat_completions_azure - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY enviro...
========================================== 2 skipped, 1 deselected, 39 warnings, 9 errors in 3.24s ===========================================
```

After:

```shell
$ env -i PATH=$PATH poetry run pytest litellm/tests/test_proxy_server.py -k 'not test_load_router_config' --disable-warnings
============================================================ test session starts =============================================================
platform darwin -- Python 3.12.3, pytest-7.4.4, pluggy-1.5.0
rootdir: /Users/abramowi/Code/OpenSource/litellm
plugins: anyio-4.3.0, asyncio-0.23.6, mock-3.14.0
asyncio: mode=Mode.STRICT
collected 12 items / 1 deselected / 11 selected

litellm/tests/test_proxy_server.py s.........s                                                                                         [100%]

========================================== 9 passed, 2 skipped, 1 deselected, 48 warnings in 8.42s ===========================================
```
2024-05-11 15:22:30 -07:00
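The fix above works because the OpenAI/Azure clients only need syntactically valid credentials at construction time. A sketch of the fake-env-var idea using pytest's `monkeypatch`; the exact variables set by the real `client_no_auth` fixture may differ:

```python
import pytest

@pytest.fixture
def fake_provider_env(monkeypatch):
    """Give provider clients fake but well-formed credentials so they construct cleanly."""
    monkeypatch.setenv("OPENAI_API_KEY", "sk-fake-key")
    monkeypatch.setenv("AZURE_API_KEY", "fake-azure-key")
    monkeypatch.setenv("AZURE_API_BASE", "https://example-endpoint.openai.azure.com")

def test_client_builds_without_real_keys(fake_provider_env):
    from openai import OpenAI
    client = OpenAI()  # would raise OpenAIError if OPENAI_API_KEY were unset
    assert client.api_key == "sk-fake-key"
```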
Marc Abramowitz
82f5f4d69a Add test_engines_model_chat_completions 2024-05-03 17:56:39 -07:00
Marc Abramowitz
286af9a495 Improve mocking in test_proxy_server
Mock the calls to the backend and assert that the correct parameters are passed
to the backend.
2024-05-02 13:36:23 -07:00
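The pattern described above is to stub the backend call and then assert on exactly what the proxy forwarded to it. A sketch assuming litellm is importable; the endpoint wrapper is a tiny stand-in for the real proxy route:

```python
import asyncio
from unittest import mock

async def proxy_chat_endpoint(payload: dict) -> dict:
    """Tiny stand-in for the proxy route; the real one lives in proxy_server.py."""
    import litellm
    return await litellm.acompletion(
        model=payload["model"],
        messages=payload["messages"],
        max_tokens=payload.get("max_tokens", 10),
    )

def test_backend_receives_expected_params():
    with mock.patch("litellm.acompletion", new_callable=mock.AsyncMock) as mock_call:
        mock_call.return_value = {"choices": [{"message": {"content": "hi"}}]}
        asyncio.run(proxy_chat_endpoint(
            {"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "hi"}]}
        ))
        # The interesting assertion: the backend got exactly the forwarded params.
        mock_call.assert_called_once_with(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "hi"}],
            max_tokens=10,
        )
```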
Marc Abramowitz
3f437525dd Simplify mock_patch_acompletion 2024-05-02 12:47:27 -07:00