Marc Abramowitz
4194bafae0
Add nicer test ids when using pytest -v
...
Replace:
```
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route0] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route10] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route11] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route12] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route13] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route14] PASSED
```
with:
```
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'audio_transcriptions', 'path': '/audio/transcriptions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'audio_transcriptions', 'path': '/v1/audio/transcriptions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'chat_completion', 'path': '/chat/completions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'chat_completion', 'path': '/engines/{model}/chat/completions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'chat_completion', 'path': '/openai/deployments/{model}/chat/completions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'chat_completion', 'path': '/v1/chat/completions'}] PASSED
```
2024-05-16 11:34:22 -07:00
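The readable ids in the commit above can be obtained by parametrizing the test over dicts and passing an explicit `ids=` callable to pytest; the sketch below is a minimal illustration (the route list and test body are stand-ins, not the actual litellm test).
```
import pytest

# A few stand-in routes; the real test covers many more proxy endpoints.
API_ROUTES = [
    {"route": "audio_transcriptions", "path": "/v1/audio/transcriptions"},
    {"route": "chat_completion", "path": "/chat/completions"},
    {"route": "chat_completion", "path": "/engines/{model}/chat/completions"},
]


# ids=str renders each dict as the readable test id shown above, instead of
# pytest's default api_route0, api_route1, ...
@pytest.mark.parametrize("api_route", API_ROUTES, ids=str)
def test_generate_and_call_with_valid_key(api_route):
    assert api_route["path"].startswith("/")
```
Running `pytest -v` then prints one line per route, keyed by the dict's repr.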
Marc Abramowitz
cf71857354
Add more routes to test_generate_and_call_with_valid_key
2024-05-16 10:44:36 -07:00
Marc Abramowitz
dc52c83b88
Add more routes to test_generate_and_call_with_valid_key
2024-05-16 10:05:35 -07:00
Marc Abramowitz
d5b2e8e7e8
Make test_generate_and_call_with_valid_key parametrized
...
This allows us to test the same code with different routes.
For example, it lets us test the `/engines/{model}/chat/completions`
route, which https://github.com/BerriAI/litellm/pull/3663 fixes.
2024-05-16 09:54:10 -07:00
lj
603705661a
Update model config in test_config.py
2024-05-16 16:51:36 +08:00
Krish Dholakia
57d425aed7
Merge pull request #3666 from BerriAI/litellm_jwt_fix
...
feat(proxy_server.py): JWT-Auth improvements
2024-05-15 22:22:44 -07:00
Krrish Dholakia
600b6f7e1d
feat(proxy_server.py): support 'user_id_upsert' flag for jwt_auth
2024-05-15 22:19:59 -07:00
Krrish Dholakia
99653d2d3e
feat(handle_jwt.py): add support for 'team_id_default'
...
allows admin to set a default team id for spend-tracking + permissions
2024-05-15 21:33:35 -07:00
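A minimal sketch of the default-team behaviour described above, assuming the JWT claims arrive as a dict (the helper name and claim key are illustrative, not litellm's handle_jwt.py):
```
# Hypothetical helper: fall back to an admin-configured default team id when the
# token carries no team claim, so spend tracking and permissions still attach to a team.
def resolve_team_id(jwt_claims: dict, team_id_default: str | None = None) -> str | None:
    return jwt_claims.get("team_id") or team_id_default


print(resolve_team_id({"sub": "user-123"}, team_id_default="internal-team"))  # internal-team
print(resolve_team_id({"sub": "user-123", "team_id": "team-a"}))              # team-a
```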
Ishaan Jaff
bb86d2510f
(ci/cd) run again
2024-05-15 21:07:55 -07:00
Krrish Dholakia
f48cd87cf3
feat(proxy_server.py): make team_id optional for jwt token auth (only enforced, if set)
...
Allows users to use jwt auth for internal chat apps
2024-05-15 21:05:14 -07:00
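A sketch of the "only enforced, if set" behaviour, assuming the proxy holds an optional list of allowed team ids (names are illustrative):
```
# Hypothetical check: with no team restriction configured, a valid JWT is enough
# (e.g. for internal chat apps); otherwise the token's team must be on the list.
def check_team_access(jwt_claims: dict, allowed_team_ids: list[str] | None = None) -> None:
    if not allowed_team_ids:
        return  # enforcement is off when nothing is configured
    team_id = jwt_claims.get("team_id")
    if team_id not in allowed_team_ids:
        raise PermissionError(f"team {team_id!r} is not allowed")


check_team_access({"sub": "user-123"})                                   # passes: nothing enforced
check_team_access({"sub": "user-123", "team_id": "team-a"}, ["team-a"])  # passes: team matches
```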
Ishaan Jaff
7aac76b485
Merge pull request #3662 from BerriAI/litellm_feat_predibase_exceptions
...
[Fix] Mask API Keys from Predibase AuthenticationErrors
2024-05-15 20:45:40 -07:00
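A sketch of the masking idea behind that fix; the regex and key format are assumptions, not the PR's actual implementation:
```
import re


def mask_api_key(error_message: str) -> str:
    # Keep the first four characters of anything that looks like a bearer token
    # and blank out the rest, so authentication errors never leak the full key.
    return re.sub(r"(Bearer\s+\w{4})\w+", r"\1********", error_message)


print(mask_api_key("Predibase auth failed for Bearer pb_1234567890abcdef"))
# Predibase auth failed for Bearer pb_1********
```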
Krish Dholakia
25e4b34574
Merge pull request #3660 from BerriAI/litellm_proxy_ui_general_settings
...
feat(proxy_server.py): Enabling Admin to control general settings on proxy ui
2024-05-15 20:36:42 -07:00
Ishaan Jaff
e49fa9bd2c
(ci/cd) run again
2024-05-15 20:29:23 -07:00
Krrish Dholakia
594ca947c8
fix(parallel_request_limiter.py): fix max parallel request limiter on retries
2024-05-15 20:16:11 -07:00
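The gist of a limiter that behaves correctly across retries, sketched with a plain asyncio.Semaphore rather than litellm's actual parallel_request_limiter.py: the slot is held once per logical request and released even when attempts fail.
```
import asyncio


class ParallelRequestLimiter:
    """Illustrative limiter, not litellm's implementation."""

    def __init__(self, max_parallel_requests: int) -> None:
        self._slots = asyncio.Semaphore(max_parallel_requests)

    async def run(self, call, max_retries: int = 2):
        # One slot per logical request; retries reuse the same slot instead of
        # acquiring (or leaking) additional ones.
        async with self._slots:
            for attempt in range(max_retries + 1):
                try:
                    return await call()
                except Exception:
                    if attempt == max_retries:
                        raise


async def main():
    limiter = ParallelRequestLimiter(max_parallel_requests=2)

    async def call():
        return "ok"

    print(await limiter.run(call))


asyncio.run(main())
```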
Ishaan Jaff
c2a306c4dd
(ci/cd) run again
2024-05-15 20:03:30 -07:00
Ishaan Jaff
136746abc9
fix test config
2024-05-15 19:42:39 -07:00
Ishaan Jaff
d208dedb35
(ci/cd) run again
2024-05-15 17:39:21 -07:00
Ishaan Jaff
240b183d7a
ci/cd run again
2024-05-15 17:31:14 -07:00
Ishaan Jaff
ed0a815c2b
test - exceptions predibase
2024-05-15 16:53:41 -07:00
Ishaan Jaff
f138c15859
(ci/cd) fix test_vertex_ai_stream
2024-05-15 16:32:40 -07:00
Ishaan Jaff
f2e8b2500f
fix function calling mistral large latest
2024-05-15 16:05:17 -07:00
Ishaan Jaff
e518b1e6c1
fix - vertex exception test
2024-05-15 15:37:59 -07:00
Ishaan Jaff
6d8ea641ec
(ci/cd) fix test_content_policy_exception_azure
2024-05-15 14:47:39 -07:00
Ishaan Jaff
371043d683
fix - test mistral/large _parallel_function_call
2024-05-15 14:31:00 -07:00
Ishaan Jaff
3e831b4e1a
fix debug logs on router test
2024-05-15 14:28:17 -07:00
Ishaan Jaff
fdf7a4d8c8
fix - test_lowest_latency_routing_first_pick
2024-05-15 14:24:13 -07:00
Ishaan Jaff
5177e4408e
Merge pull request #3651 from BerriAI/litellm_improve_load_balancing
...
[Feat] Proxy + router - don't cooldown on 4XX error that are not 429, 408, 401
2024-05-15 10:24:34 -07:00
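The rule in that PR title reduces to a small predicate; this is a hedged illustration, not the router's actual code:
```
# Only errors that suggest the deployment itself is throttled or unhealthy put it
# on cooldown; other 4XX responses are treated as caller errors and keep routing.
COOLDOWN_4XX_CODES = {401, 408, 429}


def should_cooldown(status_code: int) -> bool:
    return status_code in COOLDOWN_4XX_CODES or status_code >= 500


assert should_cooldown(429)      # rate limited -> cool down
assert should_cooldown(503)      # server error -> cool down
assert not should_cooldown(400)  # bad request -> do not cool down
```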
Ishaan Jaff
ae80148c12
test - router cooldowns
2024-05-15 09:43:30 -07:00
Krrish Dholakia
f43da3597d
test: fix test
2024-05-15 08:51:40 -07:00
Krrish Dholakia
1840919ebd
fix(main.py): testing fix
2024-05-15 08:23:00 -07:00
Krrish Dholakia
8117af664c
fix(huggingface_restapi.py): fix task extraction from model name
2024-05-15 07:28:19 -07:00
Krrish Dholakia
900bb9aba8
test(test_token_counter.py): fix load test
2024-05-15 07:12:43 -07:00
Ishaan Jaff
0bac40b0f2
ci/cd run again
2024-05-14 21:53:14 -07:00
Ishaan Jaff
6290de36df
(ci/cd) run again
2024-05-14 21:39:09 -07:00
Krrish Dholakia
73b6b5e804
test(test_token_counter.py): fix token counting test
2024-05-14 21:35:28 -07:00
Ishaan Jaff
faa58c7938
(ci/cd) run again
2024-05-14 20:45:07 -07:00
Ishaan Jaff
6d1ae5b9c4
(ci/cd) run again
2024-05-14 20:18:12 -07:00
Krrish Dholakia
298fd9b25c
fix(main.py): ignore model_config param
2024-05-14 19:03:17 -07:00
Krrish Dholakia
a1dd341ca1
fix(utils.py): default claude-3 to tiktoken (0.8s faster than hf tokenizer)
2024-05-14 18:37:14 -07:00
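The speed-up comes from counting tokens with tiktoken instead of loading a Hugging Face tokenizer; the sketch below uses the cl100k_base encoding as an approximation for claude-3 (an assumed mapping, not Anthropic's own tokenizer):
```
import tiktoken


def approx_token_count(text: str) -> int:
    # cl100k_base loads quickly, whereas pulling a HF tokenizer can add close to
    # a second to the first token-count call.
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text))


print(approx_token_count("Hello, Claude!"))
```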
Krish Dholakia
b04a8d878a
Revert "Logfire Integration"
2024-05-14 17:38:47 -07:00
Krrish Dholakia
c0d701a51e
test(test_config.py): fix linting error
2024-05-14 17:32:31 -07:00
Krrish Dholakia
4e30d7cf5e
build(model_prices_and_context_window.json): add gemini 1.5 flash model info
2024-05-14 17:30:31 -07:00
Ishaan Jaff
aa1615c757
Merge pull request #3626 from BerriAI/litellm_reset_spend_per_team_api_key
...
feat - reset spend per team, api_key [Only Master Key]
2024-05-14 11:49:07 -07:00
Krish Dholakia
adaafd72be
Merge pull request #3599 from taralika/patch-1
...
Ignore 0 failures and 0s latency in daily slack reports
2024-05-14 11:47:46 -07:00
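The reporting rule amounts to filtering out entries with nothing to report; the field names below are assumptions about the report's shape:
```
def failures_to_report(failure_counts: dict[str, int]) -> dict[str, int]:
    # Deployments with zero failures are left out of the failure section.
    return {name: count for name, count in failure_counts.items() if count > 0}


def latencies_to_report(latencies: dict[str, float]) -> dict[str, float]:
    # Deployments with 0s recorded latency (no traffic) are left out as well.
    return {name: secs for name, secs in latencies.items() if secs > 0}


print(failures_to_report({"gpt-4": 0, "claude-3": 2}))       # {'claude-3': 2}
print(latencies_to_report({"gpt-4": 0.0, "claude-3": 1.4}))  # {'claude-3': 1.4}
```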
alisalim17
765c382b2a
Merge remote-tracking branch 'upstream/main'
2024-05-14 22:32:57 +04:00
Ishaan Jaff
ca41e6590e
test - auth on /reset/spend
2024-05-14 11:28:35 -07:00
Krrish Dholakia
7557b3e2ff
fix(init.py): set 'default_fallbacks' as a litellm_setting
2024-05-14 11:15:53 -07:00
Ishaan Jaff
0c8f5e5649
Merge pull request #3266 from antonioloison/litellm_add_disk_cache
...
[Feature] Add cache to disk
2024-05-14 09:24:01 -07:00
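A minimal sketch of a disk-backed response cache using the diskcache package directly; how the PR wires this into litellm's own Cache class may differ:
```
from diskcache import Cache

# Persist responses on disk so identical prompts can be answered from the local
# cache instead of re-calling the provider.
cache = Cache("./llm_response_cache")

prompt_key = "gpt-3.5-turbo::What is the capital of France?"
cached = cache.get(prompt_key)
if cached is not None:
    print("cache hit:", cached)
else:
    response = "Paris"  # placeholder for the real completion call
    cache.set(prompt_key, response, expire=3600)  # keep for one hour
    print("cache miss, stored:", response)
```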
alisalim17
18bf68298f
Merge remote-tracking branch 'upstream/main'
2024-05-14 18:42:20 +04:00
Anand Taralika
bd2e4cdfe0
Fixed the test alert sequence
...
Also fixed an issue caused by MagicMock not creating asynchronous mocks by default.
2024-05-13 22:43:12 -07:00
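The MagicMock note refers to standard-library behaviour: a plain MagicMock is not awaitable, so async code under test needs unittest.mock.AsyncMock. A small illustration:
```
import asyncio
from unittest.mock import AsyncMock, MagicMock

send_alert = AsyncMock(return_value=None)  # awaitable; records await_count
sync_mock = MagicMock(return_value=None)   # calling it returns None, which cannot be awaited


async def main():
    await send_alert("daily report")
    # await sync_mock("daily report")  # TypeError: object NoneType can't be used in 'await' expression
    print("alerts awaited:", send_alert.await_count)  # 1


asyncio.run(main())
```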