litellm-mirror/litellm
Marc Abramowitz · 156d04ec43 · Add nicer test ids when using `pytest -v`
Replace:

```
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route0] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route10] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route11] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route12] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route13] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route14] PASSED
```

with:

```
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'audio_transcriptions', 'path': '/audio/transcriptions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'audio_transcriptions', 'path': '/v1/audio/transcriptions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'chat_completion', 'path': '/chat/completions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'chat_completion', 'path': '/engines/{model}/chat/completions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'chat_completion', 'path': '/openai/deployments/{model}/chat/completions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'chat_completion', 'path': '/v1/chat/completions'}] PASSED
```
2024-05-16 11:34:22 -07:00
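The before/after output above is consistent with supplying explicit `ids` to `pytest.mark.parametrize`: by default pytest numbers non-scalar parameters (`api_route0`, `api_route1`, …), while passing the `str()` of each dict yields the readable form. A minimal sketch of the technique — the route list and test body here are illustrative stand-ins, not the actual litellm test:

```python
import pytest

# Illustrative subset of parametrized routes (not the full litellm list)
API_ROUTES = [
    {"route": "chat_completion", "path": "/chat/completions"},
    {"route": "audio_transcriptions", "path": "/audio/transcriptions"},
]

# Readable ids: the str() of each parameter dict, matching the output above.
# Without `ids`, pytest falls back to api_route0, api_route1, ...
API_ROUTE_IDS = [str(r) for r in API_ROUTES]

@pytest.mark.parametrize("api_route", API_ROUTES, ids=API_ROUTE_IDS)
def test_generate_and_call_with_valid_key(api_route):
    # Placeholder body; the real test exercises key generation against the proxy
    assert api_route["path"].startswith("/")
```

Running `pytest -v` then reports each case under its dict repr instead of an opaque index.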
| Name | Last commit message | Date |
| --- | --- | --- |
| `assistants` | feat(assistants/main.py): support litellm.get_assistants() and litellm.get_messages() | 2024-05-04 21:30:28 -07:00 |
| `deprecated_litellm_server` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `integrations` | fix(slack_alerting.py): fix timezone utc issue | 2024-05-14 22:54:33 -07:00 |
| `llms` | fix(huggingface_restapi.py): fix task extraction from model name | 2024-05-15 07:28:19 -07:00 |
| `proxy` | Add "/engines/{model}/chat/completions" to openai_routes | 2024-05-16 10:03:23 -07:00 |
| `router_strategy` | fix(lowest_latency.py): allow ttl to be a float | 2024-05-15 09:59:21 -07:00 |
| `tests` | Add nicer test ids when using pytest -v | 2024-05-16 11:34:22 -07:00 |
| `types` | fix(types/init.py): don't import openai assistants types by default | 2024-05-15 08:50:31 -07:00 |
| `__init__.py` | docs(prod.md): add 'disable load_dotenv' tutorial to docs | 2024-05-14 19:13:22 -07:00 |
| `_logging.py` | feat(utils.py): json logs for raw request sent by litellm | 2024-04-29 19:21:19 -07:00 |
| `_redis.py` | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| `_service_logger.py` | fix(test_lowest_tpm_rpm_routing_v2.py): unit testing for usage-based-routing-v2 | 2024-04-18 21:38:00 -07:00 |
| `_version.py` | (fix) ci/cd don't let importing litellm._version block starting proxy | 2024-02-01 16:23:16 -08:00 |
| `budget_manager.py` | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| `caching.py` | Merge pull request #3266 from antonioloison/litellm_add_disk_cache | 2024-05-14 09:24:01 -07:00 |
| `cost.json` | | |
| `exceptions.py` | fix - show litellm_debug_info | 2024-05-15 13:07:04 -07:00 |
| `main.py` | test: fix test | 2024-05-15 08:51:40 -07:00 |
| `model_prices_and_context_window_backup.json` | fix(router.py): fix validation error for default fallback | 2024-05-15 13:23:00 -07:00 |
| `requirements.txt` | Add symlink and only copy in source dir to stay under 50MB compressed limit for Lambdas. | 2023-11-22 23:07:33 -05:00 |
| `router.py` | fix(router.py): fix validation error for default fallback | 2024-05-15 13:23:00 -07:00 |
| `timeout.py` | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| `utils.py` | fix - show litellm_debug_info | 2024-05-15 13:07:04 -07:00 |