litellm-mirror/litellm/proxy/tests
Krish Dholakia 3933fba41f
LiteLLM Minor Fixes & Improvements (09/19/2024) (#5793)
* fix(model_prices_and_context_window.json): add cost tracking for more vertex llama3.1 models (8b and 70b)

* fix(proxy/utils.py): handle data being None on pre-call hooks

* fix(proxy/): create views on initial proxy startup

fixes the base case, where a user starts the proxy for the first time

Fixes https://github.com/BerriAI/litellm/issues/5756

* build(config.yml): fix vertex version for test

* feat(ui/): support enabling/disabling slack alerting

Allows an admin to turn Slack alerting on/off through the UI

* feat(rerank/main.py): support langfuse logging

* fix(proxy/utils.py): fix linting errors

* fix(langfuse.py): log clean metadata

* test(tests): replace deprecated openai model
2024-09-20 08:19:52 -07:00
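The pre-call-hook fix in the commit message above (guarding against data being None) can be sketched as follows. This is a minimal, hypothetical illustration: the hook name, signature, and defaulting behavior are assumptions for this sketch, not LiteLLM's actual proxy hook API.

```python
# Hypothetical sketch of a pre-call hook that tolerates a missing request body.
# The function name and signature are illustrative assumptions, not LiteLLM's API.
from typing import Optional


async def pre_call_hook(data: Optional[dict]) -> dict:
    # Treat a missing request body as an empty dict so downstream key
    # lookups and mutations do not raise TypeError on None.
    if data is None:
        data = {}
    # Ensure a metadata container exists for later enrichment.
    data.setdefault("metadata", {})
    return data
```

A caller passing None then receives a usable dict instead of propagating the None into later hooks.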
llama_index_data (test) llama index VectorStoreIndex 2024-02-09 16:49:03 -08:00
bursty_load_test_completion.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
error_log.txt (test) load test embedding: proxy 2023-11-24 17:14:44 -08:00
large_text.py fix(router.py): check for context window error when handling 400 status code errors 2024-03-26 08:08:15 -07:00
load_test_completion.py (fix) add some better load testing 2024-03-22 19:48:54 -07:00
load_test_embedding.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
load_test_embedding_100.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
load_test_embedding_proxy.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
load_test_q.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
request_log.txt (test) load test embedding: proxy 2023-11-24 17:14:44 -08:00
test_anthropic_context_caching.py fix using prompt caching on proxy 2024-08-15 20:12:11 -07:00
test_anthropic_sdk.py update tests 2024-07-22 14:44:47 -07:00
test_async.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
test_gemini_context_caching.py docs cachedContent endpoint 2024-08-08 16:06:23 -07:00
test_langchain_request.py (test) proxy - log metadata to langfuse 2024-01-01 11:54:16 +05:30
test_llamaindex.py (test) llama index VectorStoreIndex 2024-02-09 16:49:03 -08:00
test_mistral_sdk.py example mistral sdk 2024-07-25 19:48:54 -07:00
test_openai_embedding.py test - re-order embedding responses 2024-04-08 12:02:40 -07:00
test_openai_exception_request.py (test) proxy - add openai exception mapping error 2024-01-15 09:56:20 -08:00
test_openai_js.js (test) large request 2024-02-12 21:49:47 -08:00
test_openai_request.py (docs) also test gpt-4 vision enhancements 2024-01-17 18:46:41 -08:00
test_openai_request_with_traceparent.py test - propagate trace IDs across services 2024-06-11 14:00:25 -07:00
test_openai_simple_embedding.py test - base64 cache hits 2024-04-10 16:46:56 -07:00
test_openai_tts_request.py docs on using vertex tts 2024-08-23 17:57:49 -07:00
test_pass_through_langfuse.py test - pass through langfuse requests 2024-06-28 17:28:21 -07:00
test_q.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
test_simple_traceparent_openai.py doc - OTEL trace propagation 2024-06-11 14:25:33 -07:00
test_vertex_sdk_forward_headers.py LiteLLM Minor Fixes & Improvements (09/19/2024) (#5793) 2024-09-20 08:19:52 -07:00
test_vtx_embedding.py add test vtx embedding 2024-08-21 17:05:47 -07:00
test_vtx_sdk_embedding.py use litellm proxy with vertex ai sdk 2024-08-21 17:47:01 -07:00