litellm-mirror/litellm/proxy/tests
Ishaan Jaff fd87ae69b8
[Vertex Multimodal embeddings] Fixes to work with Langchain OpenAI Embedding (#5949)
* fix parallel request limiter - use one cache update call
* ci/cd run again
* run ci/cd again
* use docker username password
* fix config.yml
* fix config
* fix config
* fix config.yml
* ci/cd run again
* use correct typing for batch set cache
* fix async_set_cache_pipeline
* fix only check user id tpm / rpm limits when limits set
* fix test_openai_azure_embedding_with_oidc_and_cf
* add InstanceImage type
* fix vertex image transform
* add langchain vertex test request
* add new vertex test
* update multimodal embedding tests
* add test_vertexai_multimodal_embedding_base64image_in_input
* simplify langchain mm embedding usage
* add langchain example for multimodal embeddings on vertex
* fix linting error
2024-09-27 18:04:03 -07:00
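The limiter commits above ("use one cache update call", "fix async_set_cache_pipeline") replace per-key cache writes with a single batched update. A minimal sketch of that pattern, using a toy in-memory cache — the class, method body, and the `user:123:*` keys are illustrative assumptions, not litellm's actual implementation:

```python
import asyncio
from typing import Any, List, Tuple


class InMemoryCache:
    """Toy cache standing in for litellm's cache layer (hypothetical)."""

    def __init__(self) -> None:
        self.store: dict = {}

    async def async_set_cache_pipeline(
        self, cache_list: List[Tuple[str, Any]]
    ) -> None:
        # One batched call instead of one round-trip per key,
        # mirroring the "use one cache update call" fix.
        for key, value in cache_list:
            self.store[key] = value


async def main() -> dict:
    cache = InMemoryCache()
    # request-limiter counters updated in a single pipeline call
    await cache.async_set_cache_pipeline(
        [("user:123:tpm", 1000), ("user:123:rpm", 10)]
    )
    return cache.store


print(asyncio.run(main()))
```

With a real backend such as Redis, the batched variant maps naturally onto a pipeline, cutting N network round-trips down to one.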
llama_index_data (test) llama index VectorStoreIndex 2024-02-09 16:49:03 -08:00
bursty_load_test_completion.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
error_log.txt (test) load test embedding: proxy 2023-11-24 17:14:44 -08:00
large_text.py fix(router.py): check for context window error when handling 400 status code errors 2024-03-26 08:08:15 -07:00
load_test_completion.py (fix) add some better load testing 2024-03-22 19:48:54 -07:00
load_test_embedding.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
load_test_embedding_100.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
load_test_embedding_proxy.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
load_test_q.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
request_log.txt (test) load test embedding: proxy 2023-11-24 17:14:44 -08:00
test_anthropic_context_caching.py fix using prompt caching on proxy 2024-08-15 20:12:11 -07:00
test_anthropic_sdk.py update tests 2024-07-22 14:44:47 -07:00
test_async.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
test_gemini_context_caching.py docs cachedContent endpoint 2024-08-08 16:06:23 -07:00
test_langchain_embedding.py [Vertex Multimodal embeddings] Fixes to work with Langchain OpenAI Embedding (#5949) 2024-09-27 18:04:03 -07:00
test_langchain_request.py (test) proxy - log metadata to langfuse 2024-01-01 11:54:16 +05:30
test_llamaindex.py (test) llama index VectorStoreIndex 2024-02-09 16:49:03 -08:00
test_mistral_sdk.py example mistral sdk 2024-07-25 19:48:54 -07:00
test_openai_embedding.py test - re-order embedding responses 2024-04-08 12:02:40 -07:00
test_openai_exception_request.py (test) proxy - add openai exception mapping error 2024-01-15 09:56:20 -08:00
test_openai_js.js (test) large request 2024-02-12 21:49:47 -08:00
test_openai_request.py (docs) also test gpt-4 vision enhancements 2024-01-17 18:46:41 -08:00
test_openai_request_with_traceparent.py test - propagate trace IDs across services 2024-06-11 14:00:25 -07:00
test_openai_simple_embedding.py test -base64 cache hits 2024-04-10 16:46:56 -07:00
test_openai_tts_request.py docs on using vertex tts 2024-08-23 17:57:49 -07:00
test_pass_through_langfuse.py test - pass through langfuse requests 2024-06-28 17:28:21 -07:00
test_q.py refactor: add black formatting 2023-12-25 14:11:20 +05:30
test_simple_traceparent_openai.py doc - OTEL trace propagation 2024-06-11 14:25:33 -07:00
test_vertex_sdk_forward_headers.py LiteLLM Minor Fixes & Improvements (09/19/2024) (#5793) 2024-09-20 08:19:52 -07:00
test_vtx_embedding.py add test vtx embedding 2024-08-21 17:05:47 -07:00
test_vtx_sdk_embedding.py use litellm proxy with vertex ai sdk 2024-08-21 17:47:01 -07:00
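Several of the tests above (test_langchain_embedding.py, test_vertexai_multimodal_embedding_base64image_in_input) exercise translating OpenAI-style embedding inputs — plain text or base64 data URLs — into Vertex multimodal embedding instances. A hedged sketch of that mapping; `to_vertex_instance` is a hypothetical helper, not litellm's transform, though the `{"text": ...}` / `{"image": {"bytesBase64Encoded": ...}}` shapes follow Vertex's multimodal embedding instance format:

```python
import base64


def to_vertex_instance(item: str) -> dict:
    """Map one OpenAI-style embedding input to a Vertex multimodal
    embedding instance (illustrative sketch, not litellm's actual code)."""
    if item.startswith("data:image/"):
        # base64 data URL -> image instance; keep only the payload
        # after the "data:image/...;base64," header
        b64 = item.split(",", 1)[1]
        return {"image": {"bytesBase64Encoded": b64}}
    # anything else is treated as a text instance
    return {"text": item}


# text input stays a text instance
print(to_vertex_instance("hello world"))

# a data-URL image becomes a bytesBase64Encoded image instance
png_b64 = base64.b64encode(b"\x89PNG fake bytes").decode()
print(to_vertex_instance(f"data:image/png;base64,{png_b64}"))
```

This is why a Langchain `OpenAIEmbeddings` client pointed at the proxy can send mixed text and image inputs in one request: each list element is dispatched on its prefix.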