Mirror of https://github.com/BerriAI/litellm.git
Synced 2025-04-26 11:14:04 +00:00
* fix parallel request limiter - use one cache update call
* ci/cd run again
* run ci/cd again
* use docker username password
* fix config.yml
* fix config
* fix config
* fix config.yml
* ci/cd run again
* use correct typing for batch set cache
* fix async_set_cache_pipeline
* fix only check user id tpm / rpm limits when limits set
* fix test_openai_azure_embedding_with_oidc_and_cf
* add InstanceImage type
* fix vertex image transform
* add langchain vertex test request
* add new vertex test
* update multimodal embedding tests
* add test_vertexai_multimodal_embedding_base64image_in_input
* simplify langchain mm embedding usage
* add langchain example for multimodal embeddings on vertex
* fix linting error
llama_index_data
bursty_load_test_completion.py
error_log.txt
large_text.py
load_test_completion.py
load_test_embedding.py
load_test_embedding_100.py
load_test_embedding_proxy.py
load_test_q.py
request_log.txt
test_anthropic_context_caching.py
test_anthropic_sdk.py
test_async.py
test_gemini_context_caching.py
test_langchain_embedding.py
test_langchain_request.py
test_llamaindex.py
test_mistral_sdk.py
test_openai_embedding.py
test_openai_exception_request.py
test_openai_js.js
test_openai_request.py
test_openai_request_with_traceparent.py
test_openai_simple_embedding.py
test_openai_tts_request.py
test_pass_through_langfuse.py
test_q.py
test_simple_traceparent_openai.py
test_vertex_sdk_forward_headers.py
test_vtx_embedding.py
test_vtx_sdk_embedding.py