Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-26 03:04:13 +00:00)
Latest commit:

* feat(litellm_pre_call_utils.py): support forwarding request headers to backend llm api
* fix(litellm_pre_call_utils.py): handle custom litellm key header
* test(router_code_coverage.py): check if all router functions are directly tested, to prevent regressions (#6186)
* docs(configs.md): document all environment variables (#6185)
* docs: make it easier to find anthropic/openai prompt caching doc
* added codecov yml (#6207)
  * fix codecov.yaml
  * run ci/cd again
* (refactor) caching: use LLMCachingHandler for async_get_cache and set_cache (#6208)
  * use folder for caching
  * fix importing caching
  * fix clickhouse pyright
  * fix linting
  * fix correctly pass kwargs and args
  * fix test case for embedding
  * fix embedding caching logic
  * fix refactor handle utils.py
  * fix test_embedding_caching_azure_individual_items_reordered
* (feat) prometheus: well-defined latency buckets (#6211) (see the sketch after this list)
  * use a well-defined latency bucket
  * use types file for prometheus logging
  * add test for LATENCY_BUCKETS
  * fix prom testing
  * fix config.yml
* (refactor) caching: use LLMCachingHandler for caching streaming responses (#6210)
  * refactor async set stream cache
  * fix linting
* bump (#6187)
  * update code cov yaml
  * fix config.yml
  * add caching component to code cov
  * fix config.yml ci/cd
  * add coverage for proxy auth
* (refactor) caching: use common `_retrieve_from_cache` helper (#6212)
  * refactor: use _retrieve_from_cache
  * refactor: use _convert_cached_result_to_model_response
  * fix linting errors
* bump: version 1.49.2 → 1.49.3
* fix code cov components
* test(test_router_helpers.py): add router component unit tests
* test: add additional router tests and more mock functions
* ci(router_code_coverage.py): fix check
* bump: version 1.49.3 → 1.49.4
* (refactor) use helper function `_assemble_complete_response_from_streaming_chunks` to assemble complete responses in caching and logging callbacks (#6220)
  * add unit tests for _assemble_complete_response_from_streaming_chunks
  * fix assemble complete_streaming_response
  * config: add logging_testing
  * add logging_coverage in codecov
  * fix: remove unused / junk function
  * add test for streaming_chunks when error assembling
* (refactor) OTEL: use safe_set_attribute for setting attributes (#6226)
* (fix) prompt caching cost calculation for OpenAI, Azure OpenAI (#6231)
* fix(allowed_model_region): allow 'us' as allowed region (#6234)
* fix(litellm_pre_call_utils.py): support 'us' region routing + fix header forwarding to filter on `x-` headers
* docs(customer_routing.md): fix region-based routing example
* feat(azure.py): handle empty arguments function call - azure (closes https://github.com/BerriAI/litellm/issues/6241)
* feat(guardrails_ai.py): support Guardrails AI integration (adds support for on-prem guardrails via Guardrails AI)
* fix(proxy/utils.py): prevent SQL injection attack (fixes https://huntr.com/bounties/a4f6d357-5b44-4e00-9cac-f1cc351211d2)
* fix: fix linting errors
* fix(litellm_pre_call_utils.py): don't log litellm api key in proxy server request headers
* fix(litellm_pre_call_utils.py): don't forward stainless headers
* docs(guardrails_ai.md): add guardrails ai quick start to docs
* test: handle flaky test

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: yujonglee <yujonglee.dev@gmail.com>
Co-authored-by: Marcus Elwin <marcus@elwin.com>
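For context on the latency-bucket item above: a Prometheus histogram only yields meaningful latency percentiles when its bucket boundaries are declared explicitly rather than left at the client defaults. The sketch below shows the general pattern with `prometheus_client`; the metric name and bucket values are illustrative assumptions, not the actual LATENCY_BUCKETS litellm ships.

```python
from prometheus_client import Histogram

# Illustrative bucket boundaries in seconds -- litellm's actual LATENCY_BUCKETS may differ.
LATENCY_BUCKETS = (
    0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5,
    1.0, 2.5, 5.0, 10.0, 30.0, 60.0, float("inf"),
)

# Hypothetical metric name; fixing the buckets up front keeps histograms
# comparable across deployments and dashboards.
llm_request_latency = Histogram(
    "litellm_request_latency_seconds",
    "End-to-end LLM request latency",
    buckets=LATENCY_BUCKETS,
)

# Record the latency of one completed request (in seconds).
llm_request_latency.observe(1.37)
```

With fixed boundaries like these, queries such as `histogram_quantile(0.95, ...)` resolve against the same buckets everywhere the metric is emitted.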
Directory contents:
.litellm_cache
example_config_yaml
test_configs
test_model_response_typing
adroit-crow-413218-bc47f303efc9.json
azure_fine_tune.jsonl
batch_job_results_furniture.jsonl
conftest.py
data_map.txt
eagle.wav
gettysburg.wav
large_text.py
log.txt
messages_with_counts.py
model_cost.json
openai_batch_completions.jsonl
openai_batch_completions_router.jsonl
speech_vertex.mp3
stream_chunk_testdata.py
test_acompletion.py
test_acooldowns_router.py
test_add_function_to_prompt.py
test_add_update_models.py
test_alangfuse.py
test_alerting.py
test_amazing_s3_logs.py
test_amazing_vertex_completion.py
test_anthropic_prompt_caching.py
test_aproxy_startup.py
test_arize_ai.py
test_assistants.py
test_async_fn.py
test_async_opentelemetry.py
test_audio_speech.py
test_auth_checks.py
test_azure_content_safety.py
test_azure_openai.py
test_azure_perf.py
test_bad_params.py
test_banned_keyword_list.py
test_batch_completion_return_exceptions.py
test_batch_completions.py
test_bedrock_completion.py
test_blocked_user_list.py
test_braintrust.py
test_budget_manager.py
test_caching.py
test_caching_ssl.py
test_clarifai_completion.py
test_class.py
test_clickhouse_logger.py
test_cohere_completion.py
test_completion.py
test_completion_cost.py
test_completion_with_retries.py
test_config.py
test_cost_calc.py
test_custom_api_logger.py
test_custom_callback_input.py
test_custom_callback_router.py
test_custom_llm.py
test_custom_logger.py
test_datadog.py
test_deployed_proxy_keygen.py
test_dynamic_rate_limit_handler.py
test_dynamodb_logs.py
test_embedding.py
test_exceptions.py
test_file_types.py
test_fine_tuning_api.py
test_function_call_parsing.py
test_function_calling.py
test_function_setup.py
test_gcs_bucket.py
test_get_llm_provider.py
test_get_model_file.py
test_get_model_info.py
test_get_model_list.py
test_get_optional_params_embeddings.py
test_get_optional_params_functions_not_supported.py
test_get_secret.py
test_google_ai_studio_gemini.py
test_guardrails_ai.py
test_guardrails_config.py
test_health_check.py
test_helicone_integration.py
test_hf_prompt_templates.py
test_image_generation.py
test_img_resize.py
test_jwt.py
test_key_generate_dynamodb.py
test_key_generate_prisma.py
test_lakera_ai_prompt_injection.py
test_langchain_ChatLiteLLM.py
test_langsmith.py
test_least_busy_routing.py
test_litellm_max_budget.py
test_literalai.py
test_llm_guard.py
test_load_test_router_s3.py
test_loadtest_router.py
test_logfire.py
test_logging.py
test_longer_context_fallback.py
test_lowest_cost_routing.py
test_lowest_latency_routing.py
test_lunary.py
test_max_tpm_rpm_limiter.py
test_mem_usage.py
test_mock_request.py
test_model_alias_map.py
test_model_max_token_adjust.py
test_multiple_deployments.py
test_ollama.py
test_ollama_local.py
test_ollama_local_chat.py
test_openai_batches_and_files.py
test_openai_moderations_hook.py
test_opik.py
test_parallel_request_limiter.py
test_pass_through_endpoints.py
test_presidio_masking.py
test_profiling_router.py
test_prometheus.py
test_prometheus_service.py
test_prompt_caching.py
test_prompt_factory.py
test_prompt_injection_detection.py
test_promptlayer_integration.py
test_provider_specific_config.py
test_proxy_custom_auth.py
test_proxy_custom_logger.py
test_proxy_encrypt_decrypt.py
test_proxy_exception_mapping.py
test_proxy_gunicorn.py
test_proxy_pass_user_config.py
test_proxy_reject_logging.py
test_proxy_routes.py
test_proxy_server.py
test_proxy_server_caching.py
test_proxy_server_cost.py
test_proxy_server_keys.py
test_proxy_server_langfuse.py
test_proxy_server_spend.py
test_proxy_setting_guardrails.py
test_proxy_token_counter.py
test_proxy_utils.py
test_pydantic.py
test_pydantic_namespaces.py
test_python_38.py
test_register_model.py
test_rerank.py
test_router.py
test_router_batch_completion.py
test_router_caching.py
test_router_client_init.py
test_router_cooldowns.py
test_router_custom_routing.py
test_router_debug_logs.py
test_router_fallback_handlers.py
test_router_fallbacks.py
test_router_get_deployments.py
test_router_init.py
test_router_max_parallel_requests.py
test_router_pattern_matching.py
test_router_policy_violation.py
test_router_retries.py
test_router_tag_routing.py
test_router_timeout.py
test_router_utils.py
test_router_with_fallbacks.py
test_rules.py
test_sagemaker.py
test_scheduler.py
test_secret_detect_hook.py
test_secret_manager.py
test_simple_shuffle.py
test_spend_calculate_endpoint.py
test_spend_logs.py
test_stream_chunk_builder.py
test_streaming.py
test_supabase_integration.py
test_team_config.py
test_text_completion.py
test_timeout.py
test_together_ai.py
test_token_counter.py
test_tpm_rpm_routing_v2.py
test_traceloop.py
test_triton.py
test_update_spend.py
test_user_api_key_auth.py
test_utils.py
test_validate_environment.py
test_wandb.py
test_whisper.py
user_cost.json
vertex_ai.jsonl
vertex_key.json