* fix track custom llm provider on pass through routes
* fix use correct provider for google ai studio
* fix tracking custom llm provider on pass through route
* ui fix get provider logo
* update tests to track custom llm provider
* test_anthropic_streaming_with_headers
* Potential fix for code scanning alert no. 2263: Incomplete URL substring sanitization
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
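The "Incomplete URL substring sanitization" alerts above are CodeQL's warning about validating an upstream URL with a substring check. A minimal sketch of the usual remediation, assuming a hypothetical allow-list (the hosts here are illustrative, not the actual patch):

```python
from urllib.parse import urlparse

# Hypothetical allow-list; the real patch's hosts are not shown in this log.
ALLOWED_HOSTS = {"generativelanguage.googleapis.com"}

def is_allowed_upstream(url: str) -> bool:
    """Validate the upstream host by parsing the URL instead of substring matching.

    A check like `"googleapis.com" in url` also passes for
    `https://evil.example/?x=googleapis.com`; comparing the parsed hostname
    against an allow-list closes that gap.
    """
    hostname = urlparse(url).hostname or ""
    return hostname in ALLOWED_HOSTS or hostname.endswith(".googleapis.com")
```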
* test_openai_assistants_e2e_operations
* test openai assistants pass through
* fix GET request on pass through handler
* _make_non_streaming_http_request
* _is_assistants_api_request
* test_openai_assistants_e2e_operations
* test_openai_assistants_e2e_operations
* openai_proxy_route
* docs openai pass through
* docs openai pass through
* docs openai pass through
* test pass through handler
* Potential fix for code scanning alert no. 2240: Incomplete URL substring sanitization
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* add initial test for assembly ai
* start using PassthroughEndpointRouter
* migrate to llm passthrough endpoints
* add assembly ai as a known provider
* fix PassthroughEndpointRouter
* fix set_pass_through_credentials
* working EU request to assembly ai pass through endpoint
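For the EU-routed AssemblyAI request above, a rough caller-side sketch of what hitting the pass-through endpoint on a locally running proxy might look like; the `/eu.assemblyai` path prefix, payload fields, and key handling are assumptions drawn from these commit messages, not a documented contract:

```python
import requests

PROXY_BASE = "http://localhost:4000"  # locally running LiteLLM proxy
VIRTUAL_KEY = "sk-1234"               # LiteLLM virtual key, not the raw AssemblyAI key

# EU-routed AssemblyAI transcript request; the proxy is assumed to inject the
# real AssemblyAI credential and forward the body to the EU endpoint.
resp = requests.post(
    f"{PROXY_BASE}/eu.assemblyai/v2/transcript",
    headers={"Authorization": f"Bearer {VIRTUAL_KEY}"},
    json={"audio_url": "https://example.com/audio.mp3"},
    timeout=30,
)
print(resp.status_code, resp.json())
```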
* add e2e test assembly
* test_assemblyai_routes_with_bad_api_key
* clean up pass through endpoint router
* e2e testing for assembly ai pass through
* test assembly ai e2e testing
* delete assembly ai models
* fix code quality
* ui working assembly ai api base flow
* fix install assembly ai
* update model call details with kwargs for pass through logging
* fix tracking assembly ai model in response
* _handle_assemblyai_passthrough_logging
* fix test_initialize_deployment_for_pass_through_unsupported_provider
* TestPassthroughEndpointRouter
* _get_assembly_transcript
* fix assembly ai pt logging tests
* fix assemblyai_proxy_route
* fix _get_assembly_region_from_url
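The `_get_assembly_region_from_url` helper presumably maps the incoming pass-through URL to an AssemblyAI region. A hypothetical sketch of that idea, not the actual implementation:

```python
from typing import Optional

def _get_assembly_region_from_url(url: str) -> Optional[str]:
    """Hypothetical sketch of region detection for AssemblyAI pass-through.

    Assumption: EU traffic arrives on an `/eu.assemblyai/` route prefix and
    everything else defaults to the US endpoint.
    """
    if "/eu.assemblyai" in url:
        return "eu"
    if "/assemblyai" in url:
        return "us"
    return None
```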
* add assembly ai pass through request
* fix assembly pass through
* fix test_assemblyai_basic_transcribe
* fix assemblyai auth check
* test_assemblyai_transcribe_with_non_admin_key
* working assembly ai test
* working assembly ai proxy route
* use helper func to pass through logging
* clean up logging assembly ai
* test: update test to handle gemini token counter change
* fix(factory.py): fix bedrock http:// handling
* add unit testing for assembly pt handler
* docs assembly ai pass through endpoint
* fix proxy_pass_through_endpoint_tests
* fix standard_passthrough_logging_object
* fix ASSEMBLYAI_API_KEY
* test test_assemblyai_proxy_route_basic_post
* test_assemblyai_proxy_route_get_transcript
* fix is_assemblyai_route

* test_is_assemblyai_route
---------
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* test: initial commit enforcing testing on all anthropic pass through functions
prevents future regressions
* test(test_unit_test_anthropic_pass_through.py): add unit test for '_get_user_from_metadata' function
* test(test_unit_test_anthropic_passthrough.py): add unit test for handle_logging_anthropic_collected_chunks
* test(test_unit_test_anthropic_pass_through): add coverage for all anthropic pass through functions
* feat(pass_through_endpoints.py): fix anthropic end user cost tracking
* fix(anthropic/chat/transformation.py): use returned provider model for anthropic
handles anthropic `-latest` tag in request body throwing cost calculation errors
ensures we can be accurate in our model cost tracking
* feat(model_prices_and_context_window.json): add gemini-2.0-flash-thinking-exp pricing
* test: update test to use assumption that user_api_key_dict can get anthropic user id
* test: fix test
* fix: fix test
* fix(anthropic_pass_through.py): uncomment previous anthropic end-user cost tracking code block
can't guarantee user api key dict always has end user id - too many code paths
* fix(user_api_key_auth.py): this allows end user id from request body to always be read and set in auth object
* fix(auth_check.py): fix linting error
* test: fix auth check
* fix(auth_utils.py): fix get end user id to handle metadata = None
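The end-user tracking commits read the end user id from the request body rather than relying on the auth object always carrying it. A hypothetical, metadata-tolerant sketch of that reader (names are illustrative, not the actual `auth_utils.py` code):

```python
from typing import Optional

def get_end_user_id_from_request_body(request_body: dict) -> Optional[str]:
    """Hypothetical sketch: pull the end-user id out of the request body,
    tolerating a missing or explicitly-None `metadata` field."""
    if request_body.get("user"):                   # OpenAI-style bodies
        return str(request_body["user"])
    metadata = request_body.get("metadata") or {}  # metadata may be None
    if metadata.get("user_id"):                    # Anthropic-style bodies
        return str(metadata["user_id"])
    return None
```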
* run pass through logging async
* fix use thread_pool_executor for pass through logging
* test_pass_through_request_logging_failure_with_stream
* fix anthropic pt logging test
* test_pass_through_request_logging_failure
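The logging commits above move pass-through success logging off the request path. A minimal sketch of the general pattern, assuming a module-level executor (names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

# Module-level pool so the pass-through response is returned without waiting
# on logging callbacks (which may do slow network I/O of their own).
_logging_executor = ThreadPoolExecutor(max_workers=4, thread_name_prefix="pt-logging")

def log_pass_through_success(payload: dict) -> None:
    """Stand-in for the real success handler."""
    print("logged", payload.get("litellm_call_id"))

def handle_pass_through_response(payload: dict) -> None:
    # Fire-and-forget: logging runs in the pool, the caller returns immediately.
    _logging_executor.submit(log_pass_through_success, payload)
```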
* feat - allow using gemini js SDK with LiteLLM
* add auth for gemini_proxy_route
* basic local test for js
* test cost tagging gemini js requests
* add js sdk test for gemini with litellm
* add docs on gemini JS SDK
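The Google AI Studio pass-through lets the official SDKs point at the proxy instead of generativelanguage.googleapis.com. A rough sketch of the equivalent raw request the JS SDK would end up making (the `/gemini` route prefix and key header are assumptions from these commits, not a spec):

```python
import requests

PROXY_BASE = "http://localhost:4000"
VIRTUAL_KEY = "sk-1234"  # used in place of the Google AI Studio API key

# The JS SDK is pointed at `{PROXY_BASE}/gemini`; this is roughly the raw
# HTTP request it makes through the pass-through route.
resp = requests.post(
    f"{PROXY_BASE}/gemini/v1beta/models/gemini-1.5-flash:generateContent",
    headers={"x-goog-api-key": VIRTUAL_KEY},
    json={"contents": [{"parts": [{"text": "hello"}]}]},
    timeout=30,
)
print(resp.json())
```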
* run node.js tests
* fix google ai studio tests
* fix vertex js spend test
* feat - allow tagging vertex JS SDK request
* add unit testing for passing headers for pass through endpoints
* fix allow using vertex_ai as the primary way for pass through vertex endpoints
* docs on vertex js pass tags
* add e2e test for vertex pass through with spend tags
* add e2e tests for streaming vertex JS with tags
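The tagging commits suggest spend tags for Vertex pass-through requests ride in on request headers. A sketch under that assumption (the header name and route are illustrative, not confirmed by this log):

```python
import requests

PROXY_BASE = "http://localhost:4000"
VIRTUAL_KEY = "sk-1234"

# Assumption: spend tags are attached as a comma-separated `tags` header that
# the pass-through logger copies into the spend log entry.
resp = requests.post(
    f"{PROXY_BASE}/vertex_ai/publishers/google/models/gemini-1.5-flash:generateContent",
    headers={
        "Authorization": f"Bearer {VIRTUAL_KEY}",
        "tags": "vertex-js-sdk,pass-through",
    },
    json={"contents": [{"role": "user", "parts": [{"text": "hi"}]}]},
    timeout=30,
)
print(resp.status_code)
```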
* fix vertex ai testing
* stash gemini JS test
* add vertex js sdk example
* handle vertex pass through separately
* test vertex JS sdk
* fix vertex_proxy_route
* use PassThroughStreamingHandler
* fix PassThroughStreamingHandler
* use common _create_vertex_response_logging_payload_for_generate_content
* test vertex js
* add working vertex jest tests
* move basic pass through test
* use good name for test
* test vertex
* test_chunk_processor_yields_raw_bytes
* unit tests for streaming
* test_convert_raw_bytes_to_str_lines
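The streaming unit tests above cover the chunk processor that yields raw bytes and the helper that turns them into decoded lines for logging. A hypothetical sketch of what such a helper might do (not the actual implementation):

```python
from typing import List

def convert_raw_bytes_to_str_lines(raw_bytes: List[bytes]) -> List[str]:
    """Hypothetical sketch: reassemble collected raw chunks, decode, and split
    into non-empty lines so the success handler can parse individual SSE events."""
    joined = b"".join(raw_bytes).decode("utf-8", errors="replace")
    return [line for line in joined.split("\n") if line.strip()]
```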
* run unit tests 1st
* simplify local
* docs add usage example for js
* use get_litellm_virtual_key
* add unit tests for vertex pass through
* allow passing _litellm_metadata in pass through endpoints
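The `_litellm_metadata` commit lets callers thread LiteLLM-specific metadata through an otherwise opaque pass-through body. A sketch of what a caller-side request might look like; the `litellm_metadata` field name and its placement in the body are assumptions based on the commit message:

```python
import requests

PROXY_BASE = "http://localhost:4000"
VIRTUAL_KEY = "sk-1234"

# Assumption: the extra `litellm_metadata` field is read by the proxy for
# spend/tag tracking and stripped before the body is forwarded to Anthropic.
resp = requests.post(
    f"{PROXY_BASE}/anthropic/v1/messages",
    headers={"Authorization": f"Bearer {VIRTUAL_KEY}"},
    json={
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "hello"}],
        "litellm_metadata": {"tags": ["pass-through", "team-a"]},
    },
    timeout=30,
)
print(resp.json())
```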
* fix _create_anthropic_response_logging_payload
* include litellm_call_id in logging
* add e2e testing for anthropic spend logs
* add testing for spend logs payload
* add example with anthropic python SDK
* use 1 file for AnthropicPassthroughLoggingHandler
* add support for anthropic streaming usage tracking
* ci/cd run again
* fix - add real streaming for anthropic pass through
* remove unused function stream_response
* working anthropic streaming logging
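For the streaming usage tracking above: Anthropic's streaming format reports input tokens on the `message_start` event and final output tokens on the closing `message_delta`. A rough sketch of accumulating usage from collected SSE lines (illustrative, not the handler's actual code):

```python
import json
from typing import Dict, List

def usage_from_anthropic_sse(lines: List[str]) -> Dict[str, int]:
    """Accumulate token usage from collected Anthropic SSE `data:` lines."""
    usage = {"input_tokens": 0, "output_tokens": 0}
    for line in lines:
        if not line.startswith("data:"):
            continue
        event = json.loads(line[len("data:"):].strip())
        if event.get("type") == "message_start":
            usage["input_tokens"] = event["message"]["usage"].get("input_tokens", 0)
        elif event.get("type") == "message_delta":
            usage["output_tokens"] = event.get("usage", {}).get("output_tokens", 0)
        # other event types (content_block_delta, ping, ...) carry no usage
    return usage
```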
* fix code quality
* fix use 1 file for vertex success handler
* use helper for _handle_logging_vertex_collected_chunks
* enforce vertex streaming to use sse for streaming
* test test_basic_vertex_ai_pass_through_streaming_with_spendlog
* fix type hints
* add comment
* fix linting
* add pass through logging unit testing
* fix(model_prices_and_context_window.json): add cost tracking for more vertex llama3.1 models
8b and 70b models
* fix(proxy/utils.py): handle data being none on pre-call hooks
* fix(proxy/): create views on initial proxy startup
fixes base case, where user starts proxy for first time
Fixes https://github.com/BerriAI/litellm/issues/5756
* build(config.yml): fix vertex version for test
* feat(ui/): support enabling/disabling slack alerting
Allows admin to turn on/off slack alerting through ui
* feat(rerank/main.py): support langfuse logging
* fix(proxy/utils.py): fix linting errors
* fix(langfuse.py): log clean metadata
* test(tests): replace deprecated openai model