* run pass through logging async
* fix use thread_pool_executor for pass through logging
* test_pass_through_request_logging_failure_with_stream
* fix anthropic pt logging test
* test_pass_through_request_logging_failure
* feat - allow using gemini js SDK with LiteLLM
* add auth for gemini_proxy_route
* basic local test for js
* test cost tagging gemini js requests
* add js sdk test for gemini with litellm
* add docs on gemini JS SDK
* run node.js tests
* fix google ai studio tests
* fix vertex js spend test
* feat - allow tagging vertex JS SDK request
* add unit testing for passing headers for pass through endpoints
* fix allow using vertex_ai as the primary way for pass through vertex endpoints
* docs on vertex js pass through tags
* add e2e test for vertex pass through with spend tags
* add e2e tests for streaming vertex JS with tags
* fix vertex ai testing
* stash gemini JS test
* add vertex js sdk example
* handle vertex pass through separately
* test vertex JS sdk
* fix vertex_proxy_route
* use PassThroughStreamingHandler
* fix PassThroughStreamingHandler
* use common _create_vertex_response_logging_payload_for_generate_content
* test vertex js
* add working vertex jest tests
* move basic pass through test
* use good name for test
* test vertex
* test_chunk_processor_yields_raw_bytes
* unit tests for streaming
* test_convert_raw_bytes_to_str_lines
* run unit tests 1st
* simplify local
* docs add usage example for js
* use get_litellm_virtual_key
* add unit tests for vertex pass through
* allow passing _litellm_metadata in pass through endpoints
* fix _create_anthropic_response_logging_payload
* include litellm_call_id in logging
* add e2e testing for anthropic spend logs
* add testing for spend logs payload
* add example with anthropic python SDK
* use 1 file for AnthropicPassthroughLoggingHandler
* add support for anthropic streaming usage tracking
* ci/cd run again
* fix - add real streaming for anthropic pass through
* remove unused function stream_response
* working anthropic streaming logging
* fix code quality
* fix use 1 file for vertex success handler
* use helper for _handle_logging_vertex_collected_chunks
* enforce vertex streaming to use SSE
* test test_basic_vertex_ai_pass_through_streaming_with_spendlog
* fix type hints
* add comment
* fix linting
* add pass through logging unit testing
* fix(model_prices_and_context_window.json): add cost tracking for more vertex llama3.1 models (8b and 70b)
* fix(proxy/utils.py): handle data being none on pre-call hooks
* fix(proxy/): create views on initial proxy startup
fixes base case, where user starts proxy for first time
Fixes https://github.com/BerriAI/litellm/issues/5756
* build(config.yml): fix vertex version for test
* feat(ui/): support enabling/disabling slack alerting
Allows admin to turn on/off slack alerting through ui
* feat(rerank/main.py): support langfuse logging
* fix(proxy/utils.py): fix linting errors
* fix(langfuse.py): log clean metadata
* test(tests): replace deprecated openai model