* stash gemini JS test
* add vertex js sdk example
* handle vertex pass through separately
* test vertex JS sdk
* fix vertex_proxy_route
* use PassThroughStreamingHandler
* fix PassThroughStreamingHandler
* use common _create_vertex_response_logging_payload_for_generate_content
* test vertex js
* add working vertex jest tests
* move basic pass through test
* use good name for test
* test vertex
* test_chunk_processor_yields_raw_bytes
* unit tests for streaming
* test_convert_raw_bytes_to_str_lines
* run unit tests first
* simplify local
* docs: add usage example for JS (see sketch after this list)
* use get_litellm_virtual_key
* add unit tests for vertex pass through
* use 1 file for AnthropicPassthroughLoggingHandler
* add support for anthropic streaming usage tracking
* ci/cd run again
* fix - add real streaming for anthropic pass through
* remove unused function stream_response
* working anthropic streaming logging
* fix code quality
* fix: use 1 file for vertex success handler
* use helper for _handle_logging_vertex_collected_chunks
* enforce vertex pass through streaming to use SSE (see streaming sketch after this list)
* test test_basic_vertex_ai_pass_through_streaming_with_spendlog
* fix type hints
* add comment
* fix linting
* add pass through logging unit testing
* fix(vertex_endpoints.py): fix vertex ai pass through endpoints
* test(test_streaming.py): skip model due to end of life
* feat(custom_logger.py): add special callback for model hitting tpm/rpm limits
Closes https://github.com/BerriAI/litellm/issues/4096
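
For reference, a minimal sketch of the JS usage pattern the docs commit above describes: pointing the `@google-cloud/vertexai` Node SDK at a LiteLLM proxy's Vertex pass-through route. The proxy address, route prefix, virtual key value, and the `x-litellm-api-key` header name are placeholder assumptions, not confirmed values; check the LiteLLM pass-through docs for the exact settings.

```typescript
import { VertexAI } from '@google-cloud/vertexai';

// Point the Vertex SDK at the LiteLLM proxy instead of Google's endpoint.
// "localhost:4000" and the "/vertex-ai" route prefix are assumptions here;
// apiEndpoint takes host[:port][/path] without the "https://" scheme.
const vertexAI = new VertexAI({
  project: 'your-gcp-project', // placeholder project id
  location: 'us-central1',     // placeholder region
  apiEndpoint: 'localhost:4000/vertex-ai',
});

// Authenticate to the proxy with a LiteLLM virtual key; the header name
// is an assumption based on the virtual-key commits above.
const generativeModel = vertexAI.getGenerativeModel(
  { model: 'gemini-1.0-pro' },
  { customHeaders: new Headers({ 'x-litellm-api-key': 'sk-1234' }) },
);

async function main() {
  const result = await generativeModel.generateContent({
    contents: [{ role: 'user', parts: [{ text: 'Hello from the JS SDK' }] }],
  });
  console.log(JSON.stringify(result.response, null, 2));
}

main().catch(console.error);
```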
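
And since several commits above concern SSE streaming and spend logging through the pass-through route, a companion sketch of the streaming call path under the same placeholder assumptions. In the Vertex Node SDK, `generateContentStream` returns an async-iterable `stream` of chunks plus an aggregated `response` promise.

```typescript
import { VertexAI } from '@google-cloud/vertexai';

// Same placeholder proxy endpoint and virtual key as the sketch above.
const model = new VertexAI({
  project: 'your-gcp-project',
  location: 'us-central1',
  apiEndpoint: 'localhost:4000/vertex-ai',
}).getGenerativeModel(
  { model: 'gemini-1.0-pro' },
  { customHeaders: new Headers({ 'x-litellm-api-key': 'sk-1234' }) },
);

async function streamExample() {
  const streamingResult = await model.generateContentStream({
    contents: [{ role: 'user', parts: [{ text: 'Stream a short poem' }] }],
  });

  // Chunks arrive as the proxy relays Vertex's SSE stream.
  for await (const chunk of streamingResult.stream) {
    console.log('chunk:', JSON.stringify(chunk));
  }

  // The aggregated response carries usageMetadata, which is the kind of
  // data the streaming usage-tracking commits above feed into spend logs.
  const aggregated = await streamingResult.response;
  console.log('usage:', JSON.stringify(aggregated.usageMetadata));
}

streamExample().catch(console.error);
```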