* fix(key_management_endpoints.py): fix user-membership check when creating team key
* docs: add deprecation notice on original `/v1/messages` endpoint + add better swagger tags on pass-through endpoints
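In FastAPI both pieces are route-decorator options. A minimal sketch, assuming a router like the proxy's pass-through routes (the tag string and handler name are illustrative):

```python
from fastapi import APIRouter, Request

router = APIRouter()

# deprecated=True renders a deprecation notice for the route in Swagger;
# tags group it under a named section of the docs UI.
@router.post(
    "/v1/messages",
    deprecated=True,
    tags=["pass-through: anthropic"],  # tag name is illustrative
)
async def anthropic_messages(request: Request):
    ...  # existing handler body unchanged
```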
* fix(gemini/): fix image_url handling for gemini
Fixes https://github.com/BerriAI/litellm/issues/6897
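For context, the fix concerns mapping OpenAI-style `image_url` content parts onto Gemini's request format. A minimal sketch of that conversion with a hypothetical helper name (data URLs become `inline_data`, remote URLs fall back to `file_data`; both choices are assumptions for illustration):

```python
def _convert_image_url_to_gemini_part(image_url: str) -> dict:
    """Hypothetical helper: map an OpenAI-style image_url string onto a
    Gemini content part."""
    if image_url.startswith("data:"):
        # e.g. "data:image/jpeg;base64,<payload>"
        header, b64_data = image_url.split(",", 1)
        mime_type = header.split(":", 1)[1].split(";", 1)[0]
        return {"inline_data": {"mime_type": mime_type, "data": b64_data}}
    # Remote URLs are referenced rather than inlined (illustrative fallback)
    return {"file_data": {"mime_type": "image/jpeg", "file_uri": image_url}}
```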
* fix(teams.tsx): fix member add when role is 'user'
* fix(team_endpoints.py): /team/member_add
fix adding several new members to a team
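The shape of the multi-member fix, as a hedged sketch (object and field names are assumptions, not the actual `/team/member_add` implementation):

```python
def add_members_to_team(team, new_members: list[dict]) -> None:
    """Append each new member to the team, deduplicating on user_id so a
    single call can safely add several members at once."""
    existing_ids = {m["user_id"] for m in team.members_with_roles}
    for member in new_members:
        if member["user_id"] in existing_ids:
            continue  # already on the team; skip instead of duplicating
        team.members_with_roles.append(member)
        existing_ids.add(member["user_id"])
```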
* test(test_vertex.py): remove redundant test
* test(test_proxy_server.py): fix team member add tests
* feat: allow tagging Vertex JS SDK requests
* add unit tests for passing headers through pass-through endpoints
* fix: allow using vertex_ai as the primary route for Vertex pass-through endpoints
* docs: passing tags with the Vertex JS SDK
* add e2e test for vertex pass through with spend tags
* add e2e tests for streaming vertex JS with tags
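Spend tags on pass-through traffic ride in on request headers, so any client pointed at the proxy (including the Vertex JS SDK) can attach them. A hedged Python example of the same call shape; the `/vertex_ai/...` path and `tags` header name are illustrative, check the docs added above for the exact values:

```python
import requests

resp = requests.post(
    "http://localhost:4000/vertex_ai/publishers/google/models/"
    "gemini-1.5-flash:generateContent",
    headers={
        "Authorization": "Bearer sk-1234",  # litellm virtual key
        "tags": "vertex-js-sdk,my-team",    # spend tags recorded for this request
    },
    json={"contents": [{"role": "user", "parts": [{"text": "hi"}]}]},
)
print(resp.json())
```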
* fix Vertex AI tests
* stash gemini JS test
* add vertex JS SDK example
* handle vertex pass through separately
* test vertex JS SDK
* fix vertex_proxy_route
* use PassThroughStreamingHandler
* fix PassThroughStreamingHandler
* use common _create_vertex_response_logging_payload_for_generate_content
* test vertex js
* add working vertex jest tests
* move basic pass-through test
* use a descriptive name for the test
* test vertex
* test_chunk_processor_yields_raw_bytes
* unit tests for streaming
* test_convert_raw_bytes_to_str_lines
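Roughly what those two tests pin down, assuming the streaming handler buffers raw bytes and a helper decodes them into newline-delimited strings (the body below is illustrative, not the exact implementation):

```python
def _convert_raw_bytes_to_str_lines(raw_bytes: list[bytes]) -> list[str]:
    """Join the buffered byte chunks, decode as UTF-8, and split into
    non-empty lines -- the shape the logging parser consumes."""
    joined = b"".join(raw_bytes).decode("utf-8")
    return [line for line in joined.split("\n") if line.strip()]
```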
* run unit tests first
* simplify local
* docs: add usage example for JS
* use get_litellm_virtual_key
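A hedged sketch of what `get_litellm_virtual_key` does, assuming pass-through routes prefer a dedicated `x-litellm-api-key` header (so `Authorization` can carry provider credentials) and fall back to the standard bearer token; treat the precedence as an assumption:

```python
def get_litellm_virtual_key(request_headers: dict) -> str:
    """Resolve the litellm virtual key for a pass-through request."""
    if request_headers.get("x-litellm-api-key"):
        return f"Bearer {request_headers['x-litellm-api-key']}"
    return request_headers.get("Authorization", "")
```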
* add unit tests for vertex pass through
* use a single file for AnthropicPassthroughLoggingHandler
* add support for anthropic streaming usage tracking
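Anthropic splits usage across the SSE stream: `message_start` carries `input_tokens` and the final `message_delta` carries the `output_tokens` total. A minimal sketch of accumulating both from the collected `data:` lines (function name is illustrative):

```python
import json

def accumulate_anthropic_usage(sse_lines: list[str]) -> dict:
    usage = {"input_tokens": 0, "output_tokens": 0}
    for line in sse_lines:
        if not line.startswith("data:"):
            continue
        event = json.loads(line[len("data:"):].strip())
        if event.get("type") == "message_start":
            usage["input_tokens"] = event["message"]["usage"]["input_tokens"]
        elif event.get("type") == "message_delta":
            # running total; the last delta holds the final count
            usage["output_tokens"] = event["usage"]["output_tokens"]
    return usage
```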
* ci/cd run again
* fix: add real streaming for Anthropic pass-through
* remove unused function stream_response
* working anthropic streaming logging
* fix code quality
* fix: use a single file for the Vertex success handler
* use helper for _handle_logging_vertex_collected_chunks
* enforce SSE for Vertex streaming responses
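Vertex's `streamGenerateContent` returns a JSON array by default and only emits server-sent events when `alt=sse` is set, so the proxy can normalize streaming by forcing that query parameter. A minimal sketch (helper name is illustrative):

```python
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse

def ensure_sse_query_param(url: str) -> str:
    """Append alt=sse to a streamGenerateContent URL if it's missing, so
    the upstream response arrives as SSE rather than a JSON array."""
    parsed = urlparse(url)
    query = parse_qs(parsed.query)
    if "streamGenerateContent" in parsed.path and "alt" not in query:
        query["alt"] = ["sse"]
    return urlunparse(parsed._replace(query=urlencode(query, doseq=True)))
```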
* test test_basic_vertex_ai_pass_through_streaming_with_spendlog
* fix type hints
* add comment
* fix linting
* add pass through logging unit testing
* fix(vertex_endpoints.py): fix vertex ai pass through endpoints
* test(test_streaming.py): skip model due to end of life
* feat(custom_logger.py): add special callback for model hitting tpm/rpm limits
Closes https://github.com/BerriAI/litellm/issues/4096
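Custom callbacks in litellm subclass `CustomLogger`; a hedged sketch of hooking the new limit event. Only the `CustomLogger` base class is litellm's real API here, the hook name below is hypothetical, see `custom_logger.py` for the callback this change actually adds:

```python
from litellm.integrations.custom_logger import CustomLogger

class RateLimitAlertLogger(CustomLogger):
    # Hypothetical hook name: invoked when a deployment is skipped for
    # exceeding its TPM/RPM limit.
    async def async_log_model_rate_limited(self, model: str, limit_type: str):
        print(f"{model} hit its {limit_type} limit")  # e.g. page on-call here
```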