Krish Dholakia
053b0e741f
Add Google AI Studio /v1/files upload API support (#9645)
* test: fix import for test
* fix: fix bad error string
* docs: cleanup files docs
* fix(files/main.py): cleanup error string
* style: initial commit with a provider/config pattern for files api
google ai studio files api onboarding
* fix: test
* feat(gemini/files/transformation.py): support gemini files api response transformation
* fix(gemini/files/transformation.py): return file id as gemini uri
allows the id to be passed in to a chat completion request, just like OpenAI
* feat(llm_http_handler.py): support async route for files api on llm_http_handler
* fix: fix linting errors
* fix: fix model info check
* fix: fix ruff errors
* fix: fix linting errors
* Revert "fix: fix linting errors"
This reverts commit 926a5a527f.
* fix: fix linting errors
* test: fix test
* test: fix tests
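A minimal sketch of the flow this change enables, assuming litellm's create_file / completion entry points; the file path, purpose, and model name here are illustrative:

```python
import litellm

# Upload through Google AI Studio's /v1/files endpoint; create_file routes
# on custom_llm_provider (file path and purpose are illustrative).
file_obj = litellm.create_file(
    file=open("notes.pdf", "rb"),
    purpose="user_data",
    custom_llm_provider="gemini",
)

# Per the commits above, the returned id is the Gemini file URI, so it can
# be passed in a chat completion request just like an OpenAI file id.
response = litellm.completion(
    model="gemini/gemini-2.0-flash",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize this file."},
                {"type": "file", "file": {"file_id": file_obj.id}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```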
2025-04-02 08:56:58 -07:00
Krrish Dholakia
88e9edf7db
refactor: update method signature
2025-03-12 15:23:38 -07:00
Krish Dholakia
3671829e39
Complete 'requests' library removal (#7350)
* refactor: initial commit moving watsonx_text to base_llm_http_handler + clarifying new provider directory structure
* refactor(watsonx/completion/handler.py): move to using base llm http handler
removes 'requests' library usage
* fix(watsonx_text/transformation.py): fix result transformation
migrates to transformation.py, for usage with base llm http handler
* fix(streaming_handler.py): migrate watsonx streaming to transformation.py
ensures streaming works with base llm http handler
* fix(streaming_handler.py): fix streaming linting errors and remove watsonx conditional logic
* fix(watsonx/): fix chat route after the completion route refactor
* refactor(watsonx/embed): refactor watsonx to use base llm http handler for embedding calls as well
* refactor(base.py): remove requests library usage from litellm
* build(pyproject.toml): remove requests library usage
* fix: fix linting errors
* fix: fix linting errors
* fix(types/utils.py): fix validation errors for ModelResponseStream
* fix(replicate/handler.py): fix linting errors
* fix(litellm_logging.py): handle ModelResponseStream object
* fix(streaming_handler.py): fix ModelResponseStream args
* fix: remove unused imports
* test: fix test
* fix: fix test
* test: fix test
* test: fix tests
* test: fix test
* test: fix patch target
* test: fix test
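Not litellm's actual internals, but a hedged sketch of the base-llm-http-handler pattern the commits above converge on: provider code stops importing requests and shares one httpx-based transport for sync and async calls (class and method names are illustrative):

```python
import httpx

class BaseLLMHTTPHandler:
    """Single httpx-backed transport shared by provider integrations."""

    def __init__(self, timeout: float = 600.0):
        self.client = httpx.Client(timeout=timeout)
        self.async_client = httpx.AsyncClient(timeout=timeout)

    def post(self, url: str, json: dict, headers: dict) -> dict:
        # Sync path: a provider hands over its transformed request and gets
        # parsed JSON back, instead of calling requests.post itself.
        resp = self.client.post(url, json=json, headers=headers)
        resp.raise_for_status()
        return resp.json()

    async def apost(self, url: str, json: dict, headers: dict) -> dict:
        # Async path has the same shape, which is what lets chat, streaming,
        # and embedding calls share one set of transformation functions.
        resp = await self.async_client.post(url, json=json, headers=headers)
        resp.raise_for_status()
        return resp.json()
```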
2024-12-22 07:21:25 -08:00
Ishaan Jaff
c7f14e936a
(code quality) run ruff rule to ban unused imports (#7313)
* remove unused imports
* fix AmazonConverseConfig
* fix test
* fix import
* ruff check fixes
* test fixes
* fix testing
* fix imports
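For reference, the rule being applied here is pyflakes' F401 (module imported but unused); a sketch of enabling it in pyproject.toml, assuming a recent ruff, with `ruff check --fix` able to remove the offenders automatically:

```toml
[tool.ruff.lint]
# F401: module imported but unused
extend-select = ["F401"]
```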
2024-12-19 12:33:42 -08:00
Ishaan Jaff
7a5dd29fe0
(fix) unable to pass input_type parameter to Voyage AI embedding model (#7276)
* VoyageEmbeddingConfig
* fix voyage logic to get params
* add voyage embedding transformation
* add get_provider_embedding_config
* use BaseEmbeddingConfig
* voyage clean up
* use llm http handler for embedding transformations
* test_voyage_ai_embedding_extra_params
* add voyage async
* test_voyage_ai_embedding_extra_params
* add async for llm http handler
* update BaseLLMEmbeddingTest
* test_voyage_ai_embedding_extra_params
* fix linting
* fix get_provider_embedding_config
* fix anthropic text test
* update location of base/chat/transformation
* fix import path
* fix IBMWatsonXAIConfig
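A minimal usage sketch of what this fix restores, assuming litellm's embedding entry point; the model name is illustrative:

```python
import litellm

# input_type is a Voyage-specific parameter ("query" or "document");
# before this fix it was silently dropped, now it is forwarded upstream.
response = litellm.embedding(
    model="voyage/voyage-3",  # illustrative model name
    input=["hello world"],
    input_type="query",
)
print(len(response.data[0]["embedding"]))
```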
2024-12-17 19:23:49 -08:00