Krish Dholakia
71f659d26b
Complete 'requests' library removal ( #7350 )
* refactor: initial commit moving watsonx_text to base_llm_http_handler + clarifying new provider directory structure
* refactor(watsonx/completion/handler.py): move to using base llm http handler
removes 'requests' library usage
* fix(watsonx_text/transformation.py): fix result transformation
migrates to transformation.py, for usage with base llm http handler
* fix(streaming_handler.py): migrate watsonx streaming to transformation.py
ensures streaming works with base llm http handler
* fix(streaming_handler.py): fix streaming linting errors and remove watsonx conditional logic
* fix(watsonx/): fix chat route following the completion route refactor
* refactor(watsonx/embed): refactor watsonx to use base llm http handler for embedding calls as well
* refactor(base.py): remove requests library usage from litellm
* build(pyproject.toml): remove requests library usage
* fix: fix linting errors
* fix: fix linting errors
* fix(types/utils.py): fix validation errors for modelresponsestream
* fix(replicate/handler.py): fix linting errors
* fix(litellm_logging.py): handle modelresponsestream object
* fix(streaming_handler.py): fix modelresponsestream args
* fix: remove unused imports
* test: fix test
* fix: fix test
* test: fix test
* test: fix tests
* test: fix test
* test: fix patch target
* test: fix test
2024-12-22 07:21:25 -08:00
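A minimal sketch of the kind of change the watsonx refactor above involves: the same POST issued through httpx (which litellm's base HTTP handler wraps) instead of the removed 'requests' dependency. The URL, payload shape, and bearer-token auth here are assumptions for illustration, not the handler's actual code.

```python
# Illustrative sketch only: the kind of swap involved when dropping the
# 'requests' dependency in favor of an httpx-based handler.
import httpx

def post_completion(url: str, payload: dict, api_key: str) -> dict:
    # httpx.Client mirrors requests' API closely, so the call site barely changes:
    # requests.post(url, json=payload, headers=...) -> client.post(url, json=payload, headers=...)
    headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
    with httpx.Client(timeout=600.0) as client:
        response = client.post(url, json=payload, headers=headers)
        response.raise_for_status()
        return response.json()
```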
Ishaan Jaff
b0738fd439
(code refactor) - Add BaseRerankConfig. Use BaseRerankConfig for cohere/rerank and azure_ai/rerank ( #7319 )
* add base rerank config
* working sync cohere rerank
* update rerank types
* update base rerank config
* remove old rerank
* add new cohere handler.py
* add cohere rerank transform
* add get_provider_rerank_config
* add rerank to base llm http handler
* add rerank utils
* add arerank to llm http handler.py
* add AzureAIRerankConfig
* updates rerank config
* update test rerank
* fix unused imports
* update get_provider_rerank_config
* test_basic_rerank_caching
* fix unused import
* test rerank
2024-12-19 17:03:34 -08:00
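A hedged usage sketch of what the BaseRerankConfig refactor above standardizes: cohere/ and azure_ai/ rerank calls flowing through the same base handler via litellm.rerank. The model name, query, and documents are illustrative.

```python
# Hedged usage sketch: cohere/ and azure_ai/ rerank share the BaseRerankConfig
# path behind litellm.rerank. Model name and inputs are illustrative.
import litellm

response = litellm.rerank(
    model="cohere/rerank-english-v3.0",   # an azure_ai/ rerank deployment works the same way
    query="What is the capital of France?",
    documents=["Paris is the capital of France.", "Berlin is the capital of Germany."],
    top_n=1,
)
print(response.results)  # ranked documents with relevance scores
```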
Ishaan Jaff
62a1cdec47
(code quality) run ruff rule to ban unused imports ( #7313 )
* remove unused imports
* fix AmazonConverseConfig
* fix test
* fix import
* ruff check fixes
* test fixes
* fix testing
* fix imports
2024-12-19 12:33:42 -08:00
Ishaan Jaff
c7b288ce30
(fix) unable to pass input_type parameter to Voyage AI embedding model ( #7276 )
* VoyageEmbeddingConfig
* fix voyage logic to get params
* add voyage embedding transformation
* add get_provider_embedding_config
* use BaseEmbeddingConfig
* voyage clean up
* use llm http handler for embedding transformations
* test_voyage_ai_embedding_extra_params
* add voyage async
* test_voyage_ai_embedding_extra_params
* add async for llm http handler
* update BaseLLMEmbeddingTest
* test_voyage_ai_embedding_extra_params
* fix linting
* fix get_provider_embedding_config
* fix anthropic text test
* update location of base/chat/transformation
* fix import path
* fix IBMWatsonXAIConfig
2024-12-17 19:23:49 -08:00
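A hedged usage sketch of what the Voyage AI fix above enables: the provider-specific input_type parameter forwarded through litellm.embedding. The model name is illustrative and VOYAGE_API_KEY is assumed to be set in the environment.

```python
# Hedged usage sketch of what this fix enables: Voyage AI's `input_type`
# parameter is now forwarded by litellm.embedding. Model name is illustrative.
import litellm

response = litellm.embedding(
    model="voyage/voyage-3-lite",
    input=["good morning from litellm"],
    input_type="query",   # provider-specific extra param, previously dropped
)
print(response)
```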
Krish Dholakia
a9aeb21d0b
fix(acompletion): support fallbacks on acompletion ( #7184 )
* fix(acompletion): support fallbacks on acompletion
allows health checks for wildcard routes to use fallback models
* test: update cohere generate api testing
* add max tokens to health check (#7000 )
* fix: fix health check test
* test: update testing
---------
Co-authored-by: Cameron <561860+wallies@users.noreply.github.com>
2024-12-11 19:20:54 -08:00
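A hedged usage sketch of the behavior added above: fallbacks honored on the async completion path. Model names are illustrative.

```python
# Hedged usage sketch of the behavior this PR adds: fallbacks honored on the
# async path. Model names are illustrative.
import asyncio
import litellm


async def main() -> None:
    response = await litellm.acompletion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "ping"}],
        fallbacks=["gpt-3.5-turbo"],  # tried if the primary call fails
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```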
Krish Dholakia
93000bd8d3
Litellm merge pr ( #7161 )
* build: merge branch
* test: fix openai naming
* fix(main.py): fix openai renaming
* style: ignore function length for config factory
* fix(sagemaker/): fix routing logic
* fix: fix imports
* fix: fix override
2024-12-10 22:49:26 -08:00
Ishaan Jaff
1b377d5229
(Refactor) Code Quality improvement - Use Common base handler for Cohere /generate API ( #7122 )
* use validate_environment in common utils
* use transform request / response for cohere
* remove unused file
* use cohere base_llm_http_handler
* working cohere generate api on llm http handler
* streaming cohere generate api
* fix get_model_response_iterator
* fix streaming handler
* fix get_model_response_iterator
* test_cohere_generate_api_completion
* fix linting error
* fix testing cohere raising error
* fix get_model_response_iterator type
* add testing cohere generate api
2024-12-10 10:44:42 -08:00
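A hedged sketch of the transform-request / transform-response pattern the Cohere /generate refactor above follows, with field names approximating Cohere's legacy generate API. The class and method names are illustrative, not litellm's actual code.

```python
# Hedged sketch of the transform-request / transform-response pattern these
# refactors converge on, shown for a Cohere-style /generate call. The class
# name and litellm types are illustrative, not the library's actual code.
from typing import List


class CohereGenerateLikeConfig:
    def transform_request(self, model: str, messages: List[dict], optional_params: dict) -> dict:
        # Flatten chat messages into the single prompt the legacy /generate API expects.
        prompt = "\n".join(m.get("content", "") for m in messages)
        return {"model": model, "prompt": prompt, **optional_params}

    def transform_response(self, raw_response: dict) -> dict:
        # Lift the generated text back into an OpenAI-style choices list.
        text = raw_response.get("generations", [{}])[0].get("text", "")
        return {"choices": [{"message": {"role": "assistant", "content": text}}]}
```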
Ishaan Jaff
9c2316b7ec
(Refactor) Code Quality improvement - Use Common base handler for cloudflare/ provider ( #7127 )
* add get_complete_url to base config
* cloudflare - refactor to following existing pattern
* migrate cloudflare chat completions to base llm http handler
* fix unused import
* fix fake stream in cloudflare
* fix cloudflare transformation
* fix naming for BaseModelResponseIterator
* add async cloudflare streaming test
* test cloudflare
* add handler.py
* add handler.py in cohere handler.py
2024-12-10 10:12:22 -08:00
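A hedged sketch of a get_complete_url-style helper for Cloudflare Workers AI, as mentioned in the entry above. The URL shape follows Cloudflare's documented REST endpoint; the signature and env-var fallback are assumptions.

```python
# Hedged sketch of a get_complete_url-style helper for Cloudflare Workers AI.
# The URL shape follows Cloudflare's documented REST endpoint; the function
# signature and env-var fallback are assumptions.
import os
from typing import Optional


def get_complete_url(api_base: Optional[str], model: str) -> str:
    if api_base:  # caller-supplied base wins
        return f"{api_base.rstrip('/')}/{model}"
    account_id = os.environ["CLOUDFLARE_ACCOUNT_ID"]  # assumed env var name
    return f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}"
```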
Ishaan Jaff
28ff38e35d
(Refactor) Code Quality improvement - Use Common base handler for clarifai/ ( #7125 )
* use base_llm_http_handler for clarifai
* fix clarifai completion
* handle faking streaming base llm http handler
* add fake streaming for clarifai
* add FakeStreamResponseIterator for base model iterator
* fix get_model_response_iterator
* fix base model iterator
* fix base model iterator
* add support for faking sync streams for clarifai
* add fake streaming for clarifai
* remove unused code
* fix import
* fix llm http handler
* test_async_completion_clarifai
* fix clarifai tests
* fix linting
2024-12-09 21:04:48 -08:00
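A hedged sketch of the fake-streaming idea used above: a provider that cannot stream returns one complete response, which is wrapped in an iterator that yields it as a single chunk. The class name FakeStreamResponseIterator comes from the commit messages; the body is illustrative.

```python
# Hedged sketch of "fake streaming": providers without native streaming return
# one complete response, wrapped so the streaming code path still works.
# The class name comes from the commit messages; the body is illustrative.
class FakeStreamResponseIterator:
    def __init__(self, complete_response: dict):
        self._response = complete_response
        self._emitted = False

    def __iter__(self):
        return self

    def __next__(self) -> dict:
        if self._emitted:
            raise StopIteration
        self._emitted = True
        return self._response  # the whole response as a single "chunk"
```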
Ishaan Jaff
c5e0407703
(Refactor) Code Quality improvement - use Common base handler for Cohere ( #7117 )
* fix use new format for Cohere config
* fix base llm http handler
* Litellm code qa common config (#7116 )
* feat(base_llm): initial commit for common base config class
Addresses code qa critique https://github.com/andrewyng/aisuite/issues/113#issuecomment-2512369132
* feat(base_llm/): add transform request/response abstract methods to base config class
---------
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* use base transform helpers
* use base_llm_http_handler for cohere
* working cohere using base llm handler
* add async cohere chat completion support on base handler
* fix completion code
* working sync cohere stream
* add async support cohere_chat
* fix types get_model_response_iterator
* async / sync tests cohere
* feat cohere using base llm class
* fix linting errors
* fix _abc error
* add cohere params to transformation
* remove old cohere file
* fix type error
* fix merge conflicts
* fix cohere merge conflicts
* fix linting error
* fix litellm.llms.custom_httpx.http_handler.HTTPHandler.post
* fix passing cohere specific params
---------
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
2024-12-09 17:45:29 -08:00
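A hedged sketch of the common base config introduced via PR #7116 and adopted for Cohere above: one abstract class with validate_environment plus request/response transforms that each provider implements. Method signatures beyond the names mentioned in the commits are assumptions.

```python
# Hedged sketch of the common base config idea folded in via PR #7116: one
# abstract class that every chat provider implements. Signatures beyond the
# method names mentioned in the commit messages are assumptions.
from abc import ABC, abstractmethod
from typing import List, Optional


class BaseConfig(ABC):
    @abstractmethod
    def validate_environment(
        self, headers: dict, model: str, api_key: Optional[str]
    ) -> dict:
        """Check credentials and return the request headers for this provider."""

    @abstractmethod
    def transform_request(
        self, model: str, messages: List[dict], optional_params: dict
    ) -> dict:
        """Map OpenAI-style inputs onto the provider's request body."""

    @abstractmethod
    def transform_response(self, model: str, raw_response: dict) -> dict:
        """Map the provider's raw response back into litellm's response shape."""
```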