Ishaan Jaff
57bc03b30b
[Feat] Add reasoning_effort support for xai/grok-3-mini-beta model family (#9932)
* add BaseReasoningEffortTests
* BaseReasoningLLMTests
* fix test rename
* docs update thinking / reasoning content docs
2025-04-11 19:17:09 -07:00
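The commit above wires `reasoning_effort` through for the grok-3-mini-beta family. A minimal sketch of how a caller might assemble such a request; the helper name is hypothetical, the accepted effort values are an assumption modeled on the OpenAI-style parameter, and the resulting kwargs would be passed to `litellm.completion`:

```python
# Hypothetical helper: validate and assemble completion kwargs with
# reasoning_effort for a reasoning-capable xai model. The allowed
# values here are an assumption; check your litellm version's docs.
def build_reasoning_request(model: str, prompt: str, effort: str) -> dict:
    allowed = {"low", "medium", "high"}
    if effort not in allowed:
        raise ValueError(f"reasoning_effort must be one of {sorted(allowed)}")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
    }

kwargs = build_reasoning_request("xai/grok-3-mini-beta", "Solve 17 * 23.", "high")
# kwargs can then be splatted into litellm.completion(**kwargs)
```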
Ishaan Jaff
f9ce754817
[Feat] Add litellm.supports_reasoning() util to track if an llm supports reasoning ( #9923 )
* add supports_reasoning for xai models
* add "supports_reasoning": true for o1 series models
* add supports_reasoning util
* add litellm.supports_reasoning
* add supports reasoning for claude 3-7 models
* add deepseek as supports reasoning
* test_supports_reasoning
* add supports reasoning to model group info
* add supports_reasoning
* docs supports reasoning
* fix supports_reasoning test
* "supports_reasoning": false,
* fix test
* supports_reasoning
2025-04-11 17:56:04 -07:00
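The `litellm.supports_reasoning()` util added here reads a `supports_reasoning` flag per model, as the commit's bullets show flags being set for xai, o1, claude-3-7, and deepseek models. A self-contained sketch of that lookup pattern; the table below is illustrative, not the real model cost map:

```python
# Illustrative model-info table; the real flags live in litellm's
# model cost map, where this commit adds "supports_reasoning" entries.
MODEL_INFO = {
    "xai/grok-3-mini-beta": {"supports_reasoning": True},
    "o1": {"supports_reasoning": True},
    "claude-3-7-sonnet": {"supports_reasoning": True},
    "deepseek-reasoner": {"supports_reasoning": True},
    "gpt-3.5-turbo": {"supports_reasoning": False},
}

def supports_reasoning(model: str) -> bool:
    """Return True if the model is flagged as supporting reasoning."""
    return MODEL_INFO.get(model, {}).get("supports_reasoning", False)
```

Falling back to False for unknown models is a design assumption in this sketch; the real util may raise or consult provider defaults instead.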
Krish Dholakia
90a4dfab3c
fix(xai/chat/transformation.py): filter out 'name' param for xai non-user roles (#9761)
* fix(xai/chat/transformation.py): filter out 'name' param for xai non-user roles
Fixes https://github.com/BerriAI/litellm/issues/9720
* test fix test_hf_chat_template
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
2025-04-04 20:37:08 -07:00
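The fix above strips the optional `name` field from non-user messages during request transformation, since xai rejects it on those roles. A minimal sketch of that step (the function name is illustrative):

```python
def filter_name_for_non_user_roles(messages: list) -> list:
    """Drop the 'name' key from any message whose role is not 'user'.

    Sketch of the transformation described in the fix; xai accepts
    'name' only on user-role messages.
    """
    cleaned = []
    for message in messages:
        if message.get("role") != "user" and "name" in message:
            message = {k: v for k, v in message.items() if k != "name"}
        cleaned.append(message)
    return cleaned

msgs = [
    {"role": "system", "content": "be brief", "name": "sys"},
    {"role": "user", "content": "hi", "name": "alice"},
]
# the system message loses 'name'; the user message keeps it
out = filter_name_for_non_user_roles(msgs)
```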
Ishaan Jaff
2f38e72026
test commit on main
2025-01-16 20:52:55 -08:00
Ishaan Jaff
616211daee
ci/cd run again
2025-01-05 14:11:27 -08:00
Krish Dholakia
3671829e39
Complete 'requests' library removal ( #7350 )
* refactor: initial commit moving watsonx_text to base_llm_http_handler + clarifying new provider directory structure
* refactor(watsonx/completion/handler.py): move to using base llm http handler
removes 'requests' library usage
* fix(watsonx_text/transformation.py): fix result transformation
migrates to transformation.py, for usage with base llm http handler
* fix(streaming_handler.py): migrate watsonx streaming to transformation.py
ensures streaming works with base llm http handler
* fix(streaming_handler.py): fix streaming linting errors and remove watsonx conditional logic
* fix(watsonx/): fix chat route post completion route refactor
* refactor(watsonx/embed): refactor watsonx to use base llm http handler for embedding calls as well
* refactor(base.py): remove requests library usage from litellm
* build(pyproject.toml): remove requests library usage
* fix: fix linting errors
* fix: fix linting errors
* fix(types/utils.py): fix validation errors for modelresponsestream
* fix(replicate/handler.py): fix linting errors
* fix(litellm_logging.py): handle modelresponsestream object
* fix(streaming_handler.py): fix modelresponsestream args
* fix: remove unused imports
* test: fix test
* fix: fix test
* test: fix test
* test: fix tests
* test: fix test
* test: fix patch target
* test: fix test
2024-12-22 07:21:25 -08:00
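The refactor above routes all provider HTTP calls (watsonx here) through one shared handler so individual integrations stop importing `requests`. A dependency-free sketch of the shape of such a handler; litellm's actual handler is httpx-based, and stdlib `urllib` is used here only to keep the example self-contained:

```python
import json
from urllib import request as urllib_request

class BaseLLMHTTPHandler:
    """Single POST entry point that provider integrations share,
    replacing scattered per-provider `requests` calls (sketch)."""

    def __init__(self, timeout: float = 600.0):
        self.timeout = timeout

    def post(self, url: str, payload: dict, headers: dict) -> bytes:
        req = urllib_request.Request(
            url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json", **headers},
            method="POST",
        )
        with urllib_request.urlopen(req, timeout=self.timeout) as resp:
            return resp.read()
```

Centralizing the client like this is what lets the follow-up commit drop `requests` from pyproject.toml entirely.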
Krish Dholakia
5bbf906c83
Litellm code qa common config ( #7113 )
* feat(base_llm): initial commit for common base config class
Addresses code qa critique https://github.com/andrewyng/aisuite/issues/113#issuecomment-2512369132
* feat(base_llm/): add transform request/response abstract methods to base config class
* feat(cohere-+-clarifai): refactor integrations to use common base config class
* fix: fix linting errors
* refactor(anthropic/): move anthropic + vertex anthropic to use base config
* test: fix xai test
* test: fix tests
* fix: fix linting errors
* test: comment out WIP test
* fix(transformation.py): fix is pdf used check
* fix: fix linting error
2024-12-09 15:58:25 -08:00
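The common base-config class introduced above gives every provider integration the same abstract request/response transform hooks. A hedged sketch of that pattern, with method names modeled on the commit's "transform request/response abstract methods" bullet; the concrete provider class is illustrative:

```python
from abc import ABC, abstractmethod

class BaseConfig(ABC):
    """Shared shape for provider configs: each integration supplies
    its own request/response transforms (sketch of the pattern)."""

    @abstractmethod
    def transform_request(
        self, model: str, messages: list, optional_params: dict
    ) -> dict:
        ...

    @abstractmethod
    def transform_response(self, raw_response: dict) -> str:
        ...

class OpenAILikeConfig(BaseConfig):
    # Illustrative provider using an OpenAI-style wire format.
    def transform_request(self, model, messages, optional_params):
        return {"model": model, "messages": messages, **optional_params}

    def transform_response(self, raw_response):
        return raw_response["choices"][0]["message"]["content"]
```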
Ishaan Jaff
5652c375b3
(feat) add XAI ChatCompletion Support ( #6373 )
* init commit for XAI
* add full logic for xai chat completion
* test_completion_xai
* docs xAI
* add xai/grok-beta
* test_xai_chat_config_get_openai_compatible_provider_info
* test_xai_chat_config_map_openai_params
* add xai streaming test
2024-11-01 20:37:09 +05:30
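Since this commit adds xai as an OpenAI-compatible provider (see `test_xai_chat_config_get_openai_compatible_provider_info`), the config's main job is resolving an api_base and API key. A sketch of that resolution; the default base URL and env-var name are assumptions inferred from the commit's test names, so verify them against your litellm version:

```python
import os

# Assumed defaults for the xai OpenAI-compatible endpoint; confirm
# the URL and env-var name against your litellm version's docs.
DEFAULT_XAI_API_BASE = "https://api.x.ai/v1"

def get_openai_compatible_provider_info(api_base=None, api_key=None):
    """Resolve (api_base, api_key), preferring explicit args over
    the default endpoint and the XAI_API_KEY environment variable."""
    api_base = api_base or DEFAULT_XAI_API_BASE
    api_key = api_key or os.environ.get("XAI_API_KEY")
    return api_base, api_key
```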