Ishaan Jaff
85acdb9193
[Feat] Add max_completion_tokens param (#5691)
* add max_completion_tokens
* add max_completion_tokens support for OpenAI models
* add max_completion_tokens param
* add max_completion_tokens for bedrock converse models
* add test for converse maxTokens
* fix openai o1 param mapping test
* move test optional params
* add max_completion_tokens for anthropic api
* fix conftest
* add max_completion_tokens for vertex ai partner models
* add max_completion_tokens for fireworks ai
* add max_completion_tokens for hf rest api
* add test for param mapping
* add param mapping for vertex, gemini + testing
* predibase: the Predibase LLM API is too unstable and unusable in prod, and can't handle our CI/CD
* add max_completion_tokens to openai supported params
* fix fireworks ai param mapping
2024-09-14 14:57:01 -07:00
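The entry above adds a single OpenAI-style `max_completion_tokens` parameter and maps it onto each provider's native token limit (e.g. Bedrock Converse's `maxTokens`, per the test bullet). A minimal sketch of the idea; `map_max_completion_tokens` is a hypothetical helper rather than LiteLLM's actual internal function, and the Bedrock model ID is only illustrative:

```python
import litellm

# Callers pass the OpenAI-style param once (requires provider credentials to run)...
response = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
    messages=[{"role": "user", "content": "Hello"}],
    max_completion_tokens=256,
)

# ...and a mapping layer translates it to each provider's native name
# (hypothetical helper mirroring the providers named in the bullets above).
def map_max_completion_tokens(provider: str, value: int) -> dict:
    if provider == "bedrock_converse":
        return {"maxTokens": value}       # Bedrock Converse API field
    if provider == "anthropic":
        return {"max_tokens": value}      # Anthropic Messages API field
    return {"max_completion_tokens": value}  # OpenAI: passed through unchanged
```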
Krish Dholakia
60709a0753
LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689)
* refactor: cleanup unused variables + fix pyright errors
* feat(health_check.py): Closes https://github.com/BerriAI/litellm/issues/5686
* fix(o1_reasoning.py): add stricter check for o1 reasoning models
* refactor(mistral/): make it easier to see mistral transformation logic
* fix(openai.py): fix openai o1 model param mapping
Fixes https://github.com/BerriAI/litellm/issues/5685
* feat(main.py): infer finetuned gemini model from base model
Fixes https://github.com/BerriAI/litellm/issues/5678
* docs(vertex.md): update docs to call finetuned gemini models
* feat(proxy_server.py): allow admin to hide proxy model aliases
Closes https://github.com/BerriAI/litellm/issues/5692
* docs(load_balancing.md): add docs on hiding alias models from proxy config
* fix(base.py): don't raise notimplemented error
* fix(user_api_key_auth.py): fix model max budget check
* fix(router.py): fix elif
* fix(user_api_key_auth.py): don't set team_id to empty str
* fix(team_endpoints.py): fix response type
* test(test_completion.py): handle predibase error
* test(test_proxy_server.py): fix test
* fix(o1_transformation.py): fix max_completion_tokens mapping
* test(test_image_generation.py): mark flaky test
2024-09-14 10:02:55 -07:00
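Two of the fixes above (`openai.py` and `o1_transformation.py`) revolve around OpenAI's o1 family, which rejects `max_tokens` and expects `max_completion_tokens` instead. A hedged sketch of that mapping rule; the function name is illustrative, not the actual code in `o1_transformation.py`:

```python
def map_params_for_o1(params: dict) -> dict:
    """Translate OpenAI chat params for o1-series models.

    o1 models reject `max_tokens`; the equivalent limit must be
    sent as `max_completion_tokens` instead.
    """
    mapped = dict(params)
    if "max_tokens" in mapped:
        mapped["max_completion_tokens"] = mapped.pop("max_tokens")
    return mapped
```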
Ishaan Jaff
14dc7b3b54
fix linting
2024-09-12 14:15:18 -07:00
Ishaan Jaff
a5a0773b19
fix: handle o1 not supporting system messages
2024-09-12 14:09:13 -07:00
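At launch the o1 models also rejected the `system` role. A minimal sketch of the kind of workaround this commit describes, assuming the fix simply re-labels system messages as user messages (the helper name is hypothetical):

```python
def handle_o1_system_messages(messages: list) -> list:
    # o1 (at launch) rejects role="system", so downgrade those
    # messages to role="user" and leave everything else untouched.
    return [
        {**m, "role": "user"} if m.get("role") == "system" else m
        for m in messages
    ]
```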
Krish Dholakia
4ac66bd843
LiteLLM Minor Fixes and Improvements (09/07/2024) (#5580)
* fix(litellm_logging.py): set completion_start_time_float to end_time_float if none
Fixes https://github.com/BerriAI/litellm/issues/5500
* feat(__init__.py): add new 'openai_text_completion_compatible_providers' list
Fixes https://github.com/BerriAI/litellm/issues/5558
Correctly routes Fireworks AI calls made via the text completions endpoint
* fix: fix linting errors
* fix(openai.py): fix exception raised
* fix(openai.py): fix error handling
* fix(_redis.py): allow all supported arguments for redis cluster (#5554)
* Revert "fix(_redis.py): allow all supported arguments for redis cluster (#5554)" (#5583)
This reverts commit f2191ef4cb.
* fix(router.py): return model alias w/ underlying deployment on router.get_model_list()
Fixes https://github.com/BerriAI/litellm/issues/5524#issuecomment-2336410666
* test: handle flaky tests
---------
Co-authored-by: Jonas Dittrich <58814480+Kakadus@users.noreply.github.com>
2024-09-09 18:54:17 -07:00
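The `litellm_logging.py` fix above guards latency logging: when no first-token timestamp was captured, fall back to the end time instead of propagating `None`. A minimal sketch under that assumption; the function name is illustrative, not the actual code:

```python
from typing import Optional

def resolve_completion_start_time(
    completion_start_time_float: Optional[float],
    end_time_float: float,
) -> float:
    # If the first-token timestamp was never recorded (e.g. a
    # non-streaming call), use the end time so downstream latency
    # math never sees None.
    if completion_start_time_float is None:
        return end_time_float
    return completion_start_time_float
```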
Ishaan Jaff
a30917311e
fix import
2024-09-05 14:42:56 -07:00
Ishaan Jaff
81ee1653af
use correct type hints for audio transcriptions
2024-09-05 09:12:27 -07:00
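A hedged illustration of the type-hint tightening described above: the signature and the `TranscriptionResponse` stand-in below are assumptions for the sketch, not the exact code from the commit.

```python
from dataclasses import dataclass
from typing import BinaryIO, Optional

@dataclass
class TranscriptionResponse:
    # Stand-in for LiteLLM's actual transcription response type.
    text: str

def transcription(
    model: str,
    file: BinaryIO,  # typed file handle instead of an untyped argument
    language: Optional[str] = None,
) -> TranscriptionResponse:
    # Concrete response type out, instead of Any.
    raise NotImplementedError("sketch only")
```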