Ishaan Jaff | dee6de0105 | 2024-10-16 07:32:27 -07:00
(testing) Router add testing coverage (#6253)
* test: add more router code coverage
* test: additional router testing coverage
* fix: fix linting error
* test: fix tests for ci/cd
* test: fix test
* test: handle flaky tests
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
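
The commits above extend unit coverage of the public `litellm.Router` API. A minimal sketch of the kind of test this coverage targets, assuming litellm is installed; the model entry is illustrative, not taken from #6253:

```python
# a minimal sketch of a Router unit test; the model entry below is
# illustrative, not taken from #6253
from litellm import Router

def test_router_knows_configured_deployment():
    router = Router(
        model_list=[
            {
                "model_name": "gpt-3.5-turbo",  # alias callers use
                "litellm_params": {"model": "gpt-3.5-turbo"},  # underlying deployment
            }
        ]
    )
    # the router should track the deployment it was configured with
    assert any(m["model_name"] == "gpt-3.5-turbo" for m in router.model_list)
```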

Krish Dholakia | bcd1a52834 | 2024-10-14 22:11:14 -07:00
Litellm dev 10 14 2024 (#6221)
* fix(__init__.py): expose DualCache, RedisCache, InMemoryCache on root
  shields users from internal file refactors
* feat(utils.py): handle invalid openai parallel tool calling response
  Fixes https://community.openai.com/t/model-tries-to-call-unknown-function-multi-tool-use-parallel/490653
* docs(bedrock.md): clarify all bedrock models are supported
  Closes https://github.com/BerriAI/litellm/issues/6168#issuecomment-2412082236
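
With the cache classes exposed on the package root, user code no longer needs to reach into litellm's internal module layout. A short sketch; the constructor arguments are assumptions, so check them against your litellm version:

```python
# after #6221 the cache classes import from the root package, so
# internal file moves don't break user code
from litellm import DualCache, InMemoryCache, RedisCache

# constructor arguments below are illustrative assumptions
in_memory = InMemoryCache()
redis = RedisCache(host="localhost", port=6379)
cache = DualCache(in_memory_cache=in_memory, redis_cache=redis)

# DualCache checks the in-memory layer first, then falls back to redis
cache.set_cache("key", "value")
print(cache.get_cache("key"))
```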

Ishaan Jaff | 49f6a7660b | 2024-10-14 11:50:42 +05:30
run ci/cd again

Krish Dholakia | d7abcc0d54 | 2024-10-12 14:48:17 -07:00
build(config.yml): add codecov to repo (#6172)
* build(config.yml): add codecov to repo
  ensures all commits have testing coverage
* build(config.yml): fix ci config
* build: fix .yml
* build(config.yml): fix ci/cd
* ci(config.yml): specify module to measure code coverage for
* ci(config.yml): update config.yml version
* ci: trigger new run
* ci(config.yml): store combined coverage
* build(config.yml): check files before combine
* ci(config.yml): fix check
* ci(config.yml): add codecov coverage to ci/cd
* ci(config.yml): add codecov to router tests
* ci(config.yml): wait for router testing to complete before running codecov upload
* ci(config.yml): handle multiple coverage.xml's
* fix(router.py): clean up print stack
* ci(config.yml): fix config
* ci(config.yml): fix config
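
The CI steps above amount to running pytest with coverage scoped to one module, then combining the per-job data files before the codecov upload. A hedged local equivalent, assuming pytest-cov and coverage are installed; the test path is illustrative:

```python
# a rough local equivalent of the CI coverage steps; assumes pytest-cov
# and coverage are installed, and the tests/ path is illustrative
import subprocess

# measure coverage for a specific module, as the config does for the router
subprocess.run(
    ["pytest", "tests/", "--cov=litellm.router", "--cov-report=xml"],
    check=True,
)

# when parallel jobs each produce a .coverage file, combine them
# before generating the coverage.xml that codecov ingests
subprocess.run(["coverage", "combine"], check=True)
subprocess.run(["coverage", "xml"], check=True)
```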

Krish Dholakia | 69544ebe08 | 2024-10-10 00:42:11 -07:00
LiteLLM Minor Fixes & Improvements (10/09/2024) (#6139)
* fix(utils.py): don't return 'none' response headers
  Fixes https://github.com/BerriAI/litellm/issues/6123
* fix(vertex_and_google_ai_studio_gemini.py): support parsing out additional properties and strict value for tool calls
  Fixes https://github.com/BerriAI/litellm/issues/6136
* fix(cost_calculator.py): set default character value to none
  Fixes https://github.com/BerriAI/litellm/issues/6133#issuecomment-2403290196
* fix(google.py): fix cost per token / cost per char conversion
  Fixes https://github.com/BerriAI/litellm/issues/6133#issuecomment-2403370287
* build(model_prices_and_context_window.json): update gemini pricing
  Fixes https://github.com/BerriAI/litellm/issues/6133
* build(model_prices_and_context_window.json): update gemini pricing
* fix(litellm_logging.py): fix streaming caching logging when 'turn_off_message_logging' is enabled
  Stores the unredacted response in the cache
* build(model_prices_and_context_window.json): update gemini-1.5-flash pricing
* fix(cost_calculator.py): fix default prompt_character count logic
  Fixes error in gemini cost calculation
* fix(cost_calculator.py): fix cost calc for tts models
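
Several of these fixes target the character-based pricing path gemini uses in the cost calculator. A hedged sketch of how that cost is surfaced through litellm's public `completion_cost` helper; the model name is illustrative:

```python
# a minimal sketch of the cost path these commits fix; the model name is
# illustrative and prices come from model_prices_and_context_window.json
from litellm import completion, completion_cost

response = completion(
    model="gemini/gemini-1.5-flash",  # gemini pricing is character-based
    messages=[{"role": "user", "content": "hello"}],
)

# completion_cost converts between per-token and per-character rates
# internally for gemini models
cost = completion_cost(completion_response=response)
print(f"request cost: ${cost:.6f}")
```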

Ishaan Jaff | 8f03e61017 | 2024-10-08 20:16:37 +05:30
trigger ci/cd run

Krish Dholakia | 12b173fdc5 | 2024-10-07 22:17:22 -07:00
LiteLLM Minor Fixes & Improvements (10/07/2024) (#6101)
* fix(utils.py): support dropping temperature param for azure o1 models
* fix(main.py): handle azure o1 streaming requests
  o1 doesn't support streaming, so fake it to ensure code works as expected
* feat(utils.py): expose `hosted_vllm/` endpoint, with tool handling for vllm
  Fixes https://github.com/BerriAI/litellm/issues/6088
* refactor(internal_user_endpoints.py): clean up unused params + update docstring
  Closes https://github.com/BerriAI/litellm/issues/6100
* fix(main.py): expose custom image generation api support
  Fixes https://github.com/BerriAI/litellm/issues/6097
* fix: fix linting errors
* docs(custom_llm_server.md): add docs on custom api for image gen calls
* fix(types/utils.py): handle dict type
* fix(types/utils.py): fix linting errors
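
Two of these fixes change how azure o1 calls behave: unsupported params like `temperature` can be dropped, and `stream=True` is simulated because o1 has no native streaming. A hedged sketch; the deployment name is illustrative:

```python
# a hedged sketch of the azure o1 behavior this PR targets; the
# deployment name is illustrative, not from the commit
from litellm import completion

response = completion(
    model="azure/o1-preview",  # hypothetical azure deployment
    messages=[{"role": "user", "content": "hello"}],
    temperature=0.2,   # o1 rejects temperature; litellm can drop it
    drop_params=True,  # drop unsupported params instead of erroring
    stream=True,       # no native o1 streaming, so litellm fakes the chunks
)
for chunk in response:
    # chunks are emitted from a simulated stream over the full completion
    print(chunk.choices[0].delta.content or "", end="")
```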

Krish Dholakia | 578f9c91af | 2024-10-06 00:25:55 -04:00
fix(utils.py): fix pydantic obj to schema creation for vertex endpoints (#6071)
* fix(utils.py): fix pydantic obj to schema creation for vertex endpoints
  Fixes https://github.com/BerriAI/litellm/issues/6027
* test(test_completion.py): skip test - avoid hitting gemini rate limits
* fix(common_utils.py): fix ruff linting error
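
The fix is in the conversion of a pydantic model into the JSON schema vertex expects for structured output. A hedged sketch of the calling pattern this repairs; the model name and schema class are illustrative:

```python
# a hedged sketch of pydantic-to-schema structured output on vertex;
# the model name and CalendarEvent schema are illustrative
from pydantic import BaseModel
from litellm import completion

class CalendarEvent(BaseModel):  # hypothetical response schema
    name: str
    date: str

response = completion(
    model="vertex_ai/gemini-1.5-pro",
    messages=[{"role": "user", "content": "Extract the event: lunch on Friday"}],
    response_format=CalendarEvent,  # litellm converts the pydantic obj to a schema
)
print(response.choices[0].message.content)
```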

Ishaan Jaff | 5d467cea50 | 2024-10-04 17:19:26 +05:30
ci/cd run again

Ishaan Jaff | 4bc7e740f3 | 2024-09-28 21:08:15 -07:00
ci/cd run again

Ishaan Jaff | 83f1d18be3 | 2024-09-28 21:08:15 -07:00
ci/cd run again

Krrish Dholakia | bc6ed7a06f | 2024-09-28 21:08:15 -07:00
fix(router.py): skip setting model_group response headers for now
current implementation increases redis cache calls by 3x

Krrish Dholakia | 389d4ee58c | 2024-09-28 21:08:15 -07:00
fix(utils.py): guarantee openai-compatible headers always exist in response
Fixes https://github.com/BerriAI/litellm/issues/5957
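
This builds on the v0 header support in eabf2477f2 (next entry): callers can rely on the openai-compatible ratelimit headers existing instead of probing for them. A hedged sketch of reading them; the `_hidden_params` location is an assumption about litellm internals, not confirmed by the commit:

```python
# a hedged sketch of reading openai-compatible response headers; the
# _hidden_params["additional_headers"] location is an assumption
from litellm import completion

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)

headers = getattr(response, "_hidden_params", {}).get("additional_headers", {})
# after this fix, these keys should always be present for supported providers
print(headers.get("x-ratelimit-remaining-requests"))
print(headers.get("x-ratelimit-remaining-tokens"))
```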

Krrish Dholakia | eabf2477f2 | 2024-09-28 21:08:15 -07:00
fix(return-openai-compatible-headers): v0 covers openai, azure, anthropic
Fixes https://github.com/BerriAI/litellm/issues/5957

Krrish Dholakia | ea96eebe85 | 2024-09-28 21:08:14 -07:00
refactor: move all testing to top-level of repo
Closes https://github.com/BerriAI/litellm/issues/486