Krrish Dholakia
3560f0ef2c
refactor: move all testing to top-level of repo
...
Closes https://github.com/BerriAI/litellm/issues/486
2024-09-28 21:08:14 -07:00
Krish Dholakia
0b30e212da
LiteLLM Minor Fixes & Improvements (09/27/2024) (#5938)
...
* fix(langfuse.py): prevent double logging requester metadata
Fixes https://github.com/BerriAI/litellm/issues/5935
* build(model_prices_and_context_window.json): add mistral pixtral cost tracking
Closes https://github.com/BerriAI/litellm/issues/5837
* handle streaming for azure ai studio error
* [Perf Proxy] parallel request limiter - use one cache update call (#5932)
* fix parallel request limiter - use one cache update call
* ci/cd run again
* run ci/cd again
* use docker username password
* fix config.yml
* fix config
* fix config
* fix config.yml
* ci/cd run again
* use correct typing for batch set cache
* fix async_set_cache_pipeline
* fix: only check user id tpm/rpm limits when limits are set
* fix test_openai_azure_embedding_with_oidc_and_cf
* fix(groq/chat/transformation.py): Fixes https://github.com/BerriAI/litellm/issues/5839
* feat(anthropic/chat.py): return 'retry-after' headers from anthropic
Fixes https://github.com/BerriAI/litellm/issues/4387
* feat: raise validation error if message has tool calls without passing `tools` param for anthropic/bedrock
Closes https://github.com/BerriAI/litellm/issues/5747
* [Feature] #5940: add max_workers parameter for batch_completion (#5947)
* handle streaming for azure ai studio error
* bump: version 1.48.2 → 1.48.3
* docs(data_security.md): add legal/compliance FAQs
Make it easier for companies to use litellm
* docs: resolve imports
* [Feature] #5940: add max_workers parameter for the batch_completion method
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: josearangos <josearangos@Joses-MacBook-Pro.local>
* fix(converse_transformation.py): fix default message value
* fix(utils.py): fix get_model_info to handle finetuned models
Fixes an issue in standard logging payloads where model_map_value was null for finetuned OpenAI models
* fix(litellm_pre_call_utils.py): add debug statement for data sent after updating with team/key callbacks
* fix: fix linting errors
* fix(anthropic/chat/handler.py): fix cache creation input tokens
* fix(exception_mapping_utils.py): fix missing imports
* fix(anthropic/chat/handler.py): fix usage block translation
* test: fix test
* test: fix tests
* style(types/utils.py): trigger new build
* test: fix test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Jose Alberto Arango Sanchez <jose.arangos@udea.edu.co>
Co-authored-by: josearangos <josearangos@Joses-MacBook-Pro.local>
2024-09-27 22:52:57 -07:00
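The max_workers change in #5947 above follows the usual bounded-thread-pool pattern for fanning out many completion calls. A minimal sketch of that pattern only; `batch_completion_sketch` is a hypothetical name, not litellm's actual API, and the real implementation may differ:

```python
from concurrent.futures import ThreadPoolExecutor

def batch_completion_sketch(prompts, completion_fn, max_workers=4):
    # Bound concurrency with a pool sized by max_workers;
    # pool.map preserves the input order of prompts.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(completion_fn, prompts))
```

Raising max_workers trades memory and rate-limit pressure for throughput; capping it keeps a large prompt batch from opening hundreds of simultaneous connections.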
Krish Dholakia
4657a40ef1
LiteLLM Minor Fixes and Improvements (09/12/2024) (#5658)
...
* fix(factory.py): handle tool call content as list
Fixes https://github.com/BerriAI/litellm/issues/5652
* fix(factory.py): enforce stronger typing
* fix(router.py): return model alias in /v1/model/info and /v1/model_group/info
* fix(user_api_key_auth.py): move noisy warning message to debug
cleanup logs
* fix(types.py): cleanup pydantic v2 deprecated param
Fixes https://github.com/BerriAI/litellm/issues/5649
* docs(gemini.md): show how to pass inline data to gemini api
Fixes https://github.com/BerriAI/litellm/issues/5674
2024-09-12 23:04:06 -07:00
Krish Dholakia
4ac66bd843
LiteLLM Minor Fixes and Improvements (09/07/2024) (#5580)
...
* fix(litellm_logging.py): set completion_start_time_float to end_time_float if none
Fixes https://github.com/BerriAI/litellm/issues/5500
* feat(__init__.py): add new 'openai_text_completion_compatible_providers' list
Fixes https://github.com/BerriAI/litellm/issues/5558
Correctly routes Fireworks AI calls made via text completions
* fix: fix linting errors
* fix: fix linting errors
* fix(openai.py): fix exception raised
* fix(openai.py): fix error handling
* fix(_redis.py): allow all supported arguments for redis cluster (#5554)
* Revert "fix(_redis.py): allow all supported arguments for redis cluster (#5554)" (#5583)
This reverts commit f2191ef4cb.
* fix(router.py): return model alias w/ underlying deployment on router.get_model_list()
Fixes https://github.com/BerriAI/litellm/issues/5524#issuecomment-2336410666
* test: handle flaky tests
---------
Co-authored-by: Jonas Dittrich <58814480+Kakadus@users.noreply.github.com>
2024-09-09 18:54:17 -07:00
Krish Dholakia
fa2d9002b5
security - Prevent SQL injection in /team/update query (#5513)
...
* fix(team_endpoints.py): replace `.get_data()` usage with prisma interface
Prevent SQL injection in `/team/update` query
Fixes https://huntr.com/bounties/a4f6d357-5b44-4e00-9cac-f1cc351211d2
* fix(vertex_ai_non_gemini.py): handle message being a pydantic model
2024-09-04 16:03:02 -07:00
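The actual fix above routes the update through the Prisma interface instead of raw query strings. The underlying principle is parameterized queries: the driver escapes user input rather than the code interpolating it into SQL. A self-contained illustration using stdlib `sqlite3` (not litellm's database layer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE teams (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO teams (name) VALUES ('alpha')")

def update_team_name(conn, team_id, new_name):
    # Placeholders (?) send values out-of-band; never build the SQL
    # string with f-strings or concatenation of user input.
    conn.execute("UPDATE teams SET name = ? WHERE id = ?", (new_name, team_id))

# A hostile "name" is stored verbatim instead of being executed as SQL.
update_team_name(conn, 1, "beta'); DROP TABLE teams;--")
row = conn.execute("SELECT name FROM teams WHERE id = 1").fetchone()
```

The injection payload ends up as literal data in the `name` column, and the `teams` table survives.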
Krrish Dholakia
2b40f2eaed
test(test_function_calling.py): fix test
2024-08-26 12:18:50 -07:00
Krrish Dholakia
f36e7e0754
fix(ollama_chat.py): fix passing assistant message with tool call param
...
Fixes https://github.com/BerriAI/litellm/issues/5319
2024-08-22 10:00:03 -07:00
Krrish Dholakia
c5e030481a
fix: rerun ci/cd
2024-08-21 22:28:35 -07:00
Krrish Dholakia
5a31005b85
test(test_function_calling.py): remove redundant gemini test (causing ratelimit errors)
2024-08-21 21:48:14 -07:00
Krrish Dholakia
6d6ab30ed7
test: test_function_calling.py
2024-08-21 21:12:15 -07:00
Krrish Dholakia
8812da04e3
fix(vertex_httpx.py): Fix tool calling with empty param list
...
Fixes https://github.com/BerriAI/litellm/issues/5055
2024-08-21 09:03:34 -07:00
Krrish Dholakia
88b415c9eb
fix(factory.py): fix merging consecutive tool blocks for bedrock converse
...
Fixes https://github.com/BerriAI/litellm/issues/5277
2024-08-20 08:53:34 -07:00
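Bedrock's Converse API expects alternating roles, so adjacent messages with the same role must be collapsed into one message whose content blocks are concatenated. A minimal sketch of that merge step; litellm's factory.py handles many more edge cases:

```python
def merge_consecutive_blocks(messages):
    # Collapse runs of same-role messages into one message,
    # concatenating their content-block lists in order.
    merged = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            merged[-1]["content"].extend(msg["content"])
        else:
            merged.append({"role": msg["role"], "content": list(msg["content"])})
    return merged
```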
Ishaan Jaff
f2e8b2500f
fix function calling mistral large latest
2024-05-15 16:05:17 -07:00
Ishaan Jaff
371043d683
fix - test mistral/large _parallel_function_call
2024-05-15 14:31:00 -07:00
Krrish Dholakia
20456968e9
fix(openai.py): create MistralConfig with response_format mapping for the Mistral API
2024-05-13 13:29:58 -07:00
Krrish Dholakia
a9f3fd4030
test(test_function_calling.py): remove flaky groq test
2024-04-19 16:41:23 -07:00
Krrish Dholakia
0d2b400e91
test(test_function_calling.py): handle for when model returns a text response
2024-04-17 18:32:34 -07:00
Ishaan Jaff
ea575ef62d
fix test groq function call
2024-04-15 08:40:39 -07:00
Ishaan Jaff
017127a704
test - groq tool calling
2024-04-15 08:13:05 -07:00
ishaan-jaff
5cfe4f7ab3
(fix) test_function_calling.py
2024-02-28 18:07:22 -08:00
ishaan-jaff
2601600b33
(feat) track mistral model supports function calling
2024-02-28 17:15:50 -08:00
Krrish Dholakia
788e24bd83
fix(utils.py): fix streaming logic
2024-02-26 14:26:58 -08:00
Krrish Dholakia
4905929de3
refactor: add black formatting
2023-12-25 14:11:20 +05:30
Krrish Dholakia
b6bc75e27a
fix(utils.py): fix parallel tool calling when streaming
2023-11-29 10:56:21 -08:00
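Parallel tool calling while streaming means each chunk carries only a fragment of one tool call, keyed by an index; the client must accumulate fragments per index until the stream ends. A sketch of that accumulation pattern using OpenAI-style delta fields (the field names are assumptions, not litellm internals):

```python
def accumulate_tool_call_deltas(deltas):
    # Group streamed fragments by tool-call index and concatenate
    # the partial name/argument strings in arrival order.
    calls = {}
    for d in deltas:
        entry = calls.setdefault(d["index"], {"name": "", "arguments": ""})
        entry["name"] += d.get("name", "")
        entry["arguments"] += d.get("arguments", "")
    return [calls[i] for i in sorted(calls)]
```

With two interleaved tool calls, index 0 and index 1 accumulate independently, which is what makes parallel calls survive chunking.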
ishaan-jaff
4a364bcbc0
(test) tool/function calling + streaming
2023-11-18 16:23:29 -08:00
ishaan-jaff
4d755c9f2f
(test) function calling
2023-11-18 15:15:02 -08:00
ishaan-jaff
d2bac07b48
(test) parallel tool calling
2023-11-17 17:03:24 -08:00
ishaan-jaff
bb9e7c65e9
(test) parallel function calling
2023-11-17 15:51:27 -08:00