Krrish Dholakia
34a1a48af1
fix(common_utils.py): fix linting error
Read Version from pyproject.toml / read-version (push) Successful in 19s
Helm unit test / unit-test (push) Successful in 20s
Publish Prisma Migrations / publish-migrations (push) Failing after 1m15s
2025-03-27 23:31:58 -07:00
Krish Dholakia
ccbac691e5
Support discovering gemini, anthropic, xai models by calling their /v1/models endpoint (#9530)
* fix: initial commit for adding provider model discovery to gemini
* feat(gemini/): add model discovery for gemini/ route
* docs(set_keys.md): update docs to show you can check available gemini models as well
* feat(anthropic/): add model discovery for anthropic api key
* feat(xai/): add model discovery for XAI
enables checking what models an xai key can call
* ci: bump ci config yml
* fix(topaz/common_utils.py): fix linting error
* fix: fix linting error for python38
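A hedged sketch of how the model-discovery feature above might be used; `get_valid_models` is litellm's documented helper, while the `check_provider_endpoint` flag and the `gemini/` prefix on returned names are assumptions drawn from this commit's description rather than verified API details.

```python
import os
from litellm import get_valid_models

# Assumed usage: with check_provider_endpoint=True, litellm queries the
# provider's model-listing endpoint to see which models this key can call.
os.environ["GEMINI_API_KEY"] = "..."  # placeholder key

available = get_valid_models(check_provider_endpoint=True)
print([m for m in available if m.startswith("gemini/")])
```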
2025-03-27 22:50:48 -07:00
Ishaan Jaff
7a5dd29fe0
(fix) unable to pass input_type parameter to Voyage AI embedding model (#7276)
Read Version from pyproject.toml / read-version (push) Successful in 46s
* VoyageEmbeddingConfig
* fix voyage logic to get params
* add voyage embedding transformation
* add get_provider_embedding_config
* use BaseEmbeddingConfig
* voyage clean up
* use llm http handler for embedding transformations
* test_voyage_ai_embedding_extra_params
* add voyage async
* test_voyage_ai_embedding_extra_params
* add async for llm http handler
* update BaseLLMEmbeddingTest
* test_voyage_ai_embedding_extra_params
* fix linting
* fix get_provider_embedding_config
* fix anthropic text test
* update location of base/chat/transformation
* fix import path
* fix IBMWatsonXAIConfig
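A minimal usage sketch for the fix above, assuming a Voyage AI model name such as `voyage/voyage-3-lite`; the point is that the provider-specific `input_type` parameter is now forwarded by `litellm.embedding` instead of being dropped.

```python
import litellm

# Sketch: pass Voyage AI's input_type ("query" or "document") through litellm.
response = litellm.embedding(
    model="voyage/voyage-3-lite",   # assumed model name, not from this commit
    input=["hello from litellm"],
    input_type="query",             # the parameter this fix is about
)
print(response.data[0]["embedding"][:5])
```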
2024-12-17 19:23:49 -08:00
Krish Dholakia
5bbf906c83
Litellm code qa common config (#7113)
Read Version from pyproject.toml / read-version (push) Successful in 44s
* feat(base_llm): initial commit for common base config class
Addresses code qa critique https://github.com/andrewyng/aisuite/issues/113#issuecomment-2512369132
* feat(base_llm/): add transform request/response abstract methods to base config class
* feat(cohere-+-clarifai): refactor integrations to use common base config class
* fix: fix linting errors
* refactor(anthropic/): move anthropic + vertex anthropic to use base config
* test: fix xai test
* test: fix tests
* fix: fix linting errors
* test: comment out WIP test
* fix(transformation.py): fix is pdf used check
* fix: fix linting error
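An illustrative sketch, not the actual litellm class, of the pattern this commit describes: a shared base config exposing abstract request/response transformation hooks that provider integrations (cohere, clarifai, anthropic, etc.) implement. All names below are hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class BaseProviderConfig(ABC):
    """Hypothetical shared config; each provider supplies its own transforms."""

    @abstractmethod
    def transform_request(
        self, model: str, messages: List[Dict[str, Any]], optional_params: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Map an OpenAI-style request into the provider's wire format."""

    @abstractmethod
    def transform_response(self, raw_response: Dict[str, Any]) -> Dict[str, Any]:
        """Map the provider's raw response back into the OpenAI format."""
```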
2024-12-09 15:58:25 -08:00
Krrish Dholakia
498e14ba59
fix(return-openai-compatible-headers): v0 is openai, azure, anthropic
Fixes https://github.com/BerriAI/litellm/issues/5957
2024-09-28 21:08:15 -07:00
Krish Dholakia
0b30e212da
LiteLLM Minor Fixes & Improvements (09/27/2024) (#5938)
* fix(langfuse.py): prevent double logging requester metadata
Fixes https://github.com/BerriAI/litellm/issues/5935
* build(model_prices_and_context_window.json): add mistral pixtral cost tracking
Closes https://github.com/BerriAI/litellm/issues/5837
* handle streaming for azure ai studio error
* [Perf Proxy] parallel request limiter - use one cache update call (#5932)
* fix parallel request limiter - use one cache update call
* ci/cd run again
* run ci/cd again
* use docker username password
* fix config.yml
* fix config
* fix config
* fix config.yml
* ci/cd run again
* use correct typing for batch set cache
* fix async_set_cache_pipeline
* fix only check user id tpm / rpm limits when limits set
* fix test_openai_azure_embedding_with_oidc_and_cf
* fix(groq/chat/transformation.py): Fixes https://github.com/BerriAI/litellm/issues/5839
* feat(anthropic/chat.py): return 'retry-after' headers from anthropic
Fixes https://github.com/BerriAI/litellm/issues/4387
* feat: raise validation error if message has tool calls without passing `tools` param for anthropic/bedrock
Closes https://github.com/BerriAI/litellm/issues/5747
* [Feature] #5940: add max_workers parameter for the batch_completion method (#5947)
* handle streaming for azure ai studio error
* bump: version 1.48.2 → 1.48.3
* docs(data_security.md): add legal/compliance faq's
Make it easier for companies to use litellm
* docs: resolve imports
* [Feature] #5940: add max_workers parameter for the batch_completion method (see the usage sketch below)
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: josearangos <josearangos@Joses-MacBook-Pro.local>
* fix(converse_transformation.py): fix default message value
* fix(utils.py): fix get_model_info to handle finetuned models
Fixes issue for standard logging payloads, where model_map_value was null for finetuned openai models
* fix(litellm_pre_call_utils.py): add debug statement for data sent after updating with team/key callbacks
* fix: fix linting errors
* fix(anthropic/chat/handler.py): fix cache creation input tokens
* fix(exception_mapping_utils.py): fix missing imports
* fix(anthropic/chat/handler.py): fix usage block translation
* test: fix test
* test: fix tests
* style(types/utils.py): trigger new build
* test: fix test
---------
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Jose Alberto Arango Sanchez <jose.arangos@udea.edu.co>
Co-authored-by: josearangos <josearangos@Joses-MacBook-Pro.local>
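A hedged usage sketch for the `max_workers` addition referenced in the commit body above; `batch_completion` is litellm's helper for dispatching several chat requests, and `max_workers` is assumed here to cap the thread pool, as the feature name suggests.

```python
import litellm

# Sketch: run two chat completions concurrently, bounding concurrency
# with the max_workers parameter added in #5947.
responses = litellm.batch_completion(
    model="gpt-3.5-turbo",
    messages=[
        [{"role": "user", "content": "Say hi"}],
        [{"role": "user", "content": "Say bye"}],
    ],
    max_workers=2,  # assumed to bound the number of concurrent threads
)
print(len(responses))  # one response per message list
```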
2024-09-27 22:52:57 -07:00