litellm/litellm
Ishaan Jaff c119bad5f9
(feat) Vertex AI - add support for fine tuned embedding models (#6749)
* fix use fine tuned vertex embedding models
* test_vertex_embedding_url
* add _transform_openai_request_to_fine_tuned_embedding_request
* add _transform_openai_request_to_fine_tuned_embedding_request
* add transform_openai_request_to_vertex_embedding_request
* add _transform_vertex_response_to_openai_for_fine_tuned_models
* test_vertexai_embedding for ft models
* fix test_vertexai_embedding_finetuned
* doc fine tuned / custom embedding models
* fix test test_partner_models_httpx
2024-11-14 20:37:55 -08:00
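PR #6749 above adds request/response transforms so fine-tuned Vertex AI embedding models can be called through litellm's OpenAI-style `embedding()` interface. Below is a minimal usage sketch; the numeric model ID, GCP project, and region are placeholders, and the exact `vertex_ai/<model-id>` naming for fine-tuned models is an assumption inferred from the commit messages, not confirmed documentation.

```python
import litellm

# Sketch only: the model identifier is a hypothetical fine-tuned Vertex AI
# embedding model ID -- substitute the ID of your own deployed model.
response = litellm.embedding(
    model="vertex_ai/1234567890123456789",  # assumption: fine-tuned model addressed by numeric ID
    input=["good morning from litellm"],
    vertex_project="my-gcp-project",        # placeholder GCP project
    vertex_location="us-central1",          # placeholder region
)

# The transforms added in #6749 map the Vertex response back to the OpenAI
# embedding format, so the result is read like any other provider's.
print(len(response.data[0]["embedding"]))
```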
adapters LiteLLM Minor Fixes & Improvements (09/27/2024) (#5938) 2024-09-27 22:52:57 -07:00
assistants Add pyright to ci/cd + Fix remaining type-checking errors (#6082) 2024-10-05 17:04:00 -04:00
batch_completion (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) 2024-10-14 16:34:01 +05:30
batches Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
caching LiteLLM Minor Fixes & Improvements (11/12/2024) (#6705) 2024-11-12 22:50:51 +05:30
deprecated_litellm_server (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) 2024-10-14 16:34:01 +05:30
files Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
fine_tuning Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
integrations LiteLLM Minor Fixes & Improvements (11/12/2024) (#6705) 2024-11-12 22:50:51 +05:30
litellm_core_utils [Feature]: Stop swallowing up AzureOpenAi exception responses in litellm's implementation for a BadRequestError (#6745) 2024-11-14 15:54:28 -08:00
llms (feat) Vertex AI - add support for fine tuned embedding models (#6749) 2024-11-14 20:37:55 -08:00
proxy LiteLLM Minor Fixes & Improvement (11/14/2024) (#6730) 2024-11-15 01:02:54 +05:30
realtime_api LiteLLM Minor Fixes & Improvements (10/10/2024) (#6158) 2024-10-11 23:04:36 -07:00
rerank_api LiteLLM Minor Fixes & Improvement (11/14/2024) (#6730) 2024-11-15 01:02:54 +05:30
router_strategy fix(lowest_tpm_rpm_routing.py): fix parallel rate limit check (#6577) 2024-11-05 22:03:44 +05:30
router_utils fix(pattern_match_deployments.py): default to user input if unable to map based on wildcards (#6646) 2024-11-07 23:57:37 +05:30
secret_managers (Feat) Add support for storing virtual keys in AWS SecretManager (#6728) 2024-11-14 09:25:07 -08:00
types LiteLLM Minor Fixes & Improvement (11/14/2024) (#6730) 2024-11-15 01:02:54 +05:30
__init__.py fix: import audio check (#6740) 2024-11-14 17:00:38 -08:00
_logging.py LiteLLM Minor Fixes & Improvements (10/30/2024) (#6519) 2024-11-02 00:44:32 +05:30
_redis.py (fix proxy redis) Add redis sentinel support (#6154) 2024-11-12 18:36:46 -08:00
_service_logger.py (feat) log error class, function_name on prometheus service failure hook + only log DB related failures on DB service hook (#6650) 2024-11-07 17:01:18 -08:00
_version.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
budget_manager.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
constants.py Litellm router max depth (#6501) 2024-10-29 22:05:41 -07:00
cost.json
cost_calculator.py fix imagegeneration output_cost_per_image on model cost map (#6752) 2024-11-14 20:37:21 -08:00
exceptions.py Litellm dev 10 26 2024 (#6472) 2024-10-28 15:05:43 -07:00
main.py LiteLLM Minor Fixes & Improvement (11/14/2024) (#6730) 2024-11-15 01:02:54 +05:30
model_prices_and_context_window_backup.json fix imagegeneration output_cost_per_image on model cost map (#6752) 2024-11-14 20:37:21 -08:00
py.typed feature - Types for mypy - #360 2024-05-30 14:14:41 -04:00
requirements.txt
router.py LiteLLM Minor Fixes & Improvement (11/14/2024) (#6730) 2024-11-15 01:02:54 +05:30
scheduler.py (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) 2024-10-14 16:34:01 +05:30
timeout.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
utils.py LiteLLM Minor Fixes & Improvement (11/14/2024) (#6730) 2024-11-15 01:02:54 +05:30