litellm-mirror/litellm
Latest commit e9fbefca5d by Krish Dholakia: Litellm ollama refactor (#7162)
* refactor(ollama/): refactor ollama `/api/generate` to use base llm config

Addresses https://github.com/andrewyng/aisuite/issues/113#issuecomment-2512369132

* test: skip unresponsive test

* test(test_secret_manager.py): mark flaky test

* test: fix google sm test

* fix: fix init.py
Committed 2024-12-10 21:45:35 -08:00
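
The headline change in this commit refactors litellm's Ollama `/api/generate` integration onto the shared base LLM config pattern used by other providers. For orientation, here is a minimal sketch of how an Ollama-served model is typically invoked through litellm's public `completion()` API; the model name, prompt, and `api_base` are illustrative assumptions, not values taken from this commit.

```python
# Minimal sketch (not from this commit): calling an Ollama-served model through
# litellm's public completion() API. The model name, prompt, and api_base are
# illustrative assumptions; adjust them to your local Ollama setup.
import litellm

response = litellm.completion(
    model="ollama/llama3",  # the "ollama/" prefix routes to litellm's Ollama /api/generate provider
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    api_base="http://localhost:11434",  # default address of a locally running Ollama server
)

# litellm returns an OpenAI-style response object
print(response.choices[0].message.content)
```

The directory listing below shows each entry in `litellm/` together with the last commit that touched it.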
| Name | Last commit | Last updated |
| --- | --- | --- |
| adapters | LiteLLM Minor Fixes & Improvements (09/27/2024) (#5938) | 2024-09-27 22:52:57 -07:00 |
| assistants | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| batch_completion | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| batches | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| caching | Provider Budget Routing - Get Budget, Spend Details (#7063) | 2024-12-06 21:14:12 -08:00 |
| deprecated_litellm_server | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| files | Code Quality Improvement - remove file_apis, fine_tuning_apis from /llms (#7156) | 2024-12-10 21:44:25 -08:00 |
| fine_tuning | Code Quality Improvement - remove file_apis, fine_tuning_apis from /llms (#7156) | 2024-12-10 21:44:25 -08:00 |
| integrations | Litellm code qa common config (#7113) | 2024-12-09 15:58:25 -08:00 |
| litellm_core_utils | Litellm ollama refactor (#7162) | 2024-12-10 21:45:35 -08:00 |
| llms | Litellm ollama refactor (#7162) | 2024-12-10 21:45:35 -08:00 |
| proxy | (Refactor) Code Quality improvement - stop redefining LiteLLMBase (#7147) | 2024-12-10 15:49:01 -08:00 |
| realtime_api | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| rerank_api | LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) | 2024-12-05 00:02:31 -08:00 |
| router_strategy | (Refactor) Code Quality improvement - stop redefining LiteLLMBase (#7147) | 2024-12-10 15:49:01 -08:00 |
| router_utils | Litellm dev 12 07 2024 (#7086) | 2024-12-08 00:30:33 -08:00 |
| secret_managers | (Perf / latency improvement) improve pass through endpoint latency to ~50ms (before PR was 400ms) (#6874) | 2024-11-22 18:47:26 -08:00 |
| types | (Refactor) Code Quality improvement - stop redefining LiteLLMBase (#7147) | 2024-12-10 15:49:01 -08:00 |
| __init__.py | Litellm ollama refactor (#7162) | 2024-12-10 21:45:35 -08:00 |
| _logging.py | LiteLLM Minor Fixes & Improvements (10/30/2024) (#6519) | 2024-11-02 00:44:32 +05:30 |
| _redis.py | (redis fix) - fix AbstractConnection.__init__() got an unexpected keyword argument 'ssl' (#6908) | 2024-11-25 22:52:44 -08:00 |
| _service_logger.py | LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) | 2024-12-05 00:02:31 -08:00 |
| _version.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| budget_manager.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| constants.py | Litellm ollama refactor (#7162) | 2024-12-10 21:45:35 -08:00 |
| cost.json | | |
| cost_calculator.py | rename llms/OpenAI/ -> llms/openai/ (#7154) | 2024-12-10 20:14:07 -08:00 |
| exceptions.py | Litellm 12 02 2024 (#6994) | 2024-12-02 22:00:01 -08:00 |
| main.py | Litellm ollama refactor (#7162) | 2024-12-10 21:45:35 -08:00 |
| model_prices_and_context_window_backup.json | fix llama-3.3-70b-versatile | 2024-12-07 20:19:02 -08:00 |
| py.typed | feature - Types for mypy - #360 | 2024-05-30 14:14:41 -04:00 |
| router.py | Litellm dev 12 07 2024 (#7086) | 2024-12-08 00:30:33 -08:00 |
| scheduler.py | (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) | 2024-10-14 16:34:01 +05:30 |
| timeout.py | Litellm ruff linting enforcement (#5992) | 2024-10-01 19:44:20 -04:00 |
| utils.py | Litellm ollama refactor (#7162) | 2024-12-10 21:45:35 -08:00 |