litellm-mirror/litellm
Krish Dholakia 03e711e3e4 LITELLM: Remove requests library usage (#7235)
* fix(generic_api_callback.py): remove requests lib usage

* fix(budget_manager.py): remove requests lib usage

* fix(main.py): cleanup requests lib usage

* fix(utils.py): remove requests lib usage

* fix(argilla.py): fix argilla test

* fix(athina.py): replace 'requests' lib usage with litellm module

* fix(greenscale.py): replace 'requests' lib usage with httpx

* fix: remove unused 'requests' lib import + replace usage in some places

* fix(prompt_layer.py): remove 'requests' lib usage from prompt layer

* fix(ollama_chat.py): remove 'requests' lib usage

* fix(baseten.py): replace 'requests' lib usage

* fix(codestral/): replace 'requests' lib usage

* fix(predibase/): replace 'requests' lib usage

* refactor: cleanup unused 'requests' lib imports

* fix(oobabooga.py): cleanup 'requests' lib usage

* fix(invoke_handler.py): remove unused 'requests' lib usage

* refactor: cleanup unused 'requests' lib import

* fix: fix linting errors

* refactor(ollama/): move ollama to using base llm http handler

removes 'requests' lib dep for ollama integration

* fix(ollama_chat.py): fix linting errors

* fix(ollama/completion/transformation.py): convert non-jpeg/png image to jpeg/png before passing to ollama
2024-12-17 12:50:04 -08:00
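Several of the bullets above (e.g. `fix(greenscale.py)`) describe swapping a blocking `requests` call for `httpx`. The sketch below shows what that kind of swap typically looks like for a simple JSON logging callback; the function names, endpoint, and payload shape are illustrative assumptions, not litellm's actual callback code.

```python
# Hedged sketch of a 'requests' -> httpx swap; names and payload are assumptions.
import httpx


def log_event(url: str, payload: dict, headers: dict) -> None:
    # httpx mirrors requests' API for simple synchronous POSTs, so the swap
    # is mostly a client change plus an explicit timeout
    with httpx.Client(timeout=10.0) as client:
        response = client.post(url, json=payload, headers=headers)
        response.raise_for_status()


async def async_log_event(url: str, payload: dict, headers: dict) -> None:
    # unlike 'requests', httpx also offers an async client, so the same
    # call can be made without blocking the event loop
    async with httpx.AsyncClient(timeout=10.0) as client:
        response = await client.post(url, json=payload, headers=headers)
        response.raise_for_status()
```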
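The final bullet ("convert non-jpeg/png image to jpeg/png before passing to ollama") implies an image-normalization step before images are base64-encoded for the ollama API. A minimal sketch of that step, assuming Pillow is used for re-encoding; the helper names are hypothetical and not the code in ollama/completion/transformation.py.

```python
# Hedged sketch of normalizing images to JPEG/PNG before sending to ollama;
# the use of Pillow and these function names are illustrative assumptions.
import base64
import io

from PIL import Image


def ensure_jpeg_or_png(image_bytes: bytes) -> bytes:
    """Re-encode an image as PNG unless it is already JPEG or PNG."""
    img = Image.open(io.BytesIO(image_bytes))
    if img.format in ("JPEG", "PNG"):
        return image_bytes
    buf = io.BytesIO()
    img.convert("RGBA").save(buf, format="PNG")
    return buf.getvalue()


def to_ollama_image(image_bytes: bytes) -> str:
    # ollama's API accepts images as base64-encoded strings
    return base64.b64encode(ensure_jpeg_or_png(image_bytes)).decode("utf-8")
```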
adapters Litellm remove circular imports (#7232) 2024-12-14 16:28:34 -08:00
assistants rename llms/OpenAI/ -> llms/openai/ (#7154) 2024-12-10 20:14:07 -08:00
batch_completion Litellm vllm refactor (#7158) 2024-12-10 21:48:35 -08:00
batches Code Quality Improvement - use vertex_ai/ as folder name for vertexAI (#7166) 2024-12-11 00:32:41 -08:00
caching Provider Budget Routing - Get Budget, Spend Details (#7063) 2024-12-06 21:14:12 -08:00
deprecated_litellm_server (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) 2024-10-14 16:34:01 +05:30
files Code Quality Improvement - use vertex_ai/ as folder name for vertexAI (#7166) 2024-12-11 00:32:41 -08:00
fine_tuning Code Quality Improvement - use vertex_ai/ as folder name for vertexAI (#7166) 2024-12-11 00:32:41 -08:00
integrations LITELLM: Remove requests library usage (#7235) 2024-12-17 12:50:04 -08:00
litellm_core_utils LITELLM: Remove requests library usage (#7235) 2024-12-17 12:50:04 -08:00
llms LITELLM: Remove requests library usage (#7235) 2024-12-17 12:50:04 -08:00
proxy LITELLM: Remove requests library usage (#7235) 2024-12-17 12:50:04 -08:00
realtime_api rename llms/OpenAI/ -> llms/openai/ (#7154) 2024-12-10 20:14:07 -08:00
rerank_api LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) 2024-12-05 00:02:31 -08:00
router_strategy LITELLM: Remove requests library usage (#7235) 2024-12-17 12:50:04 -08:00
router_utils (feat) Add Azure Blob Storage Logging Integration (#7265) 2024-12-16 22:18:22 -08:00
secret_managers (Refactor) Code Quality improvement - remove /prompt_templates/ , base_aws_llm.py from /llms folder (#7164) 2024-12-11 00:02:46 -08:00
types (feat) Add Tag-based budgets on litellm router / proxy (#7236) 2024-12-14 17:28:36 -08:00
__init__.py (feat) Add Azure Blob Storage Logging Integration (#7265) 2024-12-16 22:18:22 -08:00
_logging.py LiteLLM Minor Fixes & Improvements (10/30/2024) (#6519) 2024-11-02 00:44:32 +05:30
_redis.py (redis fix) - fix AbstractConnection.__init__() got an unexpected keyword argument 'ssl' (#6908) 2024-11-25 22:52:44 -08:00
_service_logger.py LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) 2024-12-05 00:02:31 -08:00
_version.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
budget_manager.py LITELLM: Remove requests library usage (#7235) 2024-12-17 12:50:04 -08:00
constants.py (feat) Add Bedrock knowledge base pass through endpoints (#7267) 2024-12-16 22:19:34 -08:00
cost.json
cost_calculator.py fix(utils.py): fix openai-like api response format parsing (#7273) 2024-12-17 12:49:09 -08:00
exceptions.py Litellm 12 02 2024 (#6994) 2024-12-02 22:00:01 -08:00
main.py LITELLM: Remove requests library usage (#7235) 2024-12-17 12:50:04 -08:00
model_prices_and_context_window_backup.json Add new Gemini 2.0 Flash model to Vertex AI. (#7193) 2024-12-14 15:59:43 -08:00
py.typed feature - Types for mypy - #360 2024-05-30 14:14:41 -04:00
router.py Litellm dev 12 14 2024 p1 (#7231) 2024-12-14 22:22:29 -08:00
scheduler.py (refactor) caching use LLMCachingHandler for async_get_cache and set_cache (#6208) 2024-10-14 16:34:01 +05:30
timeout.py Litellm ruff linting enforcement (#5992) 2024-10-01 19:44:20 -04:00
utils.py LITELLM: Remove requests library usage (#7235) 2024-12-17 12:50:04 -08:00