Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-26 11:14:04 +00:00)
Latest commit: refactor to remove the `requests` library dependency across provider integrations:

* fix(generic_api_callback.py): remove requests lib usage
* fix(budget_manager.py): remove requests lib usage
* fix(main.py): clean up requests lib usage
* fix(utils.py): remove requests lib usage
* fix(argilla.py): fix argilla test
* fix(athina.py): replace 'requests' lib usage with litellm module
* fix(greenscale.py): replace 'requests' lib usage with httpx
* fix: remove unused 'requests' lib import + replace usage in some places
* fix(prompt_layer.py): remove 'requests' lib usage from prompt layer
* fix(ollama_chat.py): remove 'requests' lib usage
* fix(baseten.py): replace 'requests' lib usage
* fix(codestral/): replace 'requests' lib usage
* fix(predibase/): replace 'requests' lib usage
* refactor: clean up unused 'requests' lib imports
* fix(oobabooga.py): clean up 'requests' lib usage
* fix(invoke_handler.py): remove unused 'requests' lib usage
* refactor: clean up unused 'requests' lib import
* fix: fix linting errors
* refactor(ollama/): move ollama to the base LLM HTTP handler, removing the 'requests' lib dependency for the ollama integration
* fix(ollama_chat.py): fix linting errors
* fix(ollama/completion/transformation.py): convert non-jpeg/png images to jpeg/png before passing to ollama
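The recurring change in this commit is swapping direct `requests` calls for `httpx`. A minimal before/after sketch of that swap follows; the helper name and endpoint are hypothetical, not litellm's actual handlers:

```python
import httpx

# Hypothetical helper illustrating the requests -> httpx swap; not a real
# litellm function. Previously this would have been something like:
#   requests.post(url, json=payload, headers=headers, timeout=600)
def post_completion(api_base: str, payload: dict, api_key: str) -> dict:
    headers = {"Authorization": f"Bearer {api_key}"}
    with httpx.Client(timeout=600.0) as client:
        response = client.post(f"{api_base}/completions", json=payload, headers=headers)
        response.raise_for_status()  # raise on 4xx/5xx instead of parsing an error body
        return response.json()
```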
Directory contents:

* ai21/chat
* anthropic
* azure
* azure_ai
* base_llm
* bedrock
* cerebras
* clarifai
* cloudflare/chat
* codestral/completion
* cohere
* custom_httpx
* databricks
* deepinfra/chat
* deepseek/chat
* deprecated_providers
* empower/chat
* fireworks_ai
* friendliai/chat
* galadriel/chat
* github/chat
* groq
* hosted_vllm/chat
* huggingface
* jina_ai
* lm_studio
* mistral
* nlp_cloud
* nvidia_nim
* ollama
* oobabooga
* openai
* openai_like
* openrouter/chat
* perplexity/chat
* petals
* predibase
* replicate
* sagemaker
* sambanova
* together_ai
* triton
* vertex_ai
* vllm/completion
* watsonx
* xai/chat
* __init__.py
* base.py
* baseten.py
* custom_llm.py
* maritalk.py
* ollama_chat.py
* README.md
* volcengine.py
File Structure

August 27th, 2024

To make it easy to see how calls are transformed for each model/provider, we are moving all supported litellm providers to a folder structure, where the folder name is the supported litellm provider name.

Each folder will contain a `*_transformation.py` file with all the request/response transformation logic, making it easy to see how calls are modified. E.g. `cohere/`, `bedrock/`.
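As a rough illustration of the pattern (not litellm's actual interface, which is defined under `base_llm/`), a provider folder's transformation module might expose request/response hooks along these lines; all names here are hypothetical:

```python
from typing import Any

# Hypothetical my_provider/chat/transformation.py; names and shapes are
# illustrative only, not litellm's real base_llm interface.

def transform_request(model: str, messages: list[dict], optional_params: dict) -> dict[str, Any]:
    """Map OpenAI-style chat input into the provider's wire format."""
    return {
        "model": model,
        "prompt": "\n".join(m["content"] for m in messages),
        **optional_params,
    }

def transform_response(raw_response: dict[str, Any]) -> dict[str, Any]:
    """Map the provider's raw response back into an OpenAI-style completion."""
    return {
        "choices": [
            {"message": {"role": "assistant", "content": raw_response.get("output", "")}}
        ],
        "usage": raw_response.get("usage", {}),
    }
```

The point of the layout is that each provider's quirks live next to its name, so a reader can open `cohere/` or `bedrock/` and find the transformation logic without tracing through shared call sites.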