# litellm/llms
Latest commit: `868cdd0226` by Ishaan Jaff — [Feat] Add Support for DELETE /v1/responses/{response_id} on OpenAI, Azure OpenAI (#10205), 2025-04-22 18:27:03 -07:00

Squashed commits:

* add transform_delete_response_api_request to base responses config
* add transform_delete_response_api_request
* add delete_response_api_handler
* fixes for deleting responses, response API
* add adelete_responses
* add async test_basic_openai_responses_delete_endpoint
* test_basic_openai_responses_delete_endpoint
* working delete for streaming on responses API
* fixes azure transformation
* TestAnthropicResponsesAPITest
* fix code check
* fix linting
* fixes for get_complete_url
* test_basic_openai_responses_streaming_delete_endpoint
* streaming fixes
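The `transform_delete_response_api_request` hook named in the commit above can be sketched roughly as follows. This is a minimal, hypothetical illustration of what such a delete-request transformation does (build the provider URL and auth headers for `DELETE /v1/responses/{response_id}`); the names, signature, and defaults here are assumptions, not litellm's actual implementation.

```python
# Hypothetical sketch of a delete-request transformation for the Responses API.
# Signature and defaults are illustrative, not litellm's real code.

def transform_delete_response_api_request(
    response_id: str,
    api_base: str = "https://api.openai.com/v1",
    api_key: str = "",
) -> tuple:
    """Build the (url, headers) pair for DELETE /v1/responses/{response_id}."""
    url = f"{api_base}/responses/{response_id}"
    headers = {"Authorization": f"Bearer {api_key}"}
    return url, headers

# Example: delete a stored response by id.
url, headers = transform_delete_response_api_request("resp_abc123", api_key="sk-test")
```

An Azure variant would override the same hook to produce its deployment-scoped URL and `api-key` header instead, which is why the transformation lives on the provider config rather than in the handler.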
| Name | Last commit | Date |
| --- | --- | --- |
| `ai21/chat` | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| `aiohttp_openai/chat` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `anthropic` | [Feat] Pass through endpoints - ensure PassthroughStandardLoggingPayload is logged and contains method, url, request/response body (#10194) | 2025-04-21 19:46:22 -07:00 |
| `azure` | [Feat] Add Support for DELETE /v1/responses/{response_id} on OpenAI, Azure OpenAI (#10205) | 2025-04-22 18:27:03 -07:00 |
| `azure_ai` | fix azure foundry phi error | 2025-04-19 22:10:18 -07:00 |
| `base_llm` | [Feat] Add Support for DELETE /v1/responses/{response_id} on OpenAI, Azure OpenAI (#10205) | 2025-04-22 18:27:03 -07:00 |
| `bedrock` | fix(bedrock): wrong system prompt transformation (#10120) | 2025-04-21 08:48:14 -07:00 |
| `cerebras` | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| `clarifai` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `cloudflare/chat` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `codestral/completion` | build(pyproject.toml): add new dev dependencies - for type checking (#9631) | 2025-03-29 11:02:13 -07:00 |
| `cohere` | Updated cohere v2 passthrough (#9997) | 2025-04-14 19:51:01 -07:00 |
| `custom_httpx` | [Feat] Add Support for DELETE /v1/responses/{response_id} on OpenAI, Azure OpenAI (#10205) | 2025-04-22 18:27:03 -07:00 |
| `databricks` | Support 'file' message type for VLLM video url's + Anthropic redacted message thinking support (#10129) | 2025-04-19 11:16:37 -07:00 |
| `deepgram` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `deepinfra/chat` | Squashed commit of the following: (#9709) | 2025-04-02 21:24:54 -07:00 |
| `deepseek` | Add Google AI Studio /v1/files upload API support (#9645) | 2025-04-02 08:56:58 -07:00 |
| `deprecated_providers` | build(pyproject.toml): add new dev dependencies - for type checking (#9631) | 2025-03-29 11:02:13 -07:00 |
| `empower/chat` | LiteLLM Common Base LLM Config (pt.3): Move all OAI compatible providers to base llm config (#7148) | 2024-12-10 17:12:42 -08:00 |
| `fireworks_ai` | Handle fireworks ai tool calling response (#10130) | 2025-04-19 09:37:45 -07:00 |
| `friendliai/chat` | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| `galadriel/chat` | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| `gemini` | Gemini-2.5-flash - support reasoning cost calc + return reasoning content (#10141) | 2025-04-19 09:20:52 -07:00 |
| `github/chat` | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| `groq` | fix(llm_http_handler.py): fix fake streaming (#10061) | 2025-04-16 10:15:11 -07:00 |
| `hosted_vllm` | Support 'file' message type for VLLM video url's + Anthropic redacted message thinking support (#10129) | 2025-04-19 11:16:37 -07:00 |
| `huggingface` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `infinity` | [Feat] Add infinity embedding support (contributor pr) (#10196) | 2025-04-21 20:01:29 -07:00 |
| `jina_ai` | Add cohere v2/rerank support (#8421) (#8605) | 2025-02-22 22:25:29 -08:00 |
| `litellm_proxy/chat` | Support 'file' message type for VLLM video url's + Anthropic redacted message thinking support (#10129) | 2025-04-19 11:16:37 -07:00 |
| `lm_studio` | fix: dictionary changed size during iteration error (#8327) (#8341) | 2025-02-07 16:20:28 -08:00 |
| `mistral` | fix(mistral_chat_transformation.py): add missing comma (#9606) | 2025-03-27 22:16:21 -07:00 |
| `nlp_cloud` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `nvidia_nim` | fix: dictionary changed size during iteration error (#8327) (#8341) | 2025-02-07 16:20:28 -08:00 |
| `ollama` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `oobabooga` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `openai` | [Feat] Add Support for DELETE /v1/responses/{response_id} on OpenAI, Azure OpenAI (#10205) | 2025-04-22 18:27:03 -07:00 |
| `openai_like` | fix(llm_http_handler.py): fix fake streaming (#10061) | 2025-04-16 10:15:11 -07:00 |
| `openrouter` | fix #8425, passthrough kwargs during acompletion, and unwrap extra_body for openrouter (#9747) | 2025-04-03 22:19:40 -07:00 |
| `perplexity/chat` | fix missing comma | 2025-02-24 01:00:07 +05:30 |
| `petals` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `predibase` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `replicate` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `sagemaker` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `sambanova` | update sambanova docs (#8875) | 2025-02-27 20:23:33 -08:00 |
| `snowflake` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `together_ai` | Squashed commit of the following: (#9709) | 2025-04-02 21:24:54 -07:00 |
| `topaz` | Add /vllm/* and /mistral/* passthrough endpoints (adds support for Mistral OCR via passthrough) | 2025-04-14 22:06:33 -07:00 |
| `triton` | fix(triton/completion/transformation.py): remove bad_words / stop wor… (#10163) | 2025-04-19 11:23:37 -07:00 |
| `vertex_ai` | Gemini-2.5-flash improvements (#10198) | 2025-04-21 22:48:00 -07:00 |
| `vllm` | Add /vllm/* and /mistral/* passthrough endpoints (adds support for Mistral OCR via passthrough) | 2025-04-14 22:06:33 -07:00 |
| `voyage/embedding` | VertexAI non-jsonl file storage support (#9781) | 2025-04-09 14:01:48 -07:00 |
| `watsonx` | to get API key from environment viarble of WATSONX_APIKEY (#10131) | 2025-04-19 11:25:14 -07:00 |
| `xai` | Add /vllm/* and /mistral/* passthrough endpoints (adds support for Mistral OCR via passthrough) | 2025-04-14 22:06:33 -07:00 |
| `__init__.py` | | |
| `base.py` | build(pyproject.toml): add new dev dependencies - for type checking (#9631) | 2025-03-29 11:02:13 -07:00 |
| `baseten.py` | test(base_llm_unit_tests.py): add test to ensure drop params is respe… (#8224) | 2025-02-03 16:04:44 -08:00 |
| `custom_llm.py` | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |
| `maritalk.py` | build(pyproject.toml): add new dev dependencies - for type checking (#9631) | 2025-03-29 11:02:13 -07:00 |
| `ollama_chat.py` | Litellm dev 03 08 2025 p3 (#9089) | 2025-03-09 18:20:56 -07:00 |
| `README.md` | LiteLLM Minor Fixes and Improvements (09/13/2024) (#5689) | 2024-09-14 10:02:55 -07:00 |
| `volcengine.py` | (code quality) run ruff rule to ban unused imports (#7313) | 2024-12-19 12:33:42 -08:00 |

## File Structure

August 27th, 2024

To make it easy to see how calls are transformed for each model/provider, we are moving all supported LiteLLM providers into a folder structure where each folder is named after the provider it supports.

Each folder will contain a `*_transformation.py` file with all of the request/response transformation logic for that provider, making it easy to see how calls are modified.

E.g. `cohere/`, `bedrock/`.
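The pattern such a `*_transformation.py` file follows can be sketched as below. This is a hypothetical, simplified example to show the idea of mapping an OpenAI-style request to a provider's native format and back; the class name, field names, and method signatures are illustrative assumptions, not the actual interfaces under `litellm/llms/`.

```python
# Hypothetical sketch of the per-provider *_transformation.py pattern.
# All names here are illustrative; see the real configs under
# litellm/llms/<provider>/ for the actual interfaces.

class ExampleProviderConfig:
    """Maps an OpenAI-style request to a provider's native format and back."""

    def transform_request(self, model, messages, optional_params):
        # Rename OpenAI-style params to the provider's equivalents.
        payload = {"model": model, "inputs": messages}
        if "max_tokens" in optional_params:
            payload["max_output_tokens"] = optional_params["max_tokens"]
        return payload

    def transform_response(self, raw_response):
        # Map the provider's reply back to the OpenAI chat-completion shape.
        return {
            "choices": [
                {"message": {"role": "assistant", "content": raw_response.get("text", "")}}
            ]
        }
```

Keeping both directions of the mapping in one file per provider means a reader can open a single module to see exactly how a call is rewritten on the way in and normalized on the way out.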