Mirror of https://github.com/BerriAI/litellm.git, synced 2025-04-26.
Latest commit: Add date picker to usage tab + Add reasoning_content token tracking across all providers on streaming (#9722)

* feat(new_usage.tsx): add date picker for the new usage tab, allowing users to look back on their usage data
* feat(anthropic/chat/transformation.py): report reasoning tokens in completion token details, allowing tracking of how many reasoning tokens are actually being used
* feat(streaming_chunk_builder.py): return reasoning_tokens in anthropic/openai streaming responses, allowing reasoning_token usage to be tracked across providers
* Fix update team metadata + fix bulk adding models on UI (#9721)
  * fix(handle_add_model_submit.tsx): fix bulk adding models
  * fix(team_info.tsx): fix team metadata update; fixes https://github.com/BerriAI/litellm/issues/9689
* (v0) Unified file id - allow calling multiple providers with the same file id (#9718)
  * feat(files_endpoints.py): initial commit adding 'target_model_names' support, allowing a developer to specify all the models they want to call with the file
  * feat(files_endpoints.py): return unified files endpoint
  * test(test_files_endpoints.py): add validation test for when an invalid purpose is submitted
  * feat: more updates
  * feat: initial working commit of unified file id translation
  * fix: additional fixes
  * fix(router.py): remove model replace logic in jsonl on acreate_file; enables file upload to work for chat completion requests as well
  * fix(files_endpoints.py): remove whitespace around model name
  * fix(azure/handler.py): return acreate_file with the correct response type
  * fix: fix linting errors
  * test: fix mock test to run on github actions
  * fix: fix ruff errors
  * fix: fix file too large error
  * fix(utils.py): remove redundant var
  * test: modify test to work on github actions
  * test: update tests
  * test: more debug logs to understand ci/cd issue
  * test: fix test for respx
  * test: skip mock respx test; fails on ci/cd, not clear why
  * fix: fix ruff check
  * fix: fix test
  * fix(model_connection_test.tsx): fix linting error
  * test: update unit tests
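For context on the reasoning-token tracking mentioned above, here is a minimal sketch of reassembling streamed chunks and reading back the reasoning token count. It assumes a reasoning-capable Anthropic model and that the rebuilt usage object exposes OpenAI-style completion_tokens_details; the model name and request settings are illustrative assumptions, not part of the commit.

```python
import litellm

# Minimal sketch (assumptions noted above): stream a completion, rebuild the
# full response from the streamed chunks, then read the reasoning token count
# from the usage details if the provider reported one.
messages = [{"role": "user", "content": "Briefly explain quicksort."}]

chunks = []
for chunk in litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",  # assumed reasoning-capable model
    messages=messages,
    stream=True,
    stream_options={"include_usage": True},  # ask for usage on the final chunk
):
    chunks.append(chunk)

# stream_chunk_builder reassembles streamed chunks into a single response object.
response = litellm.stream_chunk_builder(chunks, messages=messages)

details = getattr(response.usage, "completion_tokens_details", None)
if details is not None:
    print("reasoning tokens:", getattr(details, "reasoning_tokens", None))
```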
ai21/chat
aiohttp_openai/chat
anthropic
azure
azure_ai
base_llm
bedrock
cerebras
clarifai
cloudflare/chat
codestral/completion
cohere
custom_httpx
databricks
deepgram
deepinfra/chat
deepseek
deprecated_providers
empower/chat
fireworks_ai
friendliai/chat
galadriel/chat
gemini
github/chat
groq
hosted_vllm
huggingface
infinity/rerank
jina_ai
litellm_proxy/chat
lm_studio
mistral
nlp_cloud
nvidia_nim
ollama
oobabooga
openai
openai_like
openrouter
perplexity/chat
petals
predibase
replicate
sagemaker
sambanova
snowflake
together_ai
topaz
triton
vertex_ai
vllm/completion
voyage/embedding
watsonx
xai
__init__.py
base.py
baseten.py
custom_llm.py
maritalk.py
ollama_chat.py
README.md
volcengine.py
File Structure
August 27th, 2024
To make it easy to see how calls are transformed for each model/provider, we are working on moving all supported litellm providers to a folder structure, where the folder name is the supported litellm provider name.

Each folder will contain a *_transformation.py file, which holds all the request/response transformation logic, making it easy to see how calls are modified. E.g. cohere/, bedrock/.
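As a rough illustration of the layout described above, a provider transformation module might look something like the sketch below. The class and method names (ExampleProviderChatConfig, transform_request, transform_response) are hypothetical placeholders, not litellm's actual base classes or signatures.

```python
# Hypothetical example of what a provider's chat *_transformation.py could
# contain; names and signatures here are illustrative, not litellm's real API.
from typing import Any, Dict, List


class ExampleProviderChatConfig:
    """Maps OpenAI-style chat requests to and from a provider's native format."""

    def transform_request(
        self,
        model: str,
        messages: List[Dict[str, Any]],
        optional_params: Dict[str, Any],
    ) -> Dict[str, Any]:
        # Build the provider-specific request body from OpenAI-style inputs.
        return {
            "model": model,
            "prompt": "\n".join(str(m.get("content", "")) for m in messages),
            **optional_params,
        }

    def transform_response(self, raw_response: Dict[str, Any]) -> Dict[str, Any]:
        # Convert the provider's raw response back into an OpenAI-style shape.
        return {
            "choices": [
                {
                    "message": {
                        "role": "assistant",
                        "content": raw_response.get("output", ""),
                    },
                    "finish_reason": raw_response.get("stop_reason", "stop"),
                }
            ],
            "usage": raw_response.get("usage", {}),
        }
```

Keeping both directions of the mapping in one file per provider is what makes it straightforward to audit how a call is modified for that provider.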