Mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-26 11:14:04 +00:00)
Latest commit:

* fix(utils.py): return citations for perplexity streaming. Fixes https://github.com/BerriAI/litellm/issues/5535
* fix(anthropic/chat.py): support fallbacks for anthropic streaming (#5542)
  * fix(anthropic/chat.py): support fallbacks for anthropic streaming. Fixes https://github.com/BerriAI/litellm/issues/5512
  * fix(anthropic/chat.py): use the module-level HTTP client if none is given (prevents early client closure)
  * fix: fix linting errors
  * fix(http_handler.py): fix raise_for_status error handling
  * test: retry flaky test
  * fix: fix OTEL type
  * fix(bedrock/embed): fix error raising
  * test(test_openai_batches_and_files.py): skip Azure batches test for now (quota exceeded)
  * fix(test_router.py): skip Azure batch route test for now (hit batch quota limits)

  Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
* All `model_group_alias` entries should show up in `/models`, `/model/info`, and `/model_group/info` (#5539)
  * fix(router.py): support returning model_alias model names in `/v1/models`
  * fix(proxy_server.py): support returning model aliases on `/model/info`
  * feat(router.py): support returning the model group alias for `/model_group/info`
  * fix(proxy_server.py): fix linting errors
  * build(model_prices_and_context_window.json): add Amazon Titan Text Premier pricing information. Closes https://github.com/BerriAI/litellm/issues/5560
  * feat(litellm_logging.py): log the standard logging response object for pass-through endpoints, so Bedrock /invoke agent calls are correctly logged to Langfuse + S3
  * fix(success_handler.py): fix linting errors
  * fix(team_endpoints.py): allow admins to update team member budgets

  Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
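The `model_group_alias` and streaming-fallback items above concern the litellm Router. As a rough illustration of the options those changes touch, here is a minimal sketch assuming placeholder model names, keys, and alias mappings; it is not the exact configuration or code from these PRs.

```python
# Sketch: a Router whose alias ("my-default-model") is the kind of entry the
# #5539 change surfaces on /models, /model/info, and /model_group/info, plus a
# fallback mapping so a failed Anthropic call (including a streamed one) can
# fail over to another model group. All names and keys below are placeholders.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "claude-3-haiku",
            "litellm_params": {
                "model": "anthropic/claude-3-haiku-20240307",
                "api_key": "sk-ant-...",  # placeholder
            },
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "openai/gpt-3.5-turbo",
                "api_key": "sk-...",  # placeholder
            },
        },
    ],
    # Alias -> underlying model group; aliases like this are what show up
    # alongside regular deployments on the proxy's model-info endpoints.
    model_group_alias={"my-default-model": "gpt-3.5-turbo"},
    # If "claude-3-haiku" errors, retry the request against "gpt-3.5-turbo".
    fallbacks=[{"claude-3-haiku": ["gpt-3.5-turbo"]}],
)

# Streamed completion; chunks arrive incrementally and fallbacks can still
# apply if the primary deployment fails.
response = router.completion(
    model="claude-3-haiku",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in response:
    print(chunk)
```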
AI21/
anthropic/
AzureOpenAI/
bedrock/
cerebras/
cohere/
custom_httpx/
files_apis/
fine_tuning_apis/
huggingface_llms_metadata/
OpenAI/
prompt_templates/
sagemaker/
togetherai/
tokenizers/
vertex_ai_and_google_ai_studio/
__init__.py
aleph_alpha.py
azure_text.py
base.py
base_aws_llm.py
baseten.py
clarifai.py
cloudflare.py
custom_llm.py
databricks.py
fireworks_ai.py
gemini.py
huggingface_restapi.py
maritalk.py
nlp_cloud.py
nvidia_nim.py
ollama.py
ollama_chat.py
oobabooga.py
openrouter.py
palm.py
petals.py
predibase.py
replicate.py
text_completion_codestral.py
together_ai.py
triton.py
vllm.py
volcengine.py
watsonx.py