* fix(langfuse.py): support new langfuse prompt_chat class init params
* fix(langfuse.py): handle new init values on prompt chat + prompt text templates; fixes an error raised during langfuse logging
* docs(openai_compatible.md): clarify that the `openai/` prefix handles correct routing for the `/v1/completions` route (see the routing sketch below)

  Fixes https://github.com/BerriAI/litellm/issues/5876
* fix(utils.py): handle unmapped gemini model optional param translation (see the `drop_params` sketch below)

  Fixes https://github.com/BerriAI/litellm/issues/5888
* fix(o1_transformation.py): fix o1 validation so it does not raise an error when temperature=1 (see the o1 sketch below)

  Fixes https://github.com/BerriAI/litellm/issues/5884
* fix(prisma_client.py): refresh IAM token

  Fixes https://github.com/BerriAI/litellm/issues/5896
* fix: pass drop params where required
* fix(utils.py): pass drop_params correctly
* fix(types/vertex_ai.py): fix generation config
* test(test_max_completion_tokens.py): fix test
* fix(vertex_and_google_ai_studio_gemini.py): fix map openai params
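To make the `openai/` routing clarification concrete, here is a minimal sketch of calling an OpenAI-compatible `/v1/completions` endpoint through litellm. The model name, `api_base`, and key below are placeholders, not values from this commit:

```python
import litellm

# Sketch: the "openai/" prefix tells litellm to treat the target as an
# OpenAI-compatible server. Endpoint, model name, and key are assumptions
# for illustration only.
response = litellm.text_completion(
    model="openai/my-model",              # "openai/" -> OpenAI-compatible routing
    prompt="Say hello",
    api_base="http://localhost:8000/v1",  # assumed local OpenAI-compatible server
    api_key="sk-placeholder",
)
print(response.choices[0].text)
```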
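The o1 validation fix concerns `temperature=1`, which is the fixed value o1 models use; a minimal sketch of the call that this fix stops rejecting, assuming an `o1-preview` deployment:

```python
import litellm

# Sketch: temperature=1 matches o1's fixed default, so after this fix
# litellm no longer raises a validation error for it. Model name assumed.
response = litellm.completion(
    model="o1-preview",
    messages=[{"role": "user", "content": "Hello"}],
    temperature=1,  # other values are still rejected upstream by o1 models
)
```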
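For the gemini optional-param and `drop_params` fixes, a sketch of the intended behavior: with `drop_params=True`, OpenAI params the target provider does not support are dropped instead of raising. The specific param and model shown are assumptions for illustration:

```python
import litellm

# Sketch: drop_params=True asks litellm to silently drop unsupported
# OpenAI params rather than raise. logit_bias is used here as an example
# of a param assumed to be unmapped for Gemini.
response = litellm.completion(
    model="gemini/gemini-1.5-pro",
    messages=[{"role": "user", "content": "Hello"}],
    logit_bias={"1234": 5},  # assumed unsupported for this provider
    drop_params=True,        # dropped silently instead of raising an error
)
```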
Directory listing:

adapters/
assistants/
batches/
deprecated_litellm_server/
files/
fine_tuning/
integrations/
litellm_core_utils/
llms/
proxy/
rerank_api/
router_strategy/
router_utils/
secret_managers/
tests/
types/
__init__.py
_logging.py
_redis.py
_service_logger.py
_version.py
budget_manager.py
caching.py
cost.json
cost_calculator.py
exceptions.py
main.py
model_prices_and_context_window_backup.json
py.typed
requirements.txt
router.py
scheduler.py
timeout.py
utils.py