forked from phoenix/litellm-mirror
* fix(langfuse.py): support new langfuse prompt_chat class init params
* fix(langfuse.py): handle new init values on prompt chat + prompt text templates; fixes error caused during langfuse logging
* docs(openai_compatible.md): clarify `openai/` handles correct routing for the `/v1/completions` route. Fixes https://github.com/BerriAI/litellm/issues/5876
* fix(utils.py): handle unmapped gemini model optional param translation. Fixes https://github.com/BerriAI/litellm/issues/5888
* fix(o1_transformation.py): fix o1 validation to not raise an error if temperature=1. Fixes https://github.com/BerriAI/litellm/issues/5884
* fix(prisma_client.py): refresh IAM token. Fixes https://github.com/BerriAI/litellm/issues/5896
* fix: pass drop params where required
* fix(utils.py): pass drop_params correctly
* fix(types/vertex_ai.py): fix generation config
* test(test_max_completion_tokens.py): fix test
* fix(vertex_and_google_ai_studio_gemini.py): fix map openai params
Directories:

- caching
- completion
- debugging
- embedding
- extras
- langchain
- observability
- pass_through
- projects
- providers
- proxy
- tutorials
Files:

- anthropic_completion.md
- assistants.md
- audio_transcription.md
- batches.md
- budget_manager.md
- contact.md
- contributing.md
- data_security.md
- default_code_snippet.md
- enterprise.md
- exception_mapping.md
- fine_tuning.md
- getting_started.md
- hosted.md
- image_generation.md
- index.md
- load_test.md
- migration.md
- migration_policy.md
- oidc.md
- old_guardrails.md
- projects.md
- prompt_injection.md
- proxy_api.md
- proxy_server.md
- rerank.md
- routing.md
- rules.md
- scheduler.md
- sdk_custom_pricing.md
- secret.md
- set_keys.md
- simple_proxy_old_doc.md
- text_to_speech.md
- troubleshoot.md