---
title: v1.63.2-stable
slug: v1.63.2-stable
date: 2025-03-08T10:00:00
authors:
  - name: Krrish Dholakia
    title: CEO, LiteLLM
    url: https://www.linkedin.com/in/krish-d/
    image_url: 1737327772
  - name: Ishaan Jaffer
    title: CTO, LiteLLM
    url: https://www.linkedin.com/in/reffajnaahsi/
    image_url: 1675971026
tags: [llm translation, thinking, reasoning_content, claude-3-7-sonnet]
hide_table_of_contents: false
---

import Image from '@theme/IdealImage';

These are the changes since v1.61.20-stable.

This release is primarily focused on:

- LLM Translation improvements (further improvements to `thinking` content handling)
- UI improvements (error logs are now shown on the UI)

:::info

This release will be live on 03/09/2025

:::

<Image img={require('../../img/release_notes/v1632_release.jpg')} />

## Demo Instance

Here's a Demo Instance to test changes:

## New Models / Updated Models

1. Add `supports_pdf_input` for specific Bedrock Claude models (see the lookup sketch after this list). PR
2. Add pricing for Amazon EU models. PR
3. Fix Azure o1-mini pricing. PR
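
If you want to check the new flags and pricing locally, here is a minimal sketch using LiteLLM's model info lookup; the Bedrock model id below is a placeholder, so substitute the model you actually call:

```python
import litellm

# Placeholder model id; use the Bedrock Claude / Amazon EU model you deploy.
info = litellm.get_model_info("bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0")

print(info.get("supports_pdf_input"))  # capability flag referenced above
print(info.get("input_cost_per_token"), info.get("output_cost_per_token"))  # pricing fields
```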

## LLM Translation

<Image img={require('../../img/release_notes/anthropic_thinking.jpg')}/>

1. Support `/openai/` passthrough for Assistant endpoints. Get Started
2. Bedrock Claude - fix tool calling transformation on the invoke route. Get Started
3. Bedrock Claude - `response_format` support for Claude on the invoke route. Get Started
4. Bedrock - pass `description` if set in `response_format`. Get Started
5. Bedrock - fix passing `response_format: {"type": "text"}`. PR
6. OpenAI - handle sending `image_url` as a str to OpenAI. Get Started
7. Deepseek - return `reasoning_content`, previously missing on streaming. Get Started
8. Caching - support caching on reasoning content. Get Started
9. Bedrock - handle thinking blocks in assistant messages. Get Started
10. Anthropic - return `signature` on streaming. Get Started
    - Note: we've also migrated from `signature_delta` to `signature`. Read more
11. Support `format` param for specifying image type. Get Started
12. Anthropic - `/v1/messages` endpoint - `thinking` param support (a general `thinking` / `reasoning_content` sketch follows this list). Get Started
    - Note: this refactors the [BETA] unified `/v1/messages` endpoint to work only with the Anthropic API.
13. Vertex AI - handle `$id` in response schema when calling Vertex AI. Get Started
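
Several of the items above touch the `thinking` / `reasoning_content` flow. As a quick orientation, here is a minimal sketch using the LiteLLM Python SDK with a thinking-capable Claude model; the model id, token budget, and prompt are placeholders rather than values from the linked docs:

```python
import litellm

# Pass the Anthropic-style `thinking` param through litellm.completion.
response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",  # placeholder model id
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    thinking={"type": "enabled", "budget_tokens": 1024},
    max_tokens=2048,
)

message = response.choices[0].message
print(message.reasoning_content)  # the model's reasoning, surfaced by LiteLLM
print(message.content)            # the final answer
```

When streaming, the same reasoning is surfaced incrementally through the delta's `reasoning_content`, alongside the `signature` field noted in item 10.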

## Spend Tracking Improvements

1. Batches API - fix cost calculation to run on `retrieve_batch` (see the sketch after this list). Get Started
2. Batches API - log batch models in spend logs / standard logging payload. Get Started
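
For context, here is a minimal sketch of retrieving a batch through the LiteLLM proxy, which is where the cost calculation on `retrieve_batch` now runs; the base URL, virtual key, and batch id are placeholders:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # placeholder LiteLLM proxy URL
    api_key="sk-1234",                 # placeholder LiteLLM virtual key
)

# With this release, spend for the batch is calculated when it is retrieved.
batch = client.batches.retrieve("batch_abc123")  # placeholder batch id
print(batch.status)
```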

## Management Endpoints / UI

<Image img={require('../../img/release_notes/error_logs.jpg')} />

1. Virtual Keys Page
    - Allow team/org filters to be searchable on the Create Key page
    - Add `created_by` and `updated_by` fields to the Keys table
    - Show `user_email` on the Keys table
    - Show 100 keys per page, use full height, increase width of key alias
2. Logs Page
    - Show error logs on the LiteLLM UI
    - Allow internal users to view their own logs
3. Internal Users Page
    - Allow admin to control default model access for internal users
4. Fix session handling with cookies

## Logging / Guardrail Integrations

1. Fix Prometheus metrics with custom metrics, when keys containing a `team_id` make requests. PR

## Performance / Loadbalancing / Reliability improvements

1. Cooldowns - support cooldowns on models called with client-side credentials. Get Started
2. Tag-based Routing - ensure tag-based routing works across all endpoints (`/embeddings`, `/image_generation`, etc.) (see the sketch after this list). Get Started
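
As a rough illustration of tag-based routing on a non-chat endpoint, here is a minimal sketch using the OpenAI SDK against a LiteLLM proxy; the base URL, key, model alias, and tag name are placeholders, and it assumes tag filtering is enabled in your router settings:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")  # placeholders

response = client.embeddings.create(
    model="text-embedding-model",        # placeholder model alias on the proxy
    input="tag-based routing now applies to /embeddings too",
    extra_body={"tags": ["free-tier"]},  # assumed request-level tags used for routing
)
print(response.data[0].embedding[:5])
```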

## General Proxy Improvements

1. Raise `BadRequestError` when an unknown model is passed in a request (see the sketch after this list)
2. Enforce model access restrictions on the Azure OpenAI proxy route
3. Reliability fix - handle emojis in text - fix orjson error
4. Model access patch - don't overwrite `litellm.anthropic_models` when running auth checks
5. Enable setting timezone information in the Docker image
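
To show what item 1 means for callers, here is a minimal sketch against a LiteLLM proxy; the base URL and key are placeholders, and the model name is intentionally unknown:

```python
from openai import OpenAI, BadRequestError

client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")  # placeholders

try:
    client.chat.completions.create(
        model="not-a-real-model",  # not configured on the proxy
        messages=[{"role": "user", "content": "hi"}],
    )
except BadRequestError as e:
    # Unknown models now surface as a 400 / BadRequestError instead of a generic failure.
    print(f"proxy rejected the request: {e}")
```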

## Complete Git Diff

Here's the complete git diff.