---
title: v1.67.0-stable - Unified Responses API
slug: v1.67.0-stable
date: 2025-04-19T10:00:00
authors:
  - name: Krrish Dholakia
    title: CEO, LiteLLM
    url: https://www.linkedin.com/in/krish-d/
    image_url: https://media.licdn.com/dms/image/v2/D4D03AQGrlsJ3aqpHmQ/profile-displayphoto-shrink_400_400/B4DZSAzgP7HYAg-/0/1737327772964?e=1749686400&v=beta&t=Hkl3U8Ps0VtvNxX0BNNq24b4dtX5wQaPFp6oiKCIHD8
  - name: Ishaan Jaffer
    title: CTO, LiteLLM
    url: https://www.linkedin.com/in/reffajnaahsi/
    image_url: https://pbs.twimg.com/profile_images/1613813310264340481/lz54oEiB_400x400.jpg

tags: ["sso", "unified_file_id", "cost_tracking", "security"]
hide_table_of_contents: false
---

## New Models / Updated Models

- **OpenAI**
    1. gpt-4.1 pricing - https://github.com/BerriAI/litellm/pull/9990
    2. o4 - correctly map o4 to the OpenAI o-series model family
- **Azure AI**
    1. Phi-4 output cost per token fix - https://github.com/BerriAI/litellm/pull/9880
    2. Responses API support - https://github.com/BerriAI/litellm/pull/10116
- **Anthropic**
    1. Redacted message thinking support - https://github.com/BerriAI/litellm/pull/10129
- **Cohere**
    1. `/v2/chat` passthrough endpoint support w/ cost tracking - https://github.com/BerriAI/litellm/pull/9997
- **Azure**
    1. Support Azure tenant_id/client_id env vars (see the sketch after this list) - https://github.com/BerriAI/litellm/pull/9993
    2. Fix response_format check for 2025+ API versions - https://github.com/BerriAI/litellm/pull/9993
- **VLLM**
    1. Files - support the 'file' message type for VLLM video URLs (example after this list) - https://github.com/BerriAI/litellm/pull/10129
    2. Passthrough - new `/vllm/` passthrough endpoint support - https://github.com/BerriAI/litellm/pull/10002
- **Mistral**
    1. New `/mistral` passthrough endpoint support - https://github.com/BerriAI/litellm/pull/10002
- **AWS**
    1. New mapped Bedrock regions - https://github.com/BerriAI/litellm/pull/9430
- **VertexAI / Google AI Studio**
    1. Gemini - response format - retain schema field ordering for Google Gemini and Vertex by specifying propertyOrdering - https://github.com/BerriAI/litellm/pull/9828
    2. Gemini-2.5-flash - return reasoning content
    3. Gemini-2.5-flash - pricing + model information - https://github.com/BerriAI/litellm/pull/10125
    4. Passthrough - new `/vertex_ai/discovery` route - enables calling AgentBuilder API routes - https://github.com/BerriAI/litellm/pull/10084
- **Fireworks AI**
    1. Return tool-calling responses in the `tool_calls` field (Fireworks incorrectly returns this as a JSON string in `content`) - https://github.com/BerriAI/litellm/pull/10130
- **Triton**
    1. Remove hardcoded bad_words / stop words from the `/generate` call - https://github.com/BerriAI/litellm/pull/10163
- **Other**
    1. Support for all LiteLLM providers on the Responses API (works with Codex; see the sketch after this list) - https://github.com/BerriAI/litellm/pull/10132
    2. Fix combining multiple tool calls in streaming responses - https://github.com/BerriAI/litellm/pull/10040
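
Here's a minimal sketch of the new Azure auth flow, assuming the standard `AZURE_TENANT_ID` / `AZURE_CLIENT_ID` / `AZURE_CLIENT_SECRET` variable names; the deployment name, endpoint, and API version are placeholders:

```python
import os
import litellm

# Assumption: with these service-principal env vars set, LiteLLM can
# authenticate to Azure without an explicit api_key in each request.
os.environ["AZURE_TENANT_ID"] = "your-tenant-id"
os.environ["AZURE_CLIENT_ID"] = "your-client-id"
os.environ["AZURE_CLIENT_SECRET"] = "your-client-secret"
os.environ["AZURE_API_BASE"] = "https://your-resource.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2025-03-01-preview"

response = litellm.completion(
    model="azure/my-gpt-4.1-deployment",  # placeholder deployment name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```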
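
And a sketch of the new 'file' message type for VLLM video URLs. The content-part shape follows the OpenAI `file` spec; the model name, `api_base`, and video URL are placeholders:

```python
import litellm

response = litellm.completion(
    model="hosted_vllm/my-video-model",  # placeholder vLLM deployment
    api_base="http://localhost:8000/v1",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize this video."},
                # Assumption: the video URL rides in the OpenAI-style
                # "file" content part and is mapped to vLLM's video input.
                {"type": "file", "file": {"file_id": "https://example.com/demo.mp4"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```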
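
Finally, a sketch of the unified Responses API: any LiteLLM provider model string should now work, not just OpenAI (the Anthropic model below is illustrative):

```python
import litellm

# Responses API request routed to a non-OpenAI provider.
response = litellm.responses(
    model="anthropic/claude-3-5-sonnet-20240620",
    input="Write a one-line summary of what a model proxy does.",
)
print(response)
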
## Spend Tracking Improvements

- **Cost Control** - inject cache control points into the prompt for cost reduction (see the sketch after this list) - https://github.com/BerriAI/litellm/pull/10000
- **Spend Tags** - spend tags in headers - support `x-litellm-tags` even if tag-based routing is not enabled (example after this list)
- **Gemini-2.5-flash** - support cost calculation for reasoning tokens
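
For context, here is a manually placed cache control point using LiteLLM's existing message-level `cache_control` support (Anthropic-style prompt caching); the new Cost Control feature injects these points for you via proxy config, so see the PR for the exact setting:

```python
import litellm

response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",  # placeholder model
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    "text": "You are a helpful assistant. <large reusable context here>",
                    # Cached across requests to cut input-token cost.
                    "cache_control": {"type": "ephemeral"},
                }
            ],
        },
        {"role": "user", "content": "Hello"},
    ],
)
```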
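
And a sketch of sending spend tags via headers to the proxy; the base URL and key are placeholders, and the comma-separated tag format is an assumption:

```python
import requests

resp = requests.post(
    "http://localhost:4000/v1/chat/completions",
    headers={
        "Authorization": "Bearer sk-1234",  # placeholder virtual key
        "x-litellm-tags": "prod,my-feature",  # tags tracked for spend
    },
    json={
        "model": "gpt-4.1",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json()["id"])
```
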
## Management Endpoints / UI

- **Users**
    1. Show created_at and updated_at on the users page - https://github.com/BerriAI/litellm/pull/10033
- **Virtual Keys**
    1. Filter by key alias - https://github.com/BerriAI/litellm/pull/10085
- **Usage Tab**
    1. Team-based usage
        1. New `LiteLLM_DailyTeamSpend` table for aggregate team-based usage logging - https://github.com/BerriAI/litellm/pull/10039
        2. New team-based usage dashboard + new `/team/daily/activity` API (see the sketch after this list) - https://github.com/BerriAI/litellm/pull/10081
        3. Return team alias on the `/team/daily/activity` API - https://github.com/BerriAI/litellm/pull/10157
        4. Allow internal users to view spend for teams they belong to - https://github.com/BerriAI/litellm/pull/10157
        5. Allow viewing top keys by team
    2. Tag-based usage
        1. New `LiteLLM_DailyTagSpend` table for aggregate tag-based usage logging - https://github.com/BerriAI/litellm/pull/10071
        2. Restrict to only Proxy Admins - https://github.com/BerriAI/litellm/pull/10157
        3. Allow viewing top keys by tag
        4. Return tags passed in the request (i.e. dynamic tags) on `/tag/list`
    3. Track prompt caching metrics in daily user, team, and tag tables - https://github.com/BerriAI/litellm/pull/10029
    4. Show usage by key (on the all-up, team, and tag usage dashboards) - https://github.com/BerriAI/litellm/pull/10157
    5. Swap the old usage tab with the new usage tab
- **Models**
    1. Make columns resizable/hideable - https://github.com/BerriAI/litellm/pull/10119
- **API Playground**
    1. Allow internal users to call the API playground - https://github.com/BerriAI/litellm/pull/10157
- **SCIM**
    1. Add LiteLLM SCIM integration for team and user management (see the sketch after this list) - https://github.com/BerriAI/litellm/pull/10072
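
A sketch of querying the new team usage API; the date-window parameter names are assumptions, so check the PR for the exact request schema:

```python
import requests

resp = requests.get(
    "http://localhost:4000/team/daily/activity",
    headers={"Authorization": "Bearer sk-1234"},  # placeholder key
    params={"start_date": "2025-04-01", "end_date": "2025-04-19"},
)
print(resp.json())  # aggregate daily spend rows per team
```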
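
And a sketch of provisioning a user through the SCIM integration with a standard SCIM 2.0 payload; the `/scim/v2` route prefix follows SCIM convention and is an assumption here:

```python
import requests

resp = requests.post(
    "http://localhost:4000/scim/v2/Users",  # assumed route prefix
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "jane@example.com",
        "displayName": "Jane Doe",
        "active": True,
    },
)
print(resp.status_code)  # 201 on successful provisioning
```
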
## Logging / Guardrail Integrations

- **GCS**
    1. Fix GCS Pub/Sub logging with env var `GCS_PROJECT_ID` - https://github.com/BerriAI/litellm/pull/10042
- **AIM**
    1. Pass the LiteLLM call ID to Aim guardrails on pre- and post-hook calls - https://github.com/BerriAI/litellm/pull/10021
- **Azure blob storage**
    1. Ensure logging works in high-throughput scenarios - https://github.com/BerriAI/litellm/pull/9962

## General Proxy Improvements

- **Support setting `litellm.modify_params` via env var** (see the sketch after this list) - https://github.com/BerriAI/litellm/pull/9964
- **Model Discovery** - check providers' `/models` endpoints when calling the proxy's `/v1/models` endpoint - https://github.com/BerriAI/litellm/pull/9958
- **`/utils/token_counter`** - fix retrieving custom tokenizers for db models (example after this list) - https://github.com/BerriAI/litellm/pull/10047
- **Prisma migrate** - handle existing columns in the db table - https://github.com/BerriAI/litellm/pull/10138
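
A sketch of the env var setting; the variable name follows LiteLLM's `LITELLM_<FLAG>` convention and is an assumption, so see the PR for the exact name:

```python
import os

# Previously this required code: litellm.modify_params = True
# Assumed env var equivalent (check the PR for the exact name):
os.environ["LITELLM_MODIFY_PARAMS"] = "True"
```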
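
And a sketch of calling the token counter endpoint; the base URL and key are placeholders:

```python
import requests

resp = requests.post(
    "http://localhost:4000/utils/token_counter",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "model": "gpt-4.1",
        "messages": [{"role": "user", "content": "Hello, world"}],
    },
)
print(resp.json())  # token count for the given model's tokenizer
```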