---
title: v1.56.3
slug: v1.56.3
date: 2024-12-28T10:00:00
authors:
  - name: Krrish Dholakia
    title: CEO, LiteLLM
    url: https://www.linkedin.com/in/krish-d/
    image_url: 1737327772
  - name: Ishaan Jaffer
    title: CTO, LiteLLM
    url: https://www.linkedin.com/in/reffajnaahsi/
    image_url: 1675971026
tags: [guardrails, logging, virtual key management, new models]
hide_table_of_contents: false
---

import Image from '@theme/IdealImage';

`guardrails`, `logging`, `virtual key management`, `new models`

:::info

Get a 7-day free trial for LiteLLM Enterprise here.

**no call needed**

:::

## New Features

### Log Guardrail Traces

Track guardrail failure rates and catch when a guardrail is going rogue and failing requests. Start here

#### Traced Guardrail Success

<Image img={require('../../img/gd_success.png')} />

#### Traced Guardrail Failure

<Image img={require('../../img/gd_fail.png')} />

### `/guardrails/list`

`/guardrails/list` allows clients to view the available guardrails and the parameters each guardrail supports.

```shell
curl -X GET 'http://0.0.0.0:4000/guardrails/list'
```

Expected response

```json
{
    "guardrails": [
        {
            "guardrail_name": "aporia-post-guard",
            "guardrail_info": {
                "params": [
                    {
                        "name": "toxicity_score",
                        "type": "float",
                        "description": "Score between 0-1 indicating content toxicity level"
                    },
                    {
                        "name": "pii_detection",
                        "type": "boolean"
                    }
                ]
            }
        }
    ]
}
```
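
If you want to inspect the configured guardrails from code rather than the shell, the same endpoint can be queried from Python. This is a minimal sketch; the proxy base URL is a placeholder, and you may need to add an `Authorization` header depending on how your proxy is configured.

```python
import requests

# Query the LiteLLM proxy for its configured guardrails.
# The base URL is a placeholder for your own deployment; add an
# Authorization header here if your proxy requires authentication.
PROXY_BASE_URL = "http://0.0.0.0:4000"

resp = requests.get(f"{PROXY_BASE_URL}/guardrails/list")
resp.raise_for_status()

for guardrail in resp.json()["guardrails"]:
    params = (guardrail.get("guardrail_info") or {}).get("params", [])
    print(guardrail["guardrail_name"], "->", [p["name"] for p in params])
```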

### Guardrails with Mock LLM

Send `mock_response` to test guardrails without making an LLM call. More info on `mock_response` here

```shell
curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "hi my email is ishaan@berri.ai"}
    ],
    "mock_response": "This is a mock response",
    "guardrails": ["aporia-pre-guard", "aporia-post-guard"]
  }'
```
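
Since the proxy is OpenAI-compatible, the same test can also be run through the OpenAI Python SDK. A minimal sketch, assuming the proxy runs on `localhost:4000` and the key below is one of your virtual keys; the LiteLLM-specific fields (`mock_response`, `guardrails`) are passed via `extra_body`:

```python
from openai import OpenAI

# Point the OpenAI SDK at the LiteLLM proxy (base_url and api_key are placeholders).
client = OpenAI(base_url="http://localhost:4000", api_key="sk-npnwjPQciVRok5yNZgKmFQ")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi my email is ishaan@berri.ai"}],
    # extra_body forwards LiteLLM-specific fields the OpenAI SDK doesn't know about.
    extra_body={
        "mock_response": "This is a mock response",
        "guardrails": ["aporia-pre-guard", "aporia-post-guard"],
    },
)
print(response.choices[0].message.content)
```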

### Assign Keys to Users

You can now assign keys to users via the Proxy UI.

<Image img={require('../../img/ui_key.png')} />
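
If you prefer to do this programmatically, the proxy's `/key/generate` endpoint accepts a `user_id`, so a key can be created for an existing user from code. A minimal sketch; the base URL, master key, and user ID below are placeholders.

```python
import requests

# Generate a virtual key tied to an existing user on the LiteLLM proxy.
# Base URL, master key, and user_id are placeholders for your deployment.
resp = requests.post(
    "http://0.0.0.0:4000/key/generate",
    headers={"Authorization": "Bearer sk-1234"},  # proxy master key
    json={"user_id": "my-user-id", "models": ["gpt-3.5-turbo"]},
)
resp.raise_for_status()
print(resp.json()["key"])  # the newly created virtual key
```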

## New Models

- `openrouter/openai/o1`
- `vertex_ai/mistral-large@2411`
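
Both models are called the same way as any other LiteLLM model. A minimal sketch using the Python SDK; it assumes `OPENROUTER_API_KEY` is set in your environment (for the Mistral model, use your Vertex AI credentials setup instead):

```python
import litellm

# Call one of the newly added models through the LiteLLM Python SDK.
# Assumes OPENROUTER_API_KEY is set in the environment.
response = litellm.completion(
    model="openrouter/openai/o1",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```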

## Fixes