diff --git a/docs/my-website/release_notes/v1.66.0-stable/index.md b/docs/my-website/release_notes/v1.66.0-stable/index.md
index c15692f905..5312f1e2b6 100644
--- a/docs/my-website/release_notes/v1.66.0-stable/index.md
+++ b/docs/my-website/release_notes/v1.66.0-stable/index.md
@@ -89,34 +89,34 @@ Get started with this [here](https://docs.litellm.ai/docs/tutorials/msft_sso)
## New Models / Updated Models
-- xAI
+- **xAI**
1. Added `reasoning_effort` support for `xai/grok-3-mini-beta` [Get Started](https://docs.litellm.ai/docs/providers/xai#reasoning-usage) (a usage sketch follows this list)
2. Added cost tracking for `xai/grok-3` models [PR](https://github.com/BerriAI/litellm/pull/9920)
-- Hugging Face
+- **Hugging Face**
1. Added support for serverless inference providers [Get Started](https://docs.litellm.ai/docs/providers/huggingface#serverless-inference-providers)
-- VertexAI
+- **VertexAI**
1. Added enterpriseWebSearch tool support [Get Started](https://docs.litellm.ai/docs/providers/vertex#grounding---web-search)
2. Moved to only passing keys accepted by the Vertex AI response schema [PR](https://github.com/BerriAI/litellm/pull/8992)
-- Google AI Studio
+- **Google AI Studio**
1. Added cost tracking for `gemini-2.5-pro` [PR](https://github.com/BerriAI/litellm/pull/9837)
2. Fixed pricing for `gemini/gemini-2.5-pro-preview-03-25` [PR](https://github.com/BerriAI/litellm/pull/9896)
3. Fixed handling of `file_data` being passed in [PR](https://github.com/BerriAI/litellm/pull/9786)
-- Azure
+- **Azure**
1. Updated Azure Phi-4 pricing [PR](https://github.com/BerriAI/litellm/pull/9862)
2. Added azure/gpt-4o-realtime-audio cost tracking [PR](https://github.com/BerriAI/litellm/pull/9893)
-- Databricks
+- **Databricks**
1. Removed `reasoning_effort` from parameters [PR](https://github.com/BerriAI/litellm/pull/9811)
2. Fixed custom endpoint check for Databricks [PR](https://github.com/BerriAI/litellm/pull/9925)
-- General
+- **General**
1. Added `litellm.supports_reasoning()` util to track whether an LLM supports reasoning [Get Started](https://docs.litellm.ai/docs/providers/anthropic#reasoning)
2. Function Calling - Handle Pydantic base models in message tool calls, handle `tools = []`, and support fake streaming on tool calls for `meta.llama3-3-70b-instruct-v1:0` [PR](https://github.com/BerriAI/litellm/pull/9774)
3. LiteLLM Proxy - Allow passing the `thinking` param to the litellm proxy via the client SDK [PR](https://github.com/BerriAI/litellm/pull/9386) (a proxy sketch follows this list)
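A minimal sketch combining two of the reasoning items above: `reasoning_effort` support for `xai/grok-3-mini-beta` and the new `litellm.supports_reasoning()` util. The exact keyword arguments and the `reasoning_content` attribute follow the linked docs and may differ slightly; the API key and prompt are placeholders.

```python
import os
import litellm

os.environ["XAI_API_KEY"] = "your-xai-api-key"  # placeholder credential

# Gate the request on the capability check added in this release.
if litellm.supports_reasoning(model="xai/grok-3-mini-beta"):
    response = litellm.completion(
        model="xai/grok-3-mini-beta",
        messages=[{"role": "user", "content": "How many 'r's are in 'strawberry'?"}],
        reasoning_effort="low",  # "low" | "medium" | "high"
    )
    print(response.choices[0].message.content)
    # Reasoning traces, when the provider returns them, surface as reasoning_content.
    print(getattr(response.choices[0].message, "reasoning_content", None))
```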
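A minimal proxy sketch for item 3 under General: sending the `thinking` param through a running LiteLLM proxy from the client SDK. The proxy URL, virtual key, and the `claude-3-7` model alias are placeholders, and the Anthropic-style `thinking` dict is an assumption based on the linked PR.

```python
import litellm

# Route the call through the proxy; `thinking` is forwarded to the backing model.
response = litellm.completion(
    model="litellm_proxy/claude-3-7",    # "<provider prefix>/<model alias on the proxy>"
    api_base="http://localhost:4000",    # proxy URL (placeholder)
    api_key="sk-1234",                   # proxy virtual key (placeholder)
    messages=[{"role": "user", "content": "Outline a 3-step migration plan."}],
    thinking={"type": "enabled", "budget_tokens": 1024},
)
print(response.choices[0].message.content)
```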
@@ -124,20 +124,20 @@ Get started with this [here](https://docs.litellm.ai/docs/tutorials/msft_sso)
## Spend Tracking Improvements
-- OpenAI, Azure
+- **OpenAI, Azure**
1. Realtime API cost tracking with token usage metrics in spend logs [Get Started](https://docs.litellm.ai/docs/realtime)
-- Anthropic
+- **Anthropic**
1. Fixed Claude Haiku cache read pricing per token [PR](https://github.com/BerriAI/litellm/pull/9834)
2. Added cost tracking for Claude responses with `base_model` [PR](https://github.com/BerriAI/litellm/pull/9897)
3. Fixed Anthropic prompt caching cost calculation and trimmed the logged message in the DB [PR](https://github.com/BerriAI/litellm/pull/9838)
-- General
+- **General**
1. Added token tracking and logged the usage object in spend logs [PR](https://github.com/BerriAI/litellm/pull/9843)
2. Handled custom pricing at the deployment level [PR](https://github.com/BerriAI/litellm/pull/9855)
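A minimal sketch of deployment-level custom pricing (item 2 above), assuming the Router accepts `input_cost_per_token` / `output_cost_per_token` inside a deployment's `litellm_params` (the same keys used in the proxy config) and that the computed cost is exposed on `_hidden_params`. The deployment name, endpoint, API version, and prices are placeholders.

```python
import os
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {
                "model": "azure/my-gpt-4o-deployment",               # placeholder deployment
                "api_key": os.environ.get("AZURE_API_KEY"),
                "api_base": "https://my-endpoint.openai.azure.com",  # placeholder endpoint
                "api_version": "2024-08-01-preview",                 # placeholder API version
                # Deployment-level pricing used for spend tracking on this deployment.
                "input_cost_per_token": 0.0000025,
                "output_cost_per_token": 0.00001,
            },
        }
    ]
)

response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hello"}],
)
# Cost should be computed from the per-deployment prices set above.
print(response._hidden_params.get("response_cost"))
```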
## Management Endpoints / UI
-1. Test Key Tab:
+- **Test Key Tab**
1. Added rendering of reasoning content, TTFT, and usage metrics on the Test Key page [PR](https://github.com/BerriAI/litellm/pull/9931)
-3. Redesigned Login Screen:
+- **Redesigned Login Screen**
   1. Polished login screen [PR](https://github.com/BerriAI/litellm/pull/9778)
-2. Microsoft SSO Auto-Sync:
+- **Microsoft SSO Auto-Sync**
   1. Added debug route to allow admins to debug SSO JWT fields [PR](https://github.com/BerriAI/litellm/pull/9835)
   2. Added ability to use MSFT Graph API to assign users to teams [PR](https://github.com/BerriAI/litellm/pull/9865)
   3. Connected litellm to Azure Entra ID Enterprise Application [PR](https://github.com/BerriAI/litellm/pull/9872)
   4. Added ability for admins to set `default_team_params` for when litellm SSO creates default teams [PR](https://github.com/BerriAI/litellm/pull/9895)
   5. Fixed MSFT SSO to use correct field for user email [PR](https://github.com/BerriAI/litellm/pull/9886)
   6. Added UI support for setting Default Team setting when litellm SSO auto creates teams [PR](https://github.com/BerriAI/litellm/pull/9918)
-5. UI Bug Fixes:
+- **UI Bug Fixes**
   1. Prevented team, key, org, model numerical values changing on scrolling [PR](https://github.com/BerriAI/litellm/pull/9776)
   2. Instantly reflect key and team updates in UI [PR](https://github.com/BerriAI/litellm/pull/9825)
## Logging / Guardrail Improvements
-1. Prometheus:
-   - Emit Key and Team Budget metrics on a cron job schedule [Get Started](https://docs.litellm.ai/docs/proxy/prometheus#initialize-budget-metrics-on-startup)
+- **Prometheus**
+  1. Emit Key and Team Budget metrics on a cron job schedule [Get Started](https://docs.litellm.ai/docs/proxy/prometheus#initialize-budget-metrics-on-startup)
## Security Fixes
-1. Fixed [CVE-2025-0330](https://www.cve.org/CVERecord?id=CVE-2025-0330) - Leakage of Langfuse API keys in team exception handling [PR](https://github.com/BerriAI/litellm/pull/9830)
-2. Fixed [CVE-2024-6825](https://www.cve.org/CVERecord?id=CVE-2024-6825) - Remote code execution in post call rules [PR](https://github.com/BerriAI/litellm/pull/9826)
+- Fixed [CVE-2025-0330](https://www.cve.org/CVERecord?id=CVE-2025-0330) - Leakage of Langfuse API keys in team exception handling [PR](https://github.com/BerriAI/litellm/pull/9830)
+- Fixed [CVE-2024-6825](https://www.cve.org/CVERecord?id=CVE-2024-6825) - Remote code execution in post call rules [PR](https://github.com/BerriAI/litellm/pull/9826)
## Helm
-1. Added service annotations to litellm-helm chart [PR](https://github.com/BerriAI/litellm/pull/9840)
-2. Added extraEnvVars to the helm deployment [PR](https://github.com/BerriAI/litellm/pull/9292)
+- Added service annotations to litellm-helm chart [PR](https://github.com/BerriAI/litellm/pull/9840)
+- Added extraEnvVars to the helm deployment [PR](https://github.com/BerriAI/litellm/pull/9292)
## Demo