# What does this PR do?

Completes the refactoring started in the previous commit by:

1. **Fix library client** (critical): add logic to detect Pydantic model parameters
   and construct them properly from request bodies. The key fix is to NOT exclude
   any params when converting the body for Pydantic models: every field must be
   passed to the Pydantic constructor.

   Before: `_convert_body` excluded all params, leaving the body empty for Pydantic construction.
   After: check for Pydantic params first, skip the exclusion, and construct the model from the full body.

2. **Update remaining providers** to use the new Pydantic-based signatures:
   - `litellm_openai_mixin`: extract extra fields via `__pydantic_extra__`
   - `databricks`: use a `TYPE_CHECKING` import for the params type
   - `llama_openai_compat`: use a `TYPE_CHECKING` import for the params type
   - `sentence_transformers`: update method signatures to use `params`

3. **Update unit tests** to use the new Pydantic signature:
   - `test_openai_mixin.py`: use `OpenAIChatCompletionRequestParams`
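
As a small illustration of the `__pydantic_extra__` mechanism used by the mixin (the model and field names below are made up for the sketch, not the actual Llama Stack types): with `extra="allow"`, Pydantic v2 collects undeclared fields into the `__pydantic_extra__` dict.

```python
from pydantic import BaseModel, ConfigDict


class ChatCompletionParams(BaseModel):
    """Hypothetical request-params model; extra="allow" keeps unknown fields."""

    model_config = ConfigDict(extra="allow")

    model: str
    temperature: float = 1.0


params = ChatCompletionParams(model="llama3.2:3b", top_k=40)

# Fields not declared on the model land in __pydantic_extra__.
print(params.__pydantic_extra__)  # {'top_k': 40}
```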

This fixes test failures where the library client was trying to construct
Pydantic models with empty dictionaries.
The previous fix had a bug: it called `_convert_body()`, which keeps only fields
that match function parameter names. For Pydantic methods with the signature:

    openai_chat_completion(params: OpenAIChatCompletionRequestParams)

the signature only has `params`, while the body has `model`, `messages`, etc.,
so `_convert_body()` returned an empty dict.

Fix: skip `_convert_body()` entirely for Pydantic params. Use the raw body
directly to construct the Pydantic model (after stripping `NOT_GIVEN` values).

This properly fixes the ValidationError where required fields were missing.
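
A minimal sketch of the bug and the fix, under stated assumptions: the model, `convert_body`, and the `NOT_GIVEN` sentinel below are simplified stand-ins for the client's internals, not the real implementation.

```python
import inspect

from pydantic import BaseModel

NOT_GIVEN = object()  # stand-in for the client's NOT_GIVEN sentinel


class OpenAIChatCompletionRequestParams(BaseModel):
    """Toy stand-in for the real request-params model."""

    model: str
    messages: list[dict]


def openai_chat_completion(params: OpenAIChatCompletionRequestParams):
    return params


def convert_body(func, body: dict) -> dict:
    """Mimics the buggy _convert_body(): keep only keys matching parameter names."""
    names = set(inspect.signature(func).parameters)
    return {k: v for k, v in body.items() if k in names}


body = {
    "model": "llama3.2:3b",
    "messages": [{"role": "user", "content": "hi"}],
    "temperature": NOT_GIVEN,
}

# The bug: nothing in the body matches the lone parameter name "params".
print(convert_body(openai_chat_completion, body))  # {}

# The fix: detect the Pydantic parameter, strip NOT_GIVEN, construct directly.
sig = inspect.signature(openai_chat_completion)
param = next(iter(sig.parameters.values()))
if isinstance(param.annotation, type) and issubclass(param.annotation, BaseModel):
    clean = {k: v for k, v in body.items() if v is not NOT_GIVEN}
    result = openai_chat_completion(param.annotation(**clean))
```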
The streaming code path (`_call_streaming`) had the same issue as the
non-streaming path: it called `_convert_body()`, which returned an empty dict
for Pydantic params.

Applied the same fix as commit 7476c0ae:
- Detect Pydantic model parameters before body conversion
- Skip `_convert_body()` for Pydantic params
- Construct the Pydantic model directly from the raw body (after stripping `NOT_GIVEN` values)

This fixes streaming endpoints like `openai_chat_completion` with `stream=True`.
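
The streaming variant of the fix can be sketched the same way; everything below (`call_streaming`, `fake_completion`, the `Params` model) is illustrative rather than the actual client code.

```python
import asyncio

from pydantic import BaseModel


class Params(BaseModel):
    """Illustrative request model, not the real one."""

    model: str
    stream: bool = False


async def call_streaming(method, param_type: type[BaseModel], body: dict):
    """Sketch of the fixed streaming path: build the Pydantic model from the
    raw body (no name-based filtering), then relay the provider's chunks."""
    params = param_type(**body)
    async for chunk in method(params):
        yield chunk


async def fake_completion(params: Params):
    # Stand-in for a provider's streaming endpoint.
    for i in range(2):
        yield f"chunk-{i} for {params.model}"


async def main():
    body = {"model": "llama3.2:3b", "stream": True}
    return [c async for c in call_streaming(fake_completion, Params, body)]


chunks = asyncio.run(main())
print(chunks)  # ['chunk-0 for llama3.2:3b', 'chunk-1 for llama3.2:3b']
```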
Commit a93130e323 (parent 26fd5dbd34) by Eric Huang, 2025-10-09 13:53:31 -07:00.
295 changed files with 51966 additions and 3051 deletions.
```diff
@@ -14,28 +14,29 @@
     "__data__": {
       "models": [
         {
-          "model": "llama3.2:3b",
-          "name": "llama3.2:3b",
-          "digest": "a80c4f17acd55265feec403c7aef86be0c25983ab279d83f3bcd3abbcb5b8b72",
-          "expires_at": "2025-10-08T16:14:05.423042-07:00",
-          "size": 3367856128,
-          "size_vram": 3367856128,
+          "model": "all-minilm:l6-v2",
+          "name": "all-minilm:l6-v2",
+          "digest": "1b226e2802dbb772b5fc32a58f103ca1804ef7501331012de126ab22f67475ef",
+          "expires_at": "2025-10-09T11:53:33.065351-07:00",
+          "size": 585846784,
+          "size_vram": 585846784,
           "details": {
             "parent_model": "",
             "format": "gguf",
-            "family": "llama",
+            "family": "bert",
             "families": [
-              "llama"
+              "bert"
             ],
-            "parameter_size": "3.2B",
-            "quantization_level": "Q4_K_M"
-          }
+            "parameter_size": "23M",
+            "quantization_level": "F16"
+          },
+          "context_length": 256
         },
         {
           "model": "nomic-embed-text:latest",
           "name": "nomic-embed-text:latest",
           "digest": "0a109f422b47e3a30ba2b10eca18548e944e8a23073ee3f3e947efcf3c45e59f",
-          "expires_at": "2025-10-08T11:32:34.970974-07:00",
+          "expires_at": "2025-10-09T11:53:33.984879-07:00",
           "size": 848677888,
           "size_vram": 848677888,
           "details": {
@@ -47,15 +48,16 @@
             ],
             "parameter_size": "137M",
             "quantization_level": "F16"
-          }
+          },
+          "context_length": 8192
         },
         {
-          "model": "llama-guard3:1b",
-          "name": "llama-guard3:1b",
-          "digest": "494147e06bf99e10dbe67b63a07ac81c162f18ef3341aa3390007ac828571b3b",
-          "expires_at": "2025-10-08T11:30:00.392919-07:00",
-          "size": 2350966784,
-          "size_vram": 2350966784,
+          "model": "llama3.2:3b-instruct-fp16",
+          "name": "llama3.2:3b-instruct-fp16",
+          "digest": "195a8c01d91ec3cb1e0aad4624a51f2602c51fa7d96110f8ab5a20c84081804d",
+          "expires_at": "2025-10-09T11:53:30.486441-07:00",
+          "size": 7919570944,
+          "size_vram": 7919570944,
           "details": {
             "parent_model": "",
             "format": "gguf",
@@ -63,9 +65,10 @@
             "families": [
               "llama"
             ],
-            "parameter_size": "1.5B",
-            "quantization_level": "Q8_0"
-          }
+            "parameter_size": "3.2B",
+            "quantization_level": "F16"
+          },
+          "context_length": 4096
         }
       ]
     }
```

```diff
@@ -13,29 +13,11 @@
     "__type__": "ollama._types.ProcessResponse",
     "__data__": {
       "models": [
-        {
-          "model": "llama3.2:3b",
-          "name": "llama3.2:3b",
-          "digest": "a80c4f17acd55265feec403c7aef86be0c25983ab279d83f3bcd3abbcb5b8b72",
-          "expires_at": "2025-10-08T16:14:05.423042-07:00",
-          "size": 3367856128,
-          "size_vram": 3367856128,
-          "details": {
-            "parent_model": "",
-            "format": "gguf",
-            "family": "llama",
-            "families": [
-              "llama"
-            ],
-            "parameter_size": "3.2B",
-            "quantization_level": "Q4_K_M"
-          }
-        },
         {
           "model": "nomic-embed-text:latest",
           "name": "nomic-embed-text:latest",
           "digest": "0a109f422b47e3a30ba2b10eca18548e944e8a23073ee3f3e947efcf3c45e59f",
-          "expires_at": "2025-10-08T11:32:34.970974-07:00",
+          "expires_at": "2025-10-09T11:53:33.984879-07:00",
           "size": 848677888,
           "size_vram": 848677888,
           "details": {
@@ -47,15 +29,35 @@
             ],
             "parameter_size": "137M",
             "quantization_level": "F16"
-          }
+          },
+          "context_length": 8192
         },
         {
-          "model": "llama-guard3:1b",
-          "name": "llama-guard3:1b",
-          "digest": "494147e06bf99e10dbe67b63a07ac81c162f18ef3341aa3390007ac828571b3b",
-          "expires_at": "2025-10-08T11:30:00.392919-07:00",
-          "size": 2350966784,
-          "size_vram": 2350966784,
+          "model": "all-minilm:l6-v2",
+          "name": "all-minilm:l6-v2",
+          "digest": "1b226e2802dbb772b5fc32a58f103ca1804ef7501331012de126ab22f67475ef",
+          "expires_at": "2025-10-09T11:53:33.065351-07:00",
+          "size": 585846784,
+          "size_vram": 585846784,
           "details": {
             "parent_model": "",
             "format": "gguf",
+            "family": "bert",
+            "families": [
+              "bert"
+            ],
+            "parameter_size": "23M",
+            "quantization_level": "F16"
+          },
+          "context_length": 256
+        },
+        {
+          "model": "llama3.2:3b-instruct-fp16",
+          "name": "llama3.2:3b-instruct-fp16",
+          "digest": "195a8c01d91ec3cb1e0aad4624a51f2602c51fa7d96110f8ab5a20c84081804d",
+          "expires_at": "2025-10-09T11:53:30.486441-07:00",
+          "size": 7919570944,
+          "size_vram": 7919570944,
+          "details": {
+            "parent_model": "",
+            "format": "gguf",
@@ -63,9 +65,10 @@
             "families": [
               "llama"
             ],
-            "parameter_size": "1.5B",
-            "quantization_level": "Q8_0"
-          }
+            "parameter_size": "3.2B",
+            "quantization_level": "F16"
+          },
+          "context_length": 4096
         }
       ]
     }
```
