llama-stack-mirror/tests/integration/common/recordings/53f5b7bf02d12f1988ec26c4d0a3830d094f43b3ad5a444cbae3ca815f31ca20.json
Eric Huang a70fc60485 test
# What does this PR do?

Completes the refactoring started in the previous commit by:

1. **Fix library client** (critical): Add logic to detect Pydantic model parameters
   and construct them properly from request bodies. The key fix is to NOT exclude
   any params when converting the body for Pydantic models: we need all fields
   to pass to the Pydantic constructor (see the sketch after this list).

   Before: _convert_body excluded all params, leaving the body empty for Pydantic construction
   After: Check for Pydantic params first, skip the exclusion, construct the model with the full body

2. **Update remaining providers** to use new Pydantic-based signatures:
   - litellm_openai_mixin: Extract extra fields via __pydantic_extra__
   - databricks: Use TYPE_CHECKING import for params type
   - llama_openai_compat: Use TYPE_CHECKING import for params type
   - sentence_transformers: Update method signatures to use params

3. **Update unit tests** to use new Pydantic signature:
   - test_openai_mixin.py: Use OpenAIChatCompletionRequestParams

This fixes test failures where the library client was trying to construct
Pydantic models with empty dictionaries.
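
A minimal sketch of the detection-and-construction logic described in item 1. The names `find_pydantic_param`, `build_kwargs`, and the `NOT_GIVEN` sentinel here are illustrative, not the library client's actual helpers:

```python
import inspect
from typing import get_type_hints

from pydantic import BaseModel

# Hypothetical sentinel standing in for the NOT_GIVEN marker discussed below.
NOT_GIVEN = object()


def find_pydantic_param(func):
    """Return (name, model_cls) for the first parameter annotated with a
    Pydantic model, or None if the endpoint takes plain keyword arguments."""
    hints = get_type_hints(func)
    for name in inspect.signature(func).parameters:
        annotation = hints.get(name)
        if inspect.isclass(annotation) and issubclass(annotation, BaseModel):
            return name, annotation
    return None


def build_kwargs(func, body: dict) -> dict:
    """Build kwargs for an endpoint call, constructing Pydantic params from
    the full request body instead of filtering it down first."""
    match = find_pydantic_param(func)
    if match is None:
        # Non-Pydantic endpoints keep the old behavior: one kwarg per field.
        return dict(body)
    name, model_cls = match
    # Do NOT exclude any fields here: the Pydantic constructor needs the
    # whole body ('model', 'messages', ...), minus NOT_GIVEN placeholders.
    cleaned = {k: v for k, v in body.items() if v is not NOT_GIVEN}
    return {name: model_cls(**cleaned)}
```

With `extra="allow"` on the request model, providers such as litellm_openai_mixin can then pull vendor-specific fields out of `params.__pydantic_extra__`, as item 2 describes.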
The previous fix had a bug: it called _convert_body(), which keeps only fields
that match function parameter names. For Pydantic methods with the signature:
  openai_chat_completion(params: OpenAIChatCompletionRequestParams)

The signature has only 'params', but the body contains 'model', 'messages', etc.,
so _convert_body() returned an empty dict.

Fix: Skip _convert_body() entirely for Pydantic params. Use the raw body
directly to construct the Pydantic model (after stripping NOT_GIVENs).

This properly fixes the ValidationError where required fields were missing.
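
To make the failure mode concrete, here is a toy reproduction; `convert_body_keeping_param_names` is a simplification of what _convert_body does, not its real implementation:

```python
import inspect


def convert_body_keeping_param_names(func, body: dict) -> dict:
    """Old behavior: keep only body keys that match parameter names."""
    param_names = set(inspect.signature(func).parameters)
    return {k: v for k, v in body.items() if k in param_names}


def openai_chat_completion(params):  # the only parameter is 'params'
    ...


body = {
    "model": "llama3.2:3b-instruct-fp16",
    "messages": [{"role": "user", "content": "Test trace openai 0"}],
}

# Neither 'model' nor 'messages' matches a parameter name, so every field
# is dropped -- constructing the request model from this empty dict is what
# raised the ValidationError about missing required fields.
assert convert_body_keeping_param_names(openai_chat_completion, body) == {}
```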
The streaming code path (_call_streaming) had the same issue as non-streaming:
it called _convert_body() which returned empty dict for Pydantic params.

Applied the same fix as commit 7476c0ae:
- Detect Pydantic model parameters before body conversion
- Skip _convert_body() for Pydantic params
- Construct Pydantic model directly from raw body (after stripping NOT_GIVENs)

This fixes streaming endpoints like openai_chat_completion with stream=True.
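
A sketch of how the same step slots into the streaming path, assuming an async-generator endpoint; the field definitions on the request model are placeholders:

```python
from collections.abc import AsyncIterator

from pydantic import BaseModel, ConfigDict


class OpenAIChatCompletionRequestParams(BaseModel):
    # Placeholder fields; extra="allow" mirrors the __pydantic_extra__ usage above.
    model_config = ConfigDict(extra="allow")
    model: str
    messages: list[dict]
    stream: bool = False


async def _call_streaming(endpoint, body: dict) -> AsyncIterator:
    # Same fix as the non-streaming path: construct the Pydantic model from
    # the raw body (NOT_GIVENs already stripped) rather than via _convert_body().
    params = OpenAIChatCompletionRequestParams(**body)
    async for chunk in endpoint(params):
        yield chunk
```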
2025-10-09 13:53:18 -07:00

{
  "test_id": null,
  "request": {
    "method": "POST",
    "url": "http://0.0.0.0:11434/v1/v1/chat/completions",
    "headers": {},
    "body": {
      "model": "llama3.2:3b-instruct-fp16",
      "messages": [
        {
          "role": "user",
          "content": "Test trace openai 0"
        }
      ],
      "stream": false,
      "extra_body": null
    },
    "endpoint": "/v1/chat/completions",
    "model": "llama3.2:3b-instruct-fp16"
  },
  "response": {
    "body": {
      "__type__": "openai.types.chat.chat_completion.ChatCompletion",
      "__data__": {
        "id": "rec-53f5b7bf02d1",
        "choices": [
          {
            "finish_reason": "stop",
            "index": 0,
            "logprobs": null,
            "message": {
              "content": "You're referring to the test environment for OpenAI's CLIA model!\n\nTo access the test environment, you'll need to follow these steps:\n\n1. **Sign up for an OpenAI account**: If you haven't already, create a new account on the OpenAI website.\n2. **Apply for access to the test environment**: Submit a request to gain access to the test environment through the OpenAI portal.\n3. **Read and agree to the terms of service**: Once your request is accepted, you'll need to review and agree to the terms of service for the test environment.\n\nAfter completing these steps, you should have access to the test environment, which includes the CLIA model (a multilingual conversational AI designed for customer support).\n\nIn the test environment, you can interact with the CLIA model using the interface provided by OpenAI. Keep in mind that this is a limited environment, and you might not be able to push code or test more advanced use cases.\n\nHere are some tips for testing the CLIA model:\n\n* Use simple text inputs to get started.\n* Experiment with different queries and conversations to understand how the model responds.\n* Keep an eye on the model's limitations (e.g., it may struggle with complex questions or follow-up conversations).\n* Report any issues or concerns you encounter during testing.\n\nRemember that the test environment is designed for exploration and evaluation purposes only. Any data collected from the test environment will be anonymized and used solely for improving the CLIA model and other OpenAI products.\n\nIf you have any specific questions or need help with a particular test scenario, feel free to ask!",
              "refusal": null,
              "role": "assistant",
              "annotations": null,
              "audio": null,
              "function_call": null,
              "tool_calls": null
            }
          }
        ],
        "created": 0,
        "model": "llama3.2:3b-instruct-fp16",
        "object": "chat.completion",
        "service_tier": null,
        "system_fingerprint": "fp_ollama",
        "usage": {
          "completion_tokens": 327,
          "prompt_tokens": 31,
          "total_tokens": 358,
          "completion_tokens_details": null,
          "prompt_tokens_details": null
        }
      }
    },
    "is_streaming": false
  }
}