# What does this PR do?
Completes the refactoring started in the previous commit by:
1. **Fix library client** (critical): Add logic to detect Pydantic model parameters
   and construct them properly from request bodies. The key fix is to NOT exclude
   any params when converting the body for Pydantic models - we need all fields
   to pass to the Pydantic constructor.
   - Before: `_convert_body` excluded all params, leaving the body empty for Pydantic construction
   - After: Check for Pydantic params first, skip the exclusion, and construct the model from the full body
2. **Update remaining providers** to use the new Pydantic-based signatures:
   - litellm_openai_mixin: Extract extra fields via `__pydantic_extra__` (see the sketch below)
   - databricks: Use a TYPE_CHECKING import for the params type
   - llama_openai_compat: Use a TYPE_CHECKING import for the params type
   - sentence_transformers: Update method signatures to use params
3. **Update unit tests** to use the new Pydantic signature:
   - test_openai_mixin.py: Use `OpenAIChatCompletionRequestParams`
This fixes test failures where the library client was trying to construct
Pydantic models with empty dictionaries.
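For illustration, a minimal sketch of the two provider-side patterns mentioned in item 2; the import path and the helper name are assumptions, not the actual provider code:

```python
# Sketch only: shows the __pydantic_extra__ and TYPE_CHECKING patterns in isolation.
from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
    # Import used only by type checkers, so the params type does not add a
    # runtime (or circular) import dependency to the provider module.
    from llama_stack.apis.inference import OpenAIChatCompletionRequestParams  # assumed path


def extract_extra_fields(params: "OpenAIChatCompletionRequestParams") -> dict[str, Any]:
    # Pydantic v2 models declared with extra="allow" keep unknown fields in
    # __pydantic_extra__; the LiteLLM mixin can forward these to the backend as-is.
    return dict(params.__pydantic_extra__ or {})
```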
The previous fix had a bug: it called `_convert_body()`, which only keeps fields
that match function parameter names. For Pydantic methods with the signature
`openai_chat_completion(params: OpenAIChatCompletionRequestParams)`, the signature
only has `params`, but the body has `model`, `messages`, etc., so `_convert_body()`
returned an empty dict.
Fix: Skip `_convert_body()` entirely for Pydantic params. Use the raw body
directly to construct the Pydantic model (after stripping NOT_GIVENs).
This properly fixes the ValidationError where required fields were missing.
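A minimal sketch of this fix, assuming the single-`params` signatures described above; the `NOT_GIVEN` sentinel and helper names here are stand-ins for whatever the library client actually uses:

```python
import inspect
from typing import Any, Callable

from pydantic import BaseModel

NOT_GIVEN = object()  # stand-in for the client's "omitted field" sentinel


def construct_pydantic_params(func: Callable, body: dict[str, Any]) -> BaseModel | None:
    """If the endpoint takes a single Pydantic `params` argument, build it from the
    raw request body instead of going through _convert_body()."""
    for param in inspect.signature(func).parameters.values():
        annotation = param.annotation
        if inspect.isclass(annotation) and issubclass(annotation, BaseModel):
            # Strip NOT_GIVEN values but keep every other field: excluding fields
            # here is exactly what left the body empty before this fix.
            cleaned = {k: v for k, v in body.items() if v is not NOT_GIVEN}
            return annotation(**cleaned)
    return None  # no Pydantic param: fall back to the normal _convert_body() path
```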
The streaming code path (`_call_streaming`) had the same issue as the non-streaming one:
it called `_convert_body()`, which returned an empty dict for Pydantic params.
Applied the same fix as commit 7476c0ae:
- Detect Pydantic model parameters before body conversion
- Skip `_convert_body()` for Pydantic params
- Construct the Pydantic model directly from the raw body (after stripping NOT_GIVENs)
This fixes streaming endpoints like `openai_chat_completion` with `stream=True`.
Propagate test IDs from client to server via HTTP headers to maintain
proper test isolation when running with server-based stack configs. Without
this, recorded/replayed inference requests in server mode would leak across
tests.
Changes:
- Patch the client's `_prepare_request` to inject the test ID into the provider-data
  header (see the sketch after this list)
- Sync test context from provider data on the server side before storage operations
- Set the LLAMA_STACK_TEST_STACK_CONFIG_TYPE env var based on the stack config
- Configure console width for cleaner log output in CI
- Add a SQLITE_STORE_DIR temp directory for test data isolation
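A hedged sketch of the client-side patch; the provider-data header name matches what llama-stack generally uses, but the test-ID key, the helper, and the exact `_prepare_request` signature are illustrative assumptions:

```python
import json
import os


def current_test_id() -> str | None:
    # Stand-in: the real suite would pull this from the active pytest test context.
    return os.environ.get("PYTEST_CURRENT_TEST")


def patch_prepare_request(original_prepare_request):
    """Wrap the client's _prepare_request so every outgoing request carries the
    current test ID inside the X-LlamaStack-Provider-Data header."""

    def wrapper(self, request, *args, **kwargs):
        test_id = current_test_id()
        if test_id is not None:
            provider_data = json.loads(request.headers.get("X-LlamaStack-Provider-Data") or "{}")
            provider_data["__test_id"] = test_id  # key name is illustrative
            request.headers["X-LlamaStack-Provider-Data"] = json.dumps(provider_data)
        return original_prepare_request(self, request, *args, **kwargs)

    return wrapper
```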
Uses `test_id` in request hashes and test-scoped subdirectories to prevent
cross-test contamination. Model list endpoints exclude `test_id` to enable
merging recordings from different servers.
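A small sketch of test-scoped request hashing under these rules; the endpoint list and key layout are illustrative, not the exact recording code:

```python
import hashlib
import json

# Model-list endpoints are left unscoped so their recordings can be merged
# across servers; these paths are illustrative.
UNSCOPED_ENDPOINTS = {"/v1/models", "/api/tags"}


def request_hash(endpoint: str, body: dict, test_id: str | None) -> str:
    key = {"endpoint": endpoint, "body": body}
    if test_id is not None and endpoint not in UNSCOPED_ENDPOINTS:
        # Scoping the hash by test ID keeps identical requests from different
        # tests in separate recordings, preventing cross-test contamination.
        key["test_id"] = test_id
    return hashlib.sha256(json.dumps(key, sort_keys=True).encode()).hexdigest()
```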
Additionally, this PR adds a `record-if-missing` mode, which we will use
instead of `record` (which re-records everything); as the name suggests, it only
records requests that do not already have a recording.
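A sketch of the intended dispatch, assuming `record-if-missing` replays a stored response when one exists and only records otherwise; mode names aside, the storage interface here is made up for illustration:

```python
def handle_recorded_request(mode: str, key: str, storage, live_call):
    """Dispatch between replay, record, and record-if-missing (illustrative)."""
    recorded = storage.get(key)
    if mode == "replay":
        if recorded is None:
            raise RuntimeError(f"No recording for {key}; re-run with recording enabled")
        return recorded
    if mode == "record" or (mode == "record-if-missing" and recorded is None):
        response = live_call()        # hit the real backend
        storage.set(key, response)    # persist for future replays
        return response
    return recorded                   # record-if-missing with an existing recording
```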
🤖 Co-authored with [Claude Code](https://claude.com/claude-code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
# What does this PR do?
## Test Plan
LLAMA_STACK_CONFIG=fireworks pytest -s -v tests/integration/files/test_files.py::test_openai_client_basic_operations
# What does this PR do?
TSIA
Added Files provider to the fireworks template. Might want to add to all
templates as a follow-up.
## Test Plan
pytest tests/unit/files/test_files.py
llama stack build --template fireworks --image-type conda --run
LLAMA_STACK_CONFIG=http://localhost:8321 pytest -s -v tests/integration/files/