llama-stack-mirror/tests/unit/distribution
Ashwin Bharambe 44096512b5
feat: add custom_metadata to OpenAIModel to unify /v1/models with /v1/openai/v1/models (#4051)
We need to remove the `/v1/openai/v1` paths shortly. There is one snag:
our current `/v1/openai/v1/models` endpoint returns different data than
`/v1/models`, and our tests target the latter (llama-stack customized)
behavior. We need to get to true OpenAI compatibility.

This is step 1: adding a `custom_metadata` field to `OpenAIModel` that
carries all the extra information we add in the native `/v1/models`
response. Consumers can extract it by looking at `__pydantic_extra__` or
similar accessors.
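
For reference, a minimal sketch of what the updated model could look like (the fields other than `custom_metadata` mirror the standard OpenAI model object; the exact defaults here are assumptions, not the final definition):

```python
from typing import Any

from pydantic import BaseModel


class OpenAIModel(BaseModel):
    """A model entry in the OpenAI-compatible /v1/openai/v1/models response."""

    id: str
    object: str = "model"
    created: int  # unix timestamp of model registration
    owned_by: str = "llama_stack"
    # New in this PR: carries the llama-stack-specific fields (identifier,
    # provider_id, model_type, metadata, ...) that the native /v1/models
    # response exposes directly.
    custom_metadata: dict[str, Any] | None = None
```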

This PR:
- Adds a `custom_metadata` field to the `OpenAIModel` class in
`src/llama_stack/apis/models/models.py`
- Modifies `openai_list_models()` in
`src/llama_stack/core/routing_tables/models.py` to populate
`custom_metadata` (see the sketch after this list)
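
A rough sketch of the routing-table side (a sketch only; `list_models()`, the `Model` attributes, and the `OpenAIListModelsResponse` construction are assumptions about the surrounding code, not verbatim from the diff):

```python
import time


async def openai_list_models(self) -> OpenAIListModelsResponse:
    # Reuse the native listing, then project each model into the
    # OpenAI-compatible shape.
    models = await self.list_models()
    openai_models = [
        OpenAIModel(
            id=m.identifier,
            created=int(time.time()),
            owned_by="llama_stack",
            # Fold the llama-stack-native fields into custom_metadata so
            # OpenAI-compatible clients can still recover them.
            custom_metadata={
                "provider_id": m.provider_id,
                "provider_resource_id": m.provider_resource_id,
                "model_type": m.model_type,
                "metadata": m.metadata,
            },
        )
        for m in models.data
    ]
    return OpenAIListModelsResponse(data=openai_models)
```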

Next steps:
1. Update the Stainless client to use `/v1/openai/v1/models` instead of
`/v1/models`
2. Migrate tests to read from `custom_metadata` (a consumer-side sketch
follows this list)
3. Remove the `/v1/openai/v1/` prefix entirely and consolidate on a single
`/v1/models` endpoint
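
On the consumer side, a test could read the extra fields back even when its generated `OpenAIModel` type does not declare `custom_metadata` (illustrative only; the `client` object and its attributes are assumptions):

```python
models = client.models.list()  # hits /v1/openai/v1/models
for m in models.data:
    # Pydantic v2 stashes undeclared fields in __pydantic_extra__ when the
    # model allows extras; model_extra is the public accessor for it.
    extra = m.model_extra or {}
    custom = extra.get("custom_metadata") or {}
    print(m.id, custom.get("provider_id"))
```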
2025-11-03 15:56:07 -08:00
routers feat: add custom_metadata to OpenAIModel to unify /v1/models with /v1/openai/v1/models (#4051) 2025-11-03 15:56:07 -08:00
routing_tables chore!: BREAKING CHANGE removing VectorDB APIs (#3774) 2025-10-11 14:07:08 -07:00
test_api_recordings.py fix(testing): improve api_recorder error messages for missing recordings (#3760) 2025-10-09 15:04:16 -07:00
test_context.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
test_distribution.py feat(prompts): attach prompts to storage stores in run configs (#3893) 2025-10-27 11:12:12 -07:00
test_library_client_initialization.py chore: refactor server.main (#3462) 2025-09-18 21:11:13 -07:00
test_stack_list_deps.py refactor(build): rework CLI commands and build process (1/2) (#2974) 2025-10-17 19:52:14 -07:00