We need to remove the `/v1/openai/v1` paths shortly. There is one complication: our current `/v1/openai/v1/models` endpoint returns different data than `/v1/models`, and unfortunately our tests target the latter (llama-stack-customized) behavior. We need to get to true OpenAI compatibility. This is step 1: add a `custom_metadata` field to `OpenAIModel` that carries all the extra fields we add in the native `/v1/models` response. Consumers can extract it by looking at `__pydantic_extra__` or similar fields (see the sketch below).

This PR:

- Adds a `custom_metadata` field to the `OpenAIModel` class in `src/llama_stack/apis/models/models.py`
- Modifies `openai_list_models()` in `src/llama_stack/core/routing_tables/models.py` to populate `custom_metadata`

Next steps:

1. Update the stainless client to use `/v1/openai/v1/models` instead of `/v1/models`
2. Migrate tests to read from `custom_metadata`
3. Remove the `/v1/openai/v1/` prefix entirely and consolidate to a single `/v1/models` endpoint
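To make the shape of the change concrete, here is a minimal sketch assuming Pydantic v2 (which llama-stack uses): the server-side field addition, plus how a consumer whose generated client model does not yet declare `custom_metadata` can still recover it via `__pydantic_extra__`. Field contents and payload values are illustrative, not the exact code in this PR.

```python
# Sketch only -- everything except the `custom_metadata` name and the
# two file paths above is an assumption for illustration.
from typing import Any

from pydantic import BaseModel, ConfigDict


class OpenAIModel(BaseModel):
    """OpenAI-compatible model record, as served by /v1/openai/v1/models."""

    id: str
    object: str = "model"
    created: int
    owned_by: str
    # New field: carries the llama-stack-specific data (e.g. provider_id,
    # model_type, metadata) that the native /v1/models response exposes.
    custom_metadata: dict[str, Any] | None = None


# Consumer side: a client model that allows extra fields parks unknown
# keys such as custom_metadata in __pydantic_extra__ (Pydantic v2).
class ClientModel(BaseModel):
    model_config = ConfigDict(extra="allow")

    id: str
    object: str = "model"


payload = {
    "id": "llama3.2:3b",
    "object": "model",
    "custom_metadata": {"provider_id": "ollama", "model_type": "llm"},
}
m = ClientModel.model_validate(payload)
print(m.__pydantic_extra__["custom_metadata"]["provider_id"])  # -> ollama
```

This keeps the OpenAI-facing schema additive: existing OpenAI clients ignore the extra key, while llama-stack consumers can opt in to the richer metadata until the `/v1/openai/v1/` prefix is removed.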