llama-stack-mirror/llama_stack/providers/inline/inference/meta_reference
ehhuang 80d58ab519
chore: refactor (chat)completions endpoints to use shared params struct (#3761)
# What does this PR do?

Converts the openai(_chat)_completions params to a shared pydantic BaseModel to
reduce code duplication across all providers.

## Test Plan
CI

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/llamastack/llama-stack/pull/3761).
* #3777
* __->__ #3761
2025-10-10 15:46:34 -07:00
| File | Last commit | Date |
|---|---|---|
| `__init__.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `common.py` | fix: update dangling references to llama download command (#3763) | 2025-10-09 18:35:02 -07:00 |
| `config.py` | chore(api): add mypy coverage to meta_reference_config (#2664) | 2025-07-09 10:24:30 +02:00 |
| `generators.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `inference.py` | chore: refactor (chat)completions endpoints to use shared params struct (#3761) | 2025-10-10 15:46:34 -07:00 |
| `model_parallel.py` | chore: remove /v1/inference/completion and implementations (#3622) | 2025-10-01 11:36:53 -04:00 |
| `parallel_utils.py` | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |