llama-stack-mirror/llama_stack/providers/remote/inference/llama_openai_compat
Charlie Doern 41431d8bdd refactor: convert providers to be installed via package
Currently, providers declare their dependencies in a `pip_packages` list. Rather than maintain our own form of Python dependency management, each provider should declare its dependencies in a `pyproject.toml` file, where they can be tracked more easily.
Each provider can then be installed using the existing `module` field in the `ProviderSpec`, which points to the directory the provider lives in.
We can then simply `uv pip install` that directory instead of installing the dependencies one by one; a sketch of what such a per-provider `pyproject.toml` might look like follows.
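
As a rough illustration, a per-provider `pyproject.toml` might look like the sketch below. The project name, version, and dependency list are hypothetical placeholders, not the actual contents of this provider's file:

```toml
# Hypothetical sketch of a per-provider pyproject.toml; the name and
# dependencies are illustrative, not the real file contents.
[project]
name = "llama-stack-provider-llama-openai-compat"
version = "0.1.0"
description = "Remote inference provider for an OpenAI-compatible Llama API"
requires-python = ">=3.10"
dependencies = [
    # packages previously listed in the provider's pip_packages would move here
    "openai",
]
```

With the `module` field pointing at this directory, installation reduces to a single `uv pip install <provider directory>` rather than iterating over a list of packages.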

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-09-22 09:23:50 -04:00
| File | Last commit | Date |
|---|---|---|
| `__init__.py` | feat: introduce APIs for retrieving chat completion requests (#2145) | 2025-05-18 21:43:19 -07:00 |
| `config.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `llama.py` | chore: indicate to mypy that InferenceProvider.rerank is concrete (#3238) | 2025-08-22 12:02:13 -07:00 |
| `models.py` | feat: add api.llama provider, llama-guard-4 model (#2058) | 2025-04-29 10:07:41 -07:00 |
| `pyproject.toml` | refactor: convert providers to be installed via package | 2025-09-22 09:23:50 -04:00 |