llama-stack-mirror/llama_stack/core
Charlie Doern 92d0470f74 feat: allow for multiple external provider specs
When using the providers.d method of installation, users could hand-craft their `AdapterSpec`s to share overlapping code, meaning one repo could contain both an inline
and a remote impl. Installing a provider via module currently does not allow for that, since each repo may only have one `get_provider_spec` method returning a single spec.

Add an optional way for `get_provider_spec` to return a list of `ProviderSpec`s, where each entry can be either an inline or a remote impl.

Note: the `adapter_type` in `get_provider_spec` MUST match the `provider_type` in the build/run YAML for this to work.
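The list-returning form can be sketched as follows. The `InlineProviderSpec` and `RemoteProviderSpec` dataclasses below are illustrative stand-ins, not llama_stack's real spec types; only the overall shape (one `get_provider_spec` entry point returning a list that mixes an inline and a remote impl, with the remote entry's `adapter_type` matching the build/run YAML) reflects the change described above.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for llama_stack's provider spec types;
# the real classes live in llama_stack and may differ in fields.
@dataclass
class InlineProviderSpec:
    provider_type: str
    module: str

@dataclass
class RemoteProviderSpec:
    adapter_type: str  # must match `provider_type` in the build/run YAML
    module: str

def get_provider_spec() -> list:
    """One external-provider repo exposing both an inline and a
    remote implementation from a single entry point."""
    return [
        InlineProviderSpec(
            provider_type="inline::my_provider",
            module="my_provider.inline",
        ),
        RemoteProviderSpec(
            adapter_type="my_provider",
            module="my_provider.remote",
        ),
    ]
```

With the previous single-spec contract, the repo above would have needed two separate packages (or hand-crafted providers.d entries) to ship both implementations.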

resolves #3226

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-10-03 13:30:48 -04:00
access_control chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
conversations feat: Add OpenAI Conversations API (#3429) 2025-10-03 08:47:18 -07:00
prompts feat: Adding OpenAI Prompts API (#3319) 2025-09-08 11:05:13 -04:00
routers chore: remove deprecated inference.chat_completion implementations (#3654) 2025-10-03 07:55:34 -04:00
routing_tables feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) 2025-10-02 15:12:03 -07:00
server feat: Add OpenAI Conversations API (#3429) 2025-10-03 08:47:18 -07:00
store feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) 2025-10-02 15:12:03 -07:00
ui feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) 2025-10-02 15:12:03 -07:00
utils refactor(logging): rename llama_stack logger categories (#3065) 2025-08-21 17:31:04 -07:00
__init__.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
build.py feat(distro): no huggingface provider for starter (#3258) 2025-08-26 14:06:36 -07:00
build_container.sh chore: fix build (#3522) 2025-09-22 22:53:48 -07:00
build_venv.sh fix(ci, tests): ensure uv environments in CI are kosher, record tests (#3193) 2025-08-18 17:02:24 -07:00
client.py feat: introduce API leveling, post_training, eval to v1alpha (#3449) 2025-09-26 16:18:07 +02:00
common.sh refactor: remove Conda support from Llama Stack (#2969) 2025-08-02 15:52:59 -07:00
configure.py chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) 2025-08-20 07:15:35 -04:00
datatypes.py feat: Add OpenAI Conversations API (#3429) 2025-10-03 08:47:18 -07:00
distribution.py feat: allow for multiple external provider specs 2025-10-03 13:30:48 -04:00
external.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
inspect.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
library_client.py chore: refactor server.main (#3462) 2025-09-18 21:11:13 -07:00
providers.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
request_headers.py chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) 2025-08-20 07:15:35 -04:00
resolver.py feat: Add OpenAI Conversations API (#3429) 2025-10-03 08:47:18 -07:00
stack.py feat: Add OpenAI Conversations API (#3429) 2025-10-03 08:47:18 -07:00
start_stack.sh docs: update documentation links (#3459) 2025-09-17 10:37:35 -07:00