fix: code was injecting run_config.vector_stores even when it was None. (#4423)

# What does this PR do?
Fixed an issue where the resolver injected `run_config.vector_stores` even
when it was `None`, which overrode the `default_factory` on
`vector_stores_config` in `RagToolRuntimeConfig` (a minimal sketch of the
failure mode follows the lists below). Currently, _most_ distributions don't
configure `vector_stores` by default:

 - nvidia
 - meta-reference-gpu
 - dell
 - oci
 - open-benchmark
 - postgres-demo
 - watsonx

The only ones which do are:
 - ci-tests
 - starter
 - starter-gpu

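To illustrate why injecting `None` breaks, here is a minimal standalone sketch. It is not the real llama-stack models: the field inside `VectorStoresConfig` is hypothetical, and only the `vector_stores_config` field with a `default_factory` mirrors the actual `RagToolRuntimeConfig` (assuming Pydantic v2). The factory only applies when the key is absent; an explicit `None` fails validation.

```python
# Minimal sketch of the failure mode (hypothetical field names, not the real
# llama-stack models), assuming Pydantic v2.
from pydantic import BaseModel, Field


class VectorStoresConfig(BaseModel):
    # Placeholder field; the real VectorStoresConfig has its own fields.
    default_provider: str = "faiss"


class RagToolRuntimeConfig(BaseModel):
    # default_factory only kicks in when the key is omitted entirely.
    vector_stores_config: VectorStoresConfig = Field(default_factory=VectorStoresConfig)


RagToolRuntimeConfig()  # OK: default_factory builds a VectorStoresConfig
RagToolRuntimeConfig(vector_stores_config=None)  # ValidationError, as in the traceback below
```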
## Test Plan
Prior to the change, I could not start llama-stack with the oci
distribution:

```
Traceback (most recent call last):
  File "/home/opc/llama-stack/.venv/bin/llama", line 10, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/opc/llama-stack/src/llama_stack/cli/llama.py", line 52, in main
    parser.run(args)
  File "/home/opc/llama-stack/src/llama_stack/cli/llama.py", line 46, in run
    args.func(args)
  File "/home/opc/llama-stack/src/llama_stack/cli/stack/run.py", line 184, in _run_stack_run_cmd
    self._uvicorn_run(config_file, args)
  File "/home/opc/llama-stack/src/llama_stack/cli/stack/run.py", line 242, in _uvicorn_run
    uvicorn.run("llama_stack.core.server.server:create_app", **uvicorn_config)  # type: ignore[arg-type]
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/opc/llama-stack/.venv/lib/python3.12/site-packages/uvicorn/main.py", line 580, in run
    server.run()
  File "/home/opc/llama-stack/.venv/lib/python3.12/site-packages/uvicorn/server.py", line 67, in run
    return asyncio.run(self.serve(sockets=sockets))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/opc/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/home/opc/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/opc/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/opc/llama-stack/.venv/lib/python3.12/site-packages/uvicorn/server.py", line 71, in serve
    await self._serve(sockets)
  File "/home/opc/llama-stack/.venv/lib/python3.12/site-packages/uvicorn/server.py", line 78, in _serve
    config.load()
  File "/home/opc/llama-stack/.venv/lib/python3.12/site-packages/uvicorn/config.py", line 442, in load
    self.loaded_app = self.loaded_app()
                      ^^^^^^^^^^^^^^^^^
  File "/home/opc/llama-stack/src/llama_stack/core/server/server.py", line 403, in create_app
    app = StackApp(
          ^^^^^^^^^
  File "/home/opc/llama-stack/src/llama_stack/core/server/server.py", line 161, in __init__
    future.result()
  File "/home/opc/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/home/opc/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/home/opc/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/concurrent/futures/thread.py", line 59, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/opc/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/home/opc/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/opc/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/opc/llama-stack/src/llama_stack/core/stack.py", line 534, in initialize
    impls = await resolve_impls(
            ^^^^^^^^^^^^^^^^^^^^
  File "/home/opc/llama-stack/src/llama_stack/core/resolver.py", line 180, in resolve_impls
    return await instantiate_providers(sorted_providers, router_apis, dist_registry, run_config, policy, internal_impls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/opc/llama-stack/src/llama_stack/core/resolver.py", line 321, in instantiate_providers
    impl = await instantiate_provider(provider, deps, inner_impls, dist_registry, run_config, policy)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/opc/llama-stack/src/llama_stack/core/resolver.py", line 417, in instantiate_provider
    config = config_type(**provider_config)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/opc/llama-stack/.venv/lib/python3.12/site-packages/pydantic/main.py", line 253, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for RagToolRuntimeConfig
vector_stores_config
  Input should be a valid dictionary or instance of VectorStoresConfig [type=model_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.11/v/model_type
```

After tracing through the code and finding a simple fix, I was able to run
the distribution again. I also ran the integration tests with pytest:

```bash
OCI_COMPARTMENT_OCID="ocid1.compartment.oc1..xxx" OCI_REGION="us-chicago-1" OCI_AUTH_TYPE=instance_principal OCI_CLI_PROFILE=CHICAGO uv run pytest -sv tests/integration/inference/ --stack-config oci --text-model oci/meta.llama-3.3-70b-instruct --inference-mode live
```

The fix in `src/llama_stack/core/resolver.py` only injects `vector_stores_config` when `run_config.vector_stores` is actually set:

```diff
@@ -412,6 +412,8 @@ async def instantiate_provider(
     # Inject vector_stores_config for providers that need it (introspection-based)
     config_type = instantiate_class_type(provider_spec.config_class)
     if hasattr(config_type, "__fields__") and "vector_stores_config" in config_type.__fields__:
-        provider_config["vector_stores_config"] = run_config.vector_stores
+        # Only inject if vector_stores is provided, otherwise let default_factory handle it
+        if run_config.vector_stores is not None:
+            provider_config["vector_stores_config"] = run_config.vector_stores
     config = config_type(**provider_config)
```
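
With this guard in place, a distribution that omits `vector_stores` no longer passes an explicit `None` into the provider config, so Pydantic's `default_factory` on `vector_stores_config` applies as intended.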