llama-stack-mirror/tests/integration/telemetry
Ashwin Bharambe d089a6d106 fix(inference): enable routing of models with provider_data alone
Assume a remote inference provider that works only when users supply
their own API keys via provider_data. By definition, we cannot list its
models and therefore cannot populate our routing registry. But because we
now _require_ a provider ID in model identifiers, we can determine which
provider to route to and let that provider decide.

Note that we still look up the registry first, since it may contain a
pre-registered alias; we just no longer fail outright when the lookup
misses.
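
For illustration, a minimal Python sketch of that fallback follows; this is not the actual router code, and names such as `routing_table.get_model`, `get_provider_impl`, and the `provider_id/model_name` split convention are assumptions:

```python
async def route_inference(model_id: str, routing_table, get_provider_impl):
    """Resolve which provider should serve `model_id` (hypothetical sketch)."""
    # Prefer a pre-registered alias from the routing registry, if one exists.
    model = await routing_table.get_model(model_id)
    if model is not None:
        return get_provider_impl(model.provider_id), model.provider_resource_id

    # Registry miss: fall back to the provider ID encoded in the identifier
    # (e.g. "together/meta-llama/Llama-3.3-70B-Instruct") and let that provider
    # decide, using the per-request API keys supplied via provider_data.
    provider_id, _, provider_model = model_id.partition("/")
    if not provider_model:
        raise ValueError(f"cannot determine provider for model '{model_id}'")
    return get_provider_impl(provider_id), provider_model
```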

Also updated the inference router so that responses carry the _exact_
model identifier from the request.
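
A hedged sketch of what that looks like at the call site; `openai_chat_completion` and the `model` field names are assumptions for illustration:

```python
# After the provider call, overwrite the model field so the response echoes
# exactly the identifier the client sent, not the provider's canonical name.
response = await provider.openai_chat_completion(params)
response.model = params.model
return response
```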

Added an integration test
2025-10-27 18:58:32 -07:00
recordings           test(telemetry): Telemetry Tests (#3805)                           2025-10-17 10:43:33 -07:00
conftest.py          chore(telemetry): code cleanup (#3897)                             2025-10-23 23:13:02 -07:00
test_completions.py  fix(inference): enable routing of models with provider_data alone  2025-10-27 18:58:32 -07:00