The telemetry module was moved from llama_stack.apis.telemetry to llama_stack.core.telemetry in PR #3919. This updates the import in the provider data routing test to use the new location.
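For reference, the import change looks roughly like this; the imported
symbol name is illustrative, only the module path move is taken from the
PR:

```python
# Old location, removed by PR #3919:
# from llama_stack.apis.telemetry import Telemetry

# New location used by the provider data routing test:
from llama_stack.core.telemetry import Telemetry
```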
Consider a remote inference provider that only works when users supply
their own API keys via provider_data. By definition, we cannot list its
models ahead of time and therefore cannot populate our routing registry.
But because model names are now required to include a provider ID, we
can identify which provider to route the request to and let that
provider decide whether it can serve it.
Note that we still consult the registry, since the model may have a
pre-registered alias; we just no longer fail outright when the lookup
comes up empty.
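A minimal sketch of this routing behavior, assuming models are
addressed as "<provider_id>/<model_id>". The names and signatures here
(`lookup_model`, `get_provider_impl`) are illustrative, not the actual
llama-stack internals:

```python
from typing import Any, Awaitable, Callable


async def resolve_provider(
    model: str,
    lookup_model: Callable[[str], Awaitable[Any]],  # async registry lookup
    get_provider_impl: Callable[[str], Any],        # provider_id -> provider
) -> Any:
    """Pick the provider that should serve `model` (illustrative only)."""
    # Consult the registry first: the model may be a pre-registered alias.
    entry = await lookup_model(model)
    if entry is not None:
        return get_provider_impl(entry.provider_id)

    # Unregistered model (e.g. the provider needs per-request API keys from
    # provider_data, so we could never list its models). Fall back to the
    # provider ID embedded in the model name and let that provider decide.
    provider_id, sep, _ = model.partition("/")
    if not sep or not provider_id:
        raise ValueError(f"cannot determine provider for model {model!r}")
    return get_provider_impl(provider_id)
```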
Also updated the inference router so that responses carry the _exact_
model string that the request used.
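A sketch of that response fix, with made-up request/response types; the
only point is that the router copies the model string from the request
rather than reporting a registry-normalized identifier:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class ChatRequest:
    model: str  # exactly as the caller sent it, e.g. "openai/gpt-4o-mini"
    messages: list[dict]


@dataclass
class ChatResponse:
    model: str
    content: str


async def route_chat_completion(request: ChatRequest, provider: Any) -> ChatResponse:
    # The provider may alias or normalize the model name internally; the
    # router still reports the caller's exact model back in the response.
    result = await provider.chat_completion(request)
    return ChatResponse(model=request.model, content=result.content)
```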
Added an integration test covering this flow.
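A minimal sketch of the kind of check such a test performs (not the
actual test from this PR): the endpoint path, environment variables,
and provider_data key are assumptions for illustration.

```python
import json
import os

import pytest
import requests

BASE_URL = os.environ.get("LLAMA_STACK_BASE_URL", "http://localhost:8321")
MODEL = "openai/gpt-4o-mini"  # fully qualified; may not be pre-registered


@pytest.mark.integration
def test_routes_unregistered_model_via_provider_data():
    response = requests.post(
        f"{BASE_URL}/v1/openai/v1/chat/completions",  # assumed OpenAI-compat path
        headers={
            # provider_data carries the user's own API key for the provider
            "X-LlamaStack-Provider-Data": json.dumps(
                {"openai_api_key": os.environ["OPENAI_API_KEY"]}
            ),
        },
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": "Say hello."}],
        },
    )
    response.raise_for_status()
    # The router should echo back the exact model string from the request.
    assert response.json()["model"] == MODEL
```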