fix: #1867 InferenceRouter has no attribute formatter (#2422)

# What does this PR do?

Closes #1867 

[Steps to reproduce the bug](https://github.com/meta-llama/llama-stack/issues/1867#issuecomment-2956819381)

The change was designed to minimize code changes. I am open to skipping the `metrics` field entirely when `telemetry` is disabled; a sketch of that alternative follows.
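
A minimal sketch of that alternative, assuming hypothetical helper and parameter names rather than the actual `InferenceRouter` internals:

```python
# Sketch only: omit the `metrics` field when telemetry is disabled,
# instead of emitting zero-valued metrics. All names are illustrative.
def attach_metrics(
    response: dict,
    token_counts: dict[str, int] | None,
    telemetry_enabled: bool,
) -> dict:
    if not telemetry_enabled or token_counts is None:
        # Telemetry is off (or counting was impossible): skip the field.
        return response
    response["metrics"] = [
        {"metric": name, "value": value, "unit": None}
        for name, value in token_counts.items()
    ]
    return response
```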


## Test Plan
1. Build llama-stack remote-vllm container
    ```bash
    llama stack build --template remote-vllm --image-type container
    ```
2. Create a small run.yaml
    ```yaml
    version: '2'
    image_name: remote-vllm
    apis:
    - inference
    providers:
      inference:
      - provider_id: vllm-inference
        provider_type: remote::vllm
        config:
          url: ${env.VLLM_URL:http://localhost:8000/v1}
          max_tokens: ${env.VLLM_MAX_TOKENS:4096}
          api_token: ${env.VLLM_API_TOKEN:fake}
          tls_verify: ${env.VLLM_TLS_VERIFY:true}
    metadata_store:
      type: sqlite
      db_path: ${env.SQLITE_STORE_DIR:~/.llama/distributions/remote-vllm}/registry.db
    inference_store:
      type: sqlite
      db_path: ${env.SQLITE_STORE_DIR:~/.llama/distributions/remote-vllm}/inference_store.db
    models:
    - metadata: {}
      model_id: ${env.INFERENCE_MODEL}
      provider_id: vllm-inference
      model_type: llm
    shields: []
    vector_dbs: []
    datasets: []
    scoring_fns: []
    benchmarks: []
    server:
      port: 8321
    ```
3. Run the llama-stack server
    ```bash
    export VLLM_URL="http://localhost:8000/v1"
    export INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
    llama stack run run.yaml
    ```
4. Then send a request with `curl`
    ```bash
    curl -X 'POST' \
      'http://localhost:8321/v1/inference/completion' \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{
      "model_id": "meta-llama/Llama-3.2-3B-Instruct",
      "content": "string",
      "sampling_params": {
        "strategy": {
          "type": "greedy"
        },
        "max_tokens": 10,
        "repetition_penalty": 1,
        "stop": [
          "string"
        ]
      },
      "stream": false,
      "logprobs": {
        "top_k": 0
      }
    }'
    ```
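    The same request can also be sent from Python; below is a sketch using the `requests` library (assumed installed via `pip install requests`) that mirrors the curl payload above:
    ```python
    import requests

    # Mirrors the curl call above; assumes the server from step 3 is
    # listening on localhost:8321.
    resp = requests.post(
        "http://localhost:8321/v1/inference/completion",
        headers={"accept": "application/json"},
        json={
            "model_id": "meta-llama/Llama-3.2-3B-Instruct",
            "content": "string",
            "sampling_params": {
                "strategy": {"type": "greedy"},
                "max_tokens": 10,
                "repetition_penalty": 1,
                "stop": ["string"],
            },
            "stream": False,
            "logprobs": {"top_k": 0},
        },
    )
    print(resp.status_code, resp.json())
    ```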
5. You should receive a 200 response with metric values set to 0, similar to the one below:
    ```json
    {
      "metrics": [
        {
          "metric": "prompt_tokens",
          "value": 0,
          "unit": null
        },
        {
          "metric": "completion_tokens",
          "value": 0,
          "unit": null
        },
        {
          "metric": "total_tokens",
          "value": 0,
          "unit": null
        }
      ],
      [...]
    }
    ```

Co-authored-by: Rohan Awhad <rawhad@redhat.com>

```diff
@@ -163,6 +163,9 @@ class InferenceRouter(Inference):
         messages: list[Message] | InterleavedContent,
         tool_prompt_format: ToolPromptFormat | None = None,
     ) -> int | None:
+        if not hasattr(self, "formatter") or self.formatter is None:
+            return None
         if isinstance(messages, list):
             encoded = self.formatter.encode_dialog_prompt(messages, tool_prompt_format)
         else:
```
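
The zero-valued metrics in step 5 are consistent with callers treating a `None` token count as zero. A minimal sketch of that fallback, assuming illustrative names rather than the actual router code:

```python
# Sketch: build the metrics list, falling back to 0 when token counting
# returned None (e.g. no formatter was initialized). Names are illustrative.
def token_metrics(prompt_tokens: int | None, completion_tokens: int | None) -> list[dict]:
    prompt = prompt_tokens or 0
    completion = completion_tokens or 0
    return [
        {"metric": "prompt_tokens", "value": prompt, "unit": None},
        {"metric": "completion_tokens", "value": completion, "unit": None},
        {"metric": "total_tokens", "value": prompt + completion, "unit": None},
    ]
```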