llama-stack-mirror/llama_stack/providers/inline/inference
Dmitry Rogozhkin 7ea14ae62e
feat: enable xpu support for meta-reference stack (#558)
This commit adds support for XPU and CPU devices to the meta-reference
stack for text models. On creation, the stack automatically identifies which
device to use by checking the available accelerator capabilities in the
following order: CUDA, then XPU, and finally CPU. This behaviour can be
overridden with the `DEVICE` environment variable, in which case the
explicitly specified device is used.
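
A minimal sketch of that selection order (not the stack's actual code;
`resolve_device` is a hypothetical helper name, and the XPU check assumes a
PyTorch build recent enough to expose `torch.xpu`, or one with
`intel_extension_for_pytorch` loaded):

```python
import os

import torch


def resolve_device() -> torch.device:
    """Pick a device in the order CUDA -> XPU -> CPU, unless the
    DEVICE environment variable names one explicitly."""
    explicit = os.environ.get("DEVICE")
    if explicit:
        # Explicit override, e.g. DEVICE=cpu or DEVICE=xpu.
        return torch.device(explicit)
    if torch.cuda.is_available():
        return torch.device("cuda")
    # torch.xpu only exists on recent PyTorch builds / with IPEX.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.device("xpu")
    return torch.device("cpu")
```

With logic like this, something along the lines of `DEVICE=cpu torchrun
pytest ...` would force the CPU path even on a machine with an accelerator.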

Tested with:
```
torchrun pytest llama_stack/providers/tests/inference/test_text_inference.py -k meta_reference
```

Results:
* Tested on: a system with a single CUDA device, a system with a single
XPU device, and a CPU-only system
* Results: all tests pass except `test_completion_logprobs`
* `test_completion_logprobs` fails in the same way as on the baseline,
i.e. it is unrelated to this change: `AssertionError: Unexpected top_k=3`

Requires: https://github.com/meta-llama/llama-models/pull/233

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-01-31 12:11:49 -08:00
| Directory / file | Last commit | Date |
|---|---|---|
| `meta_reference` | feat: enable xpu support for meta-reference stack (#558) | 2025-01-31 12:11:49 -08:00 |
| `sentence_transformers` | remove conflicting default for tool prompt format in chat completion (#742) | 2025-01-10 10:41:53 -08:00 |
| `vllm` | Convert SamplingParams.strategy to a union (#767) | 2025-01-15 05:38:51 -08:00 |
| `__init__.py` | precommit | 2024-11-08 17:58:58 -08:00 |