llama-stack-mirror/llama_stack/templates
Sébastien Han edd9aaac3b
fix: use torchao 0.8.0 for inference (#1925)
# What does this PR do?

While building the "experimental-post-training" distribution, we encountered a torchao version conflict: the inference requirements pinned torchao==0.5.0 while training depended on torchao==0.8.0. This change moves inference to torchao 0.8.0 so the two requirements agree.

Resolves this error:

```
  × No solution found when resolving dependencies:
  ╰─▶ Because you require torchao==0.5.0 and torchao==0.8.0, we can conclude that your requirements are unsatisfiable.
ERROR    2025-04-10 10:41:22,597 llama_stack.distribution.build:128 uncategorized: Failed to build target test with
         return code 1
```
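
For readers unfamiliar with how the resolver sees this, here is a minimal before/after sketch of the torchao pins only (illustrative; the surrounding package list of the distribution is elided and the exact pins live in the generated dependencies.json):

```
# Before (illustrative): the combined distribution requirements carry two pins
# that no single torchao release can satisfy.
torchao==0.5.0    # pulled in by the inference requirements
torchao==0.8.0    # pulled in by the post-training requirements

# After: inference is bumped to 0.8.0, so the set resolves to a single version.
torchao==0.8.0
```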

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-04-10 13:39:20 -07:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| bedrock | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| cerebras | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| ci-tests | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| dell | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| dev | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| experimental-post-training | fix: fix experimental-post-training template (#1740) | 2025-03-20 23:07:19 -07:00 |
| fireworks | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| groq | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| hf-endpoint | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| hf-serverless | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| meta-reference-gpu | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| nvidia | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| ollama | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| open-benchmark | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| passthrough | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| remote-vllm | docs: Redirect instructions for additional hardware accelerators for remote vLLM provider (#1923) | 2025-04-10 10:04:17 +02:00 |
| sambanova | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| tgi | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| together | test: verification on provider's OAI endpoints (#1893) | 2025-04-07 23:06:28 -07:00 |
| verification | fix: type (#1898) | 2025-04-08 09:07:25 -07:00 |
| vllm-gpu | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| dependencies.json | fix: use torchao 0.8.0 for inference (#1925) | 2025-04-10 13:39:20 -07:00 |
| template.py | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |