`llama-stack/llama_stack/providers/utils`

Latest commit `530d4bdfe1` by Ashwin Bharambe: refactor: move all llama code to models/llama out of meta reference (#1887)
# What does this PR do?

Move around bits. This makes the copies from llama-models _much_ easier
to maintain and ensures we don't entangle meta-reference-specific
tidbits into llama-models code, even by accident.

Also kills the `meta-reference-quantized-gpu` distro and rolls its
quantization dependencies into `meta-reference-gpu`.
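
Since quantization support now ships with the main distro, a single build should cover both the quantized and unquantized runs in the test plan below. A minimal sketch, assuming the standard template build flow:

```
# Build the consolidated distro; the quantization deps are assumed to
# now be pulled in by meta-reference-gpu itself.
llama stack build --template meta-reference-gpu --image-type conda
```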

## Test Plan

```
LLAMA_MODELS_DEBUG=1 \
  with-proxy llama stack run meta-reference-gpu \
  --env INFERENCE_MODEL=meta-llama/Llama-4-Scout-17B-16E-Instruct \
  --env INFERENCE_CHECKPOINT_DIR=<DIR> \
  --env MODEL_PARALLEL_SIZE=4 \
  --env QUANTIZATION_TYPE=fp8_mixed
```
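
The test plan covers starting the server both with and without quantization. For the non-quantized run, a sketch, assuming the same invocation simply drops `QUANTIZATION_TYPE`:

```
# Non-quantized server: identical invocation without QUANTIZATION_TYPE
# (assumed to fall back to the default, unquantized inference path).
LLAMA_MODELS_DEBUG=1 \
  with-proxy llama stack run meta-reference-gpu \
  --env INFERENCE_MODEL=meta-llama/Llama-4-Scout-17B-16E-Instruct \
  --env INFERENCE_CHECKPOINT_DIR=<DIR> \
  --env MODEL_PARALLEL_SIZE=4
```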

With a server running in either mode, point the integration tests at it
using:

```
pytest -s -v tests/integration/inference/test_text_inference.py \
  --stack-config http://localhost:8321 \
  --text-model meta-llama/Llama-4-Scout-17B-16E-Instruct
```
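
Before running the full suite, a quick liveness check against the server can catch startup problems early. A sketch, assuming the stack serves its usual model-listing route under `/v1`:

```
# Sanity check: the server should respond and list the registered model.
curl -s http://localhost:8321/v1/models
```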
Committed 2025-04-07 15:03:58 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| `bedrock` | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| `common` | feat: [new open benchmark] BFCL_v3 (#1578) | 2025-03-14 12:50:49 -07:00 |
| `datasetio` | refactor: extract pagination logic into shared helper function (#1770) | 2025-03-31 13:08:29 -07:00 |
| `inference` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `kvstore` | chore: made inbuilt tools blocking calls into async non blocking calls (#1509) | 2025-03-09 16:59:24 -07:00 |
| `memory` | fix(deps): move chardet and pypdf imports inline where used (#1434) | 2025-03-06 17:09:14 -08:00 |
| `scoring` | feat: [New Eval Benchmark] IfEval (#1708) | 2025-03-19 16:39:59 -07:00 |
| `telemetry` | feat: use same trace ids in stack and otel (#1759) | 2025-03-21 15:41:26 -07:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |