llama-stack/llama_stack/providers
Jash Gulabrai 30fc66923b
fix: Add llama-3.2-1b-instruct to NVIDIA fine-tuned model list (#1975)
# What does this PR do?
Adds `meta/llama-3.2-1b-instruct` to the list of models that NeMo Customizer
can fine-tune. This is the model our example notebooks typically use for
fine-tuning.
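
The change described above amounts to extending a supported-models allowlist. A minimal sketch of that pattern (the `SUPPORTED_FINE_TUNING_MODELS` name and `is_supported` helper are illustrative assumptions, not the actual llama-stack provider code):

```python
# Hypothetical sketch of an allowlist of models the NeMo Customizer
# provider can fine-tune. Names here are assumptions for illustration.
SUPPORTED_FINE_TUNING_MODELS = [
    "meta/llama-3.1-8b-instruct",
    "meta/llama-3.2-1b-instruct",  # newly added: used by the example notebooks
]


def is_supported(model_id: str) -> bool:
    """Return True if the given model ID is in the fine-tuning allowlist."""
    return model_id in SUPPORTED_FINE_TUNING_MODELS
```

With this entry present, a fine-tuning request for `meta/llama-3.2-1b-instruct` passes the allowlist check instead of being rejected as an unsupported model.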


Co-authored-by: Jash Gulabrai <jgulabrai@nvidia.com>
2025-04-16 15:02:08 -07:00
| Entry | Last commit | Date |
|-------|-------------|------|
| inline | feat: Implement async job execution for torchtune training (#1437) | 2025-04-14 08:59:11 -07:00 |
| registry | fix: use torchao 0.8.0 for inference (#1925) | 2025-04-10 13:39:20 -07:00 |
| remote | fix: Add llama-3.2-1b-instruct to NVIDIA fine-tuned model list (#1975) | 2025-04-16 15:02:08 -07:00 |
| tests | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| utils | feat: Implement async job execution for torchtune training (#1437) | 2025-04-14 08:59:11 -07:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | feat: add health to all providers through providers endpoint (#1418) | 2025-04-14 11:59:36 +02:00 |