# What does this PR do?
- Updated the notebooks to reflect changes up to llama-stack 0.0.53.
- Updated the README to provide accurate and up-to-date information.
- Improved the current Zero to Hero guide by integrating an example using
the Together API (see the sketch after this list).
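
For readers who haven't used it, a Together API chat call has roughly the shape sketched below. This is a generic illustration using the `together` Python SDK, not the code added to the notebook; the model name is illustrative and `TOGETHER_API_KEY` is assumed to be set in the environment.
```python
# Generic sketch of a Together API chat call; not the notebook's exact code.
from together import Together

client = Together()  # picks up TOGETHER_API_KEY from the environment
response = client.chat.completions.create(
    model="meta-llama/Llama-3-8b-chat-hf",  # illustrative model name
    messages=[{"role": "user", "content": "What is Llama Stack?"}],
)
print(response.choices[0].message.content)
```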
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
---------
Co-authored-by: Sanyam Bhutani <sanyambhutani@meta.com>
# What does this PR do?
Safety provider `inline::meta-reference` is now deprecated. However, we
* aren't checking / printing the deprecation message in `llama stack
build`
* make the deprecated (unusable) provider the default

So I (1) added the deprecation check and (2) made `inline::llama-guard`
the default safety provider.
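
Conceptually, the added check is small: when generating the run config, fail fast if a selected provider carries a deprecation error. Below is a minimal sketch inferred from the "After" traceback in the test plan, not the exact source; the `deprecation_error` field and `InvalidProviderError` names are taken from that traceback.
```python
# Sketch of the deprecation check, inferred from the "After" traceback;
# not the exact llama-stack source.
from llama_stack.distribution.resolver import InvalidProviderError


def check_provider_deprecations(providers) -> None:
    """Fail the build early if any selected provider is deprecated."""
    for p in providers:
        if getattr(p, "deprecation_error", None):
            raise InvalidProviderError(p.deprecation_error)
```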
## Test Plan
Before
```
Traceback (most recent call last):
File "/home/dalton/.conda/envs/nov22/bin/llama", line 8, in <module>
sys.exit(main())
File "/home/dalton/all/llama-stack/llama_stack/cli/llama.py", line 46, in main
parser.run(args)
File "/home/dalton/all/llama-stack/llama_stack/cli/llama.py", line 40, in run
args.func(args)
File "/home/dalton/all/llama-stack/llama_stack/cli/stack/build.py", line 177, in _run_stack_build_command
self._run_stack_build_command_from_build_config(build_config)
File "/home/dalton/all/llama-stack/llama_stack/cli/stack/build.py", line 305, in _run_stack_build_command_from_build_config
self._generate_run_config(build_config, build_dir)
File "/home/dalton/all/llama-stack/llama_stack/cli/stack/build.py", line 226, in _generate_run_config
config_type = instantiate_class_type(
File "/home/dalton/all/llama-stack/llama_stack/distribution/utils/dynamic.py", line 12, in instantiate_class_type
module = importlib.import_module(module_name)
File "/home/dalton/.conda/envs/nov22/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'llama_stack.providers.inline.safety.meta_reference'
```
After
```
Traceback (most recent call last):
File "/home/dalton/.conda/envs/nov22/bin/llama", line 8, in <module>
sys.exit(main())
File "/home/dalton/all/llama-stack/llama_stack/cli/llama.py", line 46, in main
parser.run(args)
File "/home/dalton/all/llama-stack/llama_stack/cli/llama.py", line 40, in run
args.func(args)
File "/home/dalton/all/llama-stack/llama_stack/cli/stack/build.py", line 177, in _run_stack_build_command
self._run_stack_build_command_from_build_config(build_config)
File "/home/dalton/all/llama-stack/llama_stack/cli/stack/build.py", line 309, in _run_stack_build_command_from_build_config
self._generate_run_config(build_config, build_dir)
File "/home/dalton/all/llama-stack/llama_stack/cli/stack/build.py", line 228, in _generate_run_config
raise InvalidProviderError(p.deprecation_error)
llama_stack.distribution.resolver.InvalidProviderError:
Provider `inline::meta-reference` for API `safety` does not work with the latest Llama Stack.
- if you are using Llama Guard v3, please use the `inline::llama-guard` provider instead.
- if you are using Prompt Guard, please use the `inline::prompt-guard` provider instead.
- if you are using Code Scanner, please use the `inline::code-scanner` provider instead.
```
<img width="469" alt="Screenshot 2024-11-22 at 4 10 24 PM"
src="https://github.com/user-attachments/assets/8c2e09fe-379a-4504-b246-7925f80a6ed6">
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
# What does this PR do?
This PR moves all print statements to use logging. Things changed:
- Had to add `await start_trace("sse_generator")` to server.py to
actually get tracing working; otherwise no logs were showing up.
- If no telemetry provider is provided in the run.yaml, we will write to
stdout.
- By default, the logs are emitted as JSON, but we expose an option to
output them in a human-readable format (see the sketch after this list).
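
As a rough illustration of the JSON-by-default / human-readable-on-request behavior described above, here is a minimal sketch built on the standard `logging` module. The `configure_logging` helper and its `human_readable` flag are hypothetical names, not llama-stack's actual API.
```python
# Minimal sketch of JSON-by-default logging with a human-readable option;
# configure_logging and human_readable are hypothetical names.
import json
import logging
import sys


class JSONFormatter(logging.Formatter):
    """Render each record as one JSON object per line."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
        })


def configure_logging(human_readable: bool = False) -> None:
    """Write logs to stdout: JSON by default, plain text if requested."""
    handler = logging.StreamHandler(sys.stdout)
    if human_readable:
        handler.setFormatter(
            logging.Formatter("%(levelname)s %(name)s: %(message)s")
        )
    else:
        handler.setFormatter(JSONFormatter())
    logging.basicConfig(level=logging.INFO, handlers=[handler], force=True)
```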
# What does this PR do?
Fix fp8 quantization script.
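
The script itself isn't reproduced here; as a reminder of what per-tensor fp8 quantization does, here is an illustrative PyTorch sketch. It assumes torch >= 2.1 for the `float8_e4m3fn` dtype and is not the script's actual code.
```python
# Illustrative per-tensor fp8 (e4m3) quantization; not the script's code.
# Assumes torch >= 2.1 for the float8_e4m3fn dtype.
import torch

FP8_MAX = 448.0  # largest finite value representable in float8_e4m3fn


def quantize_fp8(weight: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Scale into fp8 range, cast, and return (fp8_weight, scale)."""
    scale = weight.abs().max().clamp(min=1e-12) / FP8_MAX
    fp8_weight = (weight / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return fp8_weight, scale


def dequantize_fp8(fp8_weight: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximation of the original weights."""
    return fp8_weight.to(torch.float32) * scale
```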
## Test Plan
```
sh run_quantize_checkpoint.sh localhost fp8 /home/yll/fp8_test/ /home/yll/fp8_test/quantized_2 /home/yll/fp8_test/tokenizer.model 1 1
```
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
Pull Request section?
- [x] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.
Co-authored-by: Yunlu Li <yll@meta.com>
# What does this PR do?
As the title says.
## Test Plan
This needs
8752149f58
to also land; the next package (0.0.54) will then make this work properly.
The test is:
```bash
pytest -v -s -m "llama_3b and meta_reference" test_model_registration.py
```