llama-stack-mirror/llama_stack
Xi Yan · e2054d53e4 · Fix issue 586 (#594) · 2024-12-10 10:22:04 -08:00
# What does this PR do?

- Addresses issue #586.

## Test Plan

```
python llama_stack/scripts/distro_codegen.py
```
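As an optional sanity check (not described in the PR), one could also verify that re-running the codegen leaves no uncommitted changes; the `llama_stack/templates/` path below is an assumption based on the directories touched by this commit, not something stated in the PR:

```
# Hypothetical verification (assumes distro_codegen.py rewrites files under
# llama_stack/templates/): re-run codegen, then fail if anything changed.
python llama_stack/scripts/distro_codegen.py
git diff --exit-code llama_stack/templates/
```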


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.

| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `apis` | memory retrieval to print only the bytes injected | 2024-12-10 09:32:18 -08:00 |
| `cli` | Remove the unnecessary message after llama stack build | 2024-12-10 09:46:56 -08:00 |
| `distribution` | Revert "add tracing to library client (#591)" | 2024-12-10 08:50:20 -08:00 |
| `providers` | Revert "add tracing to library client (#591)" | 2024-12-10 08:50:20 -08:00 |
| `scripts` | Integrate distro docs into the restructured docs | 2024-11-20 23:20:05 -08:00 |
| `templates` | Fix issue 586 (#594) | 2024-12-10 10:22:04 -08:00 |
| `__init__.py` | Miscellaneous fixes around telemetry, library client and run yaml autogen | 2024-12-08 20:40:22 -08:00 |