Fixes multiple issues:

1. `llama stack build` of dependencies was breaking with incompatible numpy / pandas versions when importing datasets. Moved the notebook to start a local server instead of using the library as a client. This keeps the setup cleaner since it is all self-contained, and by using `uv run --with` we can also exercise the server setup process in CI and at release time (see the sketch after the test steps below).
2. The change in [1] surfaced some other issues:
   - running `llama stack run` was defaulting to the conda env name
   - provider data was not being managed properly
   - some notebook cells (telemetry for evals) were not updated with the latest changes

Fixed all the issues and updated the notebook.

### Test

1. Manually ran it all in a local env
2. `pytest -v -s --nbval-lax docs/getting_started.ipynb`
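
For context on [1], here is a minimal sketch of what launching the local server from a notebook cell could look like. The distribution name `starter`, port `8321`, and the `wait_for_port` helper are illustrative assumptions, not necessarily what `docs/getting_started.ipynb` uses:

```python
import socket
import subprocess
import time

# Launch the stack server as a separate process. `uv run --with llama-stack`
# resolves the server's dependencies in its own environment, so the notebook
# kernel never imports them directly (avoiding the numpy / pandas conflict).
# "starter" and port 8321 are placeholder assumptions; substitute the
# distribution and port the notebook actually uses.
server = subprocess.Popen(
    ["uv", "run", "--with", "llama-stack", "llama", "stack", "run", "starter"]
)

def wait_for_port(host: str, port: int, timeout: float = 120.0) -> None:
    """Block until the server accepts TCP connections, so later client cells
    in the notebook do not race against server startup."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return
        except OSError:
            time.sleep(2)
    raise RuntimeError(f"server on {host}:{port} did not come up in time")

wait_for_port("localhost", 8321)
```

Because the server runs out-of-process via `uv run --with`, the same cell also exercises the server setup path when the notebook is executed under `--nbval-lax` in CI.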