fix: multiple issues with getting_started notebook (#1795)

Fixes multiple issues:

1. `llama stack build` was producing an environment with incompatible `numpy` / `pandas` versions, which broke importing `datasets`.

Moved the notebook to start a local server instead of using the library as a client. This keeps the setup cleaner since everything is self-contained, and by using `uv run --with` we can also exercise the server setup process in CI and at release time.

2. The change in [1] surfaced some other issues:
- running `llama stack run` was defaulting to the conda env name
- provider data was not being managed properly
- some notebook cells (telemetry for evals) were not updated with the latest
changes

Fixed all of these issues and updated the notebook.
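
On the provider-data point specifically, the client-side wiring now looks roughly like the sketch below; the port and the exact key name are assumptions rather than verbatim notebook contents:

```python
import os
from llama_stack_client import LlamaStackClient

# provider_data is serialized into a request header that the server uses to pass
# per-request provider credentials (e.g. the Together API key) to the right provider.
client = LlamaStackClient(
    base_url="http://localhost:8321",
    provider_data={"together_api_key": os.environ["TOGETHER_API_KEY"]},
)

print([m.identifier for m in client.models.list()])
```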

### Test 

1. Manually ran the whole notebook in a local env
2. `pytest -v -s --nbval-lax docs/getting_started.ipynb`

@@ -18,15 +18,19 @@ def preserve_contexts_async_generator(
     This is needed because we start a new asyncio event loop for each streaming request,
     and we need to preserve the context across the event loop boundary.
     """
+    # Capture initial context values
+    initial_context_values = {context_var.name: context_var.get() for context_var in context_vars}
     async def wrapper() -> AsyncGenerator[T, None]:
         while True:
             try:
-                item = await gen.__anext__()
-                context_values = {context_var.name: context_var.get() for context_var in context_vars}
-                yield item
                 # Restore context values before any await
                 for context_var in context_vars:
-                    _ = context_var.set(context_values[context_var.name])
+                    context_var.set(initial_context_values[context_var.name])
+                item = await gen.__anext__()
+                yield item
             except StopAsyncIteration:
                 break
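
For context on the diff above, here is a minimal usage sketch of the fixed helper. The import path is an assumption; in the real server the wrapped generator is driven from a fresh event loop, which is exactly the boundary the captured context values need to survive:

```python
import asyncio
from contextvars import ContextVar

# Assumed import path for the helper shown in the diff above.
from llama_stack.distribution.utils.context import preserve_contexts_async_generator

PROVIDER_DATA = ContextVar("provider_data")

async def stream_tokens():
    for token in ["hello", "world"]:
        # The wrapper restores PROVIDER_DATA before each __anext__ call,
        # so this read works even when iteration happens in a different context.
        yield f"{PROVIDER_DATA.get()}:{token}"

async def main():
    PROVIDER_DATA.set("together_api_key=...")
    wrapped = preserve_contexts_async_generator(stream_tokens(), [PROVIDER_DATA])
    async for item in wrapped:
        print(item)

asyncio.run(main())
```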