llama-stack-mirror/llama_stack/distribution
Ashwin Bharambe 205661bc78
fix: Use re-entrancy and concurrency safe context managers for provider data (#1498)
Concurrent requests should not trample (or reuse) each other's provider
data. Provider data should be scoped to each request.
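
For context, here is a minimal sketch of the kind of re-entrant, concurrency-safe scoping this implies. It is a hypothetical illustration using `contextvars`, not the repository's actual implementation; the names `request_provider_data` and `get_provider_data` are made up for the example.

```python
# Hypothetical sketch: request-scoped provider data via contextvars.
# A ContextVar is isolated per asyncio task, so concurrent requests cannot
# see each other's data; resetting with the token keeps the manager re-entrant.
import contextvars
from contextlib import contextmanager

_provider_data: contextvars.ContextVar = contextvars.ContextVar(
    "provider_data", default=None
)

@contextmanager
def request_provider_data(data: dict):
    token = _provider_data.set(data)   # scope the data to this request
    try:
        yield
    finally:
        _provider_data.reset(token)    # restore the previous value on exit

def get_provider_data():
    return _provider_data.get()
```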

## Test Plan

Set the uvicorn server to have a single worker process + thread by
updating the config:
```python
    uvicorn_config = {
        ...
        "workers": 1,
        "loop": "asyncio",
    }
```
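
To see why a single worker/thread configuration exposes the issue, here is a standalone illustration (not taken from the repo): thread-local state written by one asyncio task remains visible to every later task on the same thread, whereas a `ContextVar` is copied per task.

```python
import asyncio
import contextvars
import threading

local = threading.local()
ctx_var = contextvars.ContextVar("api_key", default=None)

async def handle_request(name, api_key):
    if api_key is not None:
        local.api_key = api_key   # thread-local: survives across requests on this thread
        ctx_var.set(api_key)      # ContextVar: confined to this task's context
    print(name, getattr(local, "api_key", None), ctx_var.get())

async def main():
    # Each request runs as its own task, as it would under a single-worker server.
    await asyncio.create_task(handle_request("request-1", "secret-key"))
    await asyncio.create_task(handle_request("request-2", None))  # sends no key

asyncio.run(main())
# request-1 secret-key secret-key
# request-2 secret-key None      <- thread-local leaks, ContextVar does not
```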

Then perform the following steps on `origin/main` (without this change).

(1) Run the server using `llama stack run dev` without having
`FIREWORKS_API_KEY` in the environment.

(2) Run a test, specifying the `FIREWORKS_API_KEY` env var so that it gets
stored in the thread-local state:
```
pytest -s -v tests/integration/inference/test_text_inference.py \
    --stack-config http://localhost:8321 \
    --text-model accounts/fireworks/models/llama-v3p1-8b-instruct \
    -k test_text_chat_completion_with_tool_calling_and_streaming \
     --env FIREWORKS_API_KEY=<...>
``` 
Ensure you don't have any other API keys in the environment (otherwise
the bug will not reproduce, due to other specifics in our testing code).
Verify this works.

(3) Run the same command again without specifying `FIREWORKS_API_KEY`.
Observe that the request succeeds when it *should have failed*.


----
Now run the same tests on this branch and verify that step (3) results in
failure.

Finally, run the full `test_text_inference.py` test suite with this
change and verify that it succeeds.
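For example, reusing the earlier command without the `-k` filter (adjust as needed):
```
pytest -s -v tests/integration/inference/test_text_inference.py \
    --stack-config http://localhost:8321 \
    --text-model accounts/fireworks/models/llama-v3p1-8b-instruct \
    --env FIREWORKS_API_KEY=<...>
```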
2025-03-08 22:56:30 -08:00
| Name | Last commit | Date |
|---|---|---|
| `routers` | feat(logging): implement category-based logging (#1362) | 2025-03-07 11:34:30 -08:00 |
| `server` | fix: Use re-entrancy and concurrency safe context managers for provider data (#1498) | 2025-03-08 22:56:30 -08:00 |
| `store` | refactor: move a few tests to top-level tests/ directory | 2025-03-03 17:33:39 -08:00 |
| `ui` | docs: update test_agents to use new Agent SDK API (#1402) | 2025-03-06 15:21:12 -08:00 |
| `utils` | chore: remove unused build dir (#1379) | 2025-03-05 15:40:00 -08:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `build.py` | build(container): misc improvements (#1291) | 2025-02-28 10:01:52 -08:00 |
| `build_conda_env.sh` | chore: remove straggler references to llama-models (#1345) | 2025-03-01 14:26:03 -08:00 |
| `build_container.sh` | chore: remove straggler references to llama-models (#1345) | 2025-03-01 14:26:03 -08:00 |
| `build_venv.sh` | chore: remove straggler references to llama-models (#1345) | 2025-03-01 14:26:03 -08:00 |
| `client.py` | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| `common.sh` | fix: Fixing some small issues with the build scripts (#1132) | 2025-02-19 22:20:49 -08:00 |
| `configure.py` | fix: resolve pydantic warning on .dict() usage (#1445) | 2025-03-06 11:27:47 -08:00 |
| `datatypes.py` | fix!: update eval-tasks -> benchmarks (#1032) | 2025-02-13 16:40:58 -08:00 |
| `distribution.py` | chore(lint): update Ruff ignores for project conventions and maintainability (#1184) | 2025-02-28 09:36:49 -08:00 |
| `inspect.py` | fix: improve signal handling and update dependencies (#1044) | 2025-02-13 08:07:59 -08:00 |
| `library_client.py` | fix: Use re-entrancy and concurrency safe context managers for provider data (#1498) | 2025-03-08 22:56:30 -08:00 |
| `request_headers.py` | fix: Use re-entrancy and concurrency safe context managers for provider data (#1498) | 2025-03-08 22:56:30 -08:00 |
| `resolver.py` | feat(logging): implement category-based logging (#1362) | 2025-03-07 11:34:30 -08:00 |
| `stack.py` | feat(logging): implement category-based logging (#1362) | 2025-03-07 11:34:30 -08:00 |
| `start_stack.sh` | feat(logging): implement category-based logging (#1362) | 2025-03-07 11:34:30 -08:00 |