llama-stack/llama_stack/providers/remote/inference
Ashwin Bharambe 205661bc78
fix: Use re-entrancy and concurrency safe context managers for provider data (#1498)
Concurrent requests should not trample (or reuse) each other's provider
data. Provider data should be scoped to each request.
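
As background, here is a minimal sketch of the kind of re-entrant, concurrency-safe scoping the title refers to, built on `contextvars`; the names here are illustrative, not the actual llama-stack API:

```python
import contextvars
from contextlib import contextmanager

# Illustrative sketch -- not the actual llama-stack implementation.
# A contextvars.ContextVar is scoped per asyncio task, so concurrent
# requests each see their own value, unlike a threading.local, which is
# shared by every request running on the same event-loop thread.
_provider_data: contextvars.ContextVar[dict | None] = contextvars.ContextVar(
    "provider_data", default=None
)

@contextmanager
def request_provider_data(data: dict):
    """Scope provider data to the current request; safe to nest (re-entrant)."""
    token = _provider_data.set(data)
    try:
        yield
    finally:
        # Restoring via the token (rather than set(None)) is what makes
        # nesting safe: an inner scope restores whatever the outer scope set.
        _provider_data.reset(token)
```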

## Test Plan

Configure the uvicorn server to run a single worker process (and hence a
single event-loop thread) by updating the config:
```python
    uvicorn_config = {
        ...
        "workers": 1,
        "loop": "asyncio",
    }
```
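
For reference, a complete launch under those settings might look like the sketch below (the app import string is a placeholder, not the actual module path). With one worker and the pure-asyncio loop, every request is served on the same OS thread, which is what lets the thread-local leak reproduce:

```python
import uvicorn

# Hypothetical launch sketch -- the app import string is a placeholder.
# With workers=1 and loop="asyncio", all requests share a single
# event-loop thread, so state stashed in a threading.local survives
# from one request to the next.
uvicorn.run(
    "llama_stack.server:app",  # placeholder import string
    host="0.0.0.0",
    port=8321,
    workers=1,
    loop="asyncio",
)
```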

Then perform the following steps on `origin/main` (without this change).

(1) Run the server using `llama stack run dev` without having
`FIREWORKS_API_KEY` in the environment.

(2) Run a test with the `FIREWORKS_API_KEY` env var specified so the key
gets stored in the thread-local:
```
pytest -s -v tests/integration/inference/test_text_inference.py \
    --stack-config http://localhost:8321 \
    --text-model accounts/fireworks/models/llama-v3p1-8b-instruct \
    -k test_text_chat_completion_with_tool_calling_and_streaming \
    --env FIREWORKS_API_KEY=<...>
```
Ensure you don't have any other API keys in the environment (otherwise
the bug will not reproduce due to other specifics in our testing code).
Verify this works.

(3) Run the same command again without specifying FIREWORKS_API_KEY. See
that the request actually succeeds when it *should have failed*.
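
The leak behind step (3) can be reproduced in isolation. Here is a minimal, self-contained sketch (illustrative names, not project code) of how a `threading.local` lets one request's key bleed into the next when both run on the same event-loop thread:

```python
import asyncio
import threading

_local = threading.local()

async def handle_request(api_key: str | None):
    # Simulates reading the key from the request, falling back to whatever
    # a *previous* request left behind in the thread local.
    if api_key is not None:
        _local.api_key = api_key
    return getattr(_local, "api_key", None)

async def main():
    # Request (2) sets the key; request (3) sends none, yet still "works"
    # because both coroutines run on the same event-loop thread.
    print(await handle_request("fw-secret"))  # fw-secret
    print(await handle_request(None))         # fw-secret  <-- the bug

asyncio.run(main())
```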


----
Now run the same tests on this branch and verify that step (3) results
in failure.

Finally, run the full `test_text_inference.py` test suite with this
change and verify it succeeds.
2025-03-08 22:56:30 -08:00
anthropic feat(providers): Groq now uses LiteLLM openai-compat (#1303) 2025-02-27 13:16:50 -08:00
bedrock fix: solve ruff B008 warnings (#1444) 2025-03-06 16:48:35 -08:00
cerebras fix: solve ruff B008 warnings (#1444) 2025-03-06 16:48:35 -08:00
databricks fix: solve ruff B008 warnings (#1444) 2025-03-06 16:48:35 -08:00
fireworks fix: Use re-entrancy and concurrency safe context managers for provider data (#1498) 2025-03-08 22:56:30 -08:00
gemini feat(providers): Groq now uses LiteLLM openai-compat (#1303) 2025-02-27 13:16:50 -08:00
groq fix: register provider model name and HF alias in run.yaml (#1304) 2025-02-27 16:39:23 -08:00
nvidia fix: solve ruff B008 warnings (#1444) 2025-03-06 16:48:35 -08:00
ollama feat(logging): implement category-based logging (#1362) 2025-03-07 11:34:30 -08:00
openai feat(providers): Groq now uses LiteLLM openai-compat (#1303) 2025-02-27 13:16:50 -08:00
passthrough fix: solve ruff B008 warnings (#1444) 2025-03-06 16:48:35 -08:00
runpod fix: solve ruff B008 warnings (#1444) 2025-03-06 16:48:35 -08:00
sambanova fix: solve ruff B008 warnings (#1444) 2025-03-06 16:48:35 -08:00
sample build: format codebase imports using ruff linter (#1028) 2025-02-13 10:06:21 -08:00
tgi fix: solve ruff B008 warnings (#1444) 2025-03-06 16:48:35 -08:00
together fix: Use re-entrancy and concurrency safe context managers for provider data (#1498) 2025-03-08 22:56:30 -08:00
vllm fix: Swap to AsyncOpenAI client in remote vllm provider (#1459) 2025-03-07 14:48:00 -05:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00