llama-stack-mirror/llama_stack/providers
Xi Yan 6520baebed
fix: replace eval with json decoding (#1327)
# What does this PR do?

- Using `eval` on the server is a security risk
- Replace `eval` with `json.loads` (see the sketch below)
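
A minimal sketch of the pattern this change adopts, assuming a dataset row arrives as a JSON-encoded string (the payload and field names below are illustrative, not taken from the actual diff):

```python
import json

# Before: eval() on a client-supplied string executes arbitrary Python on the server,
# e.g. a payload of "__import__('os').system('...')" would run as code.
# row = eval(payload)

# After: json.loads() only parses JSON data and raises json.JSONDecodeError
# on anything that is not valid JSON.
payload = '{"expected_answer": "4", "chat_completion_input": "What is 2 + 2?"}'
row = json.loads(payload)
print(row["expected_answer"])  # -> 4
```

Unlike `eval`, `json.loads` rejects non-JSON input with a `json.JSONDecodeError`, so malformed rows fail loudly instead of being executed as code.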


## Test Plan
```
pytest -v -s --nbval-lax ./llama-stack/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb 
```
<img width="747" alt="image"
src="https://github.com/user-attachments/assets/7aff3d95-0b12-4394-b9d0-aeff791eee38"
/>


2025-02-28 11:10:45 -08:00
| Name | Last commit | Date |
|------|-------------|------|
| `inline` | fix: replace eval with json decoding (#1327) | 2025-02-28 11:10:45 -08:00 |
| `registry` | fix: groq now depends on litellm | 2025-02-27 14:07:12 -08:00 |
| `remote` | fix: Make remote::vllm compatible with vLLM <= v0.6.3 (#1325) | 2025-02-28 12:48:49 -05:00 |
| `tests` | chore(lint): update Ruff ignores for project conventions and maintainability (#1184) | 2025-02-28 09:36:49 -08:00 |
| `utils` | chore(lint): update Ruff ignores for project conventions and maintainability (#1184) | 2025-02-28 09:36:49 -08:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `datatypes.py` | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |