llama-stack/llama_stack/providers
Xi Yan 75cda30df7 fix: replace eval with json decoding for format_adapter (#1328)
# What does this PR do?
- Using `eval` on stored dataset values is a security risk (it can execute arbitrary code), so this replaces it with JSON decoding in `format_adapter`.
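
For context, a minimal sketch of what this kind of change looks like (illustrative only; `decode_column` and `raw_value` are made-up names, not the actual `format_adapter` code):

```python
import json

# Before (unsafe): eval() runs arbitrary Python, so a crafted dataset value
# could execute code on the host:
#   parsed = eval(raw_value)

# After (safe): json.loads() only parses JSON data and cannot execute code.
def decode_column(raw_value: str):
    return json.loads(raw_value)
```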


## Test Plan

- see https://github.com/meta-llama/llama-stack/pull/1327

cc @SLR722: we will need to update the corresponding dataset via something like

```python
import json

import datasets


def update_to_json_str(split: str):
    # One-off migration: re-serialize the legacy Python-literal column values
    # as JSON strings so the new JSON-decoding format_adapter can parse them.
    dataset = datasets.load_dataset(...)
    processed_dataset = dataset[split].map(
        lambda x: {
            "column": json.dumps(eval(x["column"])),
        }
    )
    processed_dataset.push_to_hub(...)
```
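
One caveat with the migration sketch above: it still calls `eval` once to parse the legacy column values. If those values are plain Python literals, `ast.literal_eval` is a safer drop-in for that step, e.g. (hypothetical helper name):

```python
import ast
import json

# Safer variant of the row mapper: literal_eval only accepts Python literals
# (dicts, lists, strings, numbers, ...) and cannot execute arbitrary code.
def to_json_str(row: dict) -> dict:
    return {"column": json.dumps(ast.literal_eval(row["column"]))}
```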
2025-02-28 11:41:37 -08:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| .. | | |
| `inline` | fix: replace eval with json decoding for format_adapter (#1328) | 2025-02-28 11:41:37 -08:00 |
| `registry` | fix: groq now depends on litellm | 2025-02-27 14:07:12 -08:00 |
| `remote` | feat: add nvidia embedding implementation for new signature, task_type, output_dimention, text_truncation (#1213) | 2025-02-27 16:58:11 -08:00 |
| `tests` | fix: Structured outputs for recursive models (#1311) | 2025-02-27 17:31:53 -08:00 |
| `utils` | fix: Agent telemetry inputs/outputs should be structured (#1302) | 2025-02-27 23:06:37 -08:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `datatypes.py` | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |