llama-stack/llama_stack/providers/inline
Ihar Hrachyshka c219a74fa0
fix: Don't require efficiency_config for torchtune (#2104)
# What does this PR do?

Revert a change that mistakenly forced efficiency_config on torchtune
provider users.

```
    fix: Don't require efficiency_config for torchtune

    It was enforced by mistake when
    0751a960a5 merged.

    The other asserts made sense because that code was written to always
    expect a non-None value; efficiency_config was not.
```

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-05-06 09:50:44 -07:00
| Directory | Last commit | Date |
|-----------|-------------|------|
| agents | fix: remove code interpreter implementation (#2087) | 2025-05-01 14:35:08 -07:00 |
| datasetio | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| eval | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| inference | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| ios/inference | chore: removed executorch submodule (#1265) | 2025-02-25 21:57:21 -08:00 |
| post_training | fix: Don't require efficiency_config for torchtune (#2104) | 2025-05-06 09:50:44 -07:00 |
| safety | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| scoring | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| telemetry | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| tool_runtime | fix: remove code interpreter implementation (#2087) | 2025-05-01 14:35:08 -07:00 |
| vector_io | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |