llama-stack-mirror/llama_stack
Sébastien Han 294e91724a
fix: do not override the entrypoint when running container
Since https://github.com/meta-llama/llama-stack/pull/2005, the run
configuration is embedded into the container image itself and the
entrypoint is configured correctly during the container image build
process, so we no longer need to override the entrypoint when running
the container.
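
For context, a minimal sketch of the before/after behavior. The
Containerfile fragment, server module path, and port below are
assumptions based on #2005, not copied verbatim from the build script:

    # Build time (since #2005): the generated Containerfile bakes the
    # run config into the image and sets the entrypoint, roughly:
    #
    #   ADD run.yaml /app/run.yaml
    #   ENTRYPOINT ["python", "-m", "llama_stack.distribution.server.server",
    #               "--config", "/app/run.yaml"]

    # Run time, before this fix: the CLI redundantly overrode that
    # entrypoint when starting the container.
    docker run --entrypoint python <image> \
        -m llama_stack.distribution.server.server --config /app/run.yaml

    # Run time, after this fix: the image's baked-in entrypoint is used
    # as-is (8321 is the default Llama Stack server port).
    docker run -p 8321:8321 <image>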

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-04-24 13:56:08 +02:00
apis feat(agents): add agent naming functionality (#1922) 2025-04-17 07:02:47 -07:00
cli feat: include run.yaml in the container image (#2005) 2025-04-24 11:29:53 +02:00
distribution fix: do not override the entrypoint when running container 2025-04-24 13:56:08 +02:00
models fix: OAI compat endpoint for meta reference inference provider (#1962) 2025-04-17 11:16:04 -07:00
providers fix: Added lazy initialization of the remote vLLM client to avoid issues with expired asyncio event loop (#1969) 2025-04-23 15:33:19 +02:00
strong_typing chore: more mypy checks (ollama, vllm, ...) (#1777) 2025-04-01 17:12:39 +02:00
templates feat: Update NVIDIA to GA docs; remove notebook reference until ready (#1999) 2025-04-18 19:13:18 -04:00
__init__.py export LibraryClient 2024-12-13 12:08:00 -08:00
env.py refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) 2025-03-04 14:53:47 -08:00
log.py chore: Remove style tags from log formatter (#1808) 2025-03-27 10:18:21 -04:00
schema_utils.py fix: dont check protocol compliance for experimental methods 2025-04-12 16:26:32 -07:00