llama-stack-mirror/llama_stack/templates
Latest commit: 0f878ad87a by Yogish Baliga, 2025-05-08 14:27:56 -07:00
feat(provider): adding llama4 support in together inference provider (#2123)
What does this PR do? Adding Llama4 model support in the TogetherAI provider.
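For context on what a change like this typically touches: a hosted-inference provider such as Together usually exposes its Llama models through a small alias table that maps the provider's own model IDs to the stack's canonical model IDs. The sketch below is a minimal, hypothetical illustration of such a mapping; the `ModelEntry` class, the `LLAMA4_MODEL_ENTRIES` table, the lookup helper, and the Together-style model IDs are assumptions for illustration, not the actual code landed in #2123.

```python
# Hypothetical sketch of a provider model-alias table, assuming a provider
# adds Llama 4 support by mapping its own model IDs to canonical model IDs.
# Names and IDs below are illustrative, not the actual llama-stack code.
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelEntry:
    provider_model_id: str    # ID the remote inference API expects
    canonical_model_id: str   # ID used when registering the model with the stack


# Assumed Together-style identifiers for the Llama 4 models; verify against
# the provider's current model catalog before relying on them.
LLAMA4_MODEL_ENTRIES = [
    ModelEntry(
        provider_model_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",
        canonical_model_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    ),
    ModelEntry(
        provider_model_id="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
        canonical_model_id="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    ),
]


def provider_model_id_for(canonical_model_id: str) -> str:
    """Look up the provider-side ID for a canonical model ID."""
    for entry in LLAMA4_MODEL_ENTRIES:
        if entry.canonical_model_id == canonical_model_id:
            return entry.provider_model_id
    raise ValueError(f"model not registered with this provider: {canonical_model_id}")
```

The `together` and `verification` template directories listed below would then refer to these models by their canonical IDs in their generated run configs (again, an assumption about the wiring, not taken from the PR itself).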
Name | Last commit | Date
bedrock | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
cerebras | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
ci-tests | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
dell | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
dev | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
experimental-post-training | fix: fix experimental-post-training template (#1740) | 2025-03-20 23:07:19 -07:00
fireworks | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
groq | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
hf-endpoint | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
hf-serverless | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
llama_api | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
meta-reference-gpu | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
nvidia | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
ollama | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
open-benchmark | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
passthrough | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
remote-vllm | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
sambanova | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
tgi | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
together | feat(provider): adding llama4 support in together inference provider (#2123) | 2025-05-08 14:27:56 -07:00
verification | feat(provider): adding llama4 support in together inference provider (#2123) | 2025-05-08 14:27:56 -07:00
vllm-gpu | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
watsonx | chore(refact)!: simplify config management (#1105) | 2025-05-07 09:18:12 -07:00
__init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00
dependencies.json | feat(providers): sambanova updated to use LiteLLM openai-compat (#1596) | 2025-05-06 16:50:22 -07:00
template.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00