llama-stack-mirror/llama_stack/templates/fireworks/build.yaml
Xi Yan 748606195b
Kill llama stack configure (#371)
* remove configure

* build msg

* wip

* build->run

* delete prints

* docs

* fix docs, kill configure

* precommit

* update fireworks build

* docs

* clean up build

* comments

* fix

* test

* remove baking build.yaml into docker

* fix msg, urls

* configure msg
2024-11-06 13:32:10 -08:00


name: fireworks
distribution_spec:
  description: Use Fireworks.ai for running LLM inference
  providers:
    inference: remote::fireworks
    memory:
    - meta-reference
    - remote::weaviate
    safety: meta-reference
    agents: meta-reference
    telemetry: meta-reference
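
For reference, a minimal sketch of how this build template could be inspected programmatically with PyYAML. The relative file path and the printing logic are illustrative assumptions, not part of the llama-stack tooling; in normal use the template is consumed by the `llama stack build` CLI (and, per this commit, run directly afterwards with `llama stack run` instead of a separate configure step) rather than read by hand.

# Sketch: load the fireworks build template and list its configured providers.
# Assumes PyYAML is installed and the path below points at this template file;
# both are assumptions for illustration only.
import yaml

with open("llama_stack/templates/fireworks/build.yaml") as f:
    build = yaml.safe_load(f)

spec = build["distribution_spec"]
print(f"Distribution: {build['name']} - {spec['description']}")
for api, providers in spec["providers"].items():
    # A value may be a single provider id or a list (e.g. memory above).
    names = providers if isinstance(providers, list) else [providers]
    print(f"  {api}: {', '.join(names)}")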