# Fireworks Distribution

The `llamastack/distribution-fireworks` distribution consists of the following provider configurations.

| **API** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
| :---: | :---: | :---: | :---: | :---: | :---: |
| **Provider(s)** | remote::fireworks | meta-reference | meta-reference | meta-reference | meta-reference |

### Start the Distribution (Single Node CPU)

> [!NOTE]
> This assumes you have a hosted endpoint at Fireworks with an API key.

```bash
$ cd distributions/fireworks
$ ls
compose.yaml  run.yaml
$ docker compose up
```
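
Once the container is up, you can sanity-check the stack from the host. Below is a minimal sketch using the `llama-stack-client` Python SDK (`pip install llama-stack-client`); the `models.list` call and port 5000 are assumptions, based on the Docker example further down:

```python
# Sanity check: ask the freshly started stack which models it serves.
# Assumes the server listens on port 5000 (as in the docker run example
# below) and that the llama-stack-client SDK is installed.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

for model in client.models.list():
    print(model)
```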

Make sure that in your `run.yaml` file, the inference provider points to the correct Fireworks endpoint URL, e.g.

```yaml
inference:
  - provider_id: fireworks
    provider_type: remote::fireworks
    config:
      url: https://api.fireworks.ai/inference
      api_key: <optional api key>
```
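
With the URL and API key in place, inference requests to the stack are forwarded to Fireworks. Here is a minimal sketch of a chat completion via the `llama-stack-client` Python SDK; the model identifier is an assumption, so substitute one your Fireworks account serves:

```python
# Minimal chat completion against the Fireworks-backed stack.
from llama_stack_client import LlamaStackClient
from llama_stack_client.types import UserMessage

client = LlamaStackClient(base_url="http://localhost:5000")

response = client.inference.chat_completion(
    model="Llama3.1-8B-Instruct",  # assumed identifier; use a model your endpoint serves
    messages=[UserMessage(role="user", content="Hello!")],
)
print(response.completion_message.content)
```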

### (Alternative) llama stack run (Single Node GPU)

```bash
docker run --network host -it -p 5000:5000 -v ./run.yaml:/root/my-run.yaml --gpus=all llamastack/distribution-fireworks --yaml_config /root/my-run.yaml
```

Make sure that in your `run.yaml` file, the inference provider points to the correct Fireworks endpoint URL, e.g.

```yaml
inference:
  - provider_id: fireworks
    provider_type: remote::fireworks
    config:
      url: https://api.fireworks.ai/inference
      api_key: <optional api key>
```

### Via Conda

```bash
llama stack build --template fireworks --image-type conda
# -- modify run.yaml to a valid Fireworks server endpoint
llama stack run ./run.yaml
```
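
A truncated or misspelled endpoint in `run.yaml` is an easy mistake, so it can be worth validating the file before `llama stack run`. Below is a minimal sketch assuming the `inference` block shown above sits at the top level of `run.yaml` and that PyYAML is installed (`pip install pyyaml`):

```python
# Validate the Fireworks endpoint in run.yaml before `llama stack run`.
# Assumes the `inference` block shown above sits at the top level of
# the file; adjust the lookup if your run.yaml nests it differently.
import yaml

with open("run.yaml") as f:
    cfg = yaml.safe_load(f)

for provider in cfg.get("inference", []):
    url = provider["config"]["url"]
    if url.rstrip("/") != "https://api.fireworks.ai/inference":
        raise SystemExit(f"unexpected inference URL: {url}")

print("run.yaml inference endpoint looks good")
```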