# Fireworks Distribution

The `llamastack/distribution-fireworks` distribution consists of the following provider configurations.

| **API**         | Inference         | Agents         | Memory         | Safety         | Telemetry      |
|-----------------|-------------------|----------------|----------------|----------------|----------------|
| **Provider(s)** | remote::fireworks | meta-reference | meta-reference | meta-reference | meta-reference |

## Start the Distribution (Single Node CPU)

> **Note**: This assumes you have a hosted endpoint at Fireworks with an API key.

```
$ cd distributions/fireworks
$ ls
compose.yaml  run.yaml
$ docker compose up
```
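To confirm the server actually came up, the standard Docker Compose commands apply; this is a generic sketch, and the service name shown in the output depends on your `compose.yaml`:

```
$ docker compose ps        # the container should be listed as running
$ docker compose logs -f   # follow the stack's startup logs
```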

Make sure the inference provider in your `run.yaml` file points to the correct Fireworks server endpoint, e.g.

```yaml
inference:
  - provider_id: fireworks
    provider_type: remote::fireworks
    config:
      url: https://api.fireworks.ai/inference
      api_key: <optional api key>
```
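You can also sanity-check your API key against Fireworks directly before starting the stack. The path below is an assumption based on Fireworks' OpenAI-compatible API living under the same base URL:

```
$ curl -H "Authorization: Bearer <your api key>" \
    https://api.fireworks.ai/inference/v1/models
```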

## (Alternative) llama stack run (Single Node CPU)

```
docker run --network host -it -p 5000:5000 \
  -v ./run.yaml:/root/my-run.yaml \
  --gpus=all \
  llamastack/distribution-fireworks \
  --yaml_config /root/my-run.yaml
```

As above, make sure the inference provider in your `run.yaml` file points to the correct Fireworks server endpoint, e.g.

```yaml
inference:
  - provider_id: fireworks
    provider_type: remote::fireworks
    config:
      url: https://api.fireworks.ai/inference
      api_key: <enter your api key>
```
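Once the server is listening, you can point the client CLI at it. The command below assumes the default port 5000 mapped in the `docker run` invocation above:

```
$ llama-stack-client configure --endpoint http://localhost:5000
```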

## Via Conda

```
llama stack build --template fireworks --image-type conda
# -- modify run.yaml to point to a valid Fireworks server endpoint
llama stack run ./run.yaml
```
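If port 5000 is already in use on your machine, `llama stack run` accepts a `--port` flag; treat the exact flag name as an assumption if your installed version differs:

```
llama stack run ./run.yaml --port 5001
```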

## Model Serving

Use `llama-stack-client models list` to check the available models served by Fireworks.

```
$ llama-stack-client models list
+------------------------------+------------------------------+---------------+------------+
| identifier                   | llama_model                  | provider_id   | metadata   |
+==============================+==============================+===============+============+
| Llama3.1-8B-Instruct         | Llama3.1-8B-Instruct         | fireworks0    | {}         |
+------------------------------+------------------------------+---------------+------------+
| Llama3.1-70B-Instruct        | Llama3.1-70B-Instruct        | fireworks0    | {}         |
+------------------------------+------------------------------+---------------+------------+
| Llama3.1-405B-Instruct       | Llama3.1-405B-Instruct       | fireworks0    | {}         |
+------------------------------+------------------------------+---------------+------------+
| Llama3.2-1B-Instruct         | Llama3.2-1B-Instruct         | fireworks0    | {}         |
+------------------------------+------------------------------+---------------+------------+
| Llama3.2-3B-Instruct         | Llama3.2-3B-Instruct         | fireworks0    | {}         |
+------------------------------+------------------------------+---------------+------------+
| Llama3.2-11B-Vision-Instruct | Llama3.2-11B-Vision-Instruct | fireworks0    | {}         |
+------------------------------+------------------------------+---------------+------------+
| Llama3.2-90B-Vision-Instruct | Llama3.2-90B-Vision-Instruct | fireworks0    | {}         |
+------------------------------+------------------------------+---------------+------------+
```