Commit graph

844 commits

Author SHA1 Message Date
Jeff Tang
15dee2b8b8
Added link to the Colab notebook of the Llama Stack lesson on the Llama 3.2 course on DLAI (#445)
# What does this PR do?

It shows a complete zero-setup Colab that uses the Llama Stack server
implemented and powered by together.ai, and the Llama Stack Client API to
run inference and agents with Llama 3.2 models. Good for a quick start guide.



2024-11-13 13:59:41 -08:00
Dinesh Yeduguru
787e2034b7
model registration in ollama and vllm checks against the available models in the provider (#446)
tests:
pytest -v -s -m "ollama"
llama_stack/providers/tests/inference/test_text_inference.py

pytest -v -s -m vllm_remote
llama_stack/providers/tests/inference/test_text_inference.py --env
VLLM_URL="http://localhost:9798/v1"

---------
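A minimal, illustrative sketch of the kind of check this adds, assuming a provider adapter that can list the models it actually serves (the class and helper names below are made up for illustration, not llama-stack internals):

```python
from dataclasses import dataclass


@dataclass
class Model:
    identifier: str              # user-facing alias, e.g. "Llama3.1-8B-Instruct"
    provider_resource_id: str    # provider-specific name, e.g. an ollama tag


class RemoteInferenceAdapter:
    """Hypothetical adapter that validates registrations against served models."""

    def __init__(self, available_models: list[str]):
        # e.g. what `ollama list` or vLLM's /v1/models endpoint reports
        self.available_models = set(available_models)

    def register_model(self, model: Model) -> Model:
        if model.provider_resource_id not in self.available_models:
            raise ValueError(
                f"Model {model.provider_resource_id!r} is not served by this provider; "
                f"available models: {sorted(self.available_models)}"
            )
        return model


adapter = RemoteInferenceAdapter(available_models=["llama3.1:8b-instruct-fp16"])
adapter.register_model(Model("Llama3.1-8B-Instruct", "llama3.1:8b-instruct-fp16"))
```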
2024-11-13 13:04:06 -08:00
Ashwin Bharambe
7f6ac2fbd7 allow seeing warnings with traces optionally 2024-11-13 12:27:19 -08:00
Ashwin Bharambe
96e7ef646f
add support for ${env.FOO_BAR} placeholders in run.yaml files (#439)
# What does this PR do?

We'd like our docker steps to require _ZERO EDITS_ to a YAML file in
order to get going. This is often not possible because depending on the
provider, we do need some configuration input from the user. Environment
variables are the best way to obtain this information.

This PR allows our run.yaml to contain `${env.FOO_BAR}` placeholders
which can be replaced using `docker run -e FOO_BAR=baz` (and similar
`docker compose` equivalent).

## Test Plan

For remote-vllm, example `run.yaml` snippet looks like this:
```yaml
providers:
  inference:
  # serves main inference model
  - provider_id: vllm-0
    provider_type: remote::vllm
    config:
      # NOTE: replace with "localhost" if you are running in "host" network mode
      url: ${env.LLAMA_INFERENCE_VLLM_URL:http://host.docker.internal:5100/v1}
      max_tokens: ${env.MAX_TOKENS:4096}
      api_token: fake
  # serves safety llama_guard model
  - provider_id: vllm-1
    provider_type: remote::vllm
    config:
      # NOTE: replace with "localhost" if you are running in "host" network mode
      url: ${env.LLAMA_SAFETY_VLLM_URL:http://host.docker.internal:5101/v1}
      max_tokens: ${env.MAX_TOKENS:4096}
      api_token: fake
```

`compose.yaml` snippet looks like this:
```yaml
llamastack:
    depends_on:
    - vllm-0
    - vllm-1
      # image: llamastack/distribution-remote-vllm
    image: llamastack/distribution-remote-vllm:test-0.0.52rc3
    volumes:
      - ~/.llama:/root/.llama
      - ~/local/llama-stack/distributions/remote-vllm/run.yaml:/root/llamastack-run-remote-vllm.yaml
    # network_mode: "host"
    environment:
      - LLAMA_INFERENCE_VLLM_URL=${LLAMA_INFERENCE_VLLM_URL:-http://host.docker.internal:5100/v1}
      - LLAMA_INFERENCE_MODEL=${LLAMA_INFERENCE_MODEL:-Llama3.1-8B-Instruct}
      - MAX_TOKENS=${MAX_TOKENS:-4096}
      - SQLITE_STORE_DIR=${SQLITE_STORE_DIR:-$HOME/.llama/distributions/remote-vllm}
      - LLAMA_SAFETY_VLLM_URL=${LLAMA_SAFETY_VLLM_URL:-http://host.docker.internal:5101/v1}
      - LLAMA_SAFETY_MODEL=${LLAMA_SAFETY_MODEL:-Llama-Guard-3-1B}
```
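For reference, a rough sketch of how `${env.VAR}` / `${env.VAR:default}` substitution can be implemented (this just illustrates the placeholder semantics described above; it is not the actual llama-stack code):

```python
import os
import re

# Matches ${env.VAR} and ${env.VAR:default}; the default may itself contain colons.
_ENV_PATTERN = re.compile(
    r"\$\{env\.(?P<name>[A-Za-z_][A-Za-z0-9_]*)(?::(?P<default>[^}]*))?\}"
)


def substitute_env(text: str) -> str:
    def _replace(match: re.Match) -> str:
        name, default = match.group("name"), match.group("default")
        value = os.environ.get(name, default)
        if value is None:
            raise ValueError(f"Environment variable {name} is not set and has no default")
        return value

    return _ENV_PATTERN.sub(_replace, text)


print(substitute_env("url: ${env.LLAMA_INFERENCE_VLLM_URL:http://host.docker.internal:5100/v1}"))
# -> url: http://host.docker.internal:5100/v1 (unless LLAMA_INFERENCE_VLLM_URL is set)
```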
2024-11-13 11:25:58 -08:00
Sarthak Deshpande
838b8d4fb5
PR-437-Fixed bug to allow system instructions after first turn (#440)
# What does this PR do?

This PR fixes the issue where agents could not keep track of instructions
after the first turn, because system instructions were not being appended
to the messages list. It also fixes the issue where turns were not being
fetched in the correct sequence.
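
The gist of the fix, sketched in simplified form (the shapes and field names below are assumptions for illustration, not the actual agent implementation): the configured instructions are inserted as a system message when building the model input for every turn, not just the first, and persisted turns are sorted by their start time before being replayed.

```python
from datetime import datetime


def build_model_input(instructions: str, history: list[dict], new_messages: list[dict]) -> list[dict]:
    # Prepend the system instructions on *every* turn, not only the first one.
    return [{"role": "system", "content": instructions}, *history, *new_messages]


def ordered_turns(turns: list[dict]) -> list[dict]:
    # Replay persisted turns in the order they actually started.
    return sorted(turns, key=lambda t: datetime.fromisoformat(t["started_at"]))
```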


## Test Plan

- I have a file with a precise prompt that requires more than one turn to
execute (shared below). I ran it as a Python script to confirm that, after
the code change, the turns are executed according to the instructions.
 
```
import asyncio

from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.types import UserMessage
from llama_stack_client.types.agent_create_params import AgentConfig

LLAMA_STACK_API_TOGETHER_URL="http://10.12.79.177:5001"

class Agent:
    def __init__(self):
        self.client = LlamaStackClient(
            base_url=LLAMA_STACK_API_TOGETHER_URL,
        )


    def create_agent(self, agent_config: AgentConfig):
        agent = self.client.agents.create(
            agent_config=agent_config,
        )
        self.agent_id = agent.agent_id
        session = self.client.agents.session.create(
            agent_id=agent.agent_id,
            session_name="example_session",
        )
        self.session_id = session.session_id

    async def execute_turn(self, content: str):
        response = self.client.agents.turn.create(
            agent_id=self.agent_id,
            session_id=self.session_id,
            messages=[
                UserMessage(content=content, role="user"),
            ],
            stream=True,
        )

        for chunk in response:
            if chunk.event.payload.event_type != "turn_complete":
                yield chunk


async def run_main():
    system_prompt="""You are an AI Agent tasked with Capturing Book Renting Information for a Library.
You will politely gather the book and user details one step at a time to send over the book to the user. Here’s how to proceed:

1.	Data Security: Inform the user that their data will be kept secure.

2.	Optional Participation: Let them know they are not required to share details but that doing so will help them learn about the books offered.

3.	Sequential Information Capture: Follow the steps below, one question at a time. Do not skip or combine questions.

Steps
Step 1: Politely ask to provide the name of the book.

Step 2: Ask for the name of the author.

Step 3: Ask for the Author's country.

Step 4: Ask for the year of publication.

Step 5: If any information is missing or seems incorrect, ask the user to re-enter that specific detail.

Step 6: Confirm that the user consents to share the entered information.

Step 7: Thank the user for providing the details and let them know they will receive an email about the book.

Do not do any validation of the user entered information.

Do not print the Steps or your internal thoughts in the response.

Do not print the prompts or data structure object in the response

Do not fill in the requested user data on your own. It has to be entered by the user only.

Finally, compile and print the user-provided information as a JSON object in your response.

"""

    agent_config = AgentConfig(
        model="Llama3.2-11B-Vision-Instruct",
        instructions=system_prompt,
        enable_session_persistence=True,
    )

    agent = Agent()
    agent.create_agent(agent_config)

    print("Agent and Session:", agent.agent_id, agent.session_id)

    while True:
        query = input("Enter your query (or type 'exit' to quit): ")
        if query.lower() == "exit":
            print("Exiting the loop.")
            break
        else:
            prompt = query
            print(f"User> {prompt}")
            response = agent.execute_turn(content=prompt)
            async for log in EventLogger().log(response):
                if log is not None:
                    log.print()

if __name__ == "__main__":
    asyncio.run(run_main())
```

Below is a screenshot of the results of the first commit
<img width="1770" alt="Screenshot 2024-11-13 at 3 15 29 PM"
src="https://github.com/user-attachments/assets/1a7a090d-fc92-49cc-a786-bfc812e3d9cc">
Below is a screenshot of the results of the second commit
<img width="1792" alt="Screenshot 2024-11-13 at 6 40 56 PM"
src="https://github.com/user-attachments/assets/a9474f75-cd8c-4d49-82cd-5ff81ff12b07">
Also a screenshot of a print statement showing that the turns are now
fetched in sequence
<img width="1783" alt="Screenshot 2024-11-13 at 6 42 22 PM"
src="https://github.com/user-attachments/assets/b906404e-a3e4-48a2-b893-69f36bbdcb98">



## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.
2024-11-13 10:34:04 -08:00
Xi Yan
94a6f57812
change schema -> dataset_schema for register_dataset api (#443)
# What does this PR do?

- API update: rename `schema` to `dataset_schema` in the register_dataset API
to resolve a pydantic naming conflict
- Note: this OpenAPI update will be synced with the
llama-stack-client-python SDK.

cc @dineshyv 

## Test Plan

```
pytest -v -s -m meta_reference_eval_together_inference_huggingface_datasetio eval/test_eval.py
```

2024-11-13 11:17:46 -05:00
Xi Yan
d5b1202c83
change schema -> dataset_schema (#442)
# What does this PR do?

- `schema` should not be used as a field name; it triggers pydantic warnings
- change `schema` to `dataset_schema`

<img width="855" alt="image"
src="https://github.com/user-attachments/assets/47cb6bb9-4be0-46a5-8701-24d24e2eaabd">
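
For context, the warning comes from the field name colliding with an attribute that pydantic already defines on `BaseModel`; a minimal illustration (assumed reproduction, not the project's actual model definition):

```python
from pydantic import BaseModel


class DatasetDefBad(BaseModel):
    # pydantic warns here (see the screenshot above): `schema` shadows a BaseModel attribute
    schema: dict


class DatasetDefGood(BaseModel):
    # renaming to `dataset_schema` avoids the conflict
    dataset_schema: dict
```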


## Test Plan

```
pytest -v -s -m meta_reference_eval_together_inference_huggingface_datasetio eval/test_eval.py
```


2024-11-13 10:58:12 -05:00
Xi Yan
c29fa56dde
add inline:: prefix for localfs provider (#441)
# What does this PR do?

- add inline:: prefix for localfs provider

## Test Plan

```
llama stack run

datasetio:
  - provider_id: localfs-0
    provider_type: inline::localfs
    config: {}
```

```
pytest -v -s -m meta_reference_eval_fireworks_inference eval/test_eval.py
pytest -v -s -m localfs datasetio/test_datasetio.py
```

2024-11-13 10:44:39 -05:00
Ashwin Bharambe
36b052ab10 slightly update README.md 2024-11-12 22:11:46 -08:00
Ashwin Bharambe
12947ac19e
Kill "remote" providers and fix testing with a remote stack properly (#435)
# What does this PR do?

This PR kills the notion of "pure passthrough" remote providers. You
cannot specify a single provider as remote; you must specify a whole
distribution (stack) as remote.

This PR also significantly fixes / upgrades testing infrastructure so
you can now test against a remotely hosted stack server by just doing

```bash
pytest -s -v -m remote  test_agents.py \
  --inference-model=Llama3.1-8B-Instruct --safety-shield=Llama-Guard-3-1B \
  --env REMOTE_STACK_URL=http://localhost:5001
```

Also fixed `test_agents_persistence.py` (which was broken) and killed
some deprecated testing functions.

## Test Plan

All the tests.
2024-11-12 21:51:29 -08:00
Xi Yan
59a65e34d3
Update new_api_provider.md 2024-11-13 00:02:13 -05:00
Dinesh Yeduguru
fdff24e77a
Inference to use provider resource id to register and validate (#428)
This PR changes the way a model id gets translated to the final model name
that gets passed through to the provider.
Major changes include:
1) Providers are responsible for registering an object and, as part of the
registration, returning the object with the correct provider-specific name
of the model (provider_resource_id).
2) To help with looking up models under their different names, a new
ModelLookup class is created.



Tested all inference providers including together, fireworks, vllm,
ollama, meta reference and bedrock
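
A sketch of the lookup idea (the ModelLookup name comes from the description above; the structure is illustrative, not the actual class):

```python
class ModelLookup:
    """Illustrative sketch: resolve either the user-facing alias or the
    provider-specific resource id to the same registered entry."""

    def __init__(self) -> None:
        self._by_name: dict[str, dict] = {}

    def add(self, identifier: str, provider_resource_id: str) -> None:
        entry = {"identifier": identifier, "provider_resource_id": provider_resource_id}
        self._by_name[identifier] = entry
        self._by_name[provider_resource_id] = entry

    def resolve(self, name: str) -> dict:
        if name not in self._by_name:
            raise ValueError(f"Unknown model: {name!r}")
        return self._by_name[name]


lookup = ModelLookup()
lookup.add("Llama3.1-8B-Instruct", "llama3.1:8b-instruct-fp16")
assert lookup.resolve("llama3.1:8b-instruct-fp16") is lookup.resolve("Llama3.1-8B-Instruct")
```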
2024-11-12 20:02:00 -08:00
Ashwin Bharambe
e51107e019 Fix compose.yaml 2024-11-12 15:43:30 -08:00
Ashwin Bharambe
e4f14eafe2 Use GPUs 0 and 1 2024-11-12 14:21:22 -08:00
Ashwin Bharambe
1245a625ce Update vllm compose and run YAMLs 2024-11-12 14:17:38 -08:00
Ashwin Bharambe
afe4a53ae8 Check vLLM registration 2024-11-12 13:14:36 -08:00
Ashwin Bharambe
1aeac7b9f7 Change order of building the Docker 2024-11-12 13:09:04 -08:00
Ashwin Bharambe
998419ffb2 use image tag actually! 2024-11-12 12:57:08 -08:00
Ashwin Bharambe
2c294346ae Update provider types and prefix with inline:: 2024-11-12 12:54:44 -08:00
Ashwin Bharambe
896b304e62 Use tags for docker images instead of changing image name 2024-11-12 12:42:30 -08:00
Ashwin Bharambe
983d6ce2df
Remove the "ShieldType" concept (#430)
# What does this PR do?

This PR kills the notion of "ShieldType". The impetus for this is the
realization:

> Why is keyword llama-guard appearing so many times everywhere,
sometimes with hyphens, sometimes with underscores?

Now that we have a notion of "provider specific resource identifiers"
and "user specific aliases" for them, and given that this already works for
models ("Llama3.1-8B-Instruct" <> "fireworks/llama-3pv1-..."), we can
follow the same rules for Shields.

So each Safety provider can make up a notion of identifiers it has
registered. This already happens with Bedrock correctly. We just
generalize it for Llama Guard, Prompt Guard, etc.

For Llama Guard, we further simplify by just adopting the underlying
model name itself as the identifier! No confusion necessary.

While doing this, I noticed a bug in our DistributionRegistry where we
weren't scoping identifiers by type. Fixed.
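
The scoping fix amounts to keying registry entries by (type, identifier) instead of the identifier alone. Roughly (a simplified sketch, not the actual DistributionRegistry code):

```python
# Keyed by (object type, identifier), so a shield and a model that happen to
# share an identifier no longer overwrite each other.
registry: dict[tuple[str, str], dict] = {}


def register(obj_type: str, identifier: str, obj: dict) -> None:
    registry[(obj_type, identifier)] = obj


def lookup(obj_type: str, identifier: str):
    # Returns None if nothing was registered under this (type, identifier) pair.
    return registry.get((obj_type, identifier))


register("model", "Llama-Guard-3-1B", {"kind": "model"})
register("shield", "Llama-Guard-3-1B", {"kind": "shield"})  # does not clobber the model entry
```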

## Feature/Issue validation/testing/test plan

Ran (inference, safety, memory, agents) tests with ollama and fireworks
providers.
2024-11-12 12:37:24 -08:00
Ashwin Bharambe
09269e2a44
Enable sane naming of registered objects with defaults (#429)
# What does this PR do? 

This is a follow-up to #425. That PR allows for specifying models in the
registry, but each entry needs to look like:

```yaml
- identifier: ...
  provider_id: ...
  provider_resource_identifier: ...
```

This is headache-inducing.

The current PR makes this situation better by adopting the shape of our
APIs. Namely, we need the user to only specify `model-id`. The rest
should be optional and figured out by the Stack. You can always override
it.

Here's what an example `ollama` "full stack" registry looks like (we still
need to kill or simplify the shield_type crap):
```yaml
models:
- model_id: Llama3.2-3B-Instruct
- model_id: Llama-Guard-3-1B
shields:
- shield_id: llama_guard
  shield_type: llama_guard
```
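
One way to picture the "figure out the rest" behavior (a hypothetical sketch under the assumption that defaults come from the run config; not the actual Stack logic): if an entry gives only `model_id`, the provider can be defaulted when the run config has a single inference provider, and the provider resource id falls back to the model id.

```python
def fill_model_defaults(entry: dict, inference_providers: list[str]) -> dict:
    """Hypothetical default-filling; field names mirror the YAML above."""
    filled = dict(entry)
    if "provider_id" not in filled and len(inference_providers) == 1:
        filled["provider_id"] = inference_providers[0]
    filled.setdefault("provider_resource_id", filled["model_id"])
    return filled


print(fill_model_defaults({"model_id": "Llama3.2-3B-Instruct"}, ["ollama"]))
# {'model_id': 'Llama3.2-3B-Instruct', 'provider_id': 'ollama',
#  'provider_resource_id': 'Llama3.2-3B-Instruct'}
```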

## Test Plan

See test plan for #425. Re-ran it.
2024-11-12 11:18:05 -08:00
Ashwin Bharambe
d9d271a684
Allow specifying resources in StackRunConfig (#425)
# What does this PR do? 

This PR brings back the facility to not force registration of resources
onto the user. This is not just annoying but actually not feasible
sometimes. For example, you may have a Stack which boots up with private
providers for inference for models A and B. There is no way for the user
to actually know which model is being served by these providers now (to
be able to register it).

How will this avoid the users needing to do registration? In a follow-up
diff, I will make sure I update the sample run.yaml files so they list
the models served by the distributions explicitly. So when users do
`llama stack build --template <...>` and run it, their distributions
come up with the right set of models they expect.

For self-hosted distributions, it also allows us to have a place to
explicitly list the models that need to be served to make the "complete"
stack (including safety, for example).

## Test Plan

Started ollama locally with two lightweight models: Llama3.2-3B-Instruct
and Llama-Guard-3-1B.

Updated all the tests including agents. Here's the tests I ran so far:

```bash
pytest -s -v -m "fireworks and llama_3b" test_text_inference.py::TestInference \
  --env FIREWORKS_API_KEY=...

pytest -s -v -m "ollama and llama_3b" test_text_inference.py::TestInference 

pytest -s -v -m ollama test_safety.py

pytest -s -v -m faiss test_memory.py

pytest -s -v -m ollama  test_agents.py \
  --inference-model=Llama3.2-3B-Instruct --safety-model=Llama-Guard-3-1B
```

Found a few pre-existing bugs here and there, which these test runs helped fix.
2024-11-12 10:58:49 -08:00
Dinesh Yeduguru
8035fa1869 versioned persistence key prefixes 2024-11-12 10:30:39 -08:00
Xi Yan
cb77426fb5
fix fireworks (#427) 2024-11-12 12:15:55 -05:00
Xi Yan
ec4fcad5ca
fix eval task registration (#426)
* fix eval tasks

* fix eval tasks

* fix eval tests
2024-11-12 11:51:34 -05:00
Xi Yan
84c6fbbd93
fix tests after registration migration & rename meta-reference -> basic / llm_as_judge provider (#424)
* rename meta-reference -> basic

* config rename

* impl rename

* rename llm_as_judge, fix test

* util

* rebase

* naming fix
2024-11-12 10:35:44 -05:00
Ashwin Bharambe
3d7561e55c
Rename all inline providers with an inline:: prefix (#423) 2024-11-11 22:19:16 -08:00
Ashwin Bharambe
f4426f6a43 Fix bug in llama stack build; SERVER_DEPENDENCIES were dropped 2024-11-11 20:12:13 -08:00
Ashwin Bharambe
506b99242a Allow specifying TEST / PYPI VERSION for docker name 2024-11-11 19:56:42 -08:00
Ashwin Bharambe
36da9a600e add explicit platform 2024-11-11 19:30:15 -08:00
Ashwin Bharambe
218803b7c8 add pypi version to docker tag 2024-11-11 19:20:31 -08:00
Ashwin Bharambe
47e7c2dc15 Fix openapi generator and regenerate OpenAPI types 2024-11-11 18:44:38 -08:00
Ashwin Bharambe
343458479d Make sure TEST_PYPI_VERSION is used in docker builds 2024-11-11 18:40:13 -08:00
Ashwin Bharambe
285cd26fb2 Replace colon in path so it doesn't cause issue on Windows 2024-11-11 17:33:53 -08:00
Dinesh Yeduguru
0a3b3d5fb6
migrate scoring fns to resource (#422)
* fix after rebase

* remove print

---------

Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>
2024-11-11 17:28:48 -08:00
Dinesh Yeduguru
3802edfc50
migrate evals to resource (#421)
* migrate evals to resource

* remove listing of providers' evals

* change the order of params in register

* fix after rebase

* linter fix

---------

Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>
2024-11-11 17:24:03 -08:00
Dinesh Yeduguru
b95cb5308f
migrate dataset to resource (#420)
* migrate dataset to resource

* remove auto discovery

* remove listing of providers' datasets

* fix after rebase

---------

Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>
2024-11-11 17:14:41 -08:00
Dinesh Yeduguru
38cce97597
migrate memory banks to Resource and new registration (#411)
* migrate memory banks to Resource and new registration

* address feedback

* address feedback

* fix tests

* pgvector fix

* pgvector fix v2

* remove auto discovery

* change register signature to make params required

* update client

* client fix

* use annotated union to parse

* remove base MemoryBank inheritance

---------

Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>
2024-11-11 17:10:44 -08:00
Xi Yan
6b9850e11b run openapi gen 2024-11-11 18:12:24 -05:00
Xi Yan
b4416b72fd
Folder restructure for evals/datasets/scoring (#419)
* rename evals related stuff

* fix datasetio

* fix scoring test

* localfs -> LocalFS

* refactor scoring

* refactor scoring

* remove 8b_correctness scoring_fn from tests

* tests w/ eval params

* scoring fn braintrust fixture

* import
2024-11-11 17:35:40 -05:00
Xi Yan
2b7d70ba86
[Evals API][11/n] huggingface dataset provider + mmlu scoring fn (#392)
* wip

* scoring fn api

* eval api

* eval task

* evaluate api update

* pre commit

* unwrap context -> config

* config field doc

* typo

* naming fix

* separate benchmark / app eval

* api name

* rename

* wip tests

* wip

* datasetio test

* delete unused

* fixture

* scoring resolve

* fix scoring register

* scoring test pass

* score batch

* scoring fix

* fix eval

* test eval works

* huggingface provider

* datasetdef files

* mmlu scoring fn

* test wip

* remove type ignore

* api refactor

* add default task_eval_id for routing

* add eval_id for jobs

* remove type ignore

* huggingface provider

* wip huggingface register

* only keep 1 run_eval

* fix optional

* register task required

* register task required

* delete old tests

* fix

* mmlu loose

* refactor

* msg

* fix tests

* move benchmark task def to file

* msg

* gen openapi

* openapi gen

* move dataset to hf llamastack repo

* remove todo

* refactor

* add register model to unit test

* rename

* register to client

* delete preregistered dataset/eval task

* comments

* huggingface -> remote adapter

* openapi gen
2024-11-11 14:49:50 -05:00
Suraj Subramanian
b78ee3a0a5
fix duplicate deploy in compose.yaml (#417) 2024-11-11 10:51:14 -08:00
Ashwin Bharambe
c1f7ba3aed
Split safety into (llama-guard, prompt-guard, code-scanner) (#400)
Splits the meta-reference safety implementation into three distinct providers:

- inline::llama-guard
- inline::prompt-guard
- inline::code-scanner

Note that this PR is a backward-incompatible change to the llama stack server. I have added a deprecation_error field to ProviderSpec -- the server reads it and immediately barfs. This is used to direct the user, with a specific message, on what action to perform. An automagical "config upgrade" is a bit too much work to implement right now :/

(Note that we will be gradually prefixing all inline providers with inline:: -- I am only doing this for this set of new providers because otherwise existing configuration files will break even more badly.)
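
The deprecation mechanism is simple in spirit; a sketch with assumed field names matching the description above (not the real ProviderSpec definition): if a resolved provider spec carries a deprecation_error, the server raises it immediately with a message telling the user what to change.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ProviderSpec:
    provider_type: str
    deprecation_error: Optional[str] = None


def resolve_provider(spec: ProviderSpec) -> ProviderSpec:
    if spec.deprecation_error:
        # Fail fast at server startup and point the user at the new providers.
        raise ValueError(spec.deprecation_error)
    return spec


try:
    resolve_provider(
        ProviderSpec(
            provider_type="meta-reference",
            deprecation_error=(
                "The meta-reference safety provider has been split; use "
                "inline::llama-guard, inline::prompt-guard, or inline::code-scanner instead."
            ),
        )
    )
except ValueError as err:
    print(f"Server refused to start: {err}")
```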
2024-11-11 09:29:18 -08:00
Justin Lee
6d38b1690b
added quickstart w ollama and toolcalling using together (#413)
* added quickstart w ollama and toolcalling using together

* corrected url for colab

---------

Co-authored-by: Justin Lee <justinai@fb.com>
2024-11-09 10:52:26 -08:00
Xi Yan
b0b9c905b3 docs 2024-11-09 10:22:41 -08:00
Xi Yan
cc61fd8083 docs 2024-11-09 09:00:18 -08:00
Xi Yan
0c14761453 docs 2024-11-09 08:57:51 -08:00
Ashwin Bharambe
4986e46188
Distributions updates (slight updates to ollama, add inline-vllm and remote-vllm) (#408)
* remote vllm distro

* add inline-vllm details, fix things

* Write some docs
2024-11-08 18:09:39 -08:00
Xi Yan
ba82021d4b precommit 2024-11-08 17:58:58 -08:00