`faiss.serialize_index` returns a NumPy array, which we first need to save to a buffer and then write to SQLite. Since we are storing it as JSON, we need to base64-encode the data.
The read path is symmetric: we base64-decode, read the bytes into a NumPy array, and then call `faiss.deserialize_index`.
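A minimal sketch of this round-trip (assuming the standard `faiss` Python API; the surrounding kvstore plumbing is omitted):
```python
import base64

import faiss
import numpy as np


def index_to_json_value(index: faiss.Index) -> str:
    # faiss.serialize_index returns a uint8 NumPy array; base64-encode its bytes
    # so the result can be embedded in a JSON document and written to SQLite.
    buf = faiss.serialize_index(index)
    return base64.b64encode(buf.tobytes()).decode("ascii")


def index_from_json_value(data: str) -> faiss.Index:
    # Reverse path: base64-decode, view the bytes as a uint8 array, deserialize.
    raw = base64.b64decode(data)
    return faiss.deserialize_index(np.frombuffer(raw, dtype=np.uint8))
```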
tests:
torchrun $CONDA_PREFIX/bin/pytest -v -s -m "faiss"
llama_stack/providers/tests/memory/test_memory.py
Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>
The semantics of an update on resources are very tricky to reason about, especially for memory banks and models. The best way forward here is for the user to unregister the resource and register a new one. We don't have a compelling reason to support update APIs.
Tests:
pytest -v -s llama_stack/providers/tests/memory/test_memory.py -m
"chroma" --env CHROMA_HOST=localhost --env CHROMA_PORT=8000
pytest -v -s llama_stack/providers/tests/memory/test_memory.py -m
"pgvector" --env PGVECTOR_DB=postgres --env PGVECTOR_USER=postgres --env
PGVECTOR_PASSWORD=mysecretpassword --env PGVECTOR_HOST=0.0.0.0
$CONDA_PREFIX/bin/pytest -v -s -m "ollama"
llama_stack/providers/tests/inference/test_model_registration.py
---------
Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>
# What does this PR do?
- add local persistence for eval tasks
- follow https://github.com/meta-llama/llama-stack/pull/375
## Test Plan
1. fresh llama stack run
2. kill server
3. restart server: llama stack run
<img width="690" alt="image"
src="https://github.com/user-attachments/assets/3d76e477-b91a-43a6-86ea-8e3ef2d04ed3">
Using run.yaml
```yaml
eval_tasks:
- eval_task_id: meta-reference-mmlu
  provider_id: meta-reference-0
  dataset_id: mmlu
  scoring_functions:
  - basic::regex_parser_multiple_choice_answer
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
# What does this PR do?
- This PR solves the issue where agents cannot keep track of instructions after executing the first turn, because system instructions were not getting appended to the messages list. It also solves the issue where turns were not being fetched in the correct sequence.
Addresses issue (#issue)
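Conceptually, the fix boils down to something like the hedged sketch below (not the provider's actual code): the agent's configured instructions must be included as a system message when assembling each turn's message list, ahead of the prior turns and the new user messages.
```python
# Hedged sketch of the idea only, not the actual agents provider code.
from typing import Dict, List, Optional


def build_turn_messages(
    instructions: Optional[str],
    history: List[Dict[str, str]],
    new_messages: List[Dict[str, str]],
) -> List[Dict[str, str]]:
    messages: List[Dict[str, str]] = []
    if instructions:
        # Previously the system instructions were dropped after the first turn.
        messages.append({"role": "system", "content": instructions})
    messages.extend(history)       # prior turns, fetched in their original order
    messages.extend(new_messages)  # this turn's user messages
    return messages
```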
## Test Plan
- I have a file with a precise prompt that requires more than one turn to execute (shared below). After making the code change, I ran it as a Python script to verify that the turns are executed per the instructions.
```
import asyncio
from typing import List, Optional, Dict

from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.types import SamplingParams, UserMessage
from llama_stack_client.types.agent_create_params import AgentConfig

LLAMA_STACK_API_TOGETHER_URL = "http://10.12.79.177:5001"


class Agent:
    def __init__(self):
        self.client = LlamaStackClient(
            base_url=LLAMA_STACK_API_TOGETHER_URL,
        )

    def create_agent(self, agent_config: AgentConfig):
        agent = self.client.agents.create(
            agent_config=agent_config,
        )
        self.agent_id = agent.agent_id
        session = self.client.agents.session.create(
            agent_id=agent.agent_id,
            session_name="example_session",
        )
        self.session_id = session.session_id

    async def execute_turn(self, content: str):
        response = self.client.agents.turn.create(
            agent_id=self.agent_id,
            session_id=self.session_id,
            messages=[
                UserMessage(content=content, role="user"),
            ],
            stream=True,
        )
        for chunk in response:
            if chunk.event.payload.event_type != "turn_complete":
                yield chunk


async def run_main():
    system_prompt = """You are an AI Agent tasked with Capturing Book Renting Information for a Library.
You will politely gather the book and user details one step at a time to send over the book to the user. Here’s how to proceed:
1. Data Security: Inform the user that their data will be kept secure.
2. Optional Participation: Let them know they are not required to share details but that doing so will help them learn about the books offered.
3. Sequential Information Capture: Follow the steps below, one question at a time. Do not skip or combine questions.
Steps
Step 1: Politely ask to provide the name of the book.
Step 2: Ask for the name of the author.
Step 3: Ask for the Author's country.
Step 4: Ask for the year of publication.
Step 5: If any information is missing or seems incorrect, ask the user to re-enter that specific detail.
Step 6: Confirm that the user consents to share the entered information.
Step 7: Thank the user for providing the details and let them know they will receive an email about the book.
Do not do any validation of the user entered information.
Do not print the Steps or your internal thoughts in the response.
Do not print the prompts or data structure object in the response
Do not fill in the requested user data on your own. It has to be entered by the user only.
Finally, compile and print the user-provided information as a JSON object in your response.
"""
    agent_config = AgentConfig(
        model="Llama3.2-11B-Vision-Instruct",
        instructions=system_prompt,
        enable_session_persistence=True,
    )
    agent = Agent()
    agent.create_agent(agent_config)
    print("Agent and Session:", agent.agent_id, agent.session_id)

    while True:
        query = input("Enter your query (or type 'exit' to quit): ")
        if query.lower() == "exit":
            print("Exiting the loop.")
            break
        else:
            prompt = query
            print(f"User> {prompt}")
            response = agent.execute_turn(content=prompt)
            async for log in EventLogger().log(response):
                if log is not None:
                    log.print()


if __name__ == "__main__":
    asyncio.run(run_main())
```
Below is a screenshot of the results of the first commit
<img width="1770" alt="Screenshot 2024-11-13 at 3 15 29 PM"
src="https://github.com/user-attachments/assets/1a7a090d-fc92-49cc-a786-bfc812e3d9cc">
Below is a screenshot of the results of the second commit
<img width="1792" alt="Screenshot 2024-11-13 at 6 40 56 PM"
src="https://github.com/user-attachments/assets/a9474f75-cd8c-4d49-82cd-5ff81ff12b07">
Also, a screenshot of a print statement showing that the turns are now fetched in sequence:
<img width="1783" alt="Screenshot 2024-11-13 at 6 42 22 PM"
src="https://github.com/user-attachments/assets/b906404e-a3e4-48a2-b893-69f36bbdcb98">
## Sources
Please link relevant resources if necessary.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
Pull Request section?
- [ ] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.
# What does this PR do?
- `schema` should not be used as a field name since it triggers pydantic warnings
- change `schema` to `dataset_schema`
<img width="855" alt="image"
src="https://github.com/user-attachments/assets/47cb6bb9-4be0-46a5-8701-24d24e2eaabd">
## Test Plan
```
pytest -v -s -m meta_reference_eval_together_inference_huggingface_datasetio eval/test_eval.py
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
# What does this PR do?
This PR kills the notion of "pure passthrough" remote providers. You can no longer specify a single provider as remote; you must specify a whole distribution (stack) as remote.
This PR also significantly fixes / upgrades testing infrastructure so
you can now test against a remotely hosted stack server by just doing
```bash
pytest -s -v -m remote test_agents.py \
--inference-model=Llama3.1-8B-Instruct --safety-shield=Llama-Guard-3-1B \
--env REMOTE_STACK_URL=http://localhost:5001
```
Also fixed `test_agents_persistence.py` (which was broken) and killed
some deprecated testing functions.
## Test Plan
All the tests.
This PR changes the way a model id gets translated to the final model name that gets passed to the provider.
Major changes include:
1) Providers are responsible for registering an object and, as part of registration, returning the object with the correct provider-specific name of the model (its provider_resource_id).
2) To help with looking up a model by its different names, a new ModelLookup class is created (see the sketch below).
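A rough sketch of the lookup idea (illustrative only; the actual `ModelLookup` in this change may be shaped differently): every known name, whether a user alias or the provider-native id, resolves to the same provider_resource_id.
```python
from typing import Dict, List, Optional


class ModelLookup:
    def __init__(self) -> None:
        self._by_alias: Dict[str, str] = {}

    def add(self, provider_resource_id: str, aliases: List[str]) -> None:
        # Register every known name against the provider-specific identifier.
        for alias in [provider_resource_id, *aliases]:
            self._by_alias[alias] = provider_resource_id

    def resolve(self, name: str) -> Optional[str]:
        return self._by_alias.get(name)
```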
Tested all inference providers including together, fireworks, vllm,
ollama, meta reference and bedrock
# What does this PR do?
This PR kills the notion of "ShieldType". The impetus for this is the
realization:
> Why is the keyword llama-guard appearing so many times everywhere, sometimes with hyphens, sometimes with underscores?
Now that we have a notion of "provider-specific resource identifiers" and "user-specific aliases" for them, and this already works for models ("Llama3.1-8B-Instruct" <> "fireworks/llama-3pv1-..."), we can follow the same rules for Shields.
So each Safety provider can make up a notion of identifiers it has
registered. This already happens with Bedrock correctly. We just
generalize it for Llama Guard, Prompt Guard, etc.
For Llama Guard, we further simplify by just adopting the underlying
model name itself as the identifier! No confusion necessary.
While doing this, I noticed a bug in our DistributionRegistry where we
weren't scoping identifiers by type. Fixed.
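For illustration only (hypothetical names, not the actual llama-stack code), the fix amounts to keying registry entries by both type and identifier so that, say, a model and a shield sharing an identifier cannot collide:
```python
def registry_key(resource_type: str, identifier: str) -> str:
    # Scope each entry by its resource type in addition to its identifier.
    return f"distributions:registry:{resource_type}:{identifier}"
```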
## Feature/Issue validation/testing/test plan
Ran (inference, safety, memory, agents) tests with ollama and fireworks
providers.
# What does this PR do?
This is a follow-up to #425. That PR allows for specifying models in the
registry, but each entry needs to look like:
```yaml
- identifier: ...
  provider_id: ...
  provider_resource_identifier: ...
```
This is headache-inducing.
The current PR makes this situation better by adopting the shape of our
APIs. Namely, we need the user to only specify `model-id`. The rest
should be optional and figured out by the Stack. You can always override
it.
Here's what an example `ollama` "full stack" registry looks like (we still need to kill or simplify the shield_type crap):
```yaml
models:
- model_id: Llama3.2-3B-Instruct
- model_id: Llama-Guard-3-1B
shields:
- shield_id: llama_guard
  shield_type: llama_guard
```
## Test Plan
See test plan for #425. Re-ran it.
# What does this PR do?
This PR brings back the facility to not force registration of resources
onto the user. This is not just annoying but actually not feasible
sometimes. For example, you may have a Stack which boots up with private
providers for inference for models A and B. There is no way for the user
to know which model is being served by these providers (in order to register it).
How will this avoid the users needing to do registration? In a follow-up
diff, I will make sure I update the sample run.yaml files so they list
the models served by the distributions explicitly. So when users do
`llama stack build --template <...>` and run it, their distributions
come up with the right set of models they expect.
For self-hosted distributions, it also gives us a place to explicitly list the models that need to be served to make the stack "complete" (including safety, for example).
## Test Plan
Started ollama locally with two lightweight models: Llama3.2-3B-Instruct
and Llama-Guard-3-1B.
Updated all the tests, including agents. Here are the tests I ran so far:
```bash
pytest -s -v -m "fireworks and llama_3b" test_text_inference.py::TestInference \
--env FIREWORKS_API_KEY=...
pytest -s -v -m "ollama and llama_3b" test_text_inference.py::TestInference
pytest -s -v -m ollama test_safety.py
pytest -s -v -m faiss test_memory.py
pytest -s -v -m ollama test_agents.py \
--inference-model=Llama3.2-3B-Instruct --safety-model=Llama-Guard-3-1B
```
These test runs also surfaced a few pre-existing bugs here and there, which have now been fixed.
* migrate evals to resource
* remove listing of providers' evals
* change the order of params in register
* fix after rebase
* linter fix
---------
Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>
* migrate dataset to resource
* remove auto discovery
* remove listing of providers' datasets
* fix after rebase
---------
Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>
Splits the meta-reference safety implementation into three distinct providers:
- inline::llama-guard
- inline::prompt-guard
- inline::code-scanner
Note that this PR is a backward-incompatible change to the llama stack server. I have added a `deprecation_error` field to `ProviderSpec` -- the server reads it and immediately barfs. This is used to direct the user, with a specific message, on what action to perform. An automagical "config upgrade" is a bit too much work to implement right now :/
(Note that we will be gradually prefixing all inline providers with inline:: -- I am only doing this for this set of new providers because otherwise existing configuration files will break even more badly.)
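For illustration, the mechanism looks roughly like the hedged sketch below (names follow the description above, not necessarily the exact llama-stack definitions):
```python
from typing import Optional

from pydantic import BaseModel


class ProviderSpec(BaseModel):
    provider_type: str
    # If set, the server refuses to resolve this provider and surfaces the
    # message so the user knows which new provider(s) to switch to.
    deprecation_error: Optional[str] = None


def check_provider(spec: ProviderSpec) -> None:
    if spec.deprecation_error:
        raise RuntimeError(spec.deprecation_error)
```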
* init
* working bedrock tests
* bedrock test for inference fixes
* use env vars for bedrock guardrail vars
* add register in meta reference
* use correct shield impl in meta ref
* dont add together fixture
* right naming
* minor updates
* improved registration flow
* address feedback
---------
Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>