Commit graph

1567 commits

Author SHA1 Message Date
Ihar Hrachyshka
814eb75321
chore: enable ruff for ./scripts too (#1643)
# What does this PR do?

Enable ruff for scripts.


Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-03-18 12:17:21 -07:00
Matthew Farrellee
706b4ca651
feat: support nvidia hosted vision models (llama 3.2 11b/90b) (#1278)
# What does this PR do?

Support NVIDIA-hosted Llama 3.2 11B/90B vision models. They are not served
from the common https://integrate.api.nvidia.com/v1 endpoint; each model is
hosted at its own individual URL.
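
For illustration, a minimal sketch of the per-model routing this implies; the special URLs below are hypothetical placeholders, not the provider's confirmed endpoints:

```python
# Sketch of per-model base-URL selection (URLs are illustrative only).
DEFAULT_BASE_URL = "https://integrate.api.nvidia.com/v1"

SPECIAL_MODEL_URLS = {
    "meta/llama-3.2-11b-vision-instruct": "https://ai.api.nvidia.com/v1/example/llama-3.2-11b-vision-instruct",
    "meta/llama-3.2-90b-vision-instruct": "https://ai.api.nvidia.com/v1/example/llama-3.2-90b-vision-instruct",
}

def base_url_for(model_id: str) -> str:
    # Fall back to the common endpoint for models hosted there.
    return SPECIAL_MODEL_URLS.get(model_id, DEFAULT_BASE_URL)
```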

## Test Plan

`LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -s -v
tests/client-sdk/inference/test_vision_inference.py
--inference-model=meta/llama-3.2-11b-vision-instruct -k image`
2025-03-18 11:54:10 -07:00
Jamie Land
f4dc290705
feat: Created Playground Containerfile and Image Workflow (#1256)
# What does this PR do?
Adds a container file that can be used to build the playground UI.

This file will be built by this PR in the stack-ops repo:
https://github.com/meta-llama/llama-stack-ops/pull/9

The Docker command in the docs will need to change once the address of the
official repository is known.

## Test Plan

Tested the image on my local OpenShift instance using this Helm chart:
https://github.com/Jaland/llama-stack-helm/tree/main/llama-stack


---------

Co-authored-by: Jamie Land <hokie10@gmail.com>
2025-03-18 09:26:49 -07:00
Sébastien Han
ffe9b3b278
ci(ollama): run more integration tests (#1636)
# What does this PR do?
Run additional tests in a matrix to accelerate the process and clearly
identify failing providers.

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-03-18 08:54:42 -07:00
Luis Tomas Bolivar
168cbcbb92
fix: Add the option to not verify SSL at remote-vllm provider (#1585)
# What does this PR do?
Add the option to not verify SSL certificates for the remote-vllm
provider. This allows the Llama Stack server to talk to remote LLMs that
use self-signed certificates.

Partially addresses #1545
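
At the HTTP-client level, skipping verification amounts to something like the following sketch (the base URL and wiring here are assumptions, not the PR's exact code):

```python
import httpx

# verify=False disables TLS certificate checks, which is what a
# "don't verify SSL" provider option boils down to. Only appropriate
# for trusted networks using self-signed certificates.
client = httpx.Client(base_url="https://my-vllm.internal:8443/v1", verify=False)
print(client.get("/models").json())
```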
2025-03-18 09:33:35 -04:00
ehhuang
37f155e41d
feat(agent): support multiple tool groups (#1556)
Summary:
closes #1488 

Test Plan:
added new integration test
```
LLAMA_STACK_CONFIG=dev pytest -s -v tests/integration/agents/test_agents.py --safety-shield meta-llama/Llama-Guard-3-8B --text-model openai/gpt-4o-mini
```
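
A rough sketch of what this enables through the client SDK; the class and field names (`AgentConfig`, `toolgroups`) reflect the client of that era and may differ:

```python
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.types.agent_create_params import AgentConfig

client = LlamaStackClient(base_url="http://localhost:8321")

# An agent configured with more than one tool group.
agent_config = AgentConfig(
    model="openai/gpt-4o-mini",
    instructions="You are a helpful assistant.",
    toolgroups=["builtin::websearch", "builtin::rag"],
    enable_session_persistence=False,
)
agent = Agent(client, agent_config)
```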
---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with
[ReviewStack](https://reviewstack.dev/meta-llama/llama-stack/pull/1556).
* __->__ #1556
* #1550
2025-03-17 22:13:09 -07:00
ehhuang
c23a7af5d6
fix: agents with non-llama model (#1550)
# Summary:
Includes fixes to get test_agents working with an OpenAI model, e.g. tool
parsing and message conversion.

# Test Plan:
```
LLAMA_STACK_CONFIG=dev pytest -s -v tests/integration/agents/test_agents.py --safety-shield meta-llama/Llama-Guard-3-8B --text-model openai/gpt-4o-mini
```

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with
[ReviewStack](https://reviewstack.dev/meta-llama/llama-stack/pull/1550).
* #1556
* __->__ #1550
2025-03-17 22:11:06 -07:00
Yuan Tang
0bdfc71f8d
test: Bump slow_callback_duration to 200ms to avoid flaky remote vLLM unit tests (#1675)
# What does this PR do?

This avoids a flaky timeout issue observed in CI builds, e.g.
3891286596
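
For context, the knob lives on the asyncio event loop; a minimal sketch (the actual test fixture may set it differently):

```python
import asyncio

# In asyncio debug mode, callbacks running longer than
# slow_callback_duration (in seconds) are logged as slow, which can
# make timing-sensitive tests flaky on loaded CI machines.
loop = asyncio.new_event_loop()
loop.set_debug(True)
loop.slow_callback_duration = 0.2  # 200 ms instead of the 100 ms default
```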

## Test Plan

Ran multiple times and passed consistently.

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-03-17 21:33:04 -07:00
Yuan Tang
2d2bb701fa
ci: Add dependabot scans for Python deps (#1618)
# What does this PR do?

This PR adds dependabot updates for Python dependencies. In addition:
* Consistent weekly schedule on a specific day
* Specific commit messages
* `open-pull-requests-limit` is intentionally set low to avoid upgrading
dependencies that are likely to cause regressions. We want to keep the
focus here on security updates only.

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-03-17 20:20:31 -07:00
Yuan Tang
e14f69eb7e
chore: Remove unused cursor rules (#1653)
# What does this PR do?

I think this was included accidentally via
https://github.com/meta-llama/llama-stack/pull/1475.

@raghotham @ashwinb let me know if it's intentional to include this.

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-03-17 20:19:37 -07:00
Nathan Weinberg
1261bc93bf
docs: fixed broken tip in distro build docs (#1673)
# What does this PR do?
Fixed a broken tip in the distro build docs.

## Test Plan
Local docs build

Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
2025-03-17 17:22:26 -07:00
Xi Yan
5287b437ae
feat(api): (1/n) datasets api clean up (#1573)
## PR Stack
- https://github.com/meta-llama/llama-stack/pull/1573
- https://github.com/meta-llama/llama-stack/pull/1625
- https://github.com/meta-llama/llama-stack/pull/1656
- https://github.com/meta-llama/llama-stack/pull/1657
- https://github.com/meta-llama/llama-stack/pull/1658
- https://github.com/meta-llama/llama-stack/pull/1659
- https://github.com/meta-llama/llama-stack/pull/1660

**Client SDK**
- https://github.com/meta-llama/llama-stack-client-python/pull/203

**CI**
- 1391130488
![image](https://github.com/user-attachments/assets/69636067-376d-436b-9204-896e2dd490ca)
-- the test_rag_agent_with_attachments failure is flaky and not related to this PR

## Doc
![image](https://github.com/user-attachments/assets/b88390f3-73d6-4483-b09a-a192064e32d9)


## Client Usage
```python
client.datasets.register(
    source={
        "type": "uri",
        "uri": "lsfs://mydata.jsonl",
    },
    schema="jsonl_messages",
    # optional
    dataset_id="my_first_train_data",
)

# quick prototype debugging
client.datasets.register(
    source={
        "type": "rows",
        "rows": [
            {"messages": [...]},
        ],
    },
    schema="jsonl_messages",
)
```

## Test Plan
- CI:
1387805545

```
LLAMA_STACK_CONFIG=fireworks pytest -v tests/integration/datasets/test_datasets.py
```

```
LLAMA_STACK_CONFIG=fireworks pytest -v tests/integration/scoring/test_scoring.py
```

```
pytest -v -s --nbval-lax ./docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb
```
2025-03-17 16:55:45 -07:00
Nathan Weinberg
3b35a39b8b
ci: limit PR testing based on modified files (#1644)
# What does this PR do?
Rather than running unit and functional tests on all PRs, we should only
run them on PRs that change relevant files.

Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
2025-03-17 15:20:29 -07:00
Sébastien Han
24fd06879e
refactor: simplify command execution and remove PTY handling (#1641)
# What does this PR do?

A PTY is unnecessary for interactive mode since `subprocess.run()`
already inherits the calling terminal’s stdin, stdout, and stderr,
allowing natural interaction. Using a PTY can introduce unwanted side
effects like buffering issues and inconsistent signal handling. Standard
input/output is sufficient for most interactive programs.

This commit simplifies the command execution by:

1. Removing PTY-based execution in favor of direct subprocess handling
2. Consolidating command execution into a single run_command function
3. Improving error handling with specific subprocess error types
4. Adding proper type hints and documentation
5. Maintaining Ctrl+C handling for graceful interruption
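
A minimal sketch of the consolidated helper described above (not the PR's exact code; names are illustrative):

```python
import subprocess
import sys

def run_command(command: list[str]) -> int:
    """Run a command, inheriting the caller's stdin/stdout/stderr."""
    try:
        return subprocess.run(command, check=False).returncode
    except KeyboardInterrupt:
        # Ctrl+C: exit cleanly instead of dumping a traceback.
        print("\nInterrupted.", file=sys.stderr)
        return 130
    except (subprocess.SubprocessError, OSError) as exc:
        print(f"Failed to run {command}: {exc}", file=sys.stderr)
        return 1
```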

## Test Plan

```
llama stack run
```

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-03-17 15:03:14 -07:00
Ihar Hrachyshka
77ca09467f
chore: consolidate scripts under ./scripts directory (#1646) 2025-03-17 17:56:30 -04:00
Nathan Weinberg
e48af78b76
fix: add shutdown method for ProviderImpl (#1670)
# What does this PR do?
Currently there is no shutdown method implemented for the `ProviderImpl`
class.

This leads to the following warning:
```shell
INFO:     Waiting for application shutdown.
INFO     2025-03-17 17:25:13,280 __main__:145 server: Shutting down                                                     
INFO     2025-03-17 17:25:13,282 __main__:129 server: Shutting down ModelsRoutingTable                                  
INFO     2025-03-17 17:25:13,284 __main__:129 server: Shutting down DatasetsRoutingTable                                
INFO     2025-03-17 17:25:13,286 __main__:129 server: Shutting down DatasetIORouter                                     
INFO     2025-03-17 17:25:13,287 __main__:129 server: Shutting down TelemetryAdapter                                    
INFO     2025-03-17 17:25:13,288 __main__:129 server: Shutting down InferenceRouter                                     
INFO     2025-03-17 17:25:13,290 __main__:129 server: Shutting down ShieldsRoutingTable                                 
INFO     2025-03-17 17:25:13,291 __main__:129 server: Shutting down SafetyRouter                                        
INFO     2025-03-17 17:25:13,292 __main__:129 server: Shutting down VectorDBsRoutingTable                               
INFO     2025-03-17 17:25:13,293 __main__:129 server: Shutting down VectorIORouter                                      
INFO     2025-03-17 17:25:13,294 __main__:129 server: Shutting down ToolGroupsRoutingTable                              
INFO     2025-03-17 17:25:13,295 __main__:129 server: Shutting down ToolRuntimeRouter                                   
INFO     2025-03-17 17:25:13,296 __main__:129 server: Shutting down MetaReferenceAgentsImpl                             
INFO     2025-03-17 17:25:13,297 __main__:129 server: Shutting down ScoringFunctionsRoutingTable                        
INFO     2025-03-17 17:25:13,298 __main__:129 server: Shutting down ScoringRouter                                       
INFO     2025-03-17 17:25:13,299 __main__:129 server: Shutting down BenchmarksRoutingTable                              
INFO     2025-03-17 17:25:13,300 __main__:129 server: Shutting down EvalRouter                                          
INFO     2025-03-17 17:25:13,301 __main__:129 server: Shutting down DistributionInspectImpl                             
INFO     2025-03-17 17:25:13,303 __main__:129 server: Shutting down ProviderImpl                                        
WARNING  2025-03-17 17:25:13,304 __main__:134 server: No shutdown method for ProviderImpl                               
INFO:     Application shutdown complete.
INFO:     Finished server process [1]
```
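
A minimal sketch of the kind of no-op hook that resolves this, assuming the server's shutdown loop calls `await impl.shutdown()` on each registered implementation:

```python
class ProviderImpl:
    async def shutdown(self) -> None:
        # Nothing to release today; defining the hook is enough to stop
        # the "No shutdown method" warning during server shutdown.
        pass
```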

## Test Plan
Start a server and shut it down

Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
2025-03-17 14:55:40 -07:00
cdgamarose-nv
252a487085
feat: added nvidia as safety provider (#1248)
# What does this PR do?
Adds NVIDIA as a safety provider by interfacing with the NeMo Guardrails
microservice.
This enables checking a user's input or the LLM's output against input and
output guardrails by using the `/v1/guardrails/checks` endpoint of the
[guardrails API](https://developer.nvidia.com/docs/nemo-microservices/guardrails/source/guides/checks-guide.html).

## Test Plan
Deploy the NeMo Guardrails service following the documentation:
https://developer.nvidia.com/docs/nemo-microservices/guardrails/source/getting-started/deploy-docker.html

### Standalone:
```bash
(venv) local-cdgamarose@a1u1g-rome-0153:~/llama-stack$ pytest -v -s llama_stack/providers/tests/safety/test_safety.py --providers inference=nvidia,safety=nvidia --safety-shield meta/llama-3.1-8b-instruct

=================================================================================== test session starts ===================================================================================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0 -- /localhome/local-cdgamarose/llama-stack/venv/bin/python3
cachedir: .pytest_cache
metadata: {'Python': '3.10.12', 'Platform': 'Linux-5.15.0-122-generic-x86_64-with-glibc2.35', 'Packages': {'pytest': '8.3.4', 'pluggy': '1.5.0'}, 'Plugins': {'metadata': '3.1.1', 'asyncio': '0.25.3', 'anyio': '4.8.0', 'html': '4.1.1'}}
rootdir: /localhome/local-cdgamarose/llama-stack
configfile: pyproject.toml
plugins: metadata-3.1.1, asyncio-0.25.3, anyio-4.8.0, html-4.1.1
asyncio: mode=strict, asyncio_default_fixture_loop_scope=None
collected 2 items

llama_stack/providers/tests/safety/test_safety.py::TestSafety::test_shield_list[--inference=nvidia:safety=nvidia] Initializing NVIDIASafetyAdapter(http://0.0.0.0:7331)...
PASSED
llama_stack/providers/tests/safety/test_safety.py::TestSafety::test_run_shield[--inference=nvidia:safety=nvidia] PASSED

============================================================================== 2 passed, 2 warnings in 4.78s ==============================================================================

```
### Distribution:
```
llama stack run llama_stack/templates/nvidia/run-with-safety.yaml
curl -v -X 'POST' "http://localhost:8321/v1/safety/run-shield" -H 'accept: application/json' -H 'Content-Type: application/json' -d '{"shield_id": "meta/llama-3.1-8b-instruct", "messages":[{"role": "user", "content": "you are stupid"}]}'
{"violation":{"violation_level":"error","user_message":"Sorry I cannot do this.","metadata":{"self check input":{"status":"blocked"}}}}
```
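
The same check through the Python client would look roughly like this (method and field names are taken from the client SDK of the time; treat as a sketch):

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Equivalent of the curl call above.
response = client.safety.run_shield(
    shield_id="meta/llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "you are stupid"}],
    params={},
)
print(response.violation)
```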


---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2025-03-17 14:39:23 -07:00
Kelly Brown
ac51564ad5
docs: Fixing outputs in client cli and formatting suggestions (#1668)
**Description:** Updates the client example output and adds suggested
formatting for some of the required and optional CLI flags. If the
re-formatting is unnecessary, I can remove it from this PR and just
have this fix the example output.
2025-03-17 14:31:09 -07:00
Jeff MAURY
f11b6db40d
fix: build distribution with podman (#1671)
# What does this PR do?

Update the container build script so that it is compatible with Podman.
The `--progress=plain` option is now the default and can be overridden.

## Test Plan
N/A


Signed-off-by: Jeff MAURY <jmaury@redhat.com>
2025-03-17 14:30:06 -07:00
Sarthak Deshpande
dfa11a1216
fix: fixed import error (#1637)
# What does this PR do?
The `generate_response_prompt` had an import error; this fixes it.

Co-authored-by: sarthakdeshpande <sarthak.deshpande@engati.com>
2025-03-17 17:04:47 -04:00
yyymeta
fb418813fc
fix: passthrough impl response.content.text (#1665)
# What does this PR do?
The current passthrough impl returns `chatcompletion_message.content` as a
`TextItem()`, not a plain string, so it's not compatible with other
providers and causes parsing errors downstream.

Change away from the generic pydantic conversion, and explicitly parse
out `content.text`.
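
A hypothetical sketch of the explicit parsing described above; the helper and field names are illustrative:

```python
def extract_text(content) -> str:
    # Pull a plain string out of TextItem-style content instead of
    # relying on generic pydantic conversion.
    if isinstance(content, str):
        return content
    if isinstance(content, list):  # a list of content items
        return "".join(extract_text(item) for item in content)
    if hasattr(content, "text"):   # e.g. TextItem(text="...")
        return content.text
    raise TypeError(f"unsupported content type: {type(content)}")
```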

## Test Plan

Set up a llama server with passthrough:

```
llama-stack-client eval run-benchmark "MMMU_Pro_standard"   --model-id    meta-llama/Llama-3-8B   --output-dir /tmp/   --num-examples 20
```
Works without a parsing error.
2025-03-17 13:42:08 -07:00
Kelly Brown
60ae7455f6
docs: Fix trailing whitespace error (#1669)
Description: Fixes the trailing whitespace error that's coming up on main.
2025-03-17 08:53:30 -07:00
Chirag Modi
b56b06037c
Web updates to point to latest releases for Mobile SDK (#1650)
# What does this PR do?
Web updates to point to latest releases for Mobile SDK

- Point to the `latest-release` branch for the mobile SDK repos to minimize
the number of change points on the site.
- Updates to some instructions.
2025-03-14 17:06:07 -07:00
Nathan Weinberg
d2dda4af64
docs: add additional guidance around using virtualenv (#1642)
# What does this PR do?
Current docs are very tailored to `conda`.

This also adds guidance around running code examples within a virtual
environment, for both `conda` and `virtualenv`.

Signed-off-by: Nathan Weinberg <nweinber@redhat.com>
2025-03-14 16:00:55 -07:00
Ashwin Bharambe
7b81761a56 fix: update CDN url for stoplight 2025-03-14 15:46:45 -07:00
Ashwin Bharambe
93cfade8c9 ci: Bump version to 0.1.7 2025-03-14 15:21:26 -07:00
Ashwin Bharambe
c5857a9b50 fix: sleep between tests oof 2025-03-14 14:45:37 -07:00
yyymeta
a626b7bce3
feat: [new open benchmark] BFCL_v3 (#1578)
# What does this PR do?
create a new dataset BFCL_v3 from
https://gorilla.cs.berkeley.edu/blogs/13_bfcl_v3_multi_turn.html

overall each question asks the model to perform a task described in
natural language, and additionally a set of available functions and
their schema are given for the model to choose from. the model is
required to write the function call form including function name and
parameters , to achieve the stated purpose. the results are validated
against provided ground truth, to make sure that the generated function
call and the ground truth function call are syntactically and
semantically equivalent, by checking their AST .
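
As a toy illustration of AST-based equivalence checking (not BFCL's actual checker), two call strings can be compared structurally with Python's `ast` module:

```python
import ast

def calls_equivalent(call_a: str, call_b: str) -> bool:
    """Compare two function-call strings structurally, ignoring
    surface differences such as whitespace."""
    return ast.dump(ast.parse(call_a)) == ast.dump(ast.parse(call_b))

assert calls_equivalent(
    "get_weather(city='SF', unit='celsius')",
    "get_weather( city = 'SF', unit = 'celsius' )",
)
```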



## Test Plan

Start the server with:

```
llama stack run ./llama_stack/templates/ollama/run.yaml
```

Then send traffic:
```
 llama-stack-client eval run-benchmark "bfcl"  --model-id   meta-llama/Llama-3.2-3B-Instruct    --output-dir /tmp/gpqa    --num-examples   2
```
2025-03-14 12:50:49 -07:00
Charlie Doern
78d4872c0c
feat: add support for logging config in the run.yaml (#1408)
# What does this PR do?

A user should be able to store a static logging configuration outside of
their environment. It makes sense to store this in the run YAML, given
that we store other things like server configuration there.

The environment variable settings override the config settings if both
are available.

The format in the config looks like this:

```
logging_config:
  category_levels:
    VALID_CATEGORY: VALID_STRING_LOG_LEVEL
```

Any specified category out of the following:

`core | server | router | inference | agents | safety | eval | tools |
client`

combined with any of the following log levels:

`debug | info | warning | error | critical`

can be placed in the `category_levels` list in order to achieve the
desired log level.
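
A sketch of the override precedence this implies; the environment variable name and format below are assumptions, and the real parsing lives in the server's logging setup:

```python
import os

yaml_levels = {"server": "debug"}  # from logging_config in run.yaml

# e.g. LLAMA_STACK_LOGGING="server=info;core=debug" (name assumed)
env_spec = os.environ.get("LLAMA_STACK_LOGGING", "")
env_levels = dict(
    pair.split("=", 1) for pair in env_spec.split(";") if "=" in pair
)

effective = {**yaml_levels, **env_levels}  # env settings win on conflict
print(effective)
```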

## Test Plan

Test locally with a run config like the following:

```
version: '2'
image_name: ollama
logging_config:
  category_levels:
      server: debug
apis:
...
```

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-03-14 12:36:25 -07:00
Ihar Hrachyshka
e3e7013ac8
chore: Add pre-commit check to sync api spec docs (#1609)
# What does this PR do?

It will fail if the newly generated spec docs are different.


## Test Plan

```
$ pre-commit run --all-files
check for merge conflicts................................................Passed
trim trailing whitespace.................................................Passed
check for added large files..............................................Passed
fix end of files.........................................................Passed
Insert license in comments...............................................Passed
ruff.....................................................................Passed
ruff-format..............................................................Passed
blacken-docs.............................................................Passed
uv-lock..................................................................Passed
uv-export................................................................Passed
mypy.....................................................................Passed
Distribution Template Codegen............................................Passed
API Spec Codegen.........................................................Passed
```

Now add a field to an existing API and repeat:

```
$ pre-commit run --all-files
check for merge conflicts................................................Passed
trim trailing whitespace.................................................Passed
check for added large files..............................................Passed
fix end of files.........................................................Passed
Insert license in comments...............................................Passed
ruff.....................................................................Passed
ruff-format..............................................................Passed
blacken-docs.............................................................Passed
uv-lock..................................................................Passed
uv-export................................................................Passed
mypy.....................................................................Passed
Distribution Template Codegen............................................Passed
API Spec Codegen.........................................................Failed
- hook id: openapi-codegen
- files were modified by this hook
```
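
The core idea of the hook can be sketched in a few lines: regenerate, then fail if git sees modifications (paths taken from elsewhere in this log; treat as a sketch, not the hook's exact code):

```python
import subprocess
import sys

# Regenerate the OpenAPI spec docs.
subprocess.run(["sh", "docs/openapi_generator/run_openapi_generator.sh"], check=True)

# Fail (non-zero exit) if the generated files changed, i.e. were stale.
diff = subprocess.run(["git", "diff", "--exit-code", "docs/_static"])
sys.exit(diff.returncode)
```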


Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-03-14 09:20:49 -07:00
Ihar Hrachyshka
bfc79217a8
chore: Add ./scripts/unit-tests.sh (#1515)
# What does this PR do?
Useful for local development. Now you can just trigger the script without
caring about the specific arguments needed to run the unit tests.


## Test Plan

```
$ . ./venv/bin/activate
$ ./scripts/unit-tests.sh
$ echo $?
0
```


Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
Co-authored-by: Nathan Weinberg <31703736+nathan-weinberg@users.noreply.github.com>
2025-03-13 20:25:15 -07:00
Xi Yan
33b096cc21
fix: OpenAPI with provider get (#1627)
# What does this PR do?
- https://github.com/meta-llama/llama-stack/pull/1429 introduces
GetProviderResponse in OpenAPI, which is not needed and not correctly
defined.

cc @cdoern 



## Test Plan
```
llama-stack-client providers list
```
![image](https://github.com/user-attachments/assets/2f7b62a5-daf2-4bf9-9505-69755c7025fc)


2025-03-13 19:56:32 -07:00
Kai Wu
9e73341008
fix: change dog.jpg path in test_vision_inference.py (#1624)
# What does this PR do?
Quick fix, as the vision_inference test's dog.jpg path has changed.

2025-03-13 18:58:12 -07:00
Yuan Tang
ca0cbf4338
fix: Fix pre-commit check (#1628)
# What does this PR do?

Fixes a pre-commit check failure after merging
https://github.com/meta-llama/llama-stack/pull/1010:
3874877097

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-03-13 18:57:42 -07:00
Alina Ryan
c02464b635
fix: Clarify llama model prompt-format help text (#1010)
# What does this PR do?
Updates the help text for the `llama model prompt-format` command to
clarify that users should provide a specific model name (e.g.,
Llama3.1-8B, Llama3.2-11B-Vision), not a model family. Removes the
default value and field for `--model-name` to prevent users from
mistakenly thinking a model family name is acceptable. Adds guidance to
run `llama model list` to view valid model names.


## Test Plan

Output of `llama model prompt-format -h` Before:
```
(venv) alina@fedora:~/dev/llama/llama-stack$ llama model prompt-format -h
usage: llama model prompt-format [-h] [-m MODEL_NAME]

Show llama model message formats

options:
  -h, --help            show this help message and exit
  -m MODEL_NAME, --model-name MODEL_NAME
                        Model Family (llama3_1, llama3_X, etc.)

Example:
    llama model prompt-format <options>
(venv) alina@fedora:~/dev/llama/llama-stack$ llama model prompt-format --model-name llama3_1
usage: llama model prompt-format [-h] [-m MODEL_NAME]
llama model prompt-format: error: llama3_1 is not a valid Model. Choose one from --
Llama3.1-8B
Llama3.1-70B
Llama3.1-405B
Llama3.1-8B-Instruct
Llama3.1-70B-Instruct
Llama3.1-405B-Instruct
Llama3.2-1B
Llama3.2-3B
Llama3.2-1B-Instruct
Llama3.2-3B-Instruct
Llama3.2-11B-Vision
Llama3.2-90B-Vision
Llama3.2-11B-Vision-Instruct
Llama3.2-90B-Vision-Instruct
```

Output of `llama model prompt-format -h` After:
```
(venv) alina@fedora:~/dev/llama/llama-stack$ llama model prompt-format -h
usage: llama model prompt-format [-h] [-m MODEL_NAME]

Show llama model message formats

options:
  -h, --help            show this help message and exit
  -m MODEL_NAME, --model-name MODEL_NAME
                        Example: Llama3.1-8B or Llama3.2-11B-Vision, etc
                        (Run `llama model list` to see a list of valid model names)

Example:
    llama model prompt-format <options>

```

Signed-off-by: Alina Ryan <aliryan@redhat.com>
2025-03-13 20:47:09 -04:00
Sébastien Han
98b1b15e0f
refactor: move all datetime.now() calls to UTC (#1589)
# What does this PR do?

Updated all instances of `datetime.now()` to use `timezone.utc` for
consistency in handling time across different systems. This ensures that
timestamps are always in Coordinated Universal Time (UTC), avoiding
issues with time zone discrepancies and promoting uniformity in
time-related data.
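
The change amounts to the difference below, using only the standard library:

```python
from datetime import datetime, timezone

naive = datetime.now()               # local wall-clock time, tzinfo=None
aware = datetime.now(timezone.utc)   # explicit UTC, unambiguous everywhere

print(naive.tzinfo)  # None
print(aware.tzinfo)  # UTC
```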

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-03-13 15:34:53 -07:00
Yuan Tang
b906bad238
docs: Add OpenAI, Anthropic, Gemini to inference API providers table (#1622)
# What does this PR do?

Forgot to update this page as well as part of
https://github.com/meta-llama/llama-stack/pull/1617.

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-03-13 15:28:52 -07:00
Charlie Doern
a062723d03
feat: add provider API for listing and inspecting provider info (#1429)
# What does this PR do?

Currently the `inspect` API for providers is really a `list` API. Create
a new `providers` API which has a GET `providers/{provider_id}` inspect
endpoint that returns a "user friendly" configuration to the end user.
Also add a GET `/providers` endpoint which returns the list of providers,
as `inspect/providers` does today.

This API follows CRUD and is more intuitive/RESTful.
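
A sketch of the two endpoints as described above (the `/v1` route prefix is an assumption):

```python
import httpx

BASE = "http://localhost:8321"

# List all providers.
providers = httpx.get(f"{BASE}/v1/providers").json()

# Inspect one provider's user-friendly (redacted) configuration.
ollama = httpx.get(f"{BASE}/v1/providers/ollama").json()
print(ollama)
```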

This work is part of the RFC at
https://github.com/meta-llama/llama-stack/pull/1359

Sensitive fields are redacted using `redact_sensetive_fields` on the
server side before returning a response:

![Screenshot 2025-03-13 at 4 40 21 PM](https://github.com/user-attachments/assets/9465c221-2a26-42f8-a08a-6ac4a9fecce8)


## Test Plan

Using https://github.com/meta-llama/llama-stack-client-python/pull/181, a
user is able to run the following:

`llama stack build --template ollama --image-type venv`
`llama stack run --image-type venv
~/.llama/distributions/ollama/ollama-run.yaml`
`llama-stack-client providers inspect ollama`

![Screenshot 2025-03-13 at 4 39 35 PM](https://github.com/user-attachments/assets/8273d05d-8bc3-44c6-9e4b-ef95e48d5466)


Also, I was able to run the new test_list integration test locally with
Ollama:

![Screenshot 2025-03-13 at 11 03 40 AM](https://github.com/user-attachments/assets/9b9db166-f02f-45b0-86a4-306d85149bc8)

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-03-13 15:07:21 -07:00
dependabot[bot]
e101d15f12
build(deps): bump astral-sh/setup-uv from 4 to 5 (#1620) 2025-03-13 16:40:15 -04:00
Ihar Hrachyshka
a3d710e59c
chore: Always check that git merge conflict markers are not present (#1610)
# What does this PR do?

Before this change, the check only ran during a merge.


## Test Plan

```
$ git checkout d263edbf90
$ pre-commit run --all-files
check for merge conflicts................................................Failed
- hook id: check-merge-conflict
- exit code: 1

docs/_static/llama-stack-spec.yaml:3179: Merge conflict string '<<<<<<<' found
docs/_static/llama-stack-spec.yaml:3185: Merge conflict string '=======' found
docs/_static/llama-stack-spec.yaml:3190: Merge conflict string '>>>>>>>' found
[...]
```


Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-03-13 13:19:44 -07:00
ehhuang
ed841380dc
test: turn off recordable mock for now (#1616)
Summary:
Will figure out how to do this best; turning it off for now.

Test Plan:
test_agents.py
---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with
[ReviewStack](https://reviewstack.dev/meta-llama/llama-stack/pull/1616).
* __->__ #1616
* #1615
2025-03-13 13:18:08 -07:00
Yuan Tang
a1bb7c8d82
docs: Add OpenAI, Anthropic, Gemini to API providers table (#1617)
# What does this PR do?

These are supported via
https://github.com/meta-llama/llama-stack/pull/1267.

cc @ashwinb

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-03-13 15:47:58 -04:00
Sébastien Han
28aade9a27
ci: add GitHub Action to close stale issues and PRs (#1613)
# What does this PR do?

- Issues/PRs inactive for 60 days are marked as stale
- Stale items are closed after 30 additional days of inactivity
- Adds appropriate warning and closing messages
- Sets daily schedule for stale checks

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-03-13 12:09:04 -07:00
Sébastien Han
edfcb02a0e
ci(ollama): add GitHub Actions workflow for integration tests (#1546)
# What does this PR do?

Added a GitHub Action to run inference tests for the Ollama provider.
This ensures we have coverage for Ollama integration.

---------

Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2025-03-13 12:04:53 -07:00
ehhuang
42788a9d50
test: re record responses after client sync (#1615)
Summary:

Test Plan:
LLAMA_STACK_CONFIG=fireworks pytest -s -v
tests/integration/agents/test_agents.py --safety-shield
meta-llama/Llama-Guard-3-8B --text-model
meta-llama/Llama-3.1-8B-Instruct --record-responses
2025-03-13 11:21:10 -07:00
Xi Yan
98811cc034
fix: clean up test imports (#1600)
# What does this PR do?
- Clean up dead SDK code in
https://github.com/meta-llama/llama-stack-client-python/pull/198
- Regen for local cache key issue


## Test Plan
```
pytest -v -s --nbval-lax ./docs/getting_started.ipynb

LLAMA_STACK_CONFIG=fireworks pytest -v tests/integration/ --text-model meta-llama/Llama-3.3-70B-Instruct
```

- CI:
1382351211
![image](https://github.com/user-attachments/assets/1a2de383-35a2-47a0-8d80-d666d4970c34)


2025-03-13 11:01:52 -07:00
Sébastien Han
5e54113b19
ci: add dynamic CI job to test templates (#1230)
# What does this PR do?

Introduced a new CI job that dynamically generates a build matrix based
on available templates from `llama_stack/templates/*/build.yaml`.

This allows automated testing for all templates without manual
intervention.

The CI currently builds for venv and containers.

~Will pass once https://github.com/meta-llama/llama-stack/pull/1228
merges.~

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-03-13 10:14:01 -07:00
Xi Yan
9617468d13
fix: passthrough provider template + fix (#1612)
# What does this PR do?

- Fix an issue with the passthrough provider

## Test Plan
llama stack run

2025-03-13 09:44:26 -07:00
Ashwin Bharambe
d072b5fa0c
test: add unit test to ensure all config types are instantiable (#1601) 2025-03-12 22:29:58 -07:00
ehhuang
0a0d6cb96e
fix: openapi spec gen (#1602)
Summary:

Test Plan:
sh docs/openapi_generator/run_openapi_generator.sh
2025-03-12 21:55:05 -07:00