Commit graph

18 commits

Author SHA1 Message Date
Dinesh Yeduguru
afb81da91a feat: add optional metrics to all responses 2025-02-11 10:36:33 -08:00
Ashwin Bharambe
b17277b06a Fix the OpenAPI HTML 2025-02-04 10:38:49 -08:00
Hardik Shah
a7b929f17e
Sec fixes as raised by bandit (#917)
minor fixes to hashlib and jinja
2025-01-31 13:44:26 -08:00
Xi Yan
15dcc4ea5e
openapi gen return type fix for streaming/non-streaming (#910)
# What does this PR do?

We need to change

```yaml
/v1/inference/chat-completion:
    post:
      responses:
        '200':
          description: >-
            If stream=False, returns a ChatCompletionResponse with the full completion.
            If stream=True, returns an SSE event stream of ChatCompletionResponseStreamChunk
          content:
            text/event-stream:
              schema:
                oneOf:
                  - $ref: '#/components/schemas/ChatCompletionResponse'
                  - $ref: '#/components/schemas/ChatCompletionResponseStreamChunk'
```

into

```yaml
/v1/inference/chat-completion:
    post:
      responses:
        '200':
          description: >-
            If stream=False, returns a ChatCompletionResponse with the full completion.
            If stream=True, returns an SSE event stream of ChatCompletionResponseStreamChunk
          content:
            text/event-stream:
              schema:
                $ref: '#/components/schemas/ChatCompletionResponseStreamChunk'
            application/json:
              schema:
                $ref: '#/components/schemas/ChatCompletionResponse'
```

## Test Plan

**Python**
- tested in SDK sync:
https://github.com/meta-llama/llama-stack-client-python/pull/108

**Node**
- tested w/
https://gist.github.com/yanxi0830/b782f4b91e21dcccdfef8898ce55157e (SDK
update follow-up)


2025-01-30 18:03:02 -08:00
Ashwin Bharambe
0d96070af9
Update OpenAPI generator to add param and field documentation (#896)
We desperately need to document our APIs. This is the basic requirement
of having a Spec :)

This PR updates the OpenAPI generator so documentation for request
parameters and object fields can be properly added to the OpenAPI specs.
From there, this should get picked up by Stainless, etc.
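
As a rough illustration (hypothetical snippet, not the actual llama-stack code), the generator can now pick up descriptions from field metadata on schema classes and from `:param:`-style docstrings on protocol methods:

```python
from typing import Protocol

from pydantic import BaseModel, Field


class ChatCompletionRequest(BaseModel):
    # Field descriptions can be surfaced in the generated OpenAPI schema.
    model: str = Field(description="Identifier of the model to use")
    stream: bool = Field(default=False, description="Whether to stream the response")


class Inference(Protocol):
    async def chat_completion(self, request: ChatCompletionRequest):
        """Generate a chat completion for the given request.

        :param request: The chat completion request (model, messages, stream flag).
        :returns: A ChatCompletionResponse, or an event stream if stream=True.
        """
        ...
```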

## Test Plan:

Updated client-sdk (See
https://github.com/meta-llama/llama-stack-client-python/pull/104) and
then ran:

```bash
cd tests/client-sdk
LLAMA_STACK_CONFIG=../../llama_stack/templates/fireworks/run.yaml pytest -s -v inference/test_inference.py agents/test_agents.py
```
2025-01-29 10:04:30 -08:00
Ashwin Bharambe
9f709387e2 Kill X-LlamaStack-{Client-Version, Provider-Data} from OpenAPI spec
ClientVersion: We don't need each SDK method to support this parameter
because you wouldn't be passing a different client version each time you
make an API call.

ProviderData: although in this case, you _could_ be passing different
API keys depending on which SDK call you make, it makes for a confusing
experience. It is best to initialize the LlamaStackClient with all the
keys, which are then passed along with each request.
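
For context, a minimal sketch of the recommended pattern (hypothetical snippet; the SDK may expose a dedicated `provider_data` argument instead of raw headers):

```python
import json

from llama_stack_client import LlamaStackClient

# Supply provider keys once when constructing the client; they are sent
# with every request instead of being passed to individual SDK methods.
# The header name matches the spec; the key name below is illustrative.
client = LlamaStackClient(
    base_url="http://localhost:5000",
    default_headers={
        "X-LlamaStack-Provider-Data": json.dumps({"fireworks_api_key": "<key>"}),
    },
)
```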
2025-01-28 13:30:23 -08:00
Ashwin Bharambe
ec3ebb5bcf
Use ruamel.yaml to format the OpenAPI spec (#892)
Stainless ends up reformatting the YAML when we paste it in the Studio.
We cannot have that happen if we are going to ever partially automate
stainless config updates.

Try ruamel.yaml, specifically `block_seq_indent` to avoid that.
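
A minimal sketch of the idea, using ruamel.yaml's documented `indent()` API (the commit itself adjusts `block_seq_indent`, so the exact knob may differ):

```python
from ruamel.yaml import YAML

spec = {"paths": {"/v1/inference/chat-completion": {"post": {"responses": {}}}}}

yaml = YAML()
yaml.default_flow_style = False
# Indent block sequences under their parent key so the emitted YAML keeps
# a stable shape and survives a round-trip through Stainless Studio.
yaml.indent(mapping=2, sequence=4, offset=2)

with open("llama-stack-spec.yaml", "w") as f:
    yaml.dump(spec, f)
```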
2025-01-28 11:27:40 -08:00
Ashwin Bharambe
1a7490470a
[memory refactor][3/n] Introduce RAGToolRuntime as a specialized sub-protocol (#832)
See https://github.com/meta-llama/llama-stack/issues/827 for the broader
design.

Third part:
- we need to make `tool_runtime.rag_tool.query_context()` and
`tool_runtime.rag_tool.insert_documents()` methods work smoothly with
complete type safety. To that end, we introduce a sub-resource path
`tool-runtime/rag-tool/` and make changes to the resolver to make things
work.
- the PR updates the agents implementation to directly call these typed
APIs for memory accesses rather than going through the complex, untyped
"invoke_tool" API. The code looks much nicer and simpler (expectedly).
- there are a number of hacks in the server resolver implementation
still; we will live with some and fix some

Note that we must make sure the client SDKs are able to handle this
subresource complexity also. Stainless has support for subresources, so
this should be possible but beware.
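
For illustration, a hedged before/after of an agent-side memory access (the method names come from this PR; the arguments are illustrative, not the actual signatures):

```python
from typing import Any


async def retrieve_context(tool_runtime: Any) -> None:
    # Before: the untyped invoke_tool API -- arguments travel as a free-form
    # dict, so shape mismatches only surface at runtime.
    await tool_runtime.invoke_tool(
        tool_name="memory",
        kwargs={"query": "What is the capital of France?", "bank_id": "docs"},
    )

    # After: typed calls on the rag_tool sub-resource; query_context() and
    # insert_documents() are the methods named in this PR, but the arguments
    # shown here are only a guess at their shape.
    await tool_runtime.rag_tool.query_context(
        content="What is the capital of France?",
        vector_db_ids=["docs"],
    )
```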

## Test Plan

Our RAG test is sad (it doesn't actually check the RAG output), but I
verified that the implementation works. I will work on fixing the RAG
test afterwards.

```bash
pytest -s -v tests/agents/test_agents.py -k "rag and together" --safety-shield=meta-llama/Llama-Guard-3-8B
```
2025-01-22 10:04:16 -08:00
Dinesh Yeduguru
7fb2c1c48d
More idiomatic REST API (#765)
# What does this PR do?

This PR changes our API to follow more idiomatic REST API approaches of
having paths being resources and methods indicating the action being
performed.
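
As a rough sketch of the direction (illustrative routes and an assumed `webmethod` import path, not the exact endpoints touched here):

```python
from typing import Protocol

from llama_models.schema_utils import webmethod  # assumed import path


class Models(Protocol):
    # Old style: the action is spelled out in the path, e.g.
    #   @webmethod(route="/models/get", method="GET")
    # New style: the path names the resource and the HTTP method names the action.
    @webmethod(route="/models/{model_id}", method="GET")
    async def get_model(self, model_id: str): ...

    @webmethod(route="/models/{model_id}", method="DELETE")
    async def unregister_model(self, model_id: str): ...
```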

Changes made to generator:
1) removed the "get" prefix check since it's not required, and the change is
actually needed for other method types too
2) removed the "_" check on paths, since variables can contain "_"



## Test Plan

LLAMA_STACK_BASE_URL=http://localhost:5000 pytest -v
tests/client-sdk/agents/test_agents.py
2025-01-15 13:20:09 -08:00
Ashwin Bharambe
b78e6675ea llama-stack version alpha -> v1 2025-01-15 05:58:09 -08:00
Ashwin Bharambe
ffc6bd4805
Add X-LlamaStack-Client-Version, rename ProviderData -> Provider-Data (#735)
Add another header so client SDKs can identify their versions, which can
be used for immediate detection of possible compatibility issues. A
semver mismatch against the wrong server should be immediately flagged
and requests should be denied.

Also change `X-LlamaStack-ProviderData` to `X-LlamaStack-Provider-Data`
since that hyphenation is better.
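
A minimal sketch of the kind of server-side check this header enables (hypothetical; the enforcement logic itself is not part of this change):

```python
from packaging.version import Version

SERVER_VERSION = Version("0.1.0")  # illustrative server version


def check_client_version(headers: dict[str, str]) -> None:
    """Deny requests whose SDK major version doesn't match the server's."""
    raw = headers.get("X-LlamaStack-Client-Version")
    if raw is None:
        return  # clients that don't send the header are let through
    if Version(raw).major != SERVER_VERSION.major:
        raise ValueError(
            f"client version {raw} is incompatible with server {SERVER_VERSION}"
        )
```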
2025-01-09 11:51:36 -08:00
Xi Yan
d97cfaa9d9
[docs] add openapi spec to docs (#508)
# What does this PR do?
- modify the OpenAPI generator to add a "coming soon" tag for unimplemented APIs
- add the sphinx-redocs extension to render the OpenAPI spec on the readthedocs page

## Test Plan



https://github.com/user-attachments/assets/b4c7eebc-2361-4198-a987-dbfbcff914cf
2024-11-22 17:54:32 -08:00
Ashwin Bharambe
8ed79ad0f3 Fix the pyopenapi generator to avoid potential circular imports 2024-11-18 23:37:52 -08:00
Ashwin Bharambe
0dc7f5fa89
Add version to REST API url (#478)
# What does this PR do? 

Adds a `/alpha/` prefix to all the REST API urls.

Also makes them all use hyphens instead of underscores as is more
standard practice.

(This is based on feedback from our partners.)

## Test Plan 

The Stack itself does not need updating. However, client SDKs and
documentation will need to be updated.
2024-11-18 22:44:14 -08:00
Ashwin Bharambe
bba6edd06b Fix OpenAPI generation to have text/event-stream for streamable methods 2024-11-14 12:51:38 -08:00
Ashwin Bharambe
37b330b4ef
add dynamic clients for all APIs (#348)
* add dynamic clients for all APIs

* fix openapi generator

* inference + memory + agents tests now pass with "remote" providers

* Add docstring which fixes openapi generator :/
2024-10-31 14:46:25 -07:00
Ashwin Bharambe
ec4fc800cc
[API Updates] Model / shield / memory-bank routing + agent persistence + support for private headers (#92)
This is yet another of those large PRs (hopefully we will have fewer and fewer of them as things mature). This one introduces substantial improvements and some simplifications to the stack.

Most important bits:

* Agents reference implementation now has support for session / turn persistence. The default implementation uses sqlite but there's also support for using Redis.

* We have re-architected the structure of the Stack APIs to allow for more flexible routing. The motivating use cases are:
  - routing model A to ollama and model B to a remote provider like Together
  - routing shield A to local impl while shield B to a remote provider like Bedrock
  - routing a vector memory bank to Weaviate while routing a keyvalue memory bank to Redis

* Support for provider-specific parameters to be passed from the clients. A client can pass data using the `x_llamastack_provider_data` parameter, which can be type-checked and provided to the Adapter implementations.
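
For illustration, a rough sketch of how an adapter might consume that per-request provider data (the validator name and key are hypothetical):

```python
from typing import Optional

from pydantic import BaseModel


class FireworksProviderData(BaseModel):
    # Hypothetical shape of the provider-specific data a client may pass.
    fireworks_api_key: str


def resolve_api_key(provider_data: Optional[dict], configured_key: Optional[str]) -> str:
    """Prefer a per-request key from the client; fall back to the server config."""
    if provider_data:
        return FireworksProviderData(**provider_data).fireworks_api_key
    if configured_key:
        return configured_key
    raise ValueError("no API key available from provider data or server config")
```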
2024-09-23 14:22:22 -07:00
Xi Yan
2c1ad10710 move openapi from rfcs->docs 2024-09-18 16:09:17 -07:00