feat: introduce API leveling, post_training to v1alpha

Rather than having a single `LLAMA_STACK_VERSION`, we need separate `_V1`, `_V1ALPHA`, and `_V1BETA` constants.

This also necessitated adding `level` to `WebMethod` so that routing can be handled properly.

Move post_training to `v1alpha`, as it is under heavy development and not near its final state.
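The leveling described above could be sketched as follows. This is an illustrative sketch only: the names `WebMethod`, `full_path`, and the `LLAMA_STACK_*` constants are assumptions modeled on the commit message, not the exact implementation.

```python
# Hypothetical sketch of API leveling: each endpoint declares a level,
# and routing derives the URL prefix from it. Names are illustrative.
from dataclasses import dataclass

LLAMA_STACK_V1 = "v1"
LLAMA_STACK_V1ALPHA = "v1alpha"
LLAMA_STACK_V1BETA = "v1beta"


@dataclass
class WebMethod:
    route: str
    method: str = "POST"
    level: str = LLAMA_STACK_V1  # stable APIs default to v1

    @property
    def full_path(self) -> str:
        # Routing prepends the API level to the route.
        return f"/{self.level}{self.route}"


cancel = WebMethod(route="/post-training/job/cancel", level=LLAMA_STACK_V1ALPHA)
print(cancel.full_path)  # /v1alpha/post-training/job/cancel
```

With a scheme like this, moving an API between levels is a one-field change on its `WebMethod` declarations rather than an edit to every route string.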

Signed-off-by: Charlie Doern <cdoern@redhat.com>
Charlie Doern 2025-09-12 13:23:57 -04:00
parent 6b855af96f
commit 8095602697
9 changed files with 37 additions and 29 deletions

@@ -172,7 +172,7 @@
}
}
},
"/v1/post-training/job/cancel": {
"/v1alpha/post-training/job/cancel": {
"post": {
"responses": {
"200": {
@@ -2035,7 +2035,7 @@
]
}
},
"/v1/post-training/job/artifacts": {
"/v1alpha/post-training/job/artifacts": {
"get": {
"responses": {
"200": {
@@ -2078,7 +2078,7 @@
]
}
},
"/v1/post-training/job/status": {
"/v1alpha/post-training/job/status": {
"get": {
"responses": {
"200": {
@@ -2121,7 +2121,7 @@
]
}
},
"/v1/post-training/jobs": {
"/v1alpha/post-training/jobs": {
"get": {
"responses": {
"200": {
@@ -4681,7 +4681,7 @@
}
}
},
"/v1/post-training/preference-optimize": {
"/v1alpha/post-training/preference-optimize": {
"post": {
"responses": {
"200": {
@@ -5382,7 +5382,7 @@
}
}
},
"/v1/post-training/supervised-fine-tune": {
"/v1alpha/post-training/supervised-fine-tune": {
"post": {
"responses": {
"200": {