mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-06-28 02:53:30 +00:00
chore(ui): use proxy server for backend API calls; simplified k8s deployment (#2350)
# What does this PR do?

- no more CORS middleware needed

## Test Plan

### Local test

```sh
llama stack run starter --image-type conda
npm run dev
```

Verify the UI works in the browser.

### Deploy to k8s

Temporarily change ui-k8s.yaml.template to load from the PR commit.

<img width="604" alt="image" src="https://github.com/user-attachments/assets/87fa2e52-1e93-4e32-9e0f-5b283b7a37b3" />

```sh
sh ./apply.sh
kubectl get services
```

Go to external_ip:8322 and play around with the UI.

<img width="1690" alt="image" src="https://github.com/user-attachments/assets/5b7ec827-4302-4435-a9eb-df423676d873" />
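Since the browser now calls same-origin `/api` paths instead of hitting the backend directly, the Next.js server must forward those requests to the llama-stack backend. A minimal sketch of how such a proxy could be wired up with Next.js rewrites — the file name, env var `LLAMA_STACK_BACKEND_URL`, and default port are assumptions for illustration; the PR may implement the proxy differently (e.g. via a route handler):

```typescript
// next.config.ts — hypothetical same-origin proxy sketch.
// Assumes the llama-stack backend listens on http://localhost:8321 unless
// LLAMA_STACK_BACKEND_URL says otherwise; rewrites happen server-side, so the
// browser only ever talks to the UI's own origin and no CORS headers are needed.
import type { NextConfig } from "next";

const backend = process.env.LLAMA_STACK_BACKEND_URL ?? "http://localhost:8321";

const nextConfig: NextConfig = {
  async rewrites() {
    // Forward every /api/* request to the backend, stripping the /api prefix.
    return [{ source: "/api/:path*", destination: `${backend}/:path*` }];
  },
};

export default nextConfig;
```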
This commit is contained in:
parent 7c1998db25
commit d96f6ec763
5 changed files with 109 additions and 17 deletions
```diff
@@ -1,5 +1,6 @@
 import LlamaStackClient from "llama-stack-client";

 export const client = new LlamaStackClient({
-  baseURL: process.env.NEXT_PUBLIC_LLAMA_STACK_BASE_URL,
+  baseURL:
+    typeof window !== "undefined" ? `${window.location.origin}/api` : "/api",
 });
```
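The new baseURL resolution can be sketched as a small standalone helper (`resolveBaseURL` is a hypothetical name; the diff inlines the ternary directly in the client constructor):

```typescript
// Hypothetical helper mirroring the diff's baseURL logic: in the browser,
// requests go to the page's own origin under /api (same-origin, so no CORS
// preflight is triggered); during server-side rendering there is no window,
// so a relative "/api" path is used instead.
function resolveBaseURL(origin?: string): string {
  return origin !== undefined ? `${origin}/api` : "/api";
}

// e.g. resolveBaseURL("http://localhost:8322") === "http://localhost:8322/api"
//      resolveBaseURL() === "/api"
```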