llama-stack-mirror/llama_stack/distribution
Vladislav Bronzov 09299e908e
Add windows support for build execution (#889)
# What does this PR do?

This PR implements Windows platform support for `build_container.sh`
execution from the terminal. Additionally, it resolves the "no support for
termios and PTY on Windows" issues.

- [x] Addresses issue (#issue)
Related issues: https://github.com/meta-llama/llama-stack/issues/826,
https://github.com/meta-llama/llama-stack/issues/726
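The core of the fix is that `termios` and `pty` are POSIX-only stdlib modules, so importing them unconditionally breaks on Windows. A minimal sketch of the guarded-import pattern (the actual module layout in `build.py` may differ; this is illustrative, not the exact code from the PR):

```python
import sys

# termios and pty exist only on POSIX; guard the imports so this
# module can still be loaded on Windows, where they are unavailable.
if sys.platform != "win32":
    import pty
    import termios
```

Code paths that need these modules then check `sys.platform` (or the presence of the modules) before using them, rather than failing at import time.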

## Test Plan

Changes were tested manually by running the standard commands from the Llama
Stack guide:
- `llama stack build --template ollama --image-type container`
- `llama stack build --list-templates`
- `llama stack build`
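The execution-side fallback can be sketched as a small helper that attaches a pseudo-terminal on POSIX (so interactive build output renders correctly) and falls back to a plain subprocess on Windows. The function name and signature here are hypothetical, chosen for illustration under the assumption of Python 3.9+ (`os.waitstatus_to_exitcode`):

```python
import os
import subprocess
import sys


def run_command(cmd: list[str], use_pty: bool = (sys.platform != "win32")) -> int:
    """Run cmd and return its exit code.

    On POSIX, spawn the process under a pseudo-terminal; on Windows,
    where termios/pty do not exist, fall back to a plain subprocess.
    """
    if not use_pty:
        return subprocess.run(cmd).returncode
    import pty  # POSIX-only; imported lazily so this module loads on Windows

    status = pty.spawn(cmd)
    return os.waitstatus_to_exitcode(status)
```

The lazy `import pty` mirrors the guarded-import approach: the Windows branch never touches the POSIX-only modules.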

## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2025-01-28 07:41:41 -08:00
routers [memory refactor][6/n] Update naming and routes (#839) 2025-01-22 10:39:13 -08:00
server [memory refactor][3/n] Introduce RAGToolRuntime as a specialized sub-protocol (#832) 2025-01-22 10:04:16 -08:00
store Update OpenAPI generator to output discriminator (#848) 2025-01-22 22:15:23 -08:00
ui Sambanova inference provider (#555) 2025-01-23 12:20:28 -08:00
utils Add windows support for build execution (#889) 2025-01-28 07:41:41 -08:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
build.py Add windows support for build execution (#889) 2025-01-28 07:41:41 -08:00
build_conda_env.sh Make llama stack build not create a new conda by default (#788) 2025-01-16 13:44:53 -08:00
build_container.sh Ensure llama stack build --config <> --image-type <> works (#879) 2025-01-25 11:13:36 -08:00
build_venv.sh Miscellaneous fixes around telemetry, library client and run yaml autogen 2024-12-08 20:40:22 -08:00
client.py use API version in "remote" stack client 2024-11-19 15:59:47 -08:00
common.sh API Updates (#73) 2024-09-17 19:51:35 -07:00
configure.py [remove import *] clean up import *'s (#689) 2024-12-27 15:45:44 -08:00
configure_container.sh More generic image type for OCI-compliant container technologies (#802) 2025-01-17 16:37:42 -08:00
datatypes.py [memory refactor][1/n] Rename Memory -> VectorIO, MemoryBanks -> VectorDBs (#828) 2025-01-22 09:59:30 -08:00
distribution.py [memory refactor][1/n] Rename Memory -> VectorIO, MemoryBanks -> VectorDBs (#828) 2025-01-22 09:59:30 -08:00
inspect.py REST API fixes (#789) 2025-01-16 13:47:08 -08:00
library_client.py remove logger handler only in notebook (#868) 2025-01-23 16:58:17 -08:00
request_headers.py Add X-LlamaStack-Client-Version, rename ProviderData -> Provider-Data (#735) 2025-01-09 11:51:36 -08:00
resolver.py [memory refactor][3/n] Introduce RAGToolRuntime as a specialized sub-protocol (#832) 2025-01-22 10:04:16 -08:00
stack.py [memory refactor][3/n] Introduce RAGToolRuntime as a specialized sub-protocol (#832) 2025-01-22 10:04:16 -08:00
start_conda_env.sh Make llama stack build not create a new conda by default (#788) 2025-01-16 13:44:53 -08:00
start_container.sh Ensure llama stack build --config <> --image-type <> works (#879) 2025-01-25 11:13:36 -08:00