# Website

This website is built using Docusaurus 2, a modern static website generator.

### Installation

```
$ yarn
```

### Local Development

```
$ yarn start
```

This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
### Build

```
$ yarn build
```

This command generates static content into the `build` directory, which can be served by any static content hosting service.
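To sanity-check the production build locally before deploying, you can serve the `build` directory yourself. A minimal sketch, assuming this repo keeps the standard Docusaurus `serve` script in its `package.json`:

```
# Serve the contents of the build/ directory on a local port
$ yarn build
$ yarn serve
```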
### Deployment

Using SSH:

```
$ USE_SSH=true yarn deploy
```

Not using SSH:

```
$ GIT_USER=<Your GitHub username> yarn deploy
```

If you are using GitHub Pages for hosting, this command is a convenient way to build the website and push it to the `gh-pages` branch.
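
For non-interactive deploys (for example from CI), the same command can be driven entirely through environment variables. A minimal sketch, assuming the Docusaurus version in use supports the `GIT_PASS` variable for token-based authentication:

```
# Deploy to the gh-pages branch without an interactive git prompt.
# GIT_USER and GIT_PASS (a personal access token) are assumptions about the
# environment; set them in your CI provider's secret store, not in the repo.
$ GIT_USER=<Your GitHub username> GIT_PASS=<Personal access token> yarn deploy
```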