# Website

This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
### Installation

```
$ yarn
```
### Local Development

```
$ yarn start
```

This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
### Build

```
$ yarn build
```

This command generates static content into the `build` directory, which can be served using any static content hosting service.
### Deployment

Using SSH:

```
$ USE_SSH=true yarn deploy
```

Not using SSH:

```
$ GIT_USER=<Your GitHub username> yarn deploy
```
If you are using GitHub Pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.
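
As an alternative to running `yarn deploy` by hand, the same build-and-publish flow can be automated in CI. Below is a minimal sketch of a GitHub Actions workflow; the file path, trigger branch, Node version, and the `peaceiris/actions-gh-pages` action are illustrative choices, not part of this repository:

```yml
# .github/workflows/deploy.yml (hypothetical example)
name: Deploy to GitHub Pages

on:
  push:
    branches: [main]   # adjust to this repo's default branch

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      # Install dependencies and build the static site into build/
      - run: yarn install --frozen-lockfile
      - run: yarn build
      # Publish build/ to the gh-pages branch
      - uses: peaceiris/actions-gh-pages@v4
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./build
```

This mirrors what `yarn deploy` does locally (build, then push the output to `gh-pages`), but runs on every push so the published site stays in sync with the source.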