docs(proxy_server.md): add docker image details to docs

Krrish Dholakia 2023-10-11 08:27:25 -07:00
parent 24ea5e00b7
commit ca7e2f6a05
2 changed files with 21 additions and 9 deletions


@@ -240,7 +240,7 @@ task = Task(agent, name="my-llm-task")
task.run()
```
-Credits [@pchalasani](https://github.com/pchalasani) for this tutorial.
+Credits [@pchalasani](https://github.com/pchalasani) and [Langroid](https://github.com/langroid/langroid) for this tutorial.
</TabItem>
</Tabs>
@@ -323,6 +323,18 @@ This will return your logs from `~/.ollama/logs/server.log`.
### Deploy Proxy
<Tabs>
<TabItem value="docker" label="Ollama/OpenAI Docker">
Use this to deploy local models with Ollama behind an OpenAI-compatible API.
It works for models like Mistral, Llama2, CodeLlama, etc. (any model supported by [Ollama](https://ollama.ai/library)).
**usage**
```shell
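# start the litellm/ollama container (an OpenAI-compatible server for local Ollama models)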
docker run --name ollama litellm/ollama
```
More details 👉 https://hub.docker.com/r/litellm/ollama
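Once the container is running, you can send requests to its OpenAI-compatible endpoint. A minimal sketch, assuming the proxy is published on port 8000 and serves a `/chat/completions` route (the port, route, and model name here are assumptions; check the Docker Hub page above for the image's actual configuration):

```shell
# publish the container's port on the host (8000 is an assumed default)
docker run --name ollama -p 8000:8000 litellm/ollama

# send an OpenAI-style chat completion request to the local proxy
curl http://localhost:8000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama2",
    "messages": [{"role": "user", "content": "Hello, which model am I talking to?"}]
  }'
```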
</TabItem>
<TabItem value="self-hosted" label="Self-Hosted">
**Step 1: Clone the repo**
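A sketch of this step, assuming the main LiteLLM repository (use the repo URL from your deployment docs if it differs):

```shell
# clone the LiteLLM source (repository URL is an assumption)
git clone https://github.com/BerriAI/litellm.git
cd litellm
```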


@@ -7,14 +7,14 @@ load_dotenv()
from importlib import resources
import shutil, random
list_of_messages = [
"The thing I wish you improved is...:",
"A feature I really want is...:",
"The worst thing about this product is...:",
"This product would be better if...:",
"I don't like how this works...:",
"It would help me if you could add...:",
"This feature doesn't meet my needs because...:",
"I get frustrated when the product...:",
"'The thing I wish you improved is...'",
"'A feature I really want is...'",
"'The worst thing about this product is...'",
"'This product would be better if...'",
"'I don't like how this works...'",
"'It would help me if you could add...'",
"'This feature doesn't meet my needs because...'",
"'I get frustrated when the product...'",
]
def generate_feedback_box():