mirror of https://github.com/BerriAI/litellm.git (synced 2025-04-25 10:44:24 +00:00)

docs openai codex with litellm

parent 685fcb6b16, commit 5845b5c657: 1 changed file with 89 additions and 18 deletions

import Image from '@theme/IdealImage';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
# Using LiteLLM with OpenAI Codex
This tutorial walks you through connecting [OpenAI Codex](https://github.com/openai/codex) to LiteLLM Proxy, so you can use a wide range of LLM models (including Gemini) through the Codex interface. Using LiteLLM with Codex allows teams to:

- Access 100+ LLMs through the Codex interface
- Use powerful models like Gemini through a familiar interface
- Track spend and usage with LiteLLM's built-in analytics
- Control model access with virtual keys
<Image img={require('../../img/litellm_codex.gif')} />
## Quickstart

Make sure to set up LiteLLM with the [LiteLLM Getting Started Guide](../proxy/docker_quick_start.md), and have Node.js and npm installed.

## 1. Install OpenAI Codex

Install the OpenAI Codex CLI tool globally using npm or yarn:

<Tabs>
<TabItem value="npm" label="npm">
```bash
npm i -g @openai/codex
```
</TabItem>
<TabItem value="yarn" label="yarn">
```bash
yarn global add @openai/codex
```
</TabItem>
</Tabs>
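
You can confirm the install worked with a quick sanity check that the `codex` binary is on your PATH (the package-listing command below assumes you installed with npm; use the yarn equivalent otherwise):

```bash
# Check that the codex binary is available
which codex

# If you installed with npm, confirm the global package is present
npm ls -g @openai/codex
```
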
## 2. Start LiteLLM Proxy
<Tabs>
<TabItem value="docker" label="Docker">
```bash
docker run \
-v $(pwd)/litellm_config.yaml:/app/config.yaml \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-latest \
--config /app/config.yaml
```
</TabItem>
<TabItem value="pip" label="LiteLLM CLI">
```bash
litellm --config /path/to/config.yaml
```
</TabItem>
</Tabs>
LiteLLM should now be running on [http://localhost:4000](http://localhost:4000)
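
Before moving on, you can optionally check that the proxy is reachable by listing the models it exposes (this assumes the example key `sk-1234` used throughout this guide):

```bash
# List the models available through the proxy
curl http://localhost:4000/v1/models \
  -H "Authorization: Bearer sk-1234"
```
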
## 3. Configure LiteLLM for Model Routing
Ensure your LiteLLM Proxy is properly configured to route to your desired models. Create a `litellm_config.yaml` file with the following content:

```yaml
model_list:
  # Example provider entries; adjust these to the providers and models you want to expose
  - model_name: openai/*
    litellm_params:
      model: openai/*
      api_key: os.environ/OPENAI_API_KEY
  - model_name: anthropic/*
    litellm_params:
      model: anthropic/*
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: gemini/*
    litellm_params:
      model: gemini/*
      api_key: os.environ/GEMINI_API_KEY

litellm_settings:
  drop_params: true  # drop OpenAI-specific params that other providers don't support
```

This configuration enables routing to OpenAI, Anthropic, and Gemini models.
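
If you want to verify the routing before wiring up Codex, you can send a test request directly to the proxy (an optional check, using the same example key and the Gemini model from this guide):

```bash
# Test request through the proxy's OpenAI-compatible endpoint
curl http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini/gemini-2.0-flash",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```
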
## 4. Configure Codex to Use LiteLLM Proxy
Set the required environment variables to point Codex to your LiteLLM Proxy:
```bash
# Point to your LiteLLM Proxy server
export OPENAI_BASE_URL=http://0.0.0.0:4000
# Use your LiteLLM API key (if you've set up authentication)
export OPENAI_API_KEY="sk-1234"
```
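
These exports only apply to your current shell session. If you want Codex to always go through LiteLLM, you can persist them in your shell profile (a sketch for zsh; adapt the file for bash or another shell):

```bash
# Persist the LiteLLM proxy settings for future shell sessions (zsh example)
echo 'export OPENAI_BASE_URL=http://0.0.0.0:4000' >> ~/.zshrc
echo 'export OPENAI_API_KEY="sk-1234"' >> ~/.zshrc
source ~/.zshrc
```
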
## 5. Run Codex with Gemini
With everything configured, you can now run Codex with Gemini:

```bash
codex --model gemini/gemini-2.0-flash --full-auto
```

The `--full-auto` flag allows Codex to automatically generate code without additional prompting.
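
You can also hand Codex an initial task directly from the command line (assuming your Codex version accepts a prompt as a positional argument; the prompt below is just an example):

```bash
# Example: ask Codex (backed by Gemini via LiteLLM) to scaffold a small script
codex --model gemini/gemini-2.0-flash --full-auto "write a Python script that prints the current date"
```
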
## 6. Advanced Options
### Using Different Models
You can use any model configured in your LiteLLM proxy:
```bash
# Use Claude models
codex --model anthropic/claude-3-opus-20240229
# Use OpenAI models
codex --model openai/gpt-4o
```
### Enabling Debugging
For troubleshooting, enable verbose output:
```bash
export DEBUG=1
codex --model gemini/gemini-2.0-flash --full-auto
```
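
If the problem looks like it is on the proxy side rather than in Codex, you can also restart LiteLLM with detailed debug logging to see incoming requests and provider calls:

```bash
# Run the proxy with verbose request/response logging
litellm --config /path/to/config.yaml --detailed_debug
```
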
## Troubleshooting
- If you encounter connection issues, ensure your LiteLLM Proxy is running and accessible at the specified URL (a quick connectivity check is shown after this list)
- Verify your LiteLLM API key is valid if you're using authentication
- Check that your model routing configuration is correct
- For model-specific errors, ensure the model is properly configured in your LiteLLM setup
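
A minimal connectivity check against the proxy (assuming it is running locally on the default port and that the default health endpoints are enabled):

```bash
# Liveliness probe: should return a simple "alive"-style response if the proxy is up
curl http://localhost:4000/health/liveliness
```
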
## Additional Resources
- [LiteLLM Docker Quick Start Guide](../proxy/docker_quick_start.md)
- [OpenAI Codex GitHub Repository](https://github.com/openai/codex)
- [LiteLLM Virtual Keys and Authentication](../proxy/virtual_keys.md)