OpenAI’s Codex CLI is a developer command-line interface that lets you leverage AI models to generate, refactor, and document code directly from your terminal.
What is it?
This video provides a full demo and explanation, and I highly recommend watching it. It shows just how powerful the Codex CLI is: the presenters even build an SPA version of the Mac Photo Booth app from an uploaded screenshot and a short description. It’s pretty amazing.
To start using it in your day-to-day workflow as a developer, you’ll need to install and configure it. The guidelines in the README.md aren’t all that great, so I’m documenting the steps I’ve taken here, along with many undocumented aspects I learned along the way.
To work with the Codex CLI we need to start with a clean setup. Here’s the process as I see it; it’s more detailed than the official documentation and includes some critical (in my mind, required) steps: setting up persistent configuration, adding default context instructions for all commands, and, most importantly, adding ignore rules for input.
Getting Provider API Keys
First you’ll need to set up your API keys for the AI APIs you’ll be working with. As of today there is support for:
openai (default)
openrouter
gemini
ollama
mistral
deepseek
xai
groq
Other providers compatible with the OpenAI API should also work, so there may be more options. I use Gemini and OpenAI, which is a pretty common combination; if you’re planning to use these, you can set up API keys in each provider’s developer dashboard.
API Keys Added to CLI Profile
Once you have those, you’ll need to add them to your shell profile so that they’re available through Node’s process.env globals. I use the Fish shell, so for me this looks like:
# Codex CLI
set -x OPENAI_API_KEY {YOUR API KEY}
set -x GEMINI_API_KEY {YOUR API KEY}
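If you use bash or zsh instead of Fish, the equivalent goes in your ~/.bashrc or ~/.zshrc using export syntax (the quoted values below are placeholders, not real keys):

```shell
# Codex CLI
export OPENAI_API_KEY="{YOUR API KEY}"
export GEMINI_API_KEY="{YOUR API KEY}"
```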
It’s a good idea to verify that these are configured and available. After you save them, reload your shell profile and then confirm they’re good to go with the following test commands:
echo $OPENAI_API_KEY
node -e 'console.log(process.env.OPENAI_API_KEY)'
echo $GEMINI_API_KEY
node -e 'console.log(process.env.GEMINI_API_KEY)'
If you see the correct values printed after running each of these then you’re good to go.
Configuration File
After that we’ll create a config file at ~/.codex/config.json and adjust the settings to suit how you want to use it. Reference the official configuration guide in the openai/codex GitHub repository for all the options. I use either OpenAI’s gpt-4.1 or Gemini’s gemini-exp-1206 (as of May 7th, 2025); I’m sure this will change in the future.
Here’s what my config looks like:
{
  "model": "gpt-4.1",
  "provider": "openai",
  "reasoningEffort": "medium",
  "providers": {
    "openai": {
      "name": "OpenAI",
      "baseURL": "https://api.openai.com/v1",
      "envKey": "OPENAI_API_KEY"
    },
    "gemini": {
      "name": "Gemini",
      "baseURL": "https://generativelanguage.googleapis.com/v1beta/openai",
      "envKey": "GEMINI_API_KEY"
    }
  },
  "history": {
    "maxSize": 10000,
    "saveHistory": true,
    "sensitivePatterns": []
  }
}
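One undocumented gotcha: the ~/.codex directory doesn’t exist on a fresh install, so create it before writing the file. Since the CLI itself runs on Node, a node one-liner is a convenient way to sanity-check that your config parses; the snippet below is just a sketch of that idea:

```shell
# Create the config directory if it doesn't exist yet
mkdir -p ~/.codex

# Once config.json is saved, confirm it's valid JSON before running codex
if [ -f ~/.codex/config.json ]; then
  node -e 'JSON.parse(require("fs").readFileSync(`${process.env.HOME}/.codex/config.json`, "utf8")); console.log("config OK")'
fi
```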
If you want to use Gemini then the top of the config would look like this:
{
  "provider": "gemini",
  "model": "gemini-exp-1206",
  ...
}
These models will probably change and there will be better ones in the future, so be sure to reference the latest Gemini and OpenAI models and choose the best ones for your specific use case, budget, and expected outcomes.
Ignore Rules
Right now there isn’t a standard way to define ignore rules for the Codex CLI, and it doesn’t respect .gitignore either. There are open pull requests that add support for both of these features; hopefully we’ll see them merged into main in the near future.
Global Context with Custom Instructions
Add a ~/.codex/instructions.md file to provide the CLI with global context. Its contents are passed along with every command you run and can include things like:
- use comments minimally, especially when inline
- adhere to any coding style guide configurations found
- inline comments have an empty line above, and are on their own line
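A quick way to seed the file is a heredoc; the contents here are just the example rules above, so adjust them to taste:

```shell
# Create the config directory and write the global instructions file
mkdir -p ~/.codex
cat > ~/.codex/instructions.md <<'EOF'
- use comments minimally, especially when inline
- adhere to any coding style guide configurations found
- inline comments have an empty line above, and are on their own line
EOF
```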
Using it to Read/Write Code
It’s a new tool and I’m actively working with it. After a few weeks I plan to either update this portion of the article or write a new one covering quirks, tips, how effective it is, and what makes using it easier.
Coding modes
One of the most important aspects of the new CLI is the coding mode, which has three different configurations:
--suggest
Generates code suggestions based on your input, but does not modify files automatically. You manually review and apply the suggestions you want.
codex --suggest
--auto-edit
Automatically applies code changes to files based on your prompts, but asks for confirmation before each change.
codex --auto-edit
--full-auto
Fully automates code changes without asking for confirmation, directly applying updates based on your instructions.
codex --full-auto
Pricing & Costs
The big question: how much will using it cost you? Here’s a simple breakdown of what it costs to use each model:
| Model | Input Cost per 1M Tokens | Output Cost per 1M Tokens |
|---|---|---|
| o4-mini | $1.10 | $4.40 |
| o3 | $10.00 | $40.00 |
For example, if you provide about 200 words of input and the tool generates around 300 lines of code, using o4-mini would cost you around $0.003; with o3 it would be around $0.03.
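To see where estimates like these come from, here’s the arithmetic as a small shell helper. The token counts below are assumptions (actual counts depend on the tokenizer and how much code is generated), and estimate_cost is a hypothetical helper for illustration, not part of the CLI:

```shell
# estimate_cost <input tokens> <output tokens> <input $/1M> <output $/1M>
estimate_cost() {
  awk -v i="$1" -v o="$2" -v ip="$3" -v op="$4" \
    'BEGIN { printf "$%.4f\n", i * ip / 1e6 + o * op / 1e6 }'
}

# Assumed token counts: ~300 tokens of input, ~600 tokens of output
estimate_cost 300 600 1.10 4.40    # o4-mini pricing
estimate_cost 300 600 10.00 40.00  # o3 pricing
```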
o4-mini might work fine for simple things like HTML or CSS, but if you’re working with real application code, you’ll probably want o3 for better results.
Note: These prices are accurate as of April 27th, 2025. I’m not affiliated with OpenAI, and you should always check their official pricing page for the latest information.