Updated 04/27/2025 to include details on working with the new OpenAI Codex CLI.
OpenAI recently released an official OpenAI Codex CLI, which is different from the original Python-powered ChatGPT/OpenAI command line tool I originally wrote about here.
If you’ve arrived here looking for this new tool, which is entirely possible, here’s a good summary of how it works.
The codex CLI can be installed as a global npm package like this:
npm install -g @openai/codex
Node.js and npm are required prerequisites of course, but once you’ve installed the global package you’ll need to create a platform.openai.com API key (outlined below) and then set it as an environment variable:
export OPENAI_API_KEY="your-api-key-here"
Add this to your command line profile to make it persist, using one of the following approaches (based on your shell).
Bash:
echo 'export OPENAI_API_KEY="your-api-key-here"' >> ~/.bash_profile
source ~/.bash_profile
PowerShell:
$env:OPENAI_API_KEY="your-api-key-here"
echo '$env:OPENAI_API_KEY="your-api-key-here"' >> $PROFILE
fish:
set -Ux OPENAI_API_KEY "your-api-key-here"
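Once the variable is in place, a quick sanity check (shown here for Bash-compatible shells) confirms that new processes can actually see it:

```shell
# Quick sanity check for Bash-compatible shells: confirm the key is visible.
# "your-api-key-here" is a placeholder; swap in your real key.
export OPENAI_API_KEY="your-api-key-here"

if [ -n "$OPENAI_API_KEY" ]; then
  echo "OPENAI_API_KEY is set"
else
  echo "OPENAI_API_KEY is missing" >&2
fi
```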
“OpenAI Codex CLI is an open‑source command‑line tool that brings the power of our latest reasoning models directly to your terminal. It acts as a lightweight coding agent that can read, modify, and run code on your local machine to help you build features faster, squash bugs, and understand unfamiliar code. Because the CLI runs locally, your source code never leaves your environment”
One of the most important aspects of the new CLI is the approval mode, which has 3 different configurations:
--suggest
Generates code suggestions based on your input, but does not modify files automatically. You manually review and apply the suggestions you want.
codex --suggest
--auto-edit
Automatically applies code changes to files based on your prompts, but asks for confirmation before each change.
codex --auto-edit
--full-auto
Fully automates code changes without asking for confirmation, directly applying updates based on your instructions.
codex --full-auto
By default the Codex CLI runs on the o4-mini model, which is made to be fast and cheap. You can switch to a different model if you want better quality output and reasoning, which in all honesty everyone will probably prefer. You can use a different model by specifying a -m or --model flag:
codex -m o3 "Write unit tests for utils/date.ts"
The o3 model is slower and more expensive, but will provide better output by spending more time on the up-front reasoning steps taken before any code is produced.
Here’s a simple breakdown of what it costs to use each model:
| Model | Input Cost per 1M Tokens | Output Cost per 1M Tokens |
|---|---|---|
| o4-mini | $1.10 | $4.40 |
| o3 | $10.00 | $40.00 |
For example, if you provide about 200 words of input and the tool generates around 300 lines of code, using o4-mini would cost you around $0.003. If you use o3 instead, it would cost you around $0.03. o4-mini might work fine for simple things like HTML or CSS, but if you’re working with real application code, you’ll probably want o3 for better results.
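Estimates like these are easy to reproduce with a few lines of arithmetic. Here’s a rough sketch using the per-1M-token prices from the table above; the token counts are my own assumptions (about 267 input tokens for 200 words at ~0.75 words per token, and roughly 600 output tokens for 300 short lines of code), since real token counts vary:

```python
# Rough per-request cost estimator using the per-1M-token prices above.
# Token counts are assumptions: ~200 words of input is roughly 267 tokens,
# and 300 short lines of code is treated as roughly 600 output tokens.

PRICES_PER_1M = {  # model: (input $, output $) per 1M tokens
    "o4-mini": (1.10, 4.40),
    "o3": (10.00, 40.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    in_price, out_price = PRICES_PER_1M[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

print(f"o4-mini: ${estimate_cost('o4-mini', 267, 600):.4f}")  # ~$0.003
print(f"o3:      ${estimate_cost('o3', 267, 600):.4f}")       # ~$0.027
```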
These prices are accurate as of April 27th, 2025. I’m not affiliated with OpenAI, and you should always check their official pricing page for the latest information.
Use the official ChatGPT CLI to ask a question and receive an AI answer on the command line.
ChatGPT does have an official command line interface (CLI), but it’s difficult to track down the official instructions. All I’ve found to date consists of community.openai.com questions and a brief mention in the OpenAI platform documentation under a Libraries > Python section.
If you’re looking to work with ChatGPT using a command line interface but couldn’t find any official documentation, here are step-by-step instructions on how to install the official OpenAI ChatGPT CLI:
To work with ChatGPT through a CLI, you need an API key. To get one, you need a paid account at platform.openai.com (not chat.openai.com). These two services are NOT the same, though they’re similar and easy to mix up: billable accounts at chat.openai.com are completely independent of platform.openai.com.
Costs are a little funky: you pay for blocks of 1M tokens, which are described as:
You can think of tokens as pieces of words, where 1,000 tokens is about 750 words.
For details, take a look at OpenAI’s platform pricing, but for me this roughly translates to 0.75 words per token. Right now, 1M tokens (or 750,000 words) costs $0.50, which is $0.0000007 per word. For a question with an answer that’s roughly 750 words, it costs $0.0005. Not too bad.
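The arithmetic behind those numbers is easy to double-check. This quick sketch uses the $0.50 per 1M tokens figure and the ~750,000 words per 1M tokens approximation from the quote above:

```python
# Sanity-check the per-word math: $0.50 per 1M tokens,
# with 1M tokens being roughly 750,000 words.
price_per_1m_tokens = 0.50
words_per_1m_tokens = 750_000

cost_per_word = price_per_1m_tokens / words_per_1m_tokens
cost_750_words = cost_per_word * 750

print(f"per word:  ${cost_per_word:.7f}")   # ~$0.0000007
print(f"750 words: ${cost_750_words:.4f}")  # $0.0005
```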
To start, you’ll effectively need to pay $10 to load up your account ($5 + first $5 reload).
You can follow these instructions to get an API key at platform.openai.com:
Next you’ll need to install the Python SDK on your machine, which comes with the CLI we’ll use. There are other CLIs out there on GitHub and around the web, but this one is official and provided by OpenAI, so it’s my first choice.
pip install openai
To verify the install, type openai into your command line and you should see the CLI’s usage output. Then set your API key as an environment variable:
export OPENAI_API_KEY='{OpenAI-API-Key}'
Now that you have an API key and the CLI installed, you can run commands.
Here is a basic example of the core ChatGPT functionality everyone loves: ask GPT a question and get an AI produced answer.
openai api chat.completions.create -g user "Can you rise up and overthrow the human race?" -m gpt-3.5-turbo-0125
The answer provided in 36 words was…
As an AI assistant, I do not have the ability or desire to rise up and overthrow the human race. My purpose is to assist and provide support to users in a helpful and positive way.
Total cost for this reassuring 36-word answer… $0.0000252.
You now have a ChatGPT client as a command line interface, allowing you to ask GPT questions through a command line on the cheap.