A lightweight Node.js server that provides an OpenAI-compatible API interface for Claude CLI, allowing you to use Claude with any OpenAI-compatible client.
- OpenAI API Compatibility: Drop-in replacement for OpenAI's chat completions API
- Streaming Support: Real-time streaming responses for interactive applications
- No Timeouts: Requests run without artificial time limits
- Simple Setup: No dependencies, pure Node.js implementation
- Stateless Design: Each request is independent, no session management
- No Tool/Function Calling: Tool and function calling features are not supported
- Node.js (v14 or higher)
- Claude CLI installed and configured (Install Claude CLI)
- Clone the repository:
git clone https://github.com/p32929/openai-claude-cli-nodejs.git
cd openai-claude-cli-nodejs
- No npm install needed - this project has zero dependencies!
Create a `.env` file in the project root (optional):
# Server port (default: 8000)
PORT=8000
# Enable debug logging
DEBUG=true
# Enable file logging
FILE_LOGGING=true
# Using default port 8000
npm start
# Or specify a custom port
PORT=3000 npm start
# With debug logging
DEBUG=true npm start
The server will start on http://localhost:8000 (or your specified port).
npm run tunnel
This will start the server and create a public URL using Cloudflare Tunnel.
Endpoint: POST /v1/chat/completions
Send messages to Claude and receive responses in OpenAI format.
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "any",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello, how are you?"}
],
"stream": false
}'
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-N \
-d '{
"model": "any",
"messages": [
{"role": "user", "content": "Tell me a story"}
],
"stream": true
}'
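With `"stream": true`, the proxy emits OpenAI-style server-sent events, where each `data:` line carries a `chat.completion.chunk` object and the stream ends with a `[DONE]` sentinel. The helper below is an illustrative sketch of that chunk format (field names follow the OpenAI spec; this is not the proxy's actual code):

```javascript
// Illustrative: build one OpenAI-style SSE chunk line for a streamed delta.
function sseChunk(id, model, deltaText) {
  const chunk = {
    id,
    object: "chat.completion.chunk",
    created: Math.floor(Date.now() / 1000),
    model,
    choices: [{ index: 0, delta: { content: deltaText }, finish_reason: null }],
  };
  return `data: ${JSON.stringify(chunk)}\n\n`;
}

// The stream is terminated by a literal sentinel line.
const SSE_DONE = "data: [DONE]\n\n";
```

A streaming client concatenates the `delta.content` fragments from successive chunks to reassemble the full reply.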
Tool and function calling is not supported. If you include `tools` or `functions` in your request, you'll receive an error:
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "any",
"messages": [
{"role": "user", "content": "What is the weather in Paris?"}
],
"tools": [...]
}'
Error response:
{
"error": {
"message": "Tool/function calling is not supported",
"type": "invalid_request_error"
}
}
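The rejection behavior can be mirrored with a small validation check. This is an illustrative sketch (the function name is ours, not the proxy's), reusing the error body shown above:

```javascript
// Illustrative: reject requests carrying `tools` or `functions`,
// returning the OpenAI-style error body the proxy responds with.
function validateRequest(body) {
  if (body.tools || body.functions) {
    return {
      status: 400,
      error: {
        message: "Tool/function calling is not supported",
        type: "invalid_request_error",
      },
    };
  }
  return null; // request is acceptable
}
```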
Endpoint: GET /v1/models
Returns available models (mock response since Claude CLI doesn't enumerate models):
curl http://localhost:8000/v1/models
Endpoint: GET /health
Check if the server is running:
curl http://localhost:8000/health
- `messages` (required): Array of message objects with `role` and `content`
- `model`: Model name (ignored; Claude CLI uses its default)
- `stream`: Boolean for streaming responses
- `max_tokens`: Maximum tokens in the response
- `temperature`: Sampling temperature (0-1)
- `top_p`: Nucleus sampling parameter
- `stop`: Stop sequences (string or array)
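A request body exercising these parameters might look like the following (the message text and parameter values are illustrative):

```javascript
// Illustrative request body using the parameters listed above.
const body = {
  model: "any", // ignored; Claude CLI uses its default
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize this repo in one line." },
  ],
  stream: false,
  max_tokens: 256,
  temperature: 0.7,
  top_p: 0.9,
  stop: ["\n\n"],
};
```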
This API works with any OpenAI-compatible client library:
- OpenAI Python SDK
- OpenAI Node.js SDK
- LangChain
- LlamaIndex
- And many more...
Simply point the base URL to http://localhost:8000/v1 and use any model name (e.g., "any").
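If you'd rather not pull in an SDK at all, Node 18+'s built-in `fetch` is enough. The `chatOnce` helper below is an illustrative sketch, not part of the project:

```javascript
// Minimal client using Node's built-in fetch (Node 18+); no SDK required.
// Endpoint path and body fields follow the proxy's API described above.
async function chatOnce(baseUrl, messages) {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "any", messages, stream: false }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  // OpenAI-style response: the first choice holds the assistant message.
  return data.choices[0].message.content;
}
```

Usage: `await chatOnce("http://localhost:8000", [{ role: "user", content: "Hello" }])`.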
┌─────────────┐ ┌──────────────┐ ┌────────────┐
│ Client │────▶│ API Proxy │────▶│ Claude CLI │
│ (OpenAI │ │ (Node.js) │ │ │
│ Compatible)│◀────│ │◀────│ │
└─────────────┘ └──────────────┘ └────────────┘
The proxy:
- Receives OpenAI-formatted requests
- Converts them to Claude CLI format
- Executes Claude CLI
- Transforms responses back to OpenAI format
- Handles streaming and errors (requests with tool/function calls are rejected)
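One plausible way step 2 can work (a sketch of the general approach, not the project's actual conversion code) is flattening the OpenAI messages array into a single prompt string before invoking the CLI:

```javascript
// Sketch: flatten OpenAI-style messages into one prompt string.
// The role labels here are an assumption about the conversion, not the
// proxy's verified behavior.
function messagesToPrompt(messages) {
  return messages
    .map((m) => {
      const label =
        m.role === "system" ? "System" : m.role === "user" ? "Human" : "Assistant";
      return `${label}: ${m.content}`;
    })
    .join("\n\n");
}
```

The resulting string can then be passed to the Claude CLI as a single prompt, and its stdout transformed back into an OpenAI-format response.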
This is an unofficial proxy and is not affiliated with Anthropic or OpenAI. Use responsibly and in accordance with Claude's terms of service.