A simple Express.js server that provides API endpoints for interacting with OpenAI's GPT-4o model.
- Node.js 18+ installed on your system
- An OpenAI API key from the OpenAI Platform
- Clone or download this repository
- Install dependencies:

  ```bash
  npm install
  ```
Note for Windows PowerShell users: If you encounter an execution policy error, either:
- Use Command Prompt (cmd.exe) instead, or
- Run in PowerShell as Administrator:

  ```powershell
  Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
  ```
- Set up environment variables

  Create a `.env` file in the project root:

  ```env
  OPENAI_API_KEY=your_openai_api_key_here
  PORT=5000
  ```
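At startup the server reads these values from the environment; a minimal sketch of how that is typically done with the dotenv package (the actual index.js may differ):

```js
// Load variables from .env into process.env (dotenv is listed as a dependency)
require("dotenv").config();

const apiKey = process.env.OPENAI_API_KEY; // required for the OpenAI client
const port = process.env.PORT || 5000;     // falls back to 5000 when PORT is unset
```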
Development mode (with auto-restart on file changes):

```bash
npm run dev
```

Production mode:

```bash
node index.js
```

The server will start on port 5000 (or your specified PORT) and display:

```
Server listening on port 5000
```
Health check endpoint that lists available endpoints.
Response: "Hello World. Available endpoints: /single-response, /system-context, /conversation"
Example 1: Single Response - Basic user prompt without conversation history.
This demonstrates the simplest form of interaction with ChatGPT.
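If you are calling the endpoint from JavaScript rather than curl, the request looks roughly like this (a client-side sketch using the built-in fetch in Node 18+; the field names mirror the request and response shapes shown below):

```js
// POST a single prompt and read the generated text from the "data" field
async function askSingle(prompt) {
  const res = await fetch("http://localhost:5000/single-response", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const result = await res.json();
  if (!result.success) throw new Error(result.error);
  return result.data; // e.g. "JavaScript is a programming language..."
}

askSingle("What is JavaScript?").then(console.log);
```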
Request body:
```json
{
  "prompt": "What is JavaScript?"
}
```

Response (success):

```json
{
  "success": true,
  "message": "Single response - basic user prompt",
  "data": "JavaScript is a programming language..."
}
```

curl example:

```bash
curl -X POST http://localhost:5000/single-response \
  -H "Content-Type: application/json" \
  -d "{\"prompt\": \"What is JavaScript?\"}"
```

Example 2: System Context - Using a system message to set AI behavior and personality.
The system message sets the context for how the AI should respond. You can customize the AI's role, tone, and behavior.
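The same client-side pattern as Example 1 works here; the only difference is the optional systemContext field (a sketch mirroring the request and response shapes shown below):

```js
// POST a prompt plus an optional system message that steers the AI's behavior
async function askWithContext(prompt, systemContext) {
  const res = await fetch("http://localhost:5000/system-context", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, systemContext }),
  });
  const result = await res.json();
  console.log(result.systemContextUsed); // which system message was applied
  return result.data;
}
```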
Request body:
```json
{
  "prompt": "Explain what a variable is",
  "systemContext": "You are a helpful coding instructor. Explain concepts clearly with examples."
}
```

Note: If `systemContext` is not provided, it defaults to: "You are a helpful coding instructor. Explain concepts clearly with examples."

Response (success):

```json
{
  "success": true,
  "message": "Response with system context",
  "systemContextUsed": "You are a helpful coding instructor...",
  "data": "A variable is a container for storing data..."
}
```

curl example:

```bash
curl -X POST http://localhost:5000/system-context \
  -H "Content-Type: application/json" \
  -d "{\"prompt\": \"Explain what a variable is\", \"systemContext\": \"You are a friendly teacher who uses simple language.\"}"
```

Example 3: Multi-step Conversation - Maintains full conversation history for context.
This endpoint accepts an array of messages to maintain conversation context across multiple turns. The AI will remember previous messages and respond accordingly.
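The endpoint expects the full history on every call, so a typical client keeps the messages array itself and appends each assistant reply before the next turn (a sketch; the fullConversation field returned by the server can also be stored and resent directly):

```js
// Keep the running history on the client and replay it with every request
const history = [
  { role: "system", content: "You are a helpful coding instructor." },
];

async function chat(userMessage) {
  history.push({ role: "user", content: userMessage });
  const res = await fetch("http://localhost:5000/conversation", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: history }),
  });
  const result = await res.json();
  // Remember the assistant's reply so the next turn has full context
  history.push({ role: "assistant", content: result.data });
  return result.data;
}

// chat("What is a function?") followed by chat("Can you show me an example?")
// gives the second call the context of the first.
```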
Request body:
```json
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful coding instructor."
    },
    {
      "role": "user",
      "content": "What is a function?"
    },
    {
      "role": "assistant",
      "content": "A function is a reusable block of code..."
    },
    {
      "role": "user",
      "content": "Can you show me an example?"
    }
  ]
}
```

Message roles:

- `system` - Sets the AI's behavior (optional, but recommended as the first message)
- `user` - Messages from the user
- `assistant` - Previous responses from the AI
Response (success):
```json
{
  "success": true,
  "message": "Multi-step conversation response",
  "data": "Sure! Here's an example of a function...",
  "fullConversation": [
    // ... all previous messages plus the new assistant response
  ]
}
```

curl example:

```bash
curl -X POST http://localhost:5000/conversation \
  -H "Content-Type: application/json" \
  -d "{\"messages\": [{\"role\": \"user\", \"content\": \"Hello!\"}, {\"role\": \"assistant\", \"content\": \"Hi! How can I help?\"}, {\"role\": \"user\", \"content\": \"Tell me about arrays\"}]}"
```

Response (error - all endpoints):
```json
{
  "success": false,
  "error": "Error message"
}
```

You can test the API using curl, Postman, or any HTTP client:
```bash
curl -X POST http://localhost:5000/single-response \
  -H "Content-Type: application/json" \
  -d "{\"prompt\": \"Hello, how are you?\"}"
```

- express - Web framework
- cors - Enable CORS
- dotenv - Environment variable management
- openai - OpenAI API client
- nodemon - Development auto-restart (dev dependency)
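To see how these dependencies fit together, here is a rough skeleton of the server wiring (a sketch under assumptions, not the project's actual index.js; handler bodies omitted):

```js
require("dotenv").config();          // dotenv: read OPENAI_API_KEY / PORT from .env
const express = require("express");  // express: web framework
const cors = require("cors");        // cors: allow cross-origin calls (e.g. from Storyline)
const OpenAI = require("openai");    // openai: official API client

const app = express();
app.use(cors());
app.use(express.json());

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

app.get("/", (req, res) => {
  res.send("Hello World. Available endpoints: /single-response, /system-context, /conversation");
});

app.post("/single-response", async (req, res) => {
  // ... build messages from req.body.prompt and call the OpenAI API (see the parameters listed below)
});

const port = process.env.PORT || 5000;
app.listen(port, () => console.log(`Server listening on port ${port}`));
```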
Three JavaScript trigger files are provided for use with Articulate Storyline:
Required Storyline Variables:
- `userprompt` (Text) - The user's question/prompt
- `gptresponse` (Text) - Will be populated with the AI response
Usage: Simple one-off questions without conversation history.
Required Storyline Variables:
- `userprompt` (Text) - The user's question/prompt
- `systemcontext` (Text) - Optional: custom system message (uses the default if empty)
- `gptresponse` (Text) - Will be populated with the AI response
- `systemcontextused` (Text) - Will show which system context was used
Usage: Control AI personality/behavior with custom system messages.
Required Storyline Variables:
- `userprompt` (Text) - The user's current question/prompt
- `conversationhistory` (Text) - JSON string of the conversation (auto-managed)
- `gptresponse` (Text) - Will be populated with the AI response
Usage: Maintain context across multiple exchanges. The conversation history is automatically saved and updated.
Important: To reset a conversation, set `conversationhistory` to an empty string.
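The actual trigger files ship with the project; the sketch below only illustrates the pattern the conversation trigger follows, assuming Storyline's standard GetPlayer()/GetVar()/SetVar() JavaScript API and a server URL you would adjust for your deployment:

```js
// Articulate Storyline "Execute JavaScript" trigger - illustrative sketch only
const player = GetPlayer();
const prompt = player.GetVar("userprompt");

// conversationhistory holds the JSON-encoded messages array; empty string = new conversation
const stored = player.GetVar("conversationhistory");
const messages = stored ? JSON.parse(stored) : [];
messages.push({ role: "user", content: prompt });

fetch("http://localhost:5000/conversation", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ messages }),
})
  .then((res) => res.json())
  .then((result) => {
    player.SetVar("gptresponse", result.data);
    // Save the updated history so the next trigger run keeps the context
    player.SetVar("conversationhistory", JSON.stringify(result.fullConversation || messages));
  });
```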
The server uses the following OpenAI API parameters:
- Model: `gpt-4o`
- Max tokens: 500
- Temperature: 0.5-0.7 (varies by endpoint)
- Frequency penalty: 0.5-0.75 (varies by endpoint)
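Inside each route handler, these parameters map onto the chat completions call roughly as follows (a sketch using the official openai Node client; the exact temperature and frequency penalty differ per endpoint as noted above):

```js
const OpenAI = require("openai");
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function generate(prompt) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "You are a helpful coding instructor." },
      { role: "user", content: prompt },
    ],
    max_tokens: 500,
    temperature: 0.7,
    frequency_penalty: 0.5,
  });
  return completion.choices[0].message.content; // sent back to the client as "data"
}
```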
These can be modified in index.js.
ISC