With some asynchronous JavaScript and the dynamic use of iFrames, we can create an automatic bot that builds web pages from your prompts, using an array-based memory system and a structured system prompt.
Introduction
This article aims to show how to create a dev bot that generates a web page visually, using a system prompt and a GPT-4o model.
Background
The introduction of Claude 3.5 Sonnet showed a new way to create apps and web pages through the so-called 'artifacts', an inline rendering of the generated code. The same behaviour can be reproduced in a simple and immediate way with chatbots based on OpenAI's GPT-3.5 and GPT-4o models, simply by capturing the generated code in a Blob via JavaScript and loading it into a dynamically created iFrame.
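The core of the trick can be sketched in a few lines: the generated HTML string is wrapped in a Blob, a temporary object URL is created from it, and that URL is assigned to a dynamically created iFrame. A minimal sketch of this idea follows (the helper name and its arguments are purely illustrative; the full implementation is developed in the rest of the article):
// Minimal sketch: render a generated HTML string inside an isolated iframe
function renderInIframe(generatedHtml, container) {
const iframe = document.createElement("iframe");
iframe.style.width = "100%";
iframe.style.height = "600px";
container.appendChild(iframe);
// Wrap the markup in a Blob and point the iframe at its temporary URL
const blob = new Blob([generatedHtml], { type: "text/html" });
iframe.src = URL.createObjectURL(blob);
}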
Using the code
The HTML interface of the bot should be simple, consisting mainly of a prompt input area, an input field for the API key, and an empty div where all messages (guest and chatbot) will be appended.
<main>
<h1>OpenAI Web Dev Bot</h1>
<div class="userinput">
<input type="password" id="apikey" placeholder="Your API key"><br>
<textarea rows=6 type="text" id="prompt" placeholder="Your prompt" autocomplete="off" autofocus></textarea><br>
<button class="btncopy" onclick="sendMessage()">Send</button>
</div>
<section id="content">
<div id="chathistory">
</div>
</section>
</main>
There are three main functions to implement:
sendMessage()
showMessage()
getChatGPTResponse()
The sendMessage()
function gathers all the relevant inputs, i.e. the prompt and the API key, and passes them on to the subsequent functions:
async function sendMessage() {
const apikey = document.getElementById("apikey").value;
if (apikey === "") {
alert("No OpenAI API Key found.");
return; // stop here: no API key was provided
}
const inputElement = document.getElementById("prompt");
const userInput = inputElement.value.trim();
if (userInput !== "") {
showMessage("Guest", userInput, "");
chatMemory = await getChatGPTResponse(userInput, chatMemory);
inputElement.value = "";
}
}
The function uses a memory array to keep track of the previous steps of the creation, together with the system prompt.
This memory array can be handled easily:
let chatMemory = [];
chatMemory = createMemory([{
role: "system",
content: "You are a web developer bot. You don't talk, you don't explain your answers. Your only output is made of code to satisfy the received request. You will not explain the code, you will not introduce it, you will not greet or thank the user for the request, you will only produce a well structured code line by line without interruptions and without dividing it in sections."
}]);
function createMemory(messages) {
const memory = [];
for (const msg of messages) {
memory.push({
role: msg.role,
content: msg.content
});
}
return memory;
}
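After the first exchange, this array simply accumulates the user and assistant turns appended by getChatGPTResponse(); for example, its state could look like this (the prompt and reply shown here are purely illustrative):
// Example state of chatMemory after one exchange (illustrative content)
[
{ role: "system", content: "You are a web developer bot. ..." },
{ role: "user", content: "Create a landing page for a bakery" },
{ role: "assistant", content: "<!DOCTYPE html><html>...</html>" }
]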
The showMessage()
function is a bit more complicated, as it handles both the request sent to the model and the response received from it. In this function we implement all the formatting of the displayed answers; the main part of the answer is rendered in a dynamic iFrame whose content is created via a Blob.
For the sake of clarity and for instructional purposes, this implementation also tracks tokens and costs, based on the latest OpenAI rates.
function showMessage(sender, message, tokens, downloadLink) {
const chatContainer = document.getElementById("chathistory");
// Build the timestamp shown in the message headers
const now = new Date();
const hour = String(now.getHours()).padStart(2, "0");
const minute = String(now.getMinutes()).padStart(2, "0");
const typingIndicator = document.getElementById("typing-indicator");
if (typingIndicator && sender === "Chatbot") {
chatContainer.removeChild(typingIndicator);
}
const messageElement = document.createElement("div");
if (sender === "Guest") {
messageElement.innerHTML = `+[${hour}:${minute}] - ${sender}: ${message}`;
messageElement.classList.add("user-message");
} else {
const timestampElement = document.createElement("p");
timestampElement.innerHTML = `-[${hour}:${minute}] - ${sender}: `;
timestampElement.classList.add("chatgpt-message");
messageElement.appendChild(timestampElement);
const iframe = document.createElement("iframe");
iframe.style.width = "100%";
iframe.style.height = "600px";
iframe.style.border = "1px solid black";
messageElement.appendChild(iframe);
const blob = new Blob([`
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Generated Code</title>
</head>
<body>
${message}
</body>
</html>
`], {
type: 'text/html'
});
// Create a temporary object URL for the Blob
const url = URL.createObjectURL(blob);
// Set the iFrame to the Blob URL
iframe.src = url;
// Show the chatbot answer and token message
const separator = document.createElement("p");
separator.innerHTML = `${tokens}`;
//messageElement.innerHTML += `-[${hour}:${minute}] - ${sender}: `;
messageElement.classList.add("chatgpt-message");
messageElement.appendChild(separator);
// Add a link to download the generated code
const downloadElem = document.createElement("div");
downloadElem.innerHTML = downloadLink;
messageElement.appendChild(downloadElem);
}
// Append the message to the chat container
chatContainer.appendChild(messageElement);
chatContainer.scrollTop = chatContainer.scrollHeight;
}
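One detail worth noting: every answer creates a new Blob URL that is never released. If the chat session grows long, the URL can optionally be revoked once the iFrame has finished loading it; a minimal, optional addition to the function above (iframe and url are the variables already defined there):
// Optional: release the Blob URL once the iframe has loaded its content
iframe.addEventListener("load", () => URL.revokeObjectURL(url));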
The getChatGPTResponse()
function plays the central role: it queries the OpenAI model, receives the JSON response, and extracts the text content from it. A few regular expressions sanitize the response. We also calculate the tokens and cost of the request and response, and create the link to download the generated code.
async function getChatGPTResponse(userInput, chatMemory = []) {
const apikey = document.getElementById("apikey").value;
if (apikey === "") {
alert("No OpenAI API Key found.");
} else {
}
const chatContainer = document.getElementById("chathistory");
const typingIndicator = document.createElement("p");
typingIndicator.id = "typing-indicator";
typingIndicator.innerHTML =
'<img src="preloader.gif" class="preloader" alt="Loading...">';
chatContainer.appendChild(typingIndicator);
try {
const response = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: "Bearer " + apikey
},
body: JSON.stringify({
model: "gpt-4o-mini",
messages: [...chatMemory, {
role: "user",
content: userInput
}]
})
});
if (!response.ok) {
throw new Error("Error while requesting to the API");
}
const data = await response.json();
if (
!data.choices ||
!data.choices.length ||
!data.choices[0].message ||
!data.choices[0].message.content
) {
throw new Error("Invalid API response");
}
const chatGPTResponse = data.choices[0].message.content.trim();
// Strip Markdown code fences (e.g. ```html ... ```) that may wrap the generated code
var cleanResponse = chatGPTResponse.replace(
/```(?:html|css|javascript|js|json|xml|php|\.net)?\n?([\s\S]*?)```/g,
"$1"
);
console.log(chatGPTResponse);
// Remove any leftover fence markers and Markdown bold markers
cleanResponse = cleanResponse.replace(/```/g, "");
cleanResponse = cleanResponse.replace(/\*\*(.*?)\*\*/g, "$1");
const tokenCount = document.createElement("p");
if (data.usage.completion_tokens) {
const requestTokens = data.usage.prompt_tokens;
const responseTokens = data.usage.completion_tokens;
const totalTokens = data.usage.total_tokens;
const pricepertokenprompt = 0.15 / 1000000;
const pricepertokenresponse = 0.60 / 1000000;
const priceperrequest = pricepertokenprompt * requestTokens;
const priceperresponse = pricepertokenresponse * responseTokens;
const totalExpense = priceperrequest + priceperresponse;
tokenCount.innerHTML = `<hr>Your request used ${requestTokens} tokens and cost ${priceperrequest.toFixed(6)} USD<br>This response used ${responseTokens} tokens and cost ${priceperresponse.toFixed(6)} USD<br>Total Tokens: ${totalTokens}. This interaction cost you: ${totalExpense.toFixed(6)} USD.`;
} else {
tokenCount.innerHTML = "Unable to track the number of used tokens.";
}
const blob = new Blob([cleanResponse], {
type: 'text/html'
});
const url = URL.createObjectURL(blob);
const downloadLink = `<a href="${url}" download="generated_code.html">Click here to download the generated HTML code</a>`;
showMessage("Chatbot", cleanResponse, tokenCount.innerHTML, downloadLink);
chatMemory.push({
role: "user",
content: userInput
});
chatMemory.push({
role: "assistant",
content: cleanResponse
});
return chatMemory;
}
catch (error) {
console.error(error);
alert(
"An error occurred during the request. Check your OpenAI account or retry later."
);
}
}
Points of Interest
The main points of interest in this code are:
- Authoring a structured and pragmatic system prompt that overrides the natural tendency of OpenAI models to be highly verbose and descriptive
- Implementing the dynamic iFrame structure and pointing a code Blob at it, in order to isolate the generated HTML and CSS from the container webpage and prevent the generated styles from restyling the container page
- This version uses a local input field to capture the API key. In a safer version, the API key can be stored in localStorage via a setup panel:
<input type="text" id="openaikeyInput" placeholder="Enter OpenAI key"><br>
<p id="modalfeedback"></p>
<button id="saveKeys">Save Key</button>
<script>
function checkLocalStorageKeys() {
if(localStorage.getItem('openaikey')) {
var openaiKey = localStorage.getItem('openaikey');
document.getElementById('openaikeyInput').value = openaiKey;
} else {
document.getElementById('modalfeedback').innerText = "Please configure your API key";
}
}
checkLocalStorageKeys();
</script>
and then the API key can be retrieved from localStorage in the sendMessage()
and getChatGPTResponse()
functions by modifying the const
definition:
const apikey = localStorage.getItem("openaikey");
document.getElementById("apikey").value = apikey;
History
Version 1.0