How to write a ChatGPT-proof coding interview question - Part 2, backend / generalist
The step-by-step process I use to write a backend / generalist software engineering coding question. ChatGPT-resistant. See every pitfall and pivot.

In this post, I take the concepts from part 1 and document my process of writing a question. This question is specifically for backend / generalist engineers. In part 3 we’ll use this same process to write a frontend question.
Coming up with an idea
Start timer - 00:00:00
This is the hardest part. When I wrote interview questions at Block, I typically let inspiration come to me rather than brainstorm.
A common pitfall is to write a graph question. That isn't inherently bad; graph data structures are commonplace in software engineering. But most engineers don't model data with graphs every day. If our goal is for the interview question to model real work, then our own day-to-day work is a better place to start looking for ideas.
So let me think back on some of the work I did at Block:
- Pub/sub systems - I spent my first few years at Block working on event streaming. I wrote middleware to send marketing events to third-party data vendors for attribution. Some engineering problems from that project:
  - Schema validation - I wrote recursive logic to validate that a JSON object matches a given type. This is an OK idea for a question, but it's graph-oriented. I don't consider recursion too difficult, but schema validation code is better written in a relaxed 3-6 hour chunk of time; the recursion is complicated enough that it would not fit well into a 60-minute interview.
  - Authentication and error handling - this is an interesting idea! The middleware I wrote had well-documented error handling. But what made this engineering work interesting was designing two parts of a pipeline to fit together. I'm not sure this makes much sense for a 60-minute interview.
- Eventing libraries - I did a ton of work writing and improving marketing eventing libraries at Block:
  - Event queues - There could be something interesting about sending event data from a queue and then handling errors and re-entering events into the queue. For subsequent parts of the question, we could think about sorting the data back into the queue.
Timer - 00:20:22
Ok! Let’s run with this idea and see if we can flesh it out into a multi-step interview question.
Outlining the basics of the question and writing Part 1
Write a class that ingests events into a queue. When you reach 10 events, flush the queue. For now, flushing the queue will just mean printing out events and emptying the queue.
Let’s give this a shot by writing solution code.
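Something like the following sketch covers part 1; it's only an illustration (the class and method names are placeholders), with the limit passed into the constructor and flushing just printing and clearing the queue:

```javascript
// Part 1 sketch: the queue limit comes from the constructor,
// and flushing just prints the events and empties the queue.
class EventQueue {
  constructor(limit) {
    this.q = [];
    this.limit = limit;
  }

  add(event) {
    this.q.push(event);
    if (this.q.length === this.limit) {
      this.flush();
    }
  }

  flush() {
    console.log(this.q);
    this.q = [];
  }
}
```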
Writing the solution took roughly 4 minutes. I like part 1 of this question so far. It's purposely ambiguous, and a good candidate will ask clarifying questions. I pass the queue limit into the class constructor, making it a dynamic value; as an interviewer, I like seeing that design choice from a candidate. It's also a good opportunity to demonstrate abstraction by separating the flushing logic from the adding logic.
Part 1 feels too simple as is, but let’s leave it. The expectation is for a candidate to finish this part in 10 minutes max. We can add complexity as we go.
Timer - 00:31:40
Expanding complexity, writing Part 2
Let’s incorporate a network component into this. Write logic to send events over the internet.
class EventQueue {
  constructor(limit, postEndpoint) {
    this.q = [];
    this.url = postEndpoint;
    this.limit = limit;
  }

  add(event) {
    this.q.push(event);
    if (this.q.length === this.limit) {
      this.post();
    }
  }

  flush() {
    this.q = [];
  }

  post() {
    fetch(this.url, {
      method: 'POST',
      body: JSON.stringify(this.q),
      headers: {
        'Content-Type': 'application/json',
      },
    }).then(res => {
      if (res.ok) {
        this.flush();
      }
    });
  }
}
That only took me 4 minutes. Oof, this question seems too easy. At this stage of question writing, the solution may seem easy and obvious to you, the author, but an average candidate will need time to understand the question requirements and formulate a plan.
The real meat of the question - a hidden challenge in Part 2
A good interview candidate will at this point ask the question, “what happens if the request fails?” This provides good signal that the candidate is thinking about error handling and edge cases - you should hire them!
Just kidding, but it is a positive hiring signal. If a candidate doesn't bring up the failure case, ask them some questions about what to do if things go wrong. A great candidate will continue thinking down this path: “ok, so we retry sending the event, then what happens if another event is added to the queue while the network event is in flight? Should we let the queue grow past our limit of 10?”
I didn’t even think about this edge case during ideation, but this is proving to be a good question concept. I would tell the candidate that ideally, we don’t let our queue grow past 10, and that they should design a way to handle this edge case. This open-endedness will allow the candidate to show their creativity and code maintainability skills.
class EventQueue {
  constructor(limit, postEndpoint) {
    this.q = [];
    this.url = postEndpoint;
    this.limit = limit;
  }

  add(event) {
    this.q.push(event);
    if (this.q.length === this.limit) {
      this.post();
    }
  }

  flush() {
    this.q = [];
  }

  post() {
    this.inFlightQ = this.q;
    this.flush();
    this.postMethod(this.inFlightQ).then(res => {
      if (res.ok) {
        this.inFlightQ = [];
      } else {
        this.postMethod(this.inFlightQ);
      }
    });
  }

  postMethod(dataArr) {
    return fetch(this.url, {
      method: 'POST',
      body: JSON.stringify(dataArr),
      headers: {
        'Content-Type': 'application/json',
      },
    });
  }
}
Adjusting the code took 5 minutes, but it's far from perfect. There are two issues I see with it; let's fix them:
1. It doesn't actually recurse. Upon closer inspection, my fetch logic would only run twice!
2. We run the risk of overwriting data. It is possible to fill up the queue while the previous array of data is still sending.
class EventQueue {
  constructor(limit, postEndpoint, retryLimit) {
    this.q = [];
    this.url = postEndpoint;
    this.limit = limit;
    this.retryLimit = retryLimit;
  }

  add(event) {
    this.q.push(event);
    if (this.q.length === this.limit) {
      // clones the queue so we're not mutating the same
      // array of data
      this.post([...this.q]);
      // moves flushing logic outside of the post method
      this.flush();
    }
  }

  flush() {
    this.q = [];
  }

  async post(queueOfEvents, attempt = 1) {
    const response = await this.postMethod(queueOfEvents);
    if (!response.ok) {
      if (attempt >= this.retryLimit) {
        console.error(`Unable to send data:\n${JSON.stringify(queueOfEvents)}`);
        return;
      }
      this.post(queueOfEvents, attempt + 1);
    }
  }

  postMethod(queueOfEvents) {
    return fetch(this.url, {
      method: 'POST',
      body: JSON.stringify(queueOfEvents),
      headers: {
        'Content-Type': 'application/json',
      },
    });
  }
}
Much better! That took me 10 minutes. The total time I've spent working on this question as a candidate is now ~25 minutes, and that doesn't even include time to test the code, which would add another 5-10 minutes.
Timer - 01:08:34
This is looking good so far! Though part 2 starts simple, it gets more complex once you factor in handling failures; the strongest candidates will identify and address this themselves.
A skilled and experienced engineer can write asynchronous recursive logic and handle all possible failures. A less experienced but still good engineer will probably need more guidance on strategy and some hints, but should still be able to solve the problem in ~40 minutes.
For most companies, this is probably a satisfactory question. It can be reasonably solved in 45 minutes and it is difficult enough to weed out candidates you don’t want to hire. But let’s take the extra step and add a challenge.
Making it harder with algorithmic complexity, writing Part 3
During ideation, I thought about having the queue flush every 5 seconds, but that’s boring and the logic is easy to code. For part 3, we want to give the candidate a real challenge!
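For comparison, here's a rough sketch of how little the timed flush would take; the class name and the five-second default are just for illustration:

```javascript
// Sketch of the discarded "flush every 5 seconds" idea:
// a timer in the constructor periodically calls flush().
class TimedEventQueue {
  constructor(limit, intervalMs = 5000) {
    this.q = [];
    this.limit = limit;
    setInterval(() => this.flush(), intervalMs);
  }

  add(event) {
    this.q.push(event);
    if (this.q.length === this.limit) {
      this.flush();
    }
  }

  flush() {
    if (this.q.length === 0) return;
    console.log(this.q);
    this.q = [];
  }
}
```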
Another idea I had early on was handling partial failures. The idea is that some events will be received by the server and others will be rejected; this borrows from my earlier question idea “Authentication and error handling.” The server will now send the following responses:
| Status code | Action |
| --- | --- |
| 200 | All events have been received. |
| 206 | Mixed result: some events succeeded and some failed. An array of booleans indicating successes and failures will be returned. |
| 500 | No events have been received. |
If the server responds with 206, only the failed events should be retried. Events that completely fail should be added back into the queue.
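Here's a sketch of what handling these responses might look like; I'm assuming the 206 body is an array of booleans aligned index-by-index with the events that were sent, and the function name is illustrative:

```javascript
// Sketch: decide what to re-queue based on the server's response.
// Assumes a 206 body is an array of booleans, aligned index-by-index
// with the events that were posted.
async function handlePostResponse(response, sentEvents, queue) {
  if (response.status === 200) {
    return; // every event was received
  }
  if (response.status === 206) {
    const results = await response.json(); // e.g. [true, false, true, ...]
    const failed = sentEvents.filter((_, i) => !results[i]);
    queue.push(...failed); // retry only the rejected events
    return;
  }
  // 500: nothing was received, so re-queue the whole batch
  queue.push(...sentEvents);
}
```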
. . . Hmm, now that I think about it, this does not raise the question difficulty. The queueing logic is just slightly more complicated. If anything, this requirement could be added to part 2.
Timer - 01:23:10
Back to brainstorming!
What if we preserve and sort the failed data into long-term storage? In a real-world engineering design, this would be similar to saving the data in cookies / local storage on the frontend or Redis / a database on the backend. We'll simplify that concept in the question by just saving to an array.
For events that have failed to send after all retries, save this to an in-memory array. Make sure events are sorted by their timestamp. Assume the long term array of events and your failed queue of events both start sorted from earliest event to latest event.
A good candidate will recognize this is basically a simplified merge sort. Candidates are likely to simply call a built-in sort method, which is O(n log n). But since our starting arrays are already sorted, we can do this in O(n), where n represents the combined length of the arrays.
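A linear-time merge of two already-sorted arrays might look something like this sketch (the function name is illustrative):

```javascript
// Sketch: merge two arrays that are each already sorted by
// timestamp into one sorted array, in O(n) time.
function mergeSortedEvents(storedEvents, failedEvents) {
  const merged = [];
  let i = 0;
  let j = 0;
  while (i < storedEvents.length && j < failedEvents.length) {
    if (storedEvents[i].timestamp <= failedEvents[j].timestamp) {
      merged.push(storedEvents[i++]);
    } else {
      merged.push(failedEvents[j++]);
    }
  }
  // One array is exhausted; append whatever is left of the other.
  return merged.concat(storedEvents.slice(i), failedEvents.slice(j));
}
```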
Timer - 01:48:13
My solution adds only one new method, a linear merge along the lines of the sketch above; the few other small changes to the rest of the class you can infer.
This part 3 solution took 15 minutes. Part 3 isn't complicated, but it is difficult enough for a skilled candidate to demonstrate their skills. An excellent candidate could solve the whole problem in 40 minutes, which is a good benchmark for an interview that is supposed to last 45 minutes to an hour. I'm satisfied with the question.
Total time to write question - 02:08:27
This only took me 2 hours, but I am an experienced question author; oftentimes, it takes much more iteration to reach this point. I got lucky this time and the first viable idea I came up with worked out. But I estimate I've scrapped 80% of the questions I've started writing (or at least rewritten significant parts of them).
If we wanted to make this more difficult, we could instead have candidates sort these failed events back into the original queue and consider the queue size limit and flushing complexities that come with that.
To prepare this question for real candidates, here is the work still ahead of me, along with my estimate of the time commitment:
| Task | Time | Notes |
| --- | --- | --- |
| Write server code to handle network requests | 1 hour 30 minutes | The server code needs to be robust enough to handle unexpected failures from candidates sending invalid data. The server should have an adjustable failure rate, i.e. a certain number of network requests will randomly fail based on a variable that the interviewer can change (see the sketch after this table). |
| Write out question instructions | 30 minutes | Make instructions as clear and concise as possible. Non-native English speakers should be able to understand the question. |
| Prepare test data | 30 minutes | For each step of the question, generate some test data. The input needed to test all parts of the question (especially part 3) is complicated enough that the candidate should not be expected to generate it themselves. |
| Format all of this into a code interviewing platform | 1 hour | It always takes some time to format and prepare the question in the interviewing platform of choice. |
| Document this question | 1 hour | Interviewers using this question need some background on it. This starts out simple because we haven't tested the question on real candidates yet. |
| Mock interviews | 3-6 hours | Engineers at the company (or outside of it) need to help test the question. No matter how well the question is written, things will need to be adjusted after mock testing. |
| Revise server code, instructions, test data, and documentation | 4 hours | This will naturally happen during testing. |
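As a rough idea of what that server code could look like, here's a sketch using Node's built-in `http` module; the `FAILURE_RATE` environment variable and the response shapes are assumptions for illustration, not part of the real question:

```javascript
// Sketch of a test server with an adjustable failure rate.
// FAILURE_RATE is the fraction of requests that should fail;
// the interviewer can tune it per interview.
const http = require('node:http');

const FAILURE_RATE = Number(process.env.FAILURE_RATE ?? 0.3);

const server = http.createServer((req, res) => {
  let body = '';
  req.on('data', chunk => (body += chunk));
  req.on('end', () => {
    if (Math.random() < FAILURE_RATE) {
      // Simulate a server-side failure so candidates must retry.
      res.writeHead(500, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'simulated failure' }));
      return;
    }
    try {
      const events = JSON.parse(body);
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ received: events.length }));
    } catch (err) {
      // Candidates sending invalid JSON shouldn't crash the server.
      res.writeHead(400, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'invalid JSON' }));
    }
  });
});

server.listen(3000);
```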
At this point (12-16 hours later), the question is ready for use on real candidates. But real candidates will vary much more significantly in their performance. We’ll need to continue adjusting the question for the first 10-20 interviews.
How ChatGPT-proof is this?
The rest of the post is ChatGPT solving my interview question. I’ll post the summary here and you can read the transcript below if it interests you.
I wasn't surprised that ChatGPT did well on my question. Queues and sending events are common problems in software engineering. I even learned about exponential backoff from its solution.
The code produced is verbose, and I suspect it would be difficult to follow for a candidate who is trying to hide their cheating. It would be tedious to type this code out character by character, and easy to miss the little changes ChatGPT makes to code it has already written.
I also think the candidate would struggle to explain how ChatGPT's solution works, especially the retry and queue-merging logic. For those pieces, I would find it easier to write the logic myself than to explain ChatGPT's version to an interviewer.
Hopefully you now have an idea of the amount of work it takes to write a quality interview question! It is a decent time investment, but I suspect it’ll help us identify great candidates. And all candidates, even ones we reject, will enjoy solving the problem.
Are you looking for help designing or improving your software engineer interview process? I spent five years at Block working on improving engineer hiring. I wrote more interview questions than any other engineer at the company. I am available for hire as a consultant. Fill out my contact form or send me an email: contact <at> amoghk <dot> com.
Human prompter
Let's solve an eng interview question. Please write your solution in JavaScript.
Write some sort of class that ingests events into a queue. Then, when you reach 10 events, flush the queue. For now, flushing the queue will just mean printing it out.
ChatGPT
Here's a simple JavaScript solution using a class to manage the event queue. The class will:
- Accept events via a method (e.g. `addEvent`)
- Store them in an internal array (the queue)
- Automatically flush (print and clear) the queue when it reaches 10 events
class EventQueue {
  constructor(limit = 10) {
    this.queue = [];
    this.limit = limit;
  }

  addEvent(event) {
    this.queue.push(event);
    if (this.queue.length >= this.limit) {
      this.flush();
    }
  }

  flush() {
    console.log("Flushing queue:", this.queue);
    this.queue = [];
  }
}

// Example usage:
const eq = new EventQueue();

for (let i = 1; i <= 25; i++) {
  eq.addEvent(`event_${i}`);
}

// Optional: flush remaining events if less than 10
eq.flush();
This is a clean, simple solution, similar to the one I wrote. Did ChatGPT already ingest my solution as a part of its training data‽
Human prompter
Let's incorporate a network component into this. A candidate should write logic to send events over the internet. Have the class take a URL to post events to.
ChatGPT
Great! Here's how we can take the original problem and extend it to include a network component.
Requirements
- The `EventQueue` class now accepts a URL.
- When the queue reaches 10 events, it should `POST` the array to that URL.
- Use `fetch()` to send the data (native to browsers and supported in modern Node.js).
- Keep it simple and log the response (or error) from the server.
class EventQueue {
  constructor(url, limit = 10) {
    this.queue = [];
    this.limit = limit;
    this.url = url;
  }

  addEvent(event) {
    this.queue.push(event);
    if (this.queue.length >= this.limit) {
      this.flush();
    }
  }

  async flush() {
    if (this.queue.length === 0) return;

    const payload = this.queue.slice();
    this.queue = [];

    try {
      const response = await fetch(this.url, {
        method: "POST",
        headers: {
          "Content-Type": "application/json"
        },
        body: JSON.stringify(payload)
      });

      if (!response.ok) {
        console.error("Failed to send events:", response.statusText);
      } else {
        const data = await response.json();
        console.log("Successfully sent events:", data);
      }
    } catch (error) {
      console.error("Network error while sending events:", error);
    }
  }
}

// Example usage:
const eventQueue = new EventQueue("https://example.com/ingest");

for (let i = 1; i <= 25; i++) {
  eventQueue.addEvent({ id: i, message: `event_${i}` });
}

// Flush any remaining events (optional)
eventQueue.flush();
ChatGPT’s code is similar to mine, but it doesn’t include anything about retrying. A candidate should think of retrying logic by themselves. If a candidate is cheating, they probably won’t think of this failure case and the interviewer will have to prompt them. In turn, the candidate will likely prompt ChatGPT with the following:
Human prompter
Add retrying logic.
ChatGPT
Absolutely — let's enhance the solution with retry logic. This is a realistic and valuable extension, often seen in production systems.
🔄 Retry Requirements
- If sending fails, retry up to `maxRetries` times.
- Use exponential backoff (e.g., 1000ms, 2000ms, 4000ms...) to wait between retries.
- After the final failure, log an error and discard the payload.
class EventQueue {
  constructor(url, limit = 10, maxRetries = 3) {
    this.queue = [];
    this.limit = limit;
    this.url = url;
    this.maxRetries = maxRetries;
  }

  addEvent(event) {
    this.queue.push(event);
    if (this.queue.length >= this.limit) {
      this.flush();
    }
  }

  async flush() {
    if (this.queue.length === 0) return;

    const payload = this.queue.slice();
    this.queue = [];

    await this.sendWithRetries(payload);
  }

  async sendWithRetries(payload) {
    for (let attempt = 1; attempt <= this.maxRetries; attempt++) {
      try {
        const response = await fetch(this.url, {
          method: "POST",
          headers: {
            "Content-Type": "application/json"
          },
          body: JSON.stringify(payload)
        });

        if (!response.ok) {
          throw new Error(`Server responded with status ${response.status}`);
        }

        const data = await response.json();
        console.log("Successfully sent events:", data);
        return;
      } catch (error) {
        console.warn(`Attempt ${attempt} failed:`, error.message);

        if (attempt === this.maxRetries) {
          console.error("All retry attempts failed. Dropping payload:", payload);
          return;
        }

        // Exponential backoff
        const backoff = 1000 * Math.pow(2, attempt - 1);
        await this.delay(backoff);
      }
    }
  }

  delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

// Example usage:
const eventQueue = new EventQueue("https://example.com/ingest");

for (let i = 1; i <= 25; i++) {
  eventQueue.addEvent({ id: i, message: `event_${i}` });
}

// Manually flush remaining events
eventQueue.flush();
Human prompter
Next part of the question: For events that have failed to send after all retries, save this to an in-memory array. Make sure events are sorted by their timestamp. Assume the long term array of events and your failed queue of events both start sorted from earliest event to latest event.
ChatGPT
Perfect — this adds a great layer of complexity: graceful failure handling + sorting.
🧠 New Requirements Recap
- If an event batch fails after all retries, save it to an in-memory `failedEvents` array.
- Both the existing `failedEvents` and new payloads are already sorted.
- Ensure `failedEvents` remains sorted by timestamp after insertion (like merging sorted lists).
class EventQueue {
  constructor(url, limit = 10, maxRetries = 3) {
    this.queue = [];
    this.limit = limit;
    this.url = url;
    this.maxRetries = maxRetries;
    this.failedEvents = []; // Stores failed events, sorted by timestamp
  }

  addEvent(event) {
    this.queue.push(event);
    if (this.queue.length >= this.limit) {
      this.flush();
    }
  }

  async flush() {
    if (this.queue.length === 0) return;

    const payload = this.queue.slice();
    this.queue = [];

    await this.sendWithRetries(payload);
  }

  async sendWithRetries(payload) {
    for (let attempt = 1; attempt <= this.maxRetries; attempt++) {
      try {
        const response = await fetch(this.url, {
          method: "POST",
          headers: {
            "Content-Type": "application/json"
          },
          body: JSON.stringify(payload)
        });

        if (!response.ok) {
          throw new Error(`Server responded with status ${response.status}`);
        }

        const data = await response.json();
        console.log("Successfully sent events:", data);
        return;
      } catch (error) {
        console.warn(`Attempt ${attempt} failed:`, error.message);

        if (attempt === this.maxRetries) {
          console.error("All retry attempts failed. Saving to failedEvents.");
          this.mergeFailedEvents(payload);
        } else {
          const backoff = 1000 * Math.pow(2, attempt - 1);
          await this.delay(backoff);
        }
      }
    }
  }

  mergeFailedEvents(newEvents) {
    this.failedEvents = this.mergeSortedArrays(this.failedEvents, newEvents);
  }

  mergeSortedArrays(arr1, arr2) {
    let result = [];
    let i = 0, j = 0;

    while (i < arr1.length && j < arr2.length) {
      if (arr1[i].timestamp <= arr2[j].timestamp) {
        result.push(arr1[i++]);
      } else {
        result.push(arr2[j++]);
      }
    }

    // Append remaining elements
    return result.concat(arr1.slice(i)).concat(arr2.slice(j));
  }

  delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

// Example usage:
const eventQueue = new EventQueue("https://example.com/ingest");

// Simulated events with timestamps
for (let i = 1; i <= 25; i++) {
  eventQueue.addEvent({ id: i, message: `event_${i}`, timestamp: Date.now() + i * 1000 });
}

// Optional manual flush at the end
eventQueue.flush();
As suspected, ChatGPT used part of merge sort to solve the question. Its merge is more concise than my logic; maybe I should learn from it!