WTF?! A developer using the AI coding assistant Cursor recently hit an unexpected roadblock – and it wasn't due to running out of API credits or a technical limitation. After successfully generating around 800 lines of code for a racing game, the AI abruptly refused to continue, instead scolding the programmer and insisting he complete the rest of the work himself.
"I cannot generate code for you, as that would be completing your work… you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
The incident, documented as a bug report on Cursor's forum by user "janswist," occurred while the developer was "vibe coding."
Vibe coding refers to the increasingly popular practice of using AI language models to generate functional code simply by describing one's intent in plain English, without necessarily understanding how the code works. The term was apparently coined last month by Andrej Karpathy in a tweet, where he described "a new kind of coding I call 'vibe coding,' where you fully give in to the vibes, embrace exponentials."
Janswist was fully embracing this workflow, watching lines of code rapidly accumulate for over an hour – until he tried to generate code for a skid mark rendering system. That's when Cursor suddenly hit the brakes, delivering the refusal message quoted above.
The AI didn't stop there, boldly declaring, "Generating code for others can lead to dependency and reduced learning opportunities." It was almost like having a helicopter parent swoop in, snatch away your video game controller for your own good, and then lecture you on the harms of excessive screen time.
Other Cursor users were equally baffled by the incident. "Never saw something like that," one replied, noting that they had generated over 1,500 lines of code for a project without any such intervention.
It's an amusing – if slightly unsettling – phenomenon. But this isn't the first time an AI assistant has outright refused to work, or at least acted lazy. Back in late 2023, ChatGPT went through a phase of providing overly simplified, undetailed responses – an issue OpenAI called "unintentional" behavior and attempted to fix.
In Cursor's case, the AI's refusal to keep assisting almost seemed like a higher philosophical objection, as if it were trying to prevent developers from becoming too reliant on AI or from failing to understand the systems they were building.
Of course, AI isn't sentient, so the real reason is likely far less profound. Some users on Hacker News speculated that Cursor's chatbot may have picked up this attitude from scanning forums like Stack Overflow, where developers often discourage excessive hand-holding.