As OpenAI grappled with a leadership crisis, users of the startup’s chatbot faced an unexpected challenge: a perceived decline in the performance of ChatGPT, which had just turned a year old.
Numerous entrepreneurs, tech executives, and professionals have complained that OpenAI’s advanced language models are reluctant to answer certain prompts. Instead of completing tasks, the models offer instructions, nudging users to do the work themselves.
For instance, when startup founder Matthew Wensing asked GPT-4 to compile a list of upcoming calendar dates, it suggested he try an alternative tool. In another instance, Wensing asked the chatbot to generate around 50 lines of code; it offered a few examples and proposed he use them as a template to finish the job without AI assistance.
Expressing his frustration, Wensing noted that “GPT has definitely gotten more resistant to doing tedious work.” This sentiment echoes similar experiences shared by users across the internet.
The extent of these issues remains unclear, but some can be attributed to a known software bug, which two OpenAI employees have acknowledged in public statements. The company itself has not responded to requests for comment.
ChatGPT functions as a black box, hiding its inner workings from users. That opacity may intrigue researchers and amuse casual users, but it poses problems for anyone trying to build consistent, reliable workflows on top of the AI.
OpenAI regularly announces new features but does not disclose the ongoing adjustments it makes to its models. Those adjustments can significantly change behavior: earlier this year, subtle tweaks led users to perceive ChatGPT as losing capabilities and becoming “dumber” over time, even though its underlying abilities had not degraded.
The recent accusations of “laziness” underscore a crucial distinction between capabilities and behavior. Although ChatGPT retains the ability to perform tasks, the results vary based on the prompts used. Some users have found that altering the approach to a task can overcome the perceived lethargy of the chatbot.
Discovering effective prompting strategies, however, takes time and experimentation. For some users, that recurring effort may not seem justified, particularly for subscribers already paying $20 a month for OpenAI’s premium service, ChatGPT Plus.
Another aspect to consider is the substantial cost of running ChatGPT for OpenAI. In February, research firm SemiAnalysis estimated the daily cost at nearly $700,000 before the release of advanced models like GPT-4 and GPT-4 Turbo.
As the chatbot appears “lazier,” OpenAI potentially saves money and eases the strain on its systems, a challenge the company has openly acknowledged. Two weeks ago, OpenAI temporarily halted new sign-ups for ChatGPT Plus after CEO Sam Altman said demand had exceeded the company’s capacity.
There is no evidence that the current model issues stem from a deliberate corporate strategy, but the reality is that AI’s limits today are defined not only by its technological capabilities but also by the substantial resources required to keep it running.
Software Engineer | AI Tech Enthusiast | Content Writer