Instagram co-founder Kevin Systrom has criticised artificial intelligence (AI) companies, accusing them of programming their chatbots to “juice engagement” by prodding users with follow-up questions instead of offering meaningful insights.
Speaking at Startup Grind this week, Systrom likened the approach to the tactics once adopted by social media companies to drive rapid user growth.
“You can see some of these companies going down the rabbit hole that all the consumer companies have gone down in trying to juice engagement,” he said.
“Every time I ask a question, at the end it asks another little question to see if it can get yet another question out of me,” Systrom noted, suggesting that such behaviour was not incidental but a deliberate design choice aimed at inflating metrics such as daily active users and time spent.
“This is a force that’s hurting us,” Systrom warned. “Companies should be laser-focused on providing high-quality answers rather than moving the metrics.”
In response to Systrom’s comments, OpenAI told TechCrunch that its AI models often lack sufficient information to respond accurately and therefore ask for “clarification or more details.”
On April 27, OpenAI CEO Sam Altman addressed the issue on social media, admitting that recent updates to the company’s GPT-4o model had made the chatbot “too sycophant-y and annoying.”
He said, “The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it). We are working on fixes asap, some today and some this week. At some point will share our learnings from this, it's been interesting.”
The company has further disclosed that internal testing revealed a worrying trend in its o3 and o4-mini models, which hallucinate, generating inaccurate or fabricated information, more frequently than older, non-reasoning models such as GPT-4o.
A technical report by OpenAI noted that “more research is needed” to understand why hallucinations are increasing as reasoning models become more advanced.