Cursor AI's Own Support Bot Hallucinated Its Usage Policy

In a fitting bit of irony, users of Cursor AI experienced the limitations of AI firsthand when the programming tool's own AI support bot hallucinated a policy limitation that doesn't actually exist.

Users of the Cursor editor, designed to generate and fix source code in response to user prompts, have sometimes been booted from the software when trying to use the app in multiple sessions on different machines.

Some folks who inquired about the inability to maintain multiple logins for the subscription service across different machines received a reply from the company's support email indicating this was expected behavior.

But the person on the other end of that email wasn't a person at all, but an AI support bot. And it evidently made that policy up.

In an effort to placate annoyed users this week, Michael Truell, co-founder of Cursor creator Anysphere, published a note to Reddit to apologize for the snafu.

"Hey! We have no such policy," he wrote. "You're of course free to use Cursor on multiple machines.

"Unfortunately, this is an incorrect response from a front-line AI support bot. We did roll out a change to improve the security of sessions, and we're investigating to see if it caused any problems with session invalidation."

Truell added that Cursor provides an interface for viewing active sessions in its settings and apologized for the confusion.

In a post to the Hacker News discussion of the snafu, Truell again apologized and acknowledged that something had gone wrong.

"We’ve already begun investigating, and some very early results: Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support."

He said the developer who raised this issue had been refunded. The session logout issue, now fixed, appears to have been the result of a race condition that arises on slow connections and spawns unwanted sessions.
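The "race condition on slow connections" described above can be sketched in miniature. This is a hypothetical illustration, not Cursor's actual code: two concurrent login requests both check for an existing session before either has created one, so both create a new session — exactly the check-then-act race that can spawn unwanted sessions when a slow connection widens the window between the check and the write.

```python
import threading
import time

# Hypothetical sketch of a check-then-create session race.
# A shared session store with no locking around check + create.
sessions = []
barrier = threading.Barrier(2)  # make both requests arrive together

def login(user):
    barrier.wait()
    exists = user in sessions   # check: "does a session already exist?"
    time.sleep(0.05)            # slow connection widens the race window
    if not exists:
        sessions.append(user)   # act: both threads reach this branch

threads = [threading.Thread(target=login, args=("alice",)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(sessions))  # prints 2: both requests created a session
```

The usual fix is to make the check and the create a single atomic step, for example by holding a lock (or a database transaction) across both.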


Truell did not immediately respond to our requests for comment.

AI models are well known to hallucinate, generating inaccurate or low-quality responses to input prompts; to users, it appears the software simply invents things out of thin air.

As noted in Nature earlier this year, hallucinations cannot be stopped, though they can be managed. AI model repository HuggingFace documents the phenomenon in its Hallucination Leaderboard, which compares how different AI models perform on different benchmark tests.

Marcus Merrell, principal technical advisor for Sauce Labs, an application testing biz, said more thorough testing of the support bot could have mitigated the risk of misstatements.

"This support bot fell victim to two problems here: Hallucinations, and non-deterministic results," Merrell told The Register.

"We all know about hallucinations, but the non-deterministic piece was at play here, too: if multiple people ask the same question, they're likely to get different results. So some users saw the message about the new policy change, and others didn't. This led to confusion within the company and online, as customers saw inconsistent messaging."

Merrell added, "For a support bot, this is unacceptable. Humans doing support usually have a script and a process. It's possible that the LLM can be refined in a way that mitigates these problems, but companies are racing to roll them out at scale – choosing to save staffing costs – and putting their brand at risk in the process. Letting users know 'this response was generated by AI' is likely to be an inadequate measure to recover user loyalty." ®
