AI Models Vulnerable To Reputation Sabotage, Study Finds
Digital agency Reboot Online has released findings from an experiment suggesting that Generative Engine Optimisation (GEO) can be used to manipulate AI responses. The research highlights an emerging risk that "black hat" GEO tactics could be used to damage corporate reputations across the North's growing digital economy.
The study tested whether Large Language Models (LLMs), the technology behind AI chatbots, could be induced to surface false information when unsubstantiated claims were planted across third-party websites. Researchers created a fictional persona with no prior digital footprint and monitored how 11 different AI models responded to the "seeded" false data.
The results showed that while the majority of models ignored the false data, some platforms, including Perplexity and OpenAI's ChatGPT, did cite the test sites. Perplexity incorporated the claims with cautious phrasing, while ChatGPT showed greater scepticism, often questioning the credibility of the sources.
The findings are particularly relevant for the North of England's professional services and tech hubs in Leeds, Manchester, and Newcastle. As Northern businesses move away from traditional search engines toward AI-driven "answer engines," the susceptibility of these models to coordinated seeding of unsubstantiated claims presents a new challenge for regional corporate communications and brand protection.
The experiment confirms that negative GEO is possible and that at least some AI models can be influenced to surface false or damaging claims under specific conditions. In practice, however, long-term AI visibility continues to be shaped by authority, corroboration and trust rather than by isolated or low-quality tactics.