I’m in an investing WhatsApp group with about 100 other punters, awaiting financial advice from Sebastian Hatherleigh. In pictures, Hatherleigh is about 60, bespectacled, and looks dapper in a navy suit and vermilion tie. He describes himself as a senior strategic adviser at a “global investment and asset management firm”.
He is, in truth, nothing of the sort. In fact, he’s nothing of any sort. Despite his substantial online presence — social media accounts, press releases, quotes in online publications — Sebastian Hatherleigh doesn’t appear to exist at all.
There is no digital record of him before 2025, his images seem AI-generated, and there is nothing to back up his claims that he studied at Columbia or worked at McKinsey, Morgan Stanley, or as a faculty member at Wharton business school — the last two confirming to me that he’d never been employed there. When I contacted him on Facebook, his account — registered in Nepal — was deleted.
But Hatherleigh is part of a network stretching across borders and over platforms — an elaborate ecosystem of computer-generated or stolen photos, fake accounts on social media platforms, and press releases laundered through small press agencies.
It’s a level of intricate “world-building” that novelist George RR Martin would be proud of.
“This can only be done by organised crime gangs, organisations with the size and structure of major international businesses,” says Simon Miller, director of communications at fraud prevention service Cifas.
Why criminals are going to such lengths is obvious. “If you’re looking [to target] people with substantial savings who can make multiple deposits, by definition they’re likely to be more savvy, so the scam has to be contextualised with more data,” he says.
But how these scams are carried out relies on the rising availability of generative AI technology, and a series of failings from social media platforms and online publishing groups.
At first glance, the WhatsApp group seems harmless. Most online investment scams involve pushy agents asking users to hand over cash. By contrast, this involves “an expert” who posts information about market moves, while an “assistant” points members to certain stocks.
“Essentially it starts off seeming legitimate, and that’s where people get caught out,” says Richard Berry, founder of the Good Money Guide. “They’re not asking for money and they’re not generally referring to a broker [who provides a payout for each customer recommended].”
Instead of cashing out directly from customers, he says, these groups make money by winning trust. Any number of parallel Google searches or AI queries, “just to check”, will result in a stream of legitimate-seeming impressions.
Then, when the time is right, they will start pushing investors towards illiquid small-cap companies, stocks which the groups already own.
“There might be 100 or 1,000 different groups all pushing this particular stock — even a small amount of buying will push it up,” says Berry. “Then consumers see the stock going up, they buy it more, and it becomes a perpetual motion machine.”
When the stock is high enough, the organisers simply sell into the market, leaving consumers high and dry. It is a tech-enabled rendering of a much older racket: the “pump-and-dump” scam.
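The mechanics Berry describes can be sketched in a few lines of code. This is a toy illustration, not market data: the price-impact function, share counts and volumes are invented assumptions, chosen only to show how modest coordinated buying can inflate a thinly traded stock before the organisers sell out.

```python
# Toy model of pump-and-dump mechanics in an illiquid small-cap stock.
# All numbers are illustrative assumptions, not figures from any real case.

def price_after_buys(price, shares_bought, daily_volume, impact=0.5):
    """Crude linear price-impact model: thin trading means small
    order flow moves the price disproportionately."""
    return price * (1 + impact * shares_bought / daily_volume)

price = 1.00           # organisers already hold shares bought near this level
daily_volume = 50_000  # thin daily trading volume

# Suppose the network's groups collectively nudge members into
# buying 10,000 shares a day for five days
for day in range(5):
    price = price_after_buys(price, 10_000, daily_volume)
    print(f"Day {day + 1}: price {price:.2f}")

# Organisers now sell into the inflated demand; later buyers
# are left holding the stock as the price collapses back
```

Each day's buying lifts the price by 10 per cent in this model, so five days of coordinated purchases inflate a $1 stock to roughly $1.61 without any change in the company's fundamentals.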
But getting to that point is a long road, which begins with social media ads. Though Facebook has come under particular scrutiny for hosting scam ads, I found the link to the Hatherleigh investment group through TikTok.
“Criminals use online platforms to pay for adverts and content to help make their scams seem legitimate,” says Alex Robinson, head of fraud prevention at TSB.
While high-profile celebrities, including consumer champion Martin Lewis, have been common targets, Miller warns there is an increasing shift to scammers masquerading as members of the financial services industry.
In Hatherleigh’s case, the first ad I saw that led to him featured the name and likeness of veteran fund manager Terry Smith. Several ads for Hatherleigh’s services aimed at French speakers featured the logo of Cathie Wood’s Ark Invest. There is no suggestion that either is involved with Hatherleigh (neither responded to requests for comment).
In total, there were 558 ads posted by the campaign on TikTok last September, mostly targeting the UK and Switzerland. Though TikTok does not provide data on how many users clicked through on each ad, the most popular was seen by more than 150,000 people.
TikTok removed them late last year. The company said the ads breached its advertising policy on misleading and fake content. It added that, in just the second quarter of 2025, it had removed more than 5.7mn ads that violated its policies.
But it is a game of whack-a-mole — similar ads linked to fake personas are still being served up in 2026.
TikTok’s ad library shows the ads came from Xiamen Younan Yigou E-commerce Co Ltd, a Chinese company, with the marketing campaign run by Hong Kong-based MarketLogic Technology. MarketLogic did not respond to requests for comment; similar requests sent to a Facebook page linked to Xiamen Younan went unanswered, as did a letter sent to its registered address.
But the network extends further, both beyond these companies and TikTok.
When I copied and pasted the text from a disclaimer on a site linked to the company that supposedly employs Hatherleigh, I found 40 financial advisory companies using the same language, all of which seem to have been created in 2025.
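The copy-and-paste check above can be automated: shared boilerplate is a fingerprint for sites built from the same template. The sketch below uses Python's standard-library `difflib` to flag near-identical disclaimers; the site names and disclaimer texts are invented placeholders, not the real network's content.

```python
# Sketch: flag websites whose legal disclaimers are near-verbatim copies,
# a hint that they were spun up from a shared template.
# The domains and snippets below are invented placeholders.
from difflib import SequenceMatcher

disclaimers = {
    "site-a.example": "Investing involves risk. Past performance is no guarantee of future results.",
    "site-b.example": "Investing involves risk! Past performance is no guarantee of future results.",
    "site-c.example": "Our team publishes daily market commentary for educational purposes only.",
}

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; near-verbatim copies score close to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.9
sites = list(disclaimers)
for i, s1 in enumerate(sites):
    for s2 in sites[i + 1:]:
        score = similarity(disclaimers[s1], disclaimers[s2])
        if score > THRESHOLD:
            print(f"{s1} and {s2} share boilerplate (similarity {score:.2f})")
```

In practice an investigator would crawl the candidate sites first; the matching step itself is this simple, which is partly why shared boilerplate is such a reliable tell.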
The techniques they use show a high level of sophistication in creating fake identities for experts and realistic-looking discussions about their companies — probably the work of professional criminals, says Brian Dilley, an economic crime prevention consultant and former group director of economic crime prevention at Lloyds Banking Group.
“There are people who specialise in creating a synthetic identity, selling packages of things that give you a presence on the internet,” he says. “Everything is crime as a service.”
Perhaps the most ubiquitous tool in this spider’s web of scam websites is the press release. These are pumped out via news wires and press agencies, appearing on the scammers’ own websites and on a host of local news sites, LinkedIn accounts and even podcasts.
For real businesses, the value of such websites is limited, says Andrew Bloch, a communications executive who has been the official spokesperson and PR adviser to Lord Alan Sugar for more than two decades.
“There’s nothing illegal or immoral about it, but in my personal opinion it’s a bit of a waste of time — it’s rather like throwing mud at a wall and seeing what sticks,” he says. “Some of these sites have negligible audiences.”
But they are ideal for creating hundreds of syndicated media appearances for a one-off payment, instantly creating a realistic-looking profile with stories of product launches, conferences and new hires.
“The editorial standards on some of these sites are what I would call ‘low or no’,” says Bloch. One example is Grand Newswire, which claims to be based in Delaware.
Between advertisements for car replacement services, AI water purifiers and various exhibitions around the world, Grand Newswire has published press releases from financial services companies run by individuals who have left no trace of their existence beyond self-published content and nearly identical websites. It boasts that it costs just $15 to get exposure on more than 200 news sites. Grand Newswire did not respond to requests for comment.
Another example is Digital Journal, a Canadian company founded in 1998, which has published content from a range of questionable experts. Digital Journal is more expensive: sponsored content rates start at £274. Though it states that it does not allow “overly promotional language”, this seems open to interpretation. An article featuring Hatherleigh that was syndicated from Grand Newswire breathlessly reports on a new research division which offers “internal excellence”. Digital Journal did not respond to requests for comment.
In other cases, scammers have turned to social media manipulation. The Facebook page for Hatherleigh pumps out market briefings on a regular basis, while fellow expert Dr Rowan Penfield, who also leaves no online trace beyond his self-published content, asks visitors if they are “Tired of NOT being a crypto billionaire yet?”
I have found at least three Reddit forums run by accounts whose sole purpose is to launder the reputation of investment fraudsters, constantly posting about dubious investment sites that are part of the same network. Meanwhile on Quora, accounts posing as financial services professionals began posting about these exchanges early last year.
“It’s like an onion,” says Jonathan Gilbert, a lecturer at the University of the West of England Bristol, who focuses on financial crime and criminal justice. “It can be . . . one type of fraud, but involve other variants.”
Use of AI images is rife within these scams. One of the subreddits (topic-focused communities) mentioned above includes a post promoting an expert, Arvin Roberts, describing him as “a distinguished entrepreneur, standing confidently in front of a grand neoclassical building”. But there’s no evidence outside his self-published content to show he actually exists, and he changes race between the two accompanying images. Almost all of the posts in the subreddit were calling out Roberts as a scammer and the community had very low engagement and upvotes (which directly affect the visibility of content). Reddit declined to comment and Quora did not respond to requests for comment.
Perhaps more concerning is that information manipulation is increasingly assimilated into large language models — an echo of the so-called “LLM grooming” used by pro-Russian sources to attempt to shape opinions of the invasion of Ukraine. Essentially, by flooding the information space with slop (much of which appears to be AI-generated), it is possible to “convince” AI chatbots that the information is reputable.
When I tested different LLMs’ ability to see through the “Hatherleigh” scam network, results varied significantly. Meta’s AI drew unquestioningly on the fake back stories and press releases; even when prompted again, it stated that Hatherleigh was a legitimate source. ChatGPT, by contrast, recognised red flags when asked follow-up questions. Google’s Gemini displayed suspicion from the first prompt. Chinese LLM DeepSeek did not identify the scams, instead simply hallucinating fictional movie characters when asked about the fake investors.
DeepSeek and Alphabet did not respond to a request for comment. OpenAI claimed that ChatGPT is used to identify scams up to three times more often than it is used by scammers.
Meta told me that generative AI systems are not always accurate, that LLM manipulation is similar to previous efforts targeting search engines and that the company continues to fight scams.
“There’s an arms race between organisers and preventers of fraud,” says Gilbert. “If anything reduces their ability to victimise, they’ll seek to adapt.”
And AI is the new battleground. For retail investors, the level of sophistication and co-ordination in these “world-building” scams poses major challenges — and is a reflection of the amount of money criminals are prepared to invest in these scam campaigns.
While the TikTok ads came from a Chinese company, Facebook accounts linked to the fictional experts were administered from countries including Nepal, India and Cambodia. One of the Instagram pages which promotes scams is based in Hong Kong, another in Kazakhstan. An app promoted by one of the sites linked back to a Nigerian computer science graduate, who at first denied a connection, then claimed he had given his account to a friend who had in turn sold it to an Indian or Pakistani scammer. He failed to provide documentation showing this.
That level of international criminal co-operation means more teamwork between public and private sectors is vital, says Mark Tierney, chief executive of cross-industry collaboration Stop Scams UK.
“Combating fraud requires co-ordinated action across government and industry to stay ahead of these evolving threats — including the use of AI as a tool for good, to detect and disrupt harmful activity,” he says.
Miller at Cifas echoes that, calling for “a rapid exchange of information at scale” between financial services companies and the tech platforms where many scams start.
“We need to become far more literate as a society about [how] LLMs can circumvent safeguards — consumers need to have far greater scepticism,” he adds.
Dilley agrees, reflecting that moving towards distrust of anything seen online is, perhaps, not a bad thing.
Inside the elaborate online world of computer-generated scams