Dr. Joel Bervell, a physician known to his hundreds of thousands of followers on social media as the “Medical Mythbuster,” has built a reputation for debunking false health claims online.
Earlier this year, some of those followers alerted him to a video on another account featuring a man who looked exactly like him. The face was his. The voice was not.
“I just felt mostly scared,” Bervell told CBS News. “It looked like me. It didn’t sound like me… but it was promoting a product that I’d never promoted in the past, in a voice that wasn’t mine.”
It was a deepfake, one example of a growing category of content featuring fabricated medical professionals that is reaching an ever-larger audience, according to cybersecurity experts. The video with Bervell's likeness appeared on multiple platforms, including TikTok, Instagram, Facebook and YouTube, he said.
A CBS News investigation over the past month found dozens of accounts and more than 100 videos across social media sites in which fictitious doctors, some using the identities of real physicians, gave advice or tried to sell products, primarily related to beauty, wellness and weight loss. Most were found on TikTok and Instagram, and some were viewed millions of times.
Most videos reviewed by CBS News promoted products for sale, either through independent websites or well-known online marketplaces. They often made bold claims; one video touted a product as "96% more effective than Ozempic."
Cybersecurity company ESET also recently investigated this kind of content. It spotted more than 20 accounts on TikTok and Instagram using AI-generated doctors to push products, according to Martina López, a security researcher at ESET.
“Whether it’s due to some videos going viral or accounts gaining more followers, this type of content is reaching an increasingly wider audience,” she said.
CBS News contacted TikTok and Meta, the parent company of Instagram, to get clarity on their policies. Both companies removed videos flagged by CBS News, saying they violated platform policies. CBS News also reached out to YouTube, which said its privacy request process “allows users to request the removal of AI-generated content that realistically simulates them without their permission.”
YouTube said the videos provided by CBS News didn’t violate its Community Guidelines and would remain on the platform. “Our policies prohibit content that poses a serious risk of egregious harm by spreading medical misinformation that contradicts local health authority (LHA) guidance about specific health conditions and substances,” YouTube said.
TikTok said that between January and March, it proactively removed more than 94% of content that violated its policies on AI-generated content.
After CBS News contacted Meta, the company said it removed videos that violated its Advertising Standards and restricted others that violated its Health and Wellness policies, making them visible only to users 18 and older.
Meta also said bad actors constantly evolve their tactics to attempt to evade enforcement.
Scammers are using readily available AI tools to significantly improve the quality of their content, and because people often watch on small screens, visual inconsistencies are harder to spot, said Tony Anscombe, ESET's chief security evangelist.
ESET said there are some red flags that can help someone detect AI-generated content, including glitches like flickering, blurred edges or strange distortions around a person’s face. Beyond the visuals, a voice that sounds robotic or lacks natural human emotion is a possible indicator of AI.
Finally, viewers should be skeptical of the message itself and question overblown claims like “miracle cures” or “guaranteed results,” which are common tactics in digital scams, Anscombe said.
“Trust nothing, verify everything,” Anscombe said. “So if you see something and it’s claiming that, you know, there’s this miracle cure and this miracle cure comes from X, go and check X out … and do it independently. Don’t follow links. Actually go and browse for it, search for it and verify yourself.”
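For readers who want a concrete sense of how the flickering cue might be checked, here is a minimal sketch, assuming Python with the OpenCV and NumPy libraries installed. It is a hypothetical illustration, not a method used by ESET or any detection vendor: it flags frames whose overall brightness jumps sharply from the previous frame, a crude proxy for flicker, and the file name and threshold are placeholders.

```python
# Hypothetical sketch: flag abrupt frame-to-frame brightness jumps,
# a crude proxy for the "flickering" artifact sometimes seen in
# AI-generated video. Requires OpenCV (cv2) and NumPy; the threshold
# is illustrative, not a validated value.
import cv2
import numpy as np

def flagged_frames(path: str, threshold: float = 12.0) -> list[int]:
    """Return indices of frames whose mean absolute grayscale
    difference from the previous frame exceeds `threshold`."""
    cap = cv2.VideoCapture(path)
    prev, flagged, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None and float(np.mean(cv2.absdiff(gray, prev))) > threshold:
            flagged.append(idx)
        prev, idx = gray, idx + 1
    cap.release()
    return flagged

if __name__ == "__main__":
    hits = flagged_frames("clip.mp4")  # placeholder file name
    print(f"{len(hits)} abrupt frame transitions flagged")
```

A heuristic this simple will also fire on ordinary scene cuts and fast camera motion, which is why the visual cues are best treated as prompts for the independent verification Anscombe describes, not as proof on their own.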
Bervell said the deepfake videos featuring his likeness were taken down after he asked his followers to help report them.
He also said he’s concerned videos like these will undermine public trust in medicine.
“When we have fiction out there, we have what are thought to be experts in a field saying something that may not be true,” he said. “That distorts what fact is, and makes it harder for the public to believe anything that comes out of science, from a doctor, from the health care system overall.”