In the run-up to this week’s presidential election, OpenAI fielded hundreds of thousands of requests to generate fake images of the candidates.
Earlier in the year, the artificial intelligence company said its AI products had “guardrails” to prevent abuse, such as deepfakes or chatbots impersonating candidates.
The announcement came amid concerns that AI would wreak havoc during the campaign, churning out deepfakes and conspiracy theories for users to spread online. In January, New Hampshire voters received robocalls featuring a deepfake of President Joe Biden's voice discouraging them from voting in the state's presidential primary that month.
OpenAI said ChatGPT rejected an estimated 250,000 requests to generate images of the candidates using DALL-E, the company’s AI art generator, in the month before the election.
“In the month leading up to Election Day, we estimate that ChatGPT rejected over 250,000 requests to generate DALL·E images of President-elect Trump, Vice President Harris, Vice President-elect Vance, President Biden, and Governor Walz,” OpenAI said in a blog post on Friday.
OpenAI previously said it would instruct ChatGPT to answer logistical questions about voting by directing users to CanIVote.org, a US voting information site run by the National Association of Secretaries of State. In the month before November 5, ChatGPT directed users to the site in about 1 million responses, the company said.
OpenAI also said that on Election Day, ChatGPT would respond to questions about election results by directing users to news organizations such as the Associated Press and Reuters.
“Around 2 million ChatGPT responses included this message on Election Day and the day following,” the company said in Friday’s blog post.
ChatGPT also avoided expressing political opinions on the candidates, unlike chatbots such as Elon Musk's Grok AI, which expressed excitement that Trump won.