A highly realistic AI-generated image depicting an explosion near the Pentagon went viral on Twitter, leading to a brief downturn in the stock market. The fake image, which gained traction through verified accounts including Russian state media and a bogus Bloomberg account, appeared authentic at first glance but contained subtle signs of AI generation that exposed it as a hoax.
The incident raises concerns about Twitter’s pay-to-be-verified system, highlighting the potential risks of trusting accounts solely based on the presence of a blue checkmark.
The Arlington Fire and EMS Department confirmed in a tweet, later retweeted by the Pentagon Force Protection Agency, that the image was fake. Both organizations emphasized that there was no explosion or imminent danger near the Pentagon and labeled the image misinformation.
A spokesperson from the U.S. Department of Defense echoed this sentiment, denouncing the AI-generated image.
Following a similar episode in November, when a verified account impersonating the drugmaker Eli Lilly sent the company's stock tumbling, Twitter temporarily halted its paid verification system, Twitter Blue, which lets users pay a fee for a blue checkmark and the verification status it confers.
The incident prompted Twitter to take action against such pranks and hoaxes, recognizing the potential consequences of misleading information spreading through verified accounts.
Since its global launch in March 2023, the Twitter Blue premium subscription service has offered account verification for a monthly fee of $8, along with additional features such as reduced ads, enhanced visibility in conversations and searches, and the ability to edit tweets.
The recent incident underscores the challenge of designing verification systems that build user trust while also combating the spread of false information on social media platforms.