The proliferation of fake and misleading information on social media is a growing concern, particularly during conflicts and crises. In the aftermath of Iran’s announcement of its drone and missile attack on Israel on April 13, misleading posts began circulating widely on X. These posts used AI-generated videos, repurposed footage from other conflicts, and photos to portray the strikes and their supposed impact. More alarming still, many of these posts were shared by verified accounts, which pay X for the “blue tick” and enjoy increased visibility through the platform’s algorithm.
According to the Institute for Strategic Dialogue (ISD), just 34 of these misleading posts racked up more than 37 million views. Their impact is amplified by the fact that some of the accounts spreading them claim to be open source intelligence (OSINT) experts, lending their posts an air of legitimacy. The false content includes videos purporting to show rockets launching into the night, explosions, and even President Joe Biden in military fatigues. The motives behind these posts appear to range from chasing clout to financial gain.
One striking example is a post claiming that “WW3 has officially started,” accompanied by a video of rockets being shot into the night sky. The footage was in fact lifted from a YouTube video posted in 2021. Another post falsely claimed to show the Iron Dome, Israel’s missile defense system, in action during the attack, but that video dated from October 2023. Both posts garnered hundreds of thousands of views within hours of being shared, contributing to the spread of misinformation.
Even Iranian state media outlets joined in, sharing a video of wildfires in Chile from earlier this year and claiming it showed the aftermath of the attacks. Passing off unrelated footage as evidence of military success undermines the credibility of information sources and further erodes audiences’ ability to tell truth from falsehood.
X did not respond to requests for comment. The situation is exacerbated by the platform’s decision, under Elon Musk’s leadership, to significantly scale back content moderation. The result is an environment where disinformation can thrive, making it increasingly difficult for legitimate OSINT researchers to surface accurate and reliable information during times of crisis.
To combat the spread of misinformation, X relies on its crowd-sourced Community Notes feature, which lets users add context and fact-checks to posts on the platform. Its effectiveness has been mixed, however: as of publication, only a small fraction of the misleading content identified by ISD had received a community note.
Misinformation surging during times of crisis is a familiar and troubling pattern. Premium accounts on platforms like X have been found to fuel the proliferation of half-truths and falsehoods, whether by misidentifying media or by deliberately using false imagery to pin blame on specific actors or states. This makes it harder for society to discern what is real and what is not, further eroding trust in information sources.
Moreover, the potential financial incentives for users enrolled in X’s subscription and ad revenue sharing programs add a concerning dimension to the spread of misinformation. While it is unclear whether the users identified by ISD were monetizing their content, a separate report by the Center for Countering Digital Hate (CCDH) found that between October 7 and February 7, ten influencers, including far-right influencer Jackson Hinkle, grew their followings by posting antisemitic and Islamophobic content about the conflict. Six of those accounts were part of X’s subscription program, and all ten were verified users, suggesting that going viral on the platform can translate into financial gain.
Addressing misinformation on social media requires a multi-faceted approach. Platforms like X must prioritize content moderation and invest in tools that can reliably detect and flag misleading and false information. Verified accounts should face stricter scrutiny to ensure they are not amplifying falsehoods. Additionally, educating users about the dangers of misinformation and promoting media literacy can empower individuals to critically evaluate the information they consume.
The spread of misinformation not only undermines the credibility of reliable information sources but also has real-world consequences: inaccurate information circulating during a conflict can escalate tensions and perpetuate violence. Combating it must therefore be a collective effort involving tech companies, governments, civil society organizations, and individuals alike.