Governments are increasingly concerned about realistic AI-generated images, audio, and video. Such deepfakes spread misinformation and, in some cases, threaten national security by eroding public trust in institutions and elections and by inciting political violence. Meanwhile, the general public and digital platform users often cannot distinguish AI-generated content from real content, fuelling misinformation, polarisation, and the commodification of private data by large tech companies. The rapid spread of deepfakes therefore demands action to prepare for this new reality.

 

Relying on tech companies to mitigate misinformation is problematic. Because deepfake tools are widely available from numerous providers, any single company that restricts such content risks losing users, and therefore revenue, to less restrictive competitors. Moreover, these companies often profit unintentionally from advertisements placed on misinformation websites. Hence, there is a pressing need for an outside authority to impose regulations and strategies that curb the proliferation of online misinformation.

Challenges of Online Misinformation

Deepfakes are AI-generated materials that appear realistic and are designed to make it seem as if a person made a statement or performed an action they never did, often without their consent, raising legal and security risks. For instance, AI-generated videos or images showing a candidate making a false statement, endorsing a controversial policy, or appearing alongside a problematic politician could be believed by voters and sway electoral results. Such misleading materials surface during campaigns to bolster or damage a candidate’s reputation and credibility, polarising electorates for or against them.

 

Similarly, deepfakes distort publicly available information about government operations. Fabricated AI-generated materials can misrepresent the reasoning behind government decisions, prompting citizens to accuse the government of performing poorly or making bad policy choices. A deepfake could alter a government official’s speech or an official record, misleading the public’s understanding of, and support for, a decision. This undermines citizens’ trust in government, disturbs the decision-making process, weakens policy effectiveness, and can lead to social instability, such as protests.

 

Moreover, the public is alarmed by AI-generated deepfake pornography. This content proliferates on adult websites and social media and, in many cases, involves child exploitation, raising serious legal and ethical concerns about the violation of private data, identity theft, non-consensual creation, and defamation. It has been used to target private individuals and political figures, and for blackmail and revenge. Websites can now regenerate people’s images from stored data to such an extent that the spread of this content is difficult to control.

 

The rapid development of AI deepfakes, and the misinformation they proliferate at the expense of individuals and governments alike, is becoming a new reality. The first place to look for mitigation is the tech companies that produce and host online information. Yet their efforts are hampered by limited incentives to cooperate, rooted in the profit-misinformation chain. As a result, policymakers are seen as the key actors to develop mandatory strategies that mitigate deepfake risks while adapting to this reality.

The Profit-Misinformation Nexus

This raises the question of how regulating misinformation relates to tech companies’ profit. Companies and digital platforms unintentionally finance misinformation websites by placing advertisements on them: programmatic advertising platforms distribute the advertisements automatically across many websites, which may include misinformation sites. Both the companies and the misinformation outlets profit considerably from this dynamic, revealing a strong link between online misinformation and advertising revenue, and explaining why these companies lack the incentive to mitigate misinformation: doing so risks losing money.

 

In addition, tech companies benefit greatly from how virally fake content can spread, and some intentionally circulate it. The companies that manage social media platforms, news corporations, influencers, and opinion leaders all depend on high user engagement regardless of the trustworthiness of the information provided. Furthermore, most users reshare content they relate to without critically evaluating its credibility. Tech companies are therefore reluctant to curb online misinformation or provide fact-checking tools, as they profit from it financially.

 

Another perspective concentrates on competition between digital platforms. Content moderation differs from one platform to another, and how far a platform will go in moderating misinformation is tied to this competition: users may switch platforms if they feel the current platform’s moderation rules limit their access to, or expression about, content they consider valuable. The risk of losing users, and thus profit, financially discourages platforms from enforcing moderation policies.

 

Accordingly, the correlation between profit and misinformation explains why tech companies and online platforms lack the incentive to regulate misinformation voluntarily, leaving a regulatory gap for policymakers to fill with enforced mitigation strategies. Rapid AI development, and the deepfakes generated by its misuse, heightens the need for strict policies that condition tech companies’ and digital platforms’ ability to operate in a country on adopting such changes.

Fighting Deepfakes: Could Policymakers Do It?

Policymakers have the capacity to mitigate online misinformation through several interconnected strategies. First, labelling deepfakes is a priority for transparency: online platforms would be obliged to display watermarks and digital labels that explicitly indicate synthetic or modified content. Once unlabelled AI-generated content is detected, tech companies and social media platforms should be responsible for reporting and removing it. Furthermore, mandatory transparency programs could require digital advertising platforms to label misinformation sites and give advertisers the ability to choose which websites carry their advertisements.
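As a rough illustration of what a digital label could look like technically, the sketch below binds a machine-readable "synthetic content" disclosure to a file by hashing its bytes, so a platform can check that the label still matches the content it was issued for. The field names and the `example-model-v1` generator are hypothetical; real provenance schemes (e.g. cryptographically signed content credentials) are far more elaborate.

```python
import hashlib

def make_label(media_bytes: bytes, generator: str) -> dict:
    # Hypothetical minimal "synthetic content" label: binds the
    # disclosure to the exact bytes of the file via a SHA-256 digest.
    return {
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }

def verify_label(media_bytes: bytes, label: dict) -> bool:
    # The label is only valid for the unmodified file it describes.
    return label.get("sha256") == hashlib.sha256(media_bytes).hexdigest()

media = b"...illustrative media bytes..."
label = make_label(media, "example-model-v1")  # hypothetical generator name
print(verify_label(media, label))         # True
print(verify_label(media + b"x", label))  # False: file was altered
```

One design consequence: because the label is tied to the file's hash, any re-encoding or edit invalidates it, which is precisely why unlabelled derivatives would still need platform-side detection and removal.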

 

Second, social media platforms and websites process personal data to enhance user experience, but deepfakes proliferate through the unauthorised use of this data, and companies sell it to third parties, so policymakers’ intervention becomes essential to safeguard personal data. Tech companies would be obliged to present a simple privacy policy document, free of dense legal and technical language, that the general public can understand well enough to give informed consent. Along with that, data collection should be optional and permissions more flexible, with platforms providing a simple way to opt out of data collection at any time.

 

Third, there is a pressing legal gap in addressing deepfake-related crimes, with loopholes that tech companies and online platforms can exploit. This calls for new legislation that specifically addresses misinformation, with clear and accurate definitions of what constitutes a deepfake, private data, synthetic consent, and defamation. At the same time, amendments to existing national laws should incorporate provisions tailored to deepfake-related violations, including expanded defamation, privacy, and intellectual property laws. In general, regular updates to the legal framework are crucial, as existing laws will inevitably fall behind rapid AI development.

 

Fourth, individuals’ growing ability to create convincing deepfakes challenges judicial systems to detect fake content. Collaboration between governments and tech companies on cybersecurity strategies to detect the more sophisticated deepfakes would therefore be highly beneficial. Following the path of “weaponization for defence,” professionals could create their own deepfakes to study how they operate and how to spot them, and so develop more precise detection tools. This cooperation could also extend to establishing guidelines and accountability measures for using AI to detect and remove misinformation and fake websites.
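The "weaponization for defence" idea can be sketched with a toy, non-realistic stand-in: generate your own fakes, measure the statistical artefact they carry, calibrate a detector on those self-made fakes, and only then test it on unseen ones. Everything here is illustrative; real deepfake detectors are learned models over images or audio, and the "variance artefact" below is an invented stand-in for generator fingerprints.

```python
import random
import statistics

random.seed(0)

# Toy stand-in for real vs. generated media: sequences of numbers,
# where the hypothetical generator leaves a slight statistical
# artefact (compressed variance) in its output.
def real_sample():
    return [random.gauss(0.0, 1.0) for _ in range(200)]

def fake_sample():
    return [random.gauss(0.0, 0.8) for _ in range(200)]  # invented artefact

# "Create our own deepfakes to study how they operate":
# measure the artefact on self-made fakes and calibrate a threshold.
train_fakes = [statistics.stdev(fake_sample()) for _ in range(50)]
train_reals = [statistics.stdev(real_sample()) for _ in range(50)]
threshold = (statistics.mean(train_fakes) + statistics.mean(train_reals)) / 2

def looks_fake(sample):
    # Flag samples whose spread falls below the calibrated threshold.
    return statistics.stdev(sample) < threshold

# Then evaluate on fresh, unseen fakes the detector never trained on.
hits = sum(looks_fake(fake_sample()) for _ in range(100))
print(f"{hits}/100 unseen fakes flagged")
```

The point the sketch makes is procedural, not statistical: defenders who control a generator can calibrate detection before adversarial content appears in the wild, which is exactly the rationale behind building deepfakes in order to fight them.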

 

Finally, governments could launch awareness campaigns via media channels and social media platforms to strengthen the public’s digital literacy and responsible digital behaviour by promoting tools and methods for detecting fake news. Media literacy programs could also be incorporated into national education systems and the agendas of civil society organizations working with young people and students. Encouraging people to fact-check information rather than believe it immediately will help suppress the rapid spread of fake material, especially when it concerns a high-profile figure or arrives at a sensitive moment for a nation.

 

In sum, generative AI material is spreading rapidly, carrying online misinformation that threatens both governments and the public. Tech companies face fierce competition that makes cracking down on deepfakes a risk of losing users, while flaws in automated advertising systems deepen the profit-misinformation nexus. This new reality is hard to ignore, and the technological developments behind it cannot be stopped; intervention by regulatory bodies is therefore the optimal way to create an environment resilient to the digital reality. Policymakers should keep in mind that effective application of these safeguarding strategies requires international coordination to ensure integrated legal and ethical frameworks across digital ecosystems.

References

Ahmad, Wajeeha, et al. (2024). Companies inadvertently fund online misinformation despite consumer backlash. Nature 630: 123–131. https://doi.org/10.1038/s41586-024-07404-1

 

Electronic Privacy Information Center (EPIC). “Social Media Privacy.” Accessed November 18, 2025. https://epic.org/issues/consumer-privacy/social-media-privacy/

 

Furizal, et al. (2025). “Social, legal, and ethical implications of AI-Generated deepfake pornography on digital platforms: A systematic literature review.” Social Sciences & Humanities Open 12:1-25. https://doi.org/10.1016/j.ssaho.2025.101882

 

Hanlon, Annmarie, and Karen Jones. (2023). “Ethical concerns about social media privacy policies: do users have the ability to comprehend their consent actions?” Journal of Strategic Marketing 1-18. https://doi.org/10.1080/0965254X.2023.2232817

 

Hogan, Megan. Replicating Reality: Advantages and Limitations of Weaponized Deepfake Technology. Brief No. 12.4, PIPS (Project on International Peace & Security), Global Research Institute, College of William & Mary, April 2020. https://www.wm.edu/offices/global-research/research/pips/white_papers/2019-2020/hogan-final.pdf

 

Hynek, Nik, et al. (2025). Risks and benefits of artificial intelligence deepfakes: Systematic review and comparison of public attitudes in seven European Countries. Journal of Innovation & Knowledge, 10(5):1-19. https://doi.org/10.1016/j.jik.2025.100782

 

Krasadakis, George. “Fake News and Misinformation: How digital tech can help.” The Innovation Mode. October 28, 2025, Accessed November 18, 2025. https://www.theinnovationmode.com/the-innovation-blog/misinformation-online-a-solution-powered-by-state-of-the-art-tech

 

Osano. “Data Privacy Laws: What You Need to Know in 2025.” September 26, 2025, Accessed November 18, 2025. https://www.osano.com/articles/data-privacy-laws#:~:text=The%20CDPA%20requires%20companies%20covered,data%20processing%2C%20among%20other%20requirements.

 

Vishnu, S. (2023). Deceptive realities: Deepfakes and the battle for privacy. ICREP Journal of Interdisciplinary Studies, 2(2). http://dx.doi.org/10.2139/ssrn.5073233

 

Sasaki, So, and Cedric Langbort. (2024). Misinformation Regulation in the Presence of Competition Between Social Media Platforms. IEEE Transactions on Control of Network Systems 12(1): 1080-1090. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10607922

 

Sciences Po. “Combating Misinformation on Social Media.” Understanding Our Times. February 26, 2025. https://www.calameo.com/sciencespo/read/004160454741ba87f6772

 

Taylor, Josh. “Tech companies consider giving up efforts to combat misinformation online in Australia.” The Guardian. October 1, 2025, Accessed November 17, 2025. https://www.theguardian.com/media/2025/oct/01/tech-companies-consider-giving-up-efforts-to-combat-misinformation-online-in-australia

 

Wang, Xinzhi. (2025). The Impact of Deepfake on Government Transparency from a Policy Perspective. Modern Economics & Management Forum, 6(2): 145-149. https://en.front-sci.com/index.php/memf/article/view/3805/4110

 

Weiner, Daniel I. and Lawrence Norden. “Regulating AI Deepfakes and Synthetic Media in the Political Arena.” Brennan Center for Justice. December 5, 2023, Accessed November 6, 2025. https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena

 

Villasenor, John. “Artificial intelligence, deepfakes, and the uncertain future of truth.” Brookings. February 14, 2019, Accessed November 03, 2025. https://www.brookings.edu/articles/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/#:~:text=Deepfakes%20are%20videos%20that%20have,real%20and%20what%20is%20not.
