Analysis of the Impact of Artificial Intelligence-Based Fake Visual Content on Public Trust
Keywords:
Artificial Intelligence, Fake Visual Content, Deepfakes, Public Trust, Digital Literacy
Abstract
The development of Artificial Intelligence (AI) technology has brought significant changes to the creation and distribution of digital visual content. AI's ability to generate highly realistic images and videos has given rise to the phenomenon of fake visual content, such as deepfakes, which can mislead the public. This study aims to analyze the impact of AI-based fake visual content on public trust in digital information. The research method was a qualitative literature review of national and international scientific journals relevant to AI, visual disinformation, and public trust. The results indicate that the growing spread of AI-based fake visual content contributes to a decline in public trust in digital media, heightens public skepticism, and triggers a crisis of information credibility. Low levels of digital literacy and AI literacy are the key factors exacerbating these impacts. This study therefore recommends improving digital literacy, strengthening regulations governing the use of AI, and enforcing ethical principles in technology use in order to maintain and restore public trust.