Manipulation of Artificial Intelligence in Image-Based Data: Adversarial Examples Techniques

Emsal Aynaci Altinay, Utku Kose (http://orcid.org/0000-0002-9652-6415)

Abstract

Artificial intelligence systems are widely used in all fields of life. While artificial intelligence solutions have achieved phenomenal success, there is also a dangerous side: efforts to design attack techniques against artificial intelligence and its sub-field, machine learning. Through such techniques, intelligent systems can be fooled into producing misclassified outputs. While artificial intelligence builds the future of humanity on intelligent systems, it also raises concerns for that future, since applications ranging from self-driving cars and disease detection to security will be carried out by autonomous intelligent systems without human involvement. Building on this background, in this thesis-based study, artificial intelligence is manipulated by applying adversarial example techniques to image-based data. Adversarial examples are inputs crafted to deceive a machine learning model about the target problem, for instance an image altered by a small, carefully chosen perturbation so that a classifier mislabels it, resulting in a failed or maliciously exploitable intelligent system. Machine learning models are not robust to adversarial examples. This study shows how artificial intelligence systems are deceived by applying current adversarial example techniques, and the obtained results show that the applied techniques provide sufficiently high attack success rates.
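To make the idea concrete, the sketch below illustrates one widely used technique of the kind the abstract refers to, the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015). It is a minimal illustration under stated assumptions, not the authors' exact implementation: the classifier "model", the image batch "x", and the true labels "y" are assumed to be given as PyTorch objects, and "epsilon" controls the perturbation size.

# Minimal FGSM sketch (after Goodfellow et al., 2015): nudge each pixel in
# the direction that increases the classification loss. Hypothetical setup:
# "model" is a PyTorch image classifier returning logits, "x" is an image
# batch with pixel values in [0, 1], and "y" holds the true class indices.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    model.eval()                                    # inference mode (no dropout, etc.)
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)         # loss w.r.t. the true labels
    loss.backward()                                 # gradient of the loss w.r.t. pixels
    # Move each pixel by epsilon in the sign of its gradient, then clamp the
    # result back to the valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

Even a perturbation this simple, with an epsilon small enough to be visually imperceptible, is typically enough to flip the predictions of an undefended classifier, which is the vulnerability such studies exploit.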

Article Details

How to Cite
AYNACI ALTINAY, Emsal; KOSE, Utku. Manipulation of Artificial Intelligence in Image-Based Data: Adversarial Examples Techniques. Journal of Multidisciplinary Developments, [S.l.], v. 6, n. 1, p. 8-17, July 2021. ISSN 2564-6095. Available at: <http://www.jomude.com/index.php/jomude/article/view/88>. Date accessed: 21 Jan. 2025.
Section
Natural Sciences - Regular Research Paper
