THREAT MODEL OF DECISION SUPPORT SYSTEMS OPERATING ON THE BASIS OF ELEMENTS OF ARTIFICIAL INTELLIGENCE

DOI: 10.31673/2412-4338.2023.033140

Authors

  • Boiko A. O. (Бойко А. О.), State University of Information and Communication Technologies, Kyiv
  • Shulimova D. D. (Шулімова Д. Д.), State University of Information and Communication Technologies, Kyiv

Abstract

The modern development of artificial intelligence (AI) affects many aspects of society's life and activity, including decision support systems. AI is already successfully applied in fields such as business, medicine, automated data processing and many others. However, this development also brings threats and challenges that require careful study and appropriate security measures. This article is devoted to the development and analysis of a threat model for decision support systems that operate on the basis of artificial intelligence elements. The relevance of the research lies in the rapid development of AI technologies and their increasingly widespread use in management, medicine, finance and many other fields. The article describes the structure and main components of the threat model for AI-based decision support systems. An overview of existing attacks on AI is given, covering the training stage, the use of machine learning algorithms, and the information infrastructure of the system. These attacks were classified on the basis of an analysis of the AI operation process, and the attack categories most relevant and dangerous for AI systems were identified. A threat model for a decision support system based on artificial intelligence technologies is proposed; its distinguishing feature is the joint consideration of threats to the confidentiality of AI system data, threats to the functioning of AI systems, and threats to information systems. The model is of practical importance for developers of AI systems and for cyber security specialists, as it contributes to increasing the security and reliability of decision support systems based on artificial intelligence elements in the modern digital environment.

Keywords: artificial intelligence, machine learning, neural networks, human-machine interaction, data visualization, computer security, technology vulnerabilities.
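One of the attack categories surveyed in the article, evasion via adversarial examples, can be illustrated with a minimal sketch: a toy linear classifier and an FGSM-style (fast-gradient-sign) perturbation in pure Python. All weights, values and function names below are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of an evasion (adversarial-example) attack on a toy
# linear classifier. Weights and inputs are made-up illustrative values.

def predict(w, b, x):
    """Linear score; class 1 if score > 0, else class 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, eps):
    """FGSM-style step: shift each feature by eps against the sign of
    its weight, pushing the score toward the decision boundary."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w, b = [0.8, -0.5, 0.3], -0.2       # hypothetical trained weights
x = [0.6, 0.1, 0.4]                 # a sample classified as class 1

assert predict(w, b, x) > 0         # original prediction: class 1
x_adv = fgsm_perturb(w, x, eps=0.3) # small, bounded perturbation
assert predict(w, b, x_adv) <= 0    # perturbed sample is misclassified
```

The same bounded-perturbation idea underlies the road-sign and one-pixel attacks cited in the references; defenses typically constrain or detect such input shifts.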

References
1. Clark G., Doran M., Glisson W. A Malicious Attack on the Machine Learning Policy // 17th IEEE International Conference on Trust, Security and Privacy in Computing and Communications / 12th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE), 2018.
2. Adversarial attacks in machine learning: What they are and how to stop them. URL: https://venturebeat.com/2021/05/29/adversarial-attacks-in-machine-learning-what-the-are-and-how-to-stop-them.
3. Ateniese G., Felici G., Mancini L. V. et al. Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers // International Journal of Security and Networks. 2015. Vol. 10, Issue 3. P. 137-150.
4. How Adversarial Attacks Work. URL: https://medium.com/xix-ai/how-adversarial-attacks-work-87495b81da2d.
5. Artificial intelligence and cybersecurity. URL: https://www.techopedia.com/artificial-intelligence-in-cybersecurity/2/34390.
6. Springer J. M., Mitchell M., Kenyon G. T. A Little Robustness Goes a Long Way: Leveraging Robust Features for Targeted Transfer Attacks // 35th Conference on Neural Information Processing Systems (NeurIPS, 2021).
7. Pang T., Yang X., Dong Y. et al. Accumulative Poisoning Attacks on Real-time Data // 35th Conference on Neural Information Processing Systems (NeurIPS, 2021).
8. How to attack Machine Learning (Evasion, Poisoning, Inference, Trojans, Backdoors). URL: https://towardsdatascience.com/how-to-attack-machine-learning-evasion-poisoning-inference-trojans-backdoors-a7cb5832595c.
9. Poisoning attacks on Machine Learning. URL: https://towardsdatascience.com/poisoning-attacks-on-machine-learning-1ff247c254db.
10. Liu G., Lai L. Provably Efficient Black-Box Poisoning Attacks Against Reinforcement Learning // 35th Conference on Neural Information Processing Systems (NeurIPS, 2021).
11. Optical Adversarial Attack Can Change the Meaning of Road Signs. URL: https://www.unite.ai/optical-adversarial-attack-can-change-the-meaning-of-road-signs/.
12. Shokri R., Stronati M., Song C., Shmatikov V. Membership Inference Attacks Against Machine Learning Models // IEEE Symposium on Security and Privacy, 2017.
13. Niu D., Guo R., Wang Y. Moiré Attack (MA): A New Potential Risk of Screen Photos // 35th Conference on Neural Information Processing Systems (NeurIPS, 2021).
14. Su J., Vargas D. V., Sakurai K. One Pixel Attack for Fooling Deep Neural Networks // IEEE Transactions on Evolutionary Computation. 2019. Vol. 23, Issue 5. P. 828-841.

Published

2023-11-01

Section

Articles