


2020 | 3 (57) | 65-72

Article title

Attack vectors on supervised machine learning systems in business applications

Authors

Content

Title variants

PL
Wektory ataków na nadzorowane systemy uczące się w zastosowaniach biznesowych

Languages of publication

EN

Abstracts

EN
Machine learning systems have become highly popular and now have practical applications in many fields. The area of business applications has been developing particularly rapidly, ranging from the prediction of customers' purchase preferences to the automation of critical business processes. In this context, the security of such systems against the threat of intentional attacks carried out by organized crime is extremely important. This article sets out a theoretical framework of attacks on supervised machine learning systems, which are the most popular in business applications. The possible attack vectors are discussed in detail. The main contribution of this article is the recognition that the black-box attack scenario is the most probable one, and this kind of attack is therefore described extensively.
PL
Machine learning systems are becoming increasingly popular and have many practical applications. The area of business applications is particularly important and fast-growing. In this context, the information security of such systems is extremely important, especially given the high activity of organized cybercriminal groups. The article presents a taxonomy of intentional attacks on supervised learning systems, which are currently the most popular in business applications. Potential attack vectors are also discussed. Black-box attacks are identified as the most probable attack scenarios and are discussed in more detail.
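The black-box scenario highlighted in the abstract, in which the attacker cannot inspect the model's parameters or gradients and can only submit inputs and observe the predicted labels, can be illustrated with a toy sketch. Everything below is an assumption for illustration only (the article gives no implementation): the scikit-learn classifier stands in for a deployed business model, and the naive random-walk perturbation strategy, query budget, and step size are all invented here.

```python
# Toy black-box evasion sketch: the attacker only has query access to
# predict(), mirroring an attacker probing a deployed scoring API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for a deployed business classifier (e.g. fraud detection).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression().fit(X, y)

def black_box_evasion(x, query, max_queries=1000, step=0.1):
    """Randomly perturb x, using only label queries, until the label flips."""
    original = query(x)
    adv = x.copy()
    for _ in range(max_queries):
        candidate = adv + rng.normal(scale=step, size=adv.shape)
        if query(candidate) != original:
            return candidate  # evasion succeeded within the query budget
        adv = candidate       # random walk through the input space
    return None               # attack failed within the query budget

x0 = X[0]
adv = black_box_evasion(x0, lambda v: model.predict(v.reshape(1, -1))[0])
if adv is not None:
    print("label flipped:",
          model.predict(x0.reshape(1, -1))[0], "->",
          model.predict(adv.reshape(1, -1))[0])
```

Real black-box attacks in the literature (e.g. the substitute-model approach of Papernot et al., 2017, cited below) are far more query-efficient, but the sketch shows the core threat model: prediction queries alone can suffice to craft an evading input.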

Year

2020

Issue

3 (57)

Pages

65-72

Physical description

Contributors

author
  • Warsaw School of Economics

References

  • Chakraborty, A., Alam, M., Dey, V., Chattopadhyay, A. and Mukhopadhyay, D. (2018). Adversarial attacks and defences: a survey. Retrieved from arXiv:1810.00069
  • Barreno, M., Nelson, B., Sears, R., Joseph, A., Tygar, J. (2006). Can machine learning be secure? In ASIACCS’06, 16-25.
  • Barreno, M., Nelson, B., Joseph, A., and Tygar, J. (2010). The security of machine learning. Machine Learning, 81(2), 121-148.
  • Brewster, T. (2019). Hackers use little stickers to trick Tesla Autopilot into the wrong lane. Forbes Magazine, April 1.
  • Bubnicki, Z. (1974). Identyfikacja obiektów sterowania. Warszawa: PWN.
  • Dalvi, N., Domingos, P., Mausam, Sanghai, S., and Verma, D. (2004). Adversarial classification (Proceedings of the tenth ACM SIGKDD international conference on Knowledge Discovery and Data Mining (KDD '04), pp. 99-108). ACM Press.
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative Adversarial Networks, arXiv:1406.2661
  • Huang, L., Joseph, A., Nelson, B., Rubinstein, B., Tygar, J. (2011). Adversarial machine learning (Proceedings of the 4th ACM workshop on Security and Artificial Intelligence (AISec ’11), pp. 43-58), ACM Press.
  • Kurzyński, M. (1997). Rozpoznawanie obrazów. Metody statystyczne. Wrocław: Oficyna Wydawnicza Politechniki Wrocławskiej.
  • Laskov, P., and Lippmann, R. (2010). Machine learning in adversarial environments. Machine Learning, 81(2), 115-119.
  • Muñoz-González, L. (2019). The security of machine learning systems. In L. F. Sikos (Ed.), AI in cybersecurity (pp. 47-79). Springer.
  • Nelson, B. (2010). Behavior of machine learning algorithms in adversarial environments (Technical Report No. UCB/EECS-2010-140, Electrical Engineering and Computer Sciences). University of California at Berkeley.
  • McDaniel, P., Papernot, N., and Celik, Z. (2016). Machine learning in adversarial settings. IEEE Security & Privacy, May/June, 68-72.
  • Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z., and Swami, A. (2017). Practical black-box attacks against machine learning (Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security (ASIA CCS '17), pp. 506-519). New York, NY: ACM.
  • Siva Kumar, R. S., Nyström, M., Lambert, J., Marshall, A., Goertzel, M., Comissoneru, A., Swann, M., and Xia, S. (2020). Adversarial machine learning – industry perspectives. Retrieved from arXiv:2002.05646
  • Surma, J. (2011). Business intelligence: making decisions through data analytics. New York: Business Expert Press.
  • Surma, J. (2020). Hacking machine learning: towards the comprehensive taxonomy of attacks against machine learning systems (ICIAI 2020: ACM Proceedings of the 2020 4th International Conference on Innovation in Artificial Intelligence, pp. 1-4).

Document Type

Publication order reference

Identifiers

YADDA identifier

bwmeta1.element.desklight-eb245233-1f57-4141-ad74-4b1913a4dfce