How AI is Mishandled to Become a Cybersecurity Risk

The rapid evolution of artificial intelligence algorithms has turned this technology into an element of critical business processes. The caveat is that the design and practical application of these algorithms often lack transparency, so the same technology can be put to very different uses.

Whereas infosec specialists use AI for benign purposes, threat actors mishandle it to orchestrate real-world attacks. At this point, it is hard to say who is winning: the balance between offense and defense built on machine learning algorithms has yet to be properly evaluated.

There is also a gap in security principles for the design, implementation, and management of AI solutions. Completely new tools are required to secure AI-based processes and thereby mitigate serious security risks.

The global race to develop advanced AI algorithms keeps accelerating. The goal is to create systems that can solve complex problems (e.g., decision-making, visual recognition, and speech recognition) and flexibly adapt to circumstances: self-contained machines that think without human assistance. That, however, is a somewhat distant future for AI.

At this point, AI algorithms cover limited areas, yet they already demonstrate certain advantages over humans, saving analysis time and forming predictions. The four main vectors of AI development are speech and language processing, computer vision, pattern recognition, and reasoning and optimization.

Huge investments are flowing into AI and machine learning research and development. Global AI spending amounted to $37.5 billion in 2019 and is predicted to reach a whopping $97.9 billion by 2023. China and the U.S. dominate the worldwide funding of AI development.

Transportation, manufacturing, finance, commerce, health care, big-data processing, robotics, analytics and many more sectors will be optimized in the next five to 10 years with the ubiquitous adoption of AI technologies and workflows.

With reinforcement learning in its toolkit, AI can play into attackers' hands by paving the way for all-new, highly effective attack vectors. For instance, the AlphaGo algorithm has given rise to fundamentally new tactics and strategies in the famous Chinese board game Go. If mishandled, such mechanisms can lead to disruptive consequences.
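To make the reinforcement-learning idea concrete, here is a minimal sketch of tabular Q-learning on a toy environment. Everything in it, from the five-state "chain" world to the hyperparameters, is an illustrative assumption rather than anything resembling AlphaGo:

```python
import random

# Toy "chain" environment: states 0..4; reaching state 4 pays a reward of 1.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left / step right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Tabular Q-learning with epsilon-greedy exploration (assumed hyperparameters).
alpha, gamma, epsilon = 0.1, 0.9, 0.2
q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(2000):
    state, done = 0, False
    while not done:
        a = random.randrange(2) if random.random() < epsilon else q[state].index(max(q[state]))
        nxt, reward, done = step(state, ACTIONS[a])
        # Nudge the estimate toward reward + discounted best future value.
        q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
        state = nxt

# The learned policy should pick "right" (index 1) in every non-goal state.
print([q[s].index(max(q[s])) for s in range(GOAL)])
```

Scaled up by many orders of magnitude, this same trial-and-error loop is what lets a system discover tactics no human scripted, and it cuts both ways when attackers run it against defenses.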

Let us list the main advantages of the first generation of offensive tools based on AI:

At the same time, AI can help infosec experts identify and mitigate risks and threats, predict attack vectors, and stay one step ahead of criminals. Furthermore, it is worth keeping in mind that a human being is behind every AI algorithm and decides how it is applied in practice.

Let us try to outline the balance between attacking and defending via AI. The main stages of an AI-based attack are as follows:

Now, let us provide an example of how AI can be leveraged in defense:
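One widely used defensive pattern is anomaly detection. The sketch below flags suspicious login sessions with an isolation forest; the feature set, synthetic data, and contamination rate are illustrative assumptions, not a production recipe:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-session features: [login hour, failed attempts, MB downloaded].
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around business hours
    rng.poisson(0.2, 500),    # failed attempts are rare
    rng.normal(50, 15, 500),  # typical download volume
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A 3 a.m. session with nine failed attempts and a bulk download stands out.
suspicious = np.array([[3, 9, 900]])
print(detector.predict(suspicious))  # -1 means flagged as anomalous
```

In practice, such a model would be trained on historical telemetry and its alerts routed to analysts; the value of AI here is triaging millions of events that no human team could review manually.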

The expanding range of attack vectors is only one of the current problems related to AI. Attackers can also manipulate AI algorithms to their advantage, modifying their code or inputs and abusing the technology at a completely different level.
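The best-documented manipulation of this kind is the adversarial example: a tiny, deliberately crafted perturbation that flips a model's decision. Below is a minimal sketch of the fast gradient sign method (FGSM); the untrained toy classifier and the perturbation budget are assumptions, so the flip is not guaranteed here the way it would be against a real trained model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier (untrained, purely for illustration): 20 features -> 2 classes.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)  # a benign input sample
y = torch.tensor([0])                       # its correct label

# FGSM: take the gradient of the loss w.r.t. the *input* and step along its
# sign, nudging the sample in the direction that most increases the loss.
loss_fn(model(x), y).backward()
epsilon = 0.25                              # assumed perturbation budget
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:      ", model(x).argmax().item())
print("adversarial prediction:", model(x_adv).argmax().item())
```

Against production models, perturbations of this kind can stay imperceptible to humans while reliably changing the output, which is exactly the "different level" of abuse described above.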

AI also plays a significant role in creating deepfakes. Images, audio, and video fraudulently processed with AI algorithms can wreak information havoc, making it difficult to distinguish truth from lies.

To summarize, here are the main challenges and systemic risks associated with AI technology, as well as the possible solutions:

The current evolution of security tools: The infosec community needs to focus on AI-based defense tools. There will be an incessant battle between the evolution of AI attack models and AI defenses: enhancing the defenses pushes the attack methods forward, so this cyber arms race should be kept within the realm of common sense. Coordinated action by all members of the ecosystem will be crucial to eliminating risks.

Operations security (OPSEC): A security breach or AI failure in one part of the ecosystem could potentially affect its other components. Cooperative approaches to operations security will be required to ensure that the ecosystem is resilient to the escalating AI threat. Information sharing among participants will play a crucial role in activities such as detecting threats in AI algorithms.

Building defense capabilities: The evolution of AI can turn some parts of the ecosystem into low-hanging fruit for attackers. Unless cooperative action is taken to build a collective AI defense, the stability of the entire system could be undermined. It is important to encourage the development of defensive technologies at the nation-state level. AI skills, education, and communication will be essential.

Secure algorithms: As industries become increasingly dependent on machine learning technology, it is critical to ensure its integrity and keep AI algorithms unbiased. At this point, approaches to concepts such as ethics, competitiveness, and code-readability of AI algorithms have not yet been fully developed.

Algorithm developers can be held liable for catastrophic errors in decisions made by AI. Consequently, it is necessary to come up with secure AI development principles and standards that are accepted not only in the academic environment and among developers, but also at the highest international level.

These principles should include secure design (tamper-proof and readable code), operational management (traceability and rigid version control) and incident management (developer responsibility for maintaining integrity).
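As a minimal sketch of the tamper-proofing and traceability principles, a deployment pipeline can refuse to load a model artifact whose cryptographic digest does not match a value pinned under version control. The file name and hash below are placeholders:

```python
import hashlib
from pathlib import Path

# Pinned in version control alongside each model release (placeholder value).
EXPECTED_SHA256 = "0" * 64

def load_verified_model(path: str, expected: str = EXPECTED_SHA256) -> bytes:
    """Return the model bytes only if their SHA-256 digest matches the pinned hash."""
    blob = Path(path).read_bytes()
    digest = hashlib.sha256(blob).hexdigest()
    if digest != expected:
        raise RuntimeError(f"model artifact fails integrity check: {digest}")
    return blob

# Usage (hypothetical artifact name):
# weights = load_verified_model("models/classifier-v1.2.bin")
```

Signing artifacts and logging which hash was served, and when, gives the incident-management side the audit trail that developer responsibility for integrity implies.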

David Balaban is a computer security researcher with over 17 years of experience in malware analysis and antivirus software evaluation. He runs MacSecurity.net and Privacy-PC.com projects that present expert opinions on contemporary information security matters, including social engineering, malware, penetration testing, threat intelligence, online privacy, and white hat hacking. Mr. Balaban has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.
