AI Cyber-Attacks: What They Are and How to Protect Against Them

12:44 AM CET - August 07, 2020

Experts forecast a drastic increase in AI-orchestrated cybercrimes over the next few years. Here is what you need to know about this type of attack and how you can protect yourself against it.

Artificial intelligence is a powerful tool that can be used for both good and bad purposes. The problem with AI is that it is indifferent to morality. It can just as easily enhance national, corporate, and private security as breach it; it can help create vaccines against deadly diseases — or assist in spreading them.

So far, the news we have read and heard about AI has been predominantly positive. It contributes to scientific progress, automates manufacturing processes, and saves human specialists a great deal of time and effort in many spheres of life. However, once malicious actors start to use AI as a weapon, the losses could be colossal.

How AI Cyberattacks Differ from Human Ones

AI is a tough and ruthless adversary. If you have ever played cards or chess against a computer, you know that it thinks, acts, and calculates much faster than the most intelligent human. AI never gets tired or loses motivation. An attacker who deploys such a "hacker" doesn't need to pay it: they just press a button and wait until the task is completed.

AI can customize advanced attacks by pulling information from multiple sources. It can identify particularly vulnerable victims with great precision and strike when they are most helpless. Government agencies and large companies may be able to withstand such attacks, but private users will remain defenseless — at least if they don't considerably upgrade their current level of protection.

Goals and Targets of AI Cyberattacks

When human hackers attack a victim, they normally want to steal their funds or identity. Attackers do it because they want to get rich or, perhaps, tarnish someone's reputation for personal reasons.

AI can complete the same tasks much more quickly and efficiently. It can crack passwords, steal passport data and financial credentials, and launch denial-of-service attacks. But it can also target larger entities such as national security systems, power plants, or healthcare databases. As a result of a single attack, thousands of people might lose access to their savings, electricity, or basic medical services.

However, AI lacks selfish motives of its own. A nefarious human mind must orchestrate its attacks to achieve goals that affect other people's lives. Criminals might do it to blackmail global corporations. Terrorists might use AI to put pressure on governments. This might sound like science fiction, but it's the reality we live in. According to expert forecasts, the world will see a significant increase in AI cyberattacks over the next couple of years.

Machine Learning as a Mixed Blessing

Machine learning is a lesser-known aspect of AI, but it may pose a great threat to modern cybersecurity systems. As most of us understand it, we feed the machine certain data and it acts according to that data. The machine can't expand its knowledge and become uncontrollable — or so we think. Unfortunately, this stereotype no longer corresponds to reality.

Modern AI is remarkably capable of learning. The more data it consumes, the more informed it becomes and the more sophisticated the conclusions it draws. It learns to perform increasingly complex operations at breathtaking speed. To develop, an AI needs to "digest" as many correct answers to the same problem as possible. Once it becomes aware of all the known answers, it starts to generate its own — and these new answers may well be more elaborate and efficient than those created by humans. This opens unprecedented opportunities for good causes, as well as for abuse.

Let's consider a speculative situation in which hackers deploy a botnet for spam attacks. If the first attack fails, the bots can learn from their own mistakes. Their second attempt will be more accurate and productive than the first, and the success rate of the third will be higher still.
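The learn-from-failure loop described above can be illustrated with a harmless simulation. The sketch below is purely illustrative — the "strategies" and their success probabilities are invented for the demo — but it shows the core mechanic: an automated agent that tracks which approach succeeds and gradually concentrates its effort there, exactly the kind of feedback loop that makes adaptive bots more dangerous with every attempt.

```python
import random

# Purely illustrative simulation: an agent chooses among several strategies,
# observes simulated success/failure, and shifts effort toward whichever
# strategy has historically worked best (a simple epsilon-greedy loop).
def run_simulation(true_success_rates, rounds=1000, seed=42):
    rng = random.Random(seed)
    attempts = [0] * len(true_success_rates)
    successes = [0] * len(true_success_rates)
    for _ in range(rounds):
        # Mostly exploit the best-known strategy; occasionally explore others.
        if rng.random() < 0.1 or sum(attempts) == 0:
            choice = rng.randrange(len(true_success_rates))
        else:
            choice = max(
                range(len(true_success_rates)),
                key=lambda i: successes[i] / attempts[i] if attempts[i] else 0.0,
            )
        attempts[choice] += 1
        if rng.random() < true_success_rates[choice]:
            successes[choice] += 1
    return attempts, successes

# Three hypothetical strategies with hidden success rates of 5%, 20%, and 50%.
attempts, successes = run_simulation([0.05, 0.20, 0.50])
print(attempts)  # attempts concentrate on the most effective strategy over time
```

Even this toy loop, with no real intelligence behind it, converges on the most effective option — which is why a genuinely learning botnet improving between waves is a credible concern rather than science fiction.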

How to Protect Yourself Against AI

On a global level, automation of organizational processes and innovative data management can serve as efficient protection. People will use AI to identify threats created by other AI, detect malware, and spot abnormal network behavior. It will respond to hacking attacks in real time and repel them before they can cause any damage.
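One of the simplest ideas behind detecting "abnormal network behavior" is statistical anomaly detection: learn what normal traffic looks like, then flag observations that deviate sharply from it. The sketch below is a minimal illustration of that principle (real systems use far richer models and features); the traffic figures are made up.

```python
import statistics

# Minimal anomaly-detection sketch (not a production IDS): flag observations
# that lie more than `threshold` standard deviations from the baseline mean.
def find_anomalies(baseline, current, threshold=3.0):
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline)
    return [
        i for i, value in enumerate(current)
        if stdev > 0 and abs(value - mean) / stdev > threshold
    ]

baseline = [100, 104, 98, 101, 99, 103, 97, 102]  # normal requests per minute
current = [101, 99, 100, 480, 98]                  # one suspicious spike
print(find_anomalies(baseline, current))  # → [3]
```

ML-driven defenses generalize this idea: instead of a single mean and standard deviation, they learn a model of normal behavior across many signals, so novel attacks stand out even when no known signature matches.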

But AI won't be able to act on its own initiative. Just as hackers set targets for their malicious AI, human specialists will need to configure, teach, and control their defensive systems. Predictive algorithms, augmented intelligence, and AI-aided intelligence analysis will expand the capabilities of human experts but will hardly ever replace them. AI may be unbeatable at predicting the behavior of large, complicated systems, but it often fails when trying to forecast the behavior of an isolated individual. Our personal behavior is influenced by too many irrational factors for even the most sophisticated computer to calculate.

On a private level, the standard antiviruses that we install on our devices are efficient only against threats that are already known to their databases. Previous generations of antivirus software relied on this reactive method of defense, which left them helpless against new, unknown threats — the so-called "zero-day attacks," which exploit vulnerabilities that defenders have had zero days to patch. Now antiviruses need to become proactive — that is, identify and neutralize threats long before they reach us. Such programs have been available for quite some time. We won't cover them in detail here, but you can read reviews on MacUpdate to learn how they work and in which ways they can be helpful to you.
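The contrast between reactive and proactive defense can be sketched in a few lines. The example below is illustrative only — the signature hashes and behavioral traits are invented, and real antivirus engines are vastly more sophisticated — but it captures the distinction: a reactive scanner can only match threats it already knows, while a proactive one also flags unknown files that behave suspiciously.

```python
# Illustrative contrast (not a real antivirus engine). All signatures and
# behavioral traits below are made up for the example.
KNOWN_SIGNATURES = {"deadbeef", "cafebabe"}           # hashes of known malware
SUSPICIOUS_TRAITS = ("self_modifying", "keylogging")  # behavioral heuristics

def reactive_scan(file_hash):
    """Reactive defense: only catches threats already in the database."""
    return file_hash in KNOWN_SIGNATURES

def proactive_scan(file_hash, observed_behaviors):
    """Proactive defense: also flags unknown files that behave suspiciously."""
    if reactive_scan(file_hash):
        return True
    return any(trait in observed_behaviors for trait in SUSPICIOUS_TRAITS)

# A brand-new threat with no known signature:
print(reactive_scan("0badf00d"))                   # False — the zero-day slips through
print(proactive_scan("0badf00d", {"keylogging"}))  # True — caught by its behavior
```

This is why the shift to behavior-based, proactive protection matters: signature databases can never list a threat before someone has already been hit by it.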


AI is a double-edged sword whose destructive side has so far remained largely unknown to us. But it will inevitably reveal itself in the near future, and the outcome may be unpredictable. On a global scale, government agencies and large corporations will be responsible for preventing and warding off AI-orchestrated cyberattacks on critical infrastructure, national security systems, and other large targets. What we private users can do is install, in good time, advanced software that follows proactive defense principles and can identify and neutralize new threats long before they reach us.

Image Credit: Shutterstock Licensed Image by PabloLagarto

Artificial Intelligence, Security | tags: big data security, attack, Artificial Intelligence

Article Author: Annie Qureshi

Annie Q is a serial blogger and entrepreneur. She has been contributing to well-known platforms for several years and is currently a Senior Editor at Catalyst For Business. Follow her posts on Twitter.
