More than a handful of experts and journalists are hyping intelligent security as the coup de grâce for our seemingly endless vulnerability problem. Some experts argue AI will soon be a necessity for every security program. But Silicon Valley’s good guys aren’t the only ones using AI technology.
Hackers have already shown they can disrupt anti-malware algorithms with machine learning. And cybercriminals have developed situationally aware malware that makes decisions based on its local environment.
Is this the end of cybercrime or mutually assured destruction?
This arms race isn’t between two global superpowers; rather, it pits an industry aimed at defending average people, businesses and governments against an innovative society of digital anarchists, state-sponsored hackers and cyber-mercenaries.
While some are lauding these new additions to the cybersecurity market, I am more than hesitant to say they will do any more than maintain the status quo. Not to say there aren’t some benefits for security that will come in the form of machine learning or adaptive intelligence, but the real beneficiaries here are the hackers themselves.
Intelligent ransomware could, theoretically, spread across millions of endpoints and lie silent until its reach is optimized, then hit every device at the same time. Those endpoints can include anything from medical equipment to power lines. The malware would then hold laptops, generators, servers and anything else you can think of hostage until the owner pays up.
Other intelligent malware has proven an ability to mislead and disrupt AI security programs. Chinese researchers at Peking University successfully tricked a malware-detection algorithm into changing its malware classifiers, making it impossible for the algorithm to detect malware.
“Malware authors are able to frequently change the probability distribution by retraining [the algorithm],” the researchers explained. “This process makes machine learning based malware detection algorithms unable to work.”
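To see why retraining on attacker-supplied data is so damaging, consider a deliberately simplified sketch. This is not the Peking University attack itself, just a toy nearest-centroid “detector” with invented feature values, showing how mislabeled training points can drag the learned boundary until real malware looks benign:

```python
# Toy illustration (not the actual research attack) of training-data
# poisoning against a malware classifier. Feature values are invented.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    dims = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dims)]

def classify(sample, benign_c, malware_c):
    """Label a sample by its nearest class centroid (squared distance)."""
    d_b = sum((a - b) ** 2 for a, b in zip(sample, benign_c))
    d_m = sum((a - b) ** 2 for a, b in zip(sample, malware_c))
    return "malware" if d_m < d_b else "benign"

# Clean training data: two well-separated clusters.
benign  = [[0.1, 0.2], [0.2, 0.1], [0.0, 0.3]]
malware = [[0.9, 0.8], [0.8, 0.9], [1.0, 0.7]]

sample = [0.7, 0.7]  # a borderline malicious sample
print(classify(sample, centroid(benign), centroid(malware)))  # malware

# Poisoning: the attacker floods the "benign" training set with
# malware-like vectors, dragging the benign centroid toward the
# malware cluster until the same sample is waved through.
poisoned_benign = benign + [[0.7, 0.7]] * 20
print(classify(sample, centroid(poisoned_benign), centroid(malware)))  # benign
```

Real detectors are far more sophisticated than a centroid rule, but the principle the researchers describe is the same: if attackers can influence the data a model retrains on, they can shift its probability distribution until detection fails.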
Malware authors have developed programs to infiltrate your computer, and observe your actions, writing tendencies and personal information. The programs can then mimic people you communicate with and deliver custom-tailored messages asking for sensitive data.
Imagine receiving an email from your friend that contains inside jokes and an anecdote she always tells. It mentions an event on your calendar that you’re both attending. It has a minimally suspicious link, shares a seemingly harmless document, or asks for a piece of data it needs.
Tailored phishing attacks can also customize to imitate newsletters you follow, applications you use, your doctor or your boss.
Interconnected devices running bots have already delivered enormous attacks like 2016’s Dyn DDoS (distributed denial of service) attack, which used 100,000 endpoints to deliver traffic loads of 1.2 Tbps, bringing down websites including Twitter, Netflix and Reddit.
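A quick back-of-the-envelope calculation on those reported figures shows how little each compromised device has to contribute:

```python
# Rough math on the reported Dyn attack figures:
# ~100,000 endpoints producing a combined ~1.2 terabits per second.
total_bits_per_sec = 1.2e12   # 1.2 Tbps
endpoints = 100_000

per_device_mbps = total_bits_per_sec / endpoints / 1e6
print(per_device_mbps)  # 12.0 — megabits per second per device, on average
```

At roughly 12 Mbps apiece, each endpoint only needs ordinary consumer-broadband throughput, which is why cheap, poorly secured IoT devices were enough to cripple major websites.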
Intelligent botnet attacks could be even more devastating. The Dyn attack was orchestrated by real humans who used bots to infect endpoints and execute the attack when the hackers saw fit. Some experts like Kalev Leetaru, a tech mogul and George Washington University fellow, believe we’re closer than we think to an AI-powered DDoS attack.
“While we’re not quite at the point where deep learning is capable of the complex open-ended problem solving needed to launch its own autonomous cyberattack,” Leetaru explains, “we’re getting exceptionally close to having all of the necessary building blocks in place to start seeing autonomous cyber weaponry in action.”
Even tech magnate Elon Musk is worried about AI botnets learning to shut down or take over large portions of the internet as a whole.
Only a matter of time before advanced AI is used to do this. Internet is particularly susceptible to a gradient descent algo. https://t.co/a6AdF7o7AZ
— Elon Musk (@elonmusk) November 3, 2016
The type of algorithm Musk mentioned, gradient descent, iteratively nudges a system’s inputs in the direction that most improves an objective, homing in on an optimal solution to a complex function. Intelligent programs are really good at performing this type of process.
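The core loop is short enough to show in full. Here is a minimal sketch that uses gradient descent to find the minimum of a simple one-variable function; the same loop, scaled up to millions of parameters, is how most machine-learning models are trained:

```python
# Minimal gradient descent: minimize f(x) = (x - 3)^2.
# The optimizer never sees the whole function, only the slope
# at its current guess, and repeatedly steps "downhill."

def grad(x):
    return 2 * (x - 3)            # derivative of (x - 3)^2

x = 0.0                           # starting guess
learning_rate = 0.1
for _ in range(100):
    x -= learning_rate * grad(x)  # step opposite the gradient

print(round(x, 4))  # 3.0 — the minimum of f
```

Musk’s point is that an attacker could treat a target system the same way: probe it, measure the response, and let the algorithm automatically step toward whatever input does the most damage.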
Developers at Cloudsek, a security startup in India, created an autonomous program that navigates the web and uses a learning model to gather information and find vulnerabilities in sites and applications.
Their program used information online to find 10 flaws in LinkedIn’s security. It was able to find any LinkedIn user’s email address, delete users’ LinkedIn requests and download every exercise on Lynda without paying. Similar systems could, theoretically, run the same kind of attack against virtually any login page.
It’s hard to predict the extent to which intelligent security software will be able to defend against intelligent malware or botnets. But it is pretty obvious that your average user cannot afford to license an intelligent security product for personal use.
Many applications may improve their security practices and better defend against attacks, but that can’t be guaranteed for anywhere close to the entire internet. AI in the hands of criminals appears, at least to me, as a much more scalable and adaptable entity.
A person’s user information or a small business’ sensitive data can be just as valuable as that from a global company. Until an intelligent security platform is available to the average user, hackers will be one step ahead of the security industry.