
Artificial intelligence and its role in enhancing cybersecurity


Cybersecurity has become a major part of our daily lives, especially with our growing reliance on Internet technologies and the connected devices that link nearly everything we own to the network. Artificial intelligence is therefore expected to become one of the main pillars of cybersecurity in the coming years.

Roaming the huge exhibition halls of the recent RSA Cyber Security Conference in San Francisco, you could easily get the impression that digital defense solutions have reached the peak of their evolution.

Sales of security software have also risen sharply, with vendors promising unbreakable defenses built on artificial intelligence technologies that can instantly detect any malware on the network, respond to incidents, and spot breaches before they begin.

This rosy vision of what artificial intelligence can offer is not entirely wrong, but what these next-generation technologies actually do is far less clear than their marketers would like to admit. Fortunately, researchers developing new defense tools in industry and academia largely agree on the potential benefits, the challenges, and a clear definition of the terminology.

"I do not think many of these companies use artificial intelligence techniques, but they only develop automated learning," said Marcin Klinsinski, chief executive of Malwarebytes Electronic Security, which promoted its own threat tracking program using automated learning techniques at the RSA conference. They are called artificial intelligence programs, which is causing great confusion for users. "

Greater reliance on machine learning

Security companies generally rely on machine learning algorithms trained on large data sets to learn what to watch for on networks and how to react in different situations. Unlike a true artificial intelligence system, most existing security applications cannot draw new conclusions without new training data.

Machine learning is a natural fit for virus and malware scanning. For decades, antivirus programs have been based on what is known as signature matching: security companies identify specific malicious programs, extract a unique signature for each one, and then monitor customers' devices to make sure none of those signatures appear.
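
To make the idea concrete, here is a minimal sketch of signature matching in Python, using a SHA-256 hash as the "signature." Real antivirus products use far richer signature formats and much larger databases; the hash value shown is purely illustrative.

```python
import hashlib

# Toy "signature database": hashes of files already identified as malicious.
# (Illustrative value only.)
KNOWN_MALWARE_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_signature(path: str) -> str:
    """Compute a SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_malware(path: str) -> bool:
    """Flag the file only if its signature exactly matches a known one."""
    return file_signature(path) in KNOWN_MALWARE_HASHES
```

Because the match must be exact, changing even a single byte of the malware produces a different signature, which is precisely the weakness attackers exploit.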

Machine-learning-based malware scanning works in a somewhat similar way: algorithms are trained on massive catalogs of malicious and benign software to learn what to look for. The approach has an added benefit, because the scanning tool learns to recognize the properties of malware rather than specific signatures.
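
By contrast, a machine-learning scanner classifies files by their properties rather than by exact matches. The sketch below is a toy illustration, assuming feature vectors (for example file size, byte entropy, and counts of imports and suspicious API calls) have already been extracted and labeled; the feature choices and the random forest model are illustrative, not any vendor's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical pre-extracted features per file:
# [file_size_kb, byte_entropy, num_imports, num_suspicious_api_calls]
X_train = np.array([
    [120.0, 7.9, 3, 12],   # packed, few imports, many suspicious calls
    [850.0, 6.1, 140, 0],  # ordinary application
    [95.0, 7.8, 5, 9],
    [2200.0, 5.4, 210, 1],
])
y_train = np.array([1, 0, 1, 0])  # 1 = malicious, 0 = benign

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A never-before-seen sample can still be scored, because the model
# learned properties of malware rather than exact signatures.
new_sample = np.array([[110.0, 7.7, 4, 10]])
print(model.predict_proba(new_sample))
```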

Hackers can evade traditional antivirus programs by making minor modifications to their malware that change its signature. Machine learning systems, which now underpin the screening tools offered by all the major security companies, take a broader view and make that evasion harder, although they still need regular updates with new training data.

"The nature of malware is constantly evolving, so people who write signatures for certain families of malware are constantly challenged," says Phil Roth, data scientist at Endgame Electronic Security, based on automated learning.

"Hackers often rely on old methods or reuse existing code, because writing malware from scratch is a lot of effort for an attack that may not have much impact. So you can learn from all of the techniques in the algorithm's training set, identify new patterns the attackers may use, and then defeat them."

Similarly, machine learning has become indispensable in the fight against spam and phishing. Elie Bursztein, who leads Google's anti-abuse research team, points out that Gmail has used machine learning techniques to filter email since it launched 18 years ago.
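
This is not a description of Gmail's actual pipeline, but a minimal sketch of the general idea behind a text-based spam filter: train a classifier on labeled examples, then score new messages. The tiny training set and the naive Bayes model are illustrative stand-ins.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled corpus: 1 = spam/phishing, 0 = legitimate.
messages = [
    "Verify your account now or it will be suspended",
    "Meeting moved to 3pm, see updated agenda attached",
    "You have won a prize, click this link to claim it",
    "Quarterly report draft for your review",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each message into word weights; naive Bayes learns which
# weights correlate with the spam label.
spam_filter = make_pipeline(TfidfVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["Click here to confirm your password"]))
```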

But over time, attack strategies and phishing schemes have evolved and become more harmful. Gmail and other Google services have had to adapt to attackers who study how these defenses work, and Google and other large service providers will need to lean ever more heavily on automation and machine learning to keep up.

As a result, Google now applies machine learning across almost all of its services, particularly deep learning, which allows algorithms to make more independent adjustments and self-correct as they train.

"We live in a world where the more data the more problems you have, but with the deeper learning technology the more data the better, the more we rely on it to prevent the emergence of violent images, the survey of comments, the detection of phishing and the detection of malware in the Google App Store , And we also use it to detect fraudulent payments, protect cloud computing solutions, and detect hacked computers everywhere. "

Ultimately, machine learning's greatest strength is learning what is normal for a system and then flagging anything unusual for human review. This concept applies to every kind of machine-learning-assisted threat detection, and researchers say the interplay between machine learning and human analysts is the decisive factor for these techniques. In 2016, IBM estimated that the average organization deals with more than 200,000 security events per day.
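
Here is a minimal sketch of that "learn what is normal, flag the rest for a human" pattern, using scikit-learn's IsolationForest on hypothetical per-host activity counts. A production system would use far richer features and route flagged items into an analyst's queue.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical baseline of normal activity per host:
# [logins_per_hour, mb_transferred, failed_auth_attempts]
normal_activity = np.array([
    [5, 120, 0], [7, 150, 1], [4, 90, 0], [6, 130, 0], [5, 110, 1],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_activity)

# New observations: the second one falls far outside the learned baseline.
new_events = np.array([[6, 125, 0], [90, 8000, 45]])
for event, label in zip(new_events, detector.predict(new_events)):
    if label == -1:  # -1 means "this does not look like anything seen before"
        print("Flag for human review:", event)
```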

Latest cyber attacks

Although many machine learning tools have already shown promising results on defense, researchers unanimously warn that hackers have begun to adopt machine learning techniques in their own attacks. One of the most common examples is using machine learning to defeat CAPTCHA tests, the challenges designed to prove that whoever is entering data is a human rather than an automated bot.

Another threat to machine learning is data poisoning. If hackers can figure out how an algorithm is set up, or where its training data is drawn from, they can find ways to inject misleading data that steers the model away from the content they want it to miss. For example, attackers may run campaigns across thousands of accounts that flag legitimate messages or comments as malicious and unwanted, in an attempt to skew the algorithm.
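
As a toy illustration of why poisoning matters, the sketch below flips a fraction of training labels in a synthetic classification task and shows how accuracy on clean test data can degrade. It is a deliberately simplified model of the attack, not a recipe tied to any real service.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Train on labels where a fraction has been maliciously flipped."""
    rng = np.random.default_rng(0)
    poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]           # attacker flips these labels
    model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
    return model.score(X_test, y_test)          # evaluated on clean data

for fraction in (0.0, 0.1, 0.3):
    print(f"{fraction:.0%} poisoned labels -> accuracy {accuracy_with_poisoning(fraction):.2f}")
```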

In another example, researchers at Cyxtera, a cloud infrastructure company, built a machine-learning-based phishing attack generator trained on more than 100 million historical attacks, allowing it to produce fraudulent links and emails more effectively.

"On average, phishing-based phishing attacks will exceed 0.3 percent of the time, but artificial intelligence has allowed hackers to bypass the system more than 15 percent of the time," said Alejandro Correa Bahansen, vice president of research at Cyxtera. And when we wanted to know how the attacker built it. We found that all the data were available to him since all the libraries were open source. "

Researchers note that this is why it is important to keep humans in the loop of machine learning systems: the systems are not autonomous arbiters, and they should have the option to say, "I have never seen this before," and ask a human for help. "There is no real artificial intelligence in security systems," said Battista Biggio, an associate professor at the University of Cagliari in Italy. "It is inference from data, relationships drawn from data, so people have to be aware that this technology has limitations."

To that end, the research community has been working to understand how to reduce blind spots in machine learning systems so they can be hardened against such attacks. At the RSA conference, Endgame researchers released an open source training data set for threat detection called EMBER, in a bid to encourage collaboration on machine-learning-based security.
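
As an illustration of how a data set like EMBER is typically used, the sketch below trains a gradient-boosted classifier on pre-extracted static feature vectors. The arrays here are random stand-ins for the real EMBER features, and the published EMBER tooling ships its own loaders and baseline model, so treat this only as the general shape of the workflow.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for EMBER-style static features extracted from executables
# (byte histograms, header fields, imports, etc.), with 1 = malicious.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))
y = (X[:, :8].sum(axis=1) + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Score held-out samples and report ranking quality.
scores = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, scores))
```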

"There are good reasons why the e-security industry does not have many open data sets. These types of data may contain personal information or give attackers information about what the structure of the company's network looks like," says Roth in Endgame. Work to clear the EMBER data set, but my hope is to encourage more research and the e-security consortium to work together. "

That kind of collaboration may be necessary if security companies are to stay ahead of hackers who use machine learning in their attacks, especially as machine-learning-based security continues to develop over the coming years.
