Can We Use Behavioural Analysis to Control Cyber Security?

Striking back: using behavioural analysis against cybercrime?

There has been a change in the way that cybercrime is committed. In the early days of personal computing, back in the 1980s, malware threats such as the infamous Brain virus were distributed via floppy disk and were easily localised and handled. But then along came the Internet, and cybercrime became massively distributed. Since then, attack methods have grown steadily more sophisticated and more prevalent. One of the most sinister and hardest-to-counter aspects of these threats is that they are now truly personal.

The Modern Age of Cybercrime: Using Our Behaviour Against Us

Cybercrime really became personal when we started to use email and the Internet. The Internet was like a big open book for the cybercriminal. They could simply create an email, stick an attachment in it containing malware and send it out, en masse, to all and sundry, even using a person’s own email address list to do it for them. Some “punter” would be bound to open the attachment, which would auto-run a piece of executable code, and voilà: they were infected with malware. This type of attempt at mass infection via email came into its own in the late 90s with the Melissa email worm. The trouble with this tactic is that it gets old pretty quickly. Human beings have a tendency to learn, and over the years we learnt not to trust email attachments as readily, and we also set up effective spam filters. This meant that cybercriminals had to up their game.

The result of this game changer has been a much more personalised approach to cybercrime, with ‘social engineering’ as the cybercriminal’s weapon of choice. Social engineering is a way of using our own behaviour against us. It uses psychological tricks to get us to perform actions that we really shouldn’t, such as opening attachments or clicking on links in a suspicious email. This type of behaviour manipulation is nothing new: confidence tricksters have been exploiting it for as long as human beings have existed. That cybercriminals are now using it is no surprise.

One of the most successful methods of social engineering, and one responsible for some of the biggest cyberattacks of recent years, is ‘spear phishing’. Spear phishers use the same old malware tricks as the early hackers, but this time they focus tightly on a target. Spear phishers get to know their audience. They watch to see which websites the target uses and trusts. They find out who the target’s line manager is and create emails with the right logos and signatures, so that when a message arrives in the inbox it looks as though it really has come from the target’s boss. Spear phishing is very successful because of this personalisation. It is estimated that 91% of cyberattacks now start with a spear-phishing email, and that such emails have an open rate of around 70%, compared with only 3% for non-personalised, mass-mailed phishing emails. Personalisation works, and it makes it hard to differentiate between what is legitimate and what isn’t.

Hitting Back at Modern Cybercrime with Behavioural Analysis

Cybercriminals have changed their game plan and now use our own behaviour against us; we can respond in kind, using our knowledge of expected behaviour to help us identify and mitigate cyber threats. Traditional security tools, let’s call them ‘security 1.0’, gave us anti-virus and firewalls as our main defences against cyber threats. These systems are still needed, and many of the underlying architectures of these security 1.0 tools have been updated to accommodate new threats. However, they don’t go far enough against an increasingly complex attack surface and ever cleverer tactics for getting at us through our technology. An example of this is the increasing use by hackers of Advanced Persistent Threats (APTs).

An APT is a stealth worker. It sits on a network server, often for many months, and is specially designed to work under the covers. Once it is in place, hackers use ‘command and control’ (C&C) communications to update the malware and keep it hidden from traditional tools like anti-virus; these communications are hard to detect because they blend in with normal Internet traffic. APTs are the bane of the enterprise and are becoming a very popular method of extracting data over a long period. A recent example was the Carbanak attack, which affected over 100 banks and reportedly netted the hackers around $1 billion; the malware was initially installed via spear-phishing emails. The next generation of security tools, ‘security 2.0’, takes a much more sophisticated approach to tackling these stealth attacks, which are initiated by manipulating our own behaviour: these tools use behavioural analysis.
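
To make the C&C point a little more concrete, here is a minimal sketch of one behavioural heuristic: C&C implants often “beacon” home at near-regular intervals, which stands out against ordinary, bursty browsing traffic. The data format, thresholds and function names below are illustrative assumptions, not any particular product’s detection logic.

```python
from collections import defaultdict
from statistics import mean, pstdev

def find_beacon_candidates(flows, min_connections=20, max_jitter_ratio=0.1):
    """Flag destinations contacted at suspiciously regular intervals.

    `flows` is an iterable of (timestamp_seconds, destination) pairs taken
    from proxy or flow logs. Low-jitter, repeated callbacks to a single
    destination are one behavioural hint of C&C beaconing.
    """
    by_destination = defaultdict(list)
    for timestamp, destination in flows:
        by_destination[destination].append(timestamp)

    candidates = []
    for destination, times in by_destination.items():
        if len(times) < min_connections:
            continue
        times.sort()
        intervals = [b - a for a, b in zip(times, times[1:])]
        average = mean(intervals)
        if average == 0:
            continue
        jitter = pstdev(intervals) / average   # relative spread of the intervals
        if jitter <= max_jitter_ratio:
            candidates.append((destination, average, jitter))
    return candidates
```

A host calling the same server roughly every five minutes, around the clock, would surface here, while normal, irregular browsing would not.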

So what exactly is behavioural analysis in the context of security? Behavioural analysis is a technique that uses profiles of known behaviour and expected usage patterns to spot anomalies, which may be a sign of an imminent cyberattack or an ongoing infection.
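
As a simplified illustration of that idea, the sketch below builds a per-user baseline from historical activity (here, a hypothetical daily download volume) and flags observations that deviate strongly from it. Real products combine many such signals; the metric and threshold are assumptions for illustration only.

```python
from statistics import mean, pstdev

def build_baseline(history):
    """history: mapping of user -> list of daily download volumes (hypothetical metric)."""
    baseline = {}
    for user, values in history.items():
        baseline[user] = (mean(values), pstdev(values) or 1.0)  # avoid a zero spread
    return baseline

def is_anomalous(baseline, user, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations from the user's norm."""
    average, spread = baseline.get(user, (observed, 1.0))
    return abs(observed - average) / spread > threshold

profile = build_baseline({"alice": [120, 150, 130, 110, 140]})  # MB downloaded per day
print(is_anomalous(profile, "alice", 135))    # False: an ordinary day
print(is_anomalous(profile, "alice", 5000))   # True: possible data exfiltration
```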

Behavioural analysis can call upon a number of techniques, depending on the product used. These include:

  1. Security intelligence and threat knowledge: There is much information out there about the types of attacks being perpetrated and how they are being initiated. Security companies build up profiles of attack vectors and malware instances and use these to predict next moves and identify incoming threats.
  2. Profile analysis: To understand and detect any changes in behaviour, you first have to understand the behaviour itself, so behavioural analysis starts by analysing normal behavioural patterns. A simple example, known as credential behavioural monitoring, is to apply user-specific questions to a login attempt that looks like a brute-force attack or comes in from an unusual location. If the monitored user is genuine, then instead of locking their account, which is both annoying and can be abused for denial-of-service (DoS) attacks, you can ask them some personal questions; if they answer correctly, they are logged in (see the sketch after this list). Another example is analysing a particular action, say a database query: malware presents a very different profile when extracting data than a human being performing the same operation.
  3. Monitoring, analysis and detection: This involves understanding your baseline of expected behaviour on a network, for example knowing which sites are trusted, the types of files accessed by individuals, and the types of access to servers and external sites that are normal for that network. You can use the profile analysis information as a basis for your monitoring and detection of potential cyberattacks. Traffic behaviour is one area that can give a lot of information and allow early detection of anomalies. It can also help in the fight against botnets, which are typically difficult to detect.
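
The sketch below illustrates the credential behavioural monitoring example from point 2: score a login attempt against what is known about the user and, if it looks risky, step up to challenge questions rather than locking the account. The profile fields, scores, thresholds and the challenge mechanism are hypothetical and purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    usual_countries: set = field(default_factory=set)
    usual_hours: range = range(7, 20)          # typical working hours (local time)
    max_failed_attempts: int = 5

def risk_score(profile, attempt):
    """attempt: dict with 'country', 'hour' and 'recent_failures' keys (illustrative)."""
    score = 0
    if attempt["country"] not in profile.usual_countries:
        score += 2                              # unusual location
    if attempt["hour"] not in profile.usual_hours:
        score += 1                              # unusual time of day
    if attempt["recent_failures"] >= profile.max_failed_attempts:
        score += 3                              # looks like a brute-force attempt
    return score

def handle_login(profile, attempt, ask_challenge_questions):
    """Allow, step up, or reject a login based on behavioural risk."""
    score = risk_score(profile, attempt)
    if score == 0:
        return "allow"
    if score <= 3:
        # Step-up authentication: genuine users pass, attackers usually fail,
        # and nobody gets locked out (which could itself be abused for DoS).
        return "allow" if ask_challenge_questions() else "reject"
    return "reject"
```

In practice, profiles and scores like these would be derived from the baselines built during profile analysis and monitoring, rather than from hard-coded rules.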

Can Behavioural Analysis Work Against Cybercrime?

As in any arms race, the two sides continuously up their game to win the next battle, and cybercrime is no different. Just as we develop sophisticated tools like behavioural analysis to combat the criminals’ use of social engineering and stealth malware, they are bound to develop malware that looks just like regular human behaviour when using technology. For now, behavioural analysis provides a more intelligent and considered method of dealing with complex cyberattacks. Used alongside traditional security tools and coupled with web security mitigation techniques, it is an important part of our new security arsenal. However, we should never become complacent: the next major cybercrime technique is just around the corner. Perhaps this time it will use gamification to arm the hacker, engaging us in our own malware infection.

4 thoughts on “Can We Use Behavioural Analysis to Control Cyber Security?”

  1. Very good article with a lot of insights, thank you very much. Have you heard of “tailgating”? If you haven’t, you should have. This is the phenomenon in which hackers, thieves, and spies seek to enter facilities, or portions of buildings, that they’re not supposed to. Here is how it works: the attacker, most likely dressed to look like either an IT staffer or a senior executive, walks closely behind a worker while that worker is using a key-card or other device to enter the workplace. The attacker stays close, hence the terminology, and slips through the door or gate before it closes. If the attacker is questioned by the employee, he or she grins sheepishly and mumbles about forgetting the key-card today.
    Do you wonder why it works? Tailgating works because most people have an aversion to confrontation. This is understandable: after all, what if you challenge a tailgater and she turns out to be the company CFO? Here is the thing to remember: if that’s the case, the CFO will actually be grateful that you challenged her, as it demonstrates that you take security seriously.

    1. Thank you for your feedback, Emanuel, and for pointing out tailgating. You are spot-on: this attack method from the physical security domain abuses human behaviour just like the attack types illustrated in this article.

  2. And this is why, today, people are still the weakest link. More emphasis should be placed on information security awareness and insider-threat training, exercised annually. Just as we have health checks for systems, there should be a people-check for people.

    1. I fully agree with you, Rami. No matter how ingenious our technological defences become, human error, ignorance and lack of awareness will always be our greatest vulnerability to cybercrime.
      Human error can be reduced to a certain extent by means of bombproof processes and quality control; in the end, however, how far you can go is a matter of resources. Ignorance could largely be avoided by hiring the right people, yet treating IT security ignorance as a knock-out criterion for every position hardly makes sense. Interestingly though, both human error and ignorance can be quite successfully battled by creating secure systems that are actually usable and less of an annoyance; check out this article on Usability vs Security.
      Finally, as you pointed out, the most powerful and cost-effective control at your disposal is creating awareness throughout your organization’s whole hierarchy. Find out more about United Security Providers’ user awareness consulting services here (German PDF only).
