Insider threats and behavioral analysis

Employees, vendors, contractors and suppliers who have access to your organization’s systems and data are insiders, and any threat posed by them is an insider threat. What makes insiders dangerous is that, being inside your trusted circle, they can cause the most damage. Another major issue is that these threats are hard to detect. Traditional security measures like whitelists/blacklists, access blocking, IP filtering, system patching, firewalls and intrusion detection can’t thwart them. Those systems are designed to keep the bad guys out, so naturally they can’t do much when the danger is already inside. Dealing with insider threats requires a different strategy.

An effective way to deal with insider threats is to monitor user activities and identify behavioral anomalies, some of which might indicate malicious intent. That’s why vendors in the security and risk management space are increasingly focusing on behavioral analysis techniques in their insider threat prevention solutions. Modern employee/user activity monitoring (UAM), user and entity behavior analytics (UEBA) and data loss prevention (DLP) products have all started using some form of behavioral analytics, and a few have begun to adopt machine learning (ML) and AI to go beyond analytics and create intelligent, expert solutions.

We will discuss what machine learning is, how it works and why it’s becoming so popular in threat detection systems.

What is machine learning and how is it used to detect behavioral anomalies?

Machine learning is a subset of artificial intelligence (AI) in which a system learns from example inputs (a process called training) and then applies algorithms and statistical/mathematical models to predict an outcome. There are several types of machine learning systems, based on the algorithm or data processing technique used, such as:

  • Supervised
  • Unsupervised
  • Reinforcement learning
  • Bayesian network
  • Neural network
  • Generative adversarial network (GAN)
  • Association rule learning
  • And others

They each have unique use cases, but the premise is basically the same: inference and optimization. In other words, how well can you predict something based on what has already occurred?

In the case of behavior analysis and anomaly detection, a modern threat detection product may use a mix of ML techniques. For example, a solution may use Classification, a Supervised ML technique, to identify spam based on email content and Regression algorithms to dynamically assign risk levels, while the same software uses Unsupervised ML techniques to detect anomalies in data streams like network traffic.
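
To make this concrete, here is a minimal sketch in Python (using scikit-learn on made-up toy data, not any particular vendor’s implementation) of how a supervised spam classifier and an unsupervised anomaly detector might sit side by side in one pipeline:

```python
# A minimal sketch (scikit-learn, toy data) of mixing supervised and
# unsupervised ML in one detection pipeline. All data is illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Supervised: classify email content as spam (1) or not (0).
emails = ["free prize click now", "quarterly report attached",
          "win money fast", "meeting moved to 3pm"]
labels = [1, 0, 1, 0]
vectorizer = CountVectorizer()
spam_clf = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)
print(spam_clf.predict(vectorizer.transform(["click to win a free prize"])))

# Unsupervised: flag outliers in hourly network traffic (bytes).
traffic = np.array([[500], [520], [480], [510], [49_000]])  # last is unusual
detector = IsolationForest(random_state=0).fit(traffic)
print(detector.predict(traffic))  # -1 marks an anomaly, 1 marks normal
```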

Is a rules engine a form of machine learning?

Not all rules engines are machine learning based. A traditional rule-based system works by manually specifying the conditions of each rule, whereas an ML-based rules engine can automatically ‘associate’ (which is why the technique is often called Association Rule Learning) or correlate seemingly unrelated activities to discover relationships among variables in big data. For example, a traditional rule-based system may flag the upload of a file to an unauthorized Cloud service, but protecting the same file across different egress channels and preventing steganography-type fraud requires data tagging and fingerprinting techniques. This type of advanced protection may require a rules engine that uses machine learning.
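
As a toy illustration of the difference, the sketch below contrasts a hand-written rule with a naive co-occurrence count standing in for association rule learning. The event fields, session data and support threshold are all invented; production engines use algorithms such as Apriori or FP-Growth over far larger datasets:

```python
# Toy contrast between a static rule and a naively learned association.
# Event fields and thresholds here are invented for illustration.
from collections import Counter
from itertools import combinations

# Traditional rules engine: a human writes the condition explicitly.
def static_rule(event):
    return event["action"] == "upload" and event["dest"] not in {"corp-drive"}

# Association rule learning (greatly simplified): count which activities
# co-occur in the same user session to surface correlated behaviors.
sessions = [
    {"rename_file", "zip_file", "upload"},
    {"open_email", "reply"},
    {"rename_file", "zip_file", "upload"},
    {"rename_file", "zip_file", "usb_copy"},
]
pair_counts = Counter()
for s in sessions:
    pair_counts.update(combinations(sorted(s), 2))

# Pairs seen in at least half the sessions become candidate "rules".
min_support = len(sessions) / 2
learned = [pair for pair, count in pair_counts.items() if count >= min_support]
print(learned)  # e.g. ('rename_file', 'zip_file') frequently occur together
```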

How good is machine learning for detecting insider threats? 

Insider threats come in different shapes and sizes. They can be malicious, inadvertent or accidental. Disgruntled or stressed employees, non-responders, collusion, attention seekers, willful recklessness, flight-risk users and even innocent but ignorant insiders are all potential risks. Even if you knew what to look for, finding anomalous behavior and then connecting the dots into a complete picture across a huge number of activities may turn out to be humanly impossible, especially if you have a large group of users. The data points can easily run into the hundreds of thousands, even millions. Machine learning is very good at crunching such large datasets and finding patterns outside the nominal baseline.

Machine learning is also good at finding clues in datasets spread across multiple sources. For example, it can flag someone as a risky insider by looking at multiple activity streams: network login/logout times, location data, file transfer activity, social media interactions, job performance, travel history etc. It can then alert a security analyst for a closer look. The analyst can utilize other tools, such as session recording, to investigate further and confirm whether the behavior is truly malicious or just a natural progression (for example, a user assigned to a new project triggering a flurry of activities they haven’t performed before). The analyst’s review and decision can then be fed back into the system to increase the accuracy of the detection algorithm.
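
As a simplified sketch of the multi-source idea, before any actual learning, consider fusing several per-user signals into one flag. The signal names, weights and threshold below are invented; a real UEBA product would learn them from data rather than hard-code them:

```python
# Sketch: fuse several per-user signals into a single risk flag.
# Feature names, weights and the threshold are invented for illustration.
import numpy as np

def risk_score(user):
    signals = np.array([
        user["after_hours_logins"],  # e.g. logins between midnight and 5am
        user["new_locations"],       # logins from previously unseen places
        user["large_transfers"],     # file transfers above a size threshold
    ], dtype=float)
    weights = np.array([0.3, 0.3, 0.4])
    return float(signals @ weights)

user = {"after_hours_logins": 4, "new_locations": 2, "large_transfers": 5}
score = risk_score(user)
if score > 2.5:  # a threshold the security team would tune over time
    print(f"flag for analyst review (score={score:.1f})")
```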

Here are a few advantages of machine learning algorithms when used to detect insider threats:

Less supervision:

Machine learning leads to automation, reducing the need for manual supervision. Once set up, the system can take care of most discovery and classification tasks and, in some cases, even respond to potentially harmful user behavior automatically.

Scalability:

ML can handle large amounts of data from multiple sources, making it suitable for large deployments. In fact, the larger the dataset, the better the system can ‘learn’.

Establish correlation & regression:

ML can find and classify data with a speed and efficiency no human can match. It’s also very good at separating signal from noise, which makes it suitable for distinguishing abnormal user behavior from normal activity.

Reduced number of false positives:

False positives occur when a security system misidentifies a harmless action as malicious. They are a major concern among security professionals because they waste time and effort. If enough of them occur, your security team will get overwhelmed. A more dangerous scenario: your security team keeps receiving the same false alerts, starts ignoring them, and an actual threat slips through. Machine learning can help prevent such scenarios. Techniques like Decision Trees, Rule-Based Classification, Self-Organizing Maps and Clustering reduce false positives while still providing solid security coverage.
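
As one hedged example of the decision-tree approach, a model trained on past analyst verdicts could triage new alerts so only likely threats are escalated. The features and labels below are purely illustrative:

```python
# Sketch: a decision tree trained on analyst-labeled alerts to suppress
# recurring false positives. Features and labels are invented.
from sklearn.tree import DecisionTreeClassifier

# Each alert: [off_hours (0/1), sensitive_file (0/1), new_device (0/1)]
past_alerts = [[1, 0, 0], [1, 1, 1], [0, 0, 1], [1, 1, 0], [0, 0, 0]]
analyst_verdict = [0, 1, 0, 1, 0]  # 1 = true threat, 0 = false positive

triage = DecisionTreeClassifier(random_state=0).fit(past_alerts,
                                                    analyst_verdict)
new_alert = [[1, 1, 1]]
print(triage.predict(new_alert))  # only escalate predicted threats
```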

Faster detection and response time:

With today’s optimized models and hardware, machine learning can perform high-speed risk analysis and anomaly detection on large volumes of data. As a result, you can respond to threats faster and more effectively.

Continuous improvement:

This is probably one of the most attractive benefits of using machine learning in security applications. A self-evolving ML or deep learning model improves as it processes more cases and receives feedback from human supervisors over time. Machine learning is also a fast-moving field, with improvements made every day. That’s good, because the threat landscape keeps evolving and we need solutions that can keep pace with it.

How does insider threat detection using machine learning work?

The actual process of behavior analysis, threat detection, categorization and risk scoring can be a complex endeavor depending on which machine learning algorithms are used. However, a common approach used by many solutions is ‘anomaly detection’, also known as ‘outlier detection’. The idea: a user’s behavior should match that of the rest of their group, or their own past activity, called a baseline. Events or observations that deviate from this baseline are anomalies. Typically, such an anomaly might be an indicator of fraud, sabotage, collusion, data theft or other malicious intent. Once a deviation is detected, the algorithm can flag the incident for further investigation or, if designed to do so, compare the incident with similar events recorded in the past. Those records could be the result of a previously executed Supervised algorithm where anomalies were labeled ‘normal’ or ‘abnormal’ by a human security analyst, or they could come from previous training data or a crowd-sourced knowledge base (for example, multiple customers sharing a threat intelligence database). Finally, the threat is reported with a risk score factoring in frequency, resources involved, potential impact, number of nodes affected and other variables.
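
A minimal sketch of the baseline idea, with a simple z-score standing in for the richer statistical models real products use (the upload volumes and threshold below are invented):

```python
# Sketch: flag events that deviate from a user's historical baseline.
# A plain z-score stands in for the richer models real products use.
import statistics

baseline_mb = [12, 15, 9, 14, 11, 13, 10]  # daily MB uploaded, past week
mean = statistics.mean(baseline_mb)
stdev = statistics.stdev(baseline_mb)

today_mb = 480
z = (today_mb - mean) / stdev
if abs(z) > 3:  # more than 3 standard deviations from the baseline
    print(f"outlier: today's upload volume is {z:.1f} sigma from normal")
```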

Here are some basic steps and processes a machine learning system might go through to detect insider threats:

Data mining input:

The first step involves gathering the user behavior and entity datasets: the monitored objects such as apps/websites, email, the file system and the network, plus metadata such as time of monitoring, user roles/access levels, content and work schedule. The more granular the data, the better the accuracy of the system.
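
For illustration, here is one way raw monitored events might be rolled up into per-user features. The field names and the off-hours definition are assumptions, not any specific product’s schema:

```python
# Sketch: turn raw monitored events into per-user feature rows.
# Field names are invented; real collectors pull from agents, logs, APIs.
from collections import defaultdict

events = [
    {"user": "alice", "channel": "file", "bytes": 10_000, "hour": 2},
    {"user": "alice", "channel": "email", "bytes": 500, "hour": 10},
    {"user": "bob", "channel": "file", "bytes": 800, "hour": 14},
]

features = defaultdict(lambda: {"total_bytes": 0, "off_hours_events": 0})
for e in events:
    f = features[e["user"]]
    f["total_bytes"] += e["bytes"]
    # treat anything outside 6am-8pm as off-hours (an assumption)
    f["off_hours_events"] += 1 if e["hour"] < 6 or e["hour"] > 20 else 0

print(dict(features))
```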

Data classification:

This can be done with pre-defined classification lists such as PII, PHI, PFI and code snippets, semi-dynamic lists such as file properties and origin, or data types discovered on the fly with OCR-type technologies. Both Supervised and Unsupervised classification algorithms can then filter the raw data against those lists. For example, a Supervised classification algorithm that filters sensitive files might take a ‘file upload’ event as input and a file property/tag of ‘confidential’ as output.
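
A minimal sketch of that supervised example, with invented file properties standing in for real classification lists:

```python
# Sketch: a supervised classifier labels uploaded files 'confidential'
# or 'public' from simple properties. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Features per file: [from_finance_share (0/1), contains_pii (0/1), size_mb]
X = [[1, 1, 2.0], [0, 0, 0.1], [1, 0, 5.0], [0, 1, 1.0]]
y = ["confidential", "public", "confidential", "confidential"]

clf = LogisticRegression().fit(X, y)
print(clf.predict([[1, 1, 3.5]]))  # tag the file before an upload proceeds
```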

User profiling:

Information such as user roles, departments/groups and access levels is fed into the system from employee records/HR systems, Active Directory, system audit logs and other sources. This can be utilized for personalized profiling in the behavior models or integrated with an access control and privilege management system later.
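
A small sketch of the enrichment step, assuming hypothetical HR and Active Directory lookups:

```python
# Sketch: enrich raw activity with directory/HR context so baselines can
# be personalized per role. Sources and field names are invented.
hr_records = {"alice": {"dept": "engineering", "access": "admin"}}
ad_groups = {"alice": ["vpn-users", "repo-write"]}

def build_profile(user):
    profile = {"user": user}
    profile.update(hr_records.get(user, {}))
    profile["groups"] = ad_groups.get(user, [])
    return profile

print(build_profile("alice"))
# Downstream models can now compare alice against other engineering
# admins instead of against the whole company.
```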

Behavioral model(s):

Different algorithms, such as Feature Extraction, Eigenvalue Decomposition, Density Estimation and Clustering, are used to generate behavior models. Sometimes specialized statistical/mathematical frameworks are adapted for this purpose. For example, Regression-based models can be used to predict future user actions or detect credit card fraud, whereas a Clustering algorithm can be used to compare business processes against compliance objectives.
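
For example, a clustering-based behavior model might group users into peer groups that then serve as baselines. The sketch below uses k-means on two invented features; real models use many more signals:

```python
# Sketch: cluster users by behavior so each cluster serves as a
# peer-group baseline. Feature values are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Per-user features: [avg daily logins, avg MB transferred]
X = np.array([[5, 10], [6, 12], [5, 11], [40, 300], [42, 310]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # users in the same cluster are compared to each other
```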

Optimizing baselines:

Once the behavior model generates a baseline, it can be fine-tuned for specific purposes: for example, adding a time or frequency component to trigger different rules at different levels of deviation, or assigning risk scores. Additional layers of filtering can also increase the efficiency of the algorithm and reduce false positives, such as adding a domain filter to website anomalies to limit the number of incidents the system needs to check. In most cases, such baselines can be customized at the individual, group/department or organizational level.
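
A sketch of such tiered fine-tuning, where both the size of a deviation and its frequency decide the response (all thresholds below are invented):

```python
# Sketch: tier the response by how far an event strays from the baseline
# and how often it recurs. Thresholds are invented for illustration.
def triage(z_score, occurrences_this_week):
    if abs(z_score) > 6 or occurrences_this_week > 3:
        return "high", "alert an analyst immediately"
    if abs(z_score) > 3:
        return "medium", "log and watch for recurrence"
    return "low", "fold into the baseline on the next retrain"

print(triage(z_score=7.2, occurrences_this_week=1))  # ('high', ...)
print(triage(z_score=3.5, occurrences_this_week=0))  # ('medium', ...)
```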

Policies and rules integration:

Behavior baselines are used to identify threats and trigger alerts when something out of the ordinary happens. Some employee monitoring/UEBA/DLP solutions combine these baselines with a policy and rules engine to proactively prevent threats. These engines support actions such as warning the user, blocking an action, notifying an admin, running specific commands or recording the incident to facilitate forensic investigation.
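
A minimal sketch of such a policy engine, mapping an incident’s risk level to the kinds of actions mentioned above (the wiring and action names are illustrative, not any product’s API):

```python
# Sketch: map a detected incident to policy actions. Action names mirror
# the ones mentioned above; the wiring is invented for illustration.
def enforce(incident):
    actions = []
    if incident["risk"] == "high":
        actions += ["block_action", "notify_admin", "record_session"]
    elif incident["risk"] == "medium":
        actions += ["warn_user", "record_session"]
    return actions

print(enforce({"risk": "high", "user": "alice", "event": "bulk_download"}))
```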

Human feedback:

At the end of the day, no matter how good a machine learning system is, it will still make mistakes, generate false positives or fail to identify a threat. After all, modeling human behavior is beyond the reach of any current technology. So a security analyst will need to take the output from the machine learning system and conduct threat assessments manually from time to time. The good news is that these systems are designed to be responsive to human input. With enough human training, the system improves, requiring less and less intervention over time.
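
A sketch of that feedback loop, assuming a simple scikit-learn model that is periodically retrained with analyst verdicts folded into its training data:

```python
# Sketch: fold analyst verdicts back into the training set so the model
# improves over time. Data and model choice are illustrative.
from sklearn.linear_model import LogisticRegression

X_train = [[1, 0], [0, 1], [1, 1]]  # alert features
y_train = [0, 0, 1]                 # past analyst verdicts (1 = threat)

model = LogisticRegression().fit(X_train, y_train)

# An analyst reviews a newly flagged incident and records a verdict...
new_alert, verdict = [1, 0], 0      # reviewed: false positive
X_train.append(new_alert)
y_train.append(verdict)

# ...and the model is periodically retrained with the corrected labels.
model = LogisticRegression().fit(X_train, y_train)
```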

Conclusion:

Behavior analysis and machine learning aren’t a magic bullet against insider threats; they have their limitations. The best way to think about ML is as an additional tool (albeit a powerful one) in your security toolbox. That said, as the threat landscape evolves, we need technologies that can adjust to dynamic insider threats such as malicious users, sabotage, espionage, fraud, data and IP theft, privilege misuse and other difficult-to-identify risks. Machine learning is a promising technology moving in the right direction.