Detecting bad actors via behavioural analytics
September 2019 | FEATURE | RISK MANAGEMENT
Financier Worldwide Magazine
When used to detect bad actors in the workplace, behavioural analytics can go some way toward assisting companies to establish the who, what, where, when and why of a threat before it can become reality.
Described by some as an emerging technology and others as a mature practice, behavioural analytics is a threat detection tool that is increasingly being utilised by chief information security officers (CISOs) to better understand the vagaries of human behaviour.
“The scope of intended functions for behavioural analytics technologies is broad and varied,” says Dr Alexander Stein, founder of Dolus Advisors. “Companies employ different analytics tools to detect, defend and respond to potential internal and external threats and breaches or maintain pre-set operational controls and standards.”
According to Imperial College London’s Behavioural Analytics Lab, the key goals of behavioural analytics are to: (i) understand and predict human behaviour from ubiquitous sensors and digital data; (ii) predict and evaluate human performance; (iii) infer internal or cognitive state (stress and risk) of individuals from behavioural dynamics; (iv) develop behavioural biomarkers of physiological and psychological well-being; and (v) provide bottom-up analysis of group and social dynamics from the decisions of individuals.
“Behavioural analytics uses machine learning (ML) algorithms on Big Data to create a behavioural baseline using profiling attributes from various data sources,” explains Saryu Nayyar, chief executive of Gurucul. “It detects when there is a deviation from established patterns, so it can quickly alert on insider threats, compromised accounts, fraud, brute-force attacks and the like. The most mature behavioural analytics solutions perform continuous risk scoring of users and entities based on historical and current behaviour, triggering automated risk-response based on these dynamic risk scores.”
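The baseline-and-deviation approach Ms Nayyar describes can be illustrated with a minimal sketch. This is a hypothetical example, not any vendor's implementation: it builds a per-user baseline (mean and standard deviation of a behavioural metric, such as daily login count) and flags new observations that deviate beyond a z-score threshold.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Per-user baseline: mean and standard deviation of a behavioural
    metric (e.g. daily login count) over historical observations."""
    return {user: (mean(obs), stdev(obs)) for user, obs in history.items()}

def is_anomalous(baseline, user, value, z_threshold=3.0):
    """Flag an observation deviating from the user's established pattern
    by more than z_threshold standard deviations."""
    mu, sigma = baseline[user]
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold

history = {"alice": [4, 5, 6, 5, 4, 5, 6], "bob": [1, 2, 1, 2, 1, 2, 1]}
baseline = build_baseline(history)
print(is_anomalous(baseline, "alice", 5))   # within the normal range
print(is_anomalous(baseline, "bob", 40))    # large deviation -> potential alert
```

Production systems profile many attributes across many data sources at once, but the core mechanism – learn what is normal per entity, score departures from it – is the same.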
That said, despite its scope, the adoption of behavioural analytics as a security solution tool is still a fairly recent development, having been utilised in other spheres for some years.
“Understanding how users interact with services and systems is critical to security, as well as being good business,” says Mark McGovern, founder of Ravenwill. “E-commerce sites and advertisers have developed sophisticated systems for collecting and analysing data about user activities over time. This is how they can identify the telltale behaviours that indicate what a user is looking to buy. Security solutions are now beginning to use the same type of behavioural approach. They are monitoring user activity over time and analysing it to identify when a user or their actions are potentially malicious.”
Collection and analysis
Before collecting, analysing and evaluating data in order to detect anomalies – in account privileges, databases, network traffic or geography – CISOs should first establish what data is pertinent.
“Behavioural analytics works best when it has access to all the relevant data,” says Dr Scott Zoldi, chief analytics officer at FICO. “Unlike some applications of analytics that are hypothesis-led, that is, where an analyst may predetermine which data should be used to improve results, in security, artificial intelligence (AI) and ML are increasingly used to find patterns of behaviour from transaction data and anticipate what is likely to come next. This is routinely the case today with payments fraud detection, and is becoming more prevalent in cyber security.
“In the past, cyber security analytics was focused on gathering data about compromises, developing threat ‘signatures’, and using those signatures to protect against future threats,” he continues. “By contrast, behavioural analytics identify emerging threats by recognising anomalous transactional behaviour patterns in real time – independent of attack vector. While many companies label their signature-based detection methods as ‘analytics’, the analytics are largely static and built to block known threats, and therefore fall into the category of basic defences.”
In Dr Zoldi’s view, there are two main ways CISOs should harness analytics. First, by utilising self-calibrating models which constantly recalibrate based on the behaviour of monitored entities, and score future transactions as likely anomalies. And second, via self-learning analytics which improve with each resolved alert, serving to systematically automate the insights of expert and in-demand human security analysts as they work cases. “These technologies provide, for the first time, the ability to sense and respond to the most egregious threats as they happen, and before damage is done,” he says.
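A self-calibrating model of the kind Dr Zoldi describes can be sketched, under simplifying assumptions, as a scorer whose notion of “normal” drifts with the entity it monitors rather than being fixed at training time. The exponentially weighted update below is an illustrative choice, not FICO’s method.

```python
class SelfCalibratingScorer:
    """Hypothetical self-calibrating anomaly scorer: keeps an exponentially
    weighted estimate of an entity's behaviour and recalibrates with every
    observation, so the baseline adapts as behaviour evolves."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha   # weight given to the newest observation
        self.mean = None     # running estimate of normal behaviour
        self.var = 1.0       # running estimate of its variability

    def score(self, x):
        """Score x against the current baseline, then fold x into it."""
        if self.mean is None:
            self.mean, anomaly = x, 0.0
        else:
            anomaly = abs(x - self.mean) / (self.var ** 0.5 + 1e-9)
            delta = x - self.mean
            self.mean += self.alpha * delta
            self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        return anomaly

scorer = SelfCalibratingScorer()
for value in [10] * 50:          # steady behaviour tightens the baseline
    scorer.score(value)
print(scorer.score(100))         # sudden spike scores far above normal
```

Because every observation updates the baseline, the model never relies on a static signature of past attacks – the property that distinguishes behavioural analytics from signature-based defences in Dr Zoldi’s account.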
“An effective behavioural analytics solution will collect and prepare data from diverse sources, obtaining a true view of the identity of users and hosts,” adds Ross Brewer, vice president and managing director EMEA at LogRhythm. “It will then apply scenario-based algorithms that utilise ML, statistical analysis, peer group analytics and other techniques to identify patterns and grade threats and anomalies.”
Reducing false positives
With false positives the leading cause of delayed breach detection, companies need to set thresholds that, over an extended period, separate genuine deviations from normal activity – distinguishing anomalies that represent a potential threat from those that do not – and report them accordingly.
“The biggest challenge in setting up a security program with the objective of detecting unknown threats is that you must go deep into the data to examine the behaviour,” explains Ms Nayyar. “Too many existing systems utilise a security approach that delivers a high volume of false positives. What is needed, and what a mature behavioural analytics solution provides, is context. True ML thrives on large repositories of Big Data for increased processing and data variety over legacy infrastructure. The richer and more inclusive the source of data, the higher potential for accurate context for risk scores, along with fewer false positives.”
False positives can also be reduced with specific recourse to AI and ML, according to Mr Brewer. “Through ML, the behavioural analytics system can compute a risk score – a probability that an event represents an anomaly or a security incident,” he explains. “A threshold is established and when the risk score of the user or entity exceeds this, the system creates a security alert and flags it to the system administrator. As more data is collected and a better view of identity and behaviour of users and hosts established, an effective system will use AI and ML to improve efficiency in detecting and responding to threats.”
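The threshold mechanism Mr Brewer outlines can be sketched as follows. The signal names, weights and threshold below are hypothetical placeholders – in a real system they would be learned by the ML model from collected data – but the shape is the same: combine behavioural signals into a probability-like risk score and alert only when it exceeds the threshold.

```python
import math

# Hypothetical weights for behavioural signals; in practice these would
# be learned from data rather than hand-set.
WEIGHTS = {"off_hours_login": 1.5, "new_device": 1.0,
           "unusual_volume": 2.0, "failed_auth_burst": 2.5}
BIAS = -4.0
ALERT_THRESHOLD = 0.8   # scores above this raise a security alert

def risk_score(signals):
    """Logistic risk score: a value in (0, 1) representing the probability
    that the observed combination of signals is a security incident."""
    z = BIAS + sum(WEIGHTS[s] for s in signals)
    return 1 / (1 + math.exp(-z))

def triage(signals):
    score = risk_score(signals)
    return ("ALERT" if score > ALERT_THRESHOLD else "log"), round(score, 3)

print(triage({"off_hours_login"}))                    # lone signal: logged, not alerted
print(triage({"off_hours_login", "unusual_volume",
              "failed_auth_burst"}))                  # combined context exceeds threshold
```

Note how a single off-hours login stays below the threshold – the benign “system update at midnight” case – while the same login combined with unusual data volume and failed authentications crosses it, which is how richer context reduces false positives.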
Analytics at the core
Given the purported ability of behavioural analytics to detect illicit transactions and stop them as they occur, saving untold amounts of financial and reputational loss, many analytics practitioners believe that the tool should now be a core component of companies’ threat detection arsenal.
“Behavioural analytics is the next step in security and increasingly will be expected by customers, regulators and security professionals,” suggests Mr McGovern. “As technology moves toward cloud-based, multi-tenant solutions and services, remote access is becoming the default way users access sensitive data and services. This requires defenders to put in place security measures that are smart, and wholly independent of the end user or their devices. Behavioural analytics meets these requirements and moves security considerably forward.”
According to Ms Nayyar, the key to effectively rooting out truly risky anomalous behaviour is for CISOs to examine every possible access and activity feed, so that dots can be connected across applications, systems, groups and devices. “If someone logs in at midnight, is that person doing a system update, or has an account been compromised and is this a data exfiltration attempt?” she asks. “Examining context across the entire environment is the only way to identify truly aberrant behaviour. The best behavioural analytics solutions ingest the most data feeds out-of-the-box and have the best ML models to do that data justice.”
Others, however, are less enthused about the idea of behavioural analytics as an essential component of security testing. “Behavioural analytics is to malicious threat detection what a man wearing a bird suit flapping his arms is to supersonic flight,” asserts Dr Stein. “Behavioural analytics as a general enterprise is handicapped by multiple factors. One is a cardinal misunderstanding of the root problem: intent and motivation drive and precede behaviour, not the other way around.
“Behavioural technologies can only observe the superficial features of behaviour in-motion, but have no capability to comprehend or intuit the actual triggers of impending malicious action,” he continues. “Analytics compresses the infinitely complex individual and systemic factors giving rise to malicious conduct into basic behavioural and motivational taxonomies. To data scientists, technologists and social behaviouralists, these might seem inconsequential rounding errors. From a psychological perspective, they are radical flaws which deform concept, design and execution.”
Panacea, pipedream or neither?
With the threat of bad actors infiltrating systems and processes ever-present, the market for behavioural analytics seems likely to increase in the years ahead. But while some practitioners appear to view the tool as a panacea for detecting bad actors, others are more critical of its worth as a standalone threat detection mechanism.
“Behavioural analytics can be widely applied to security problems but I would stop short of calling it a panacea, as it is only part of the solution,” says Dr Zoldi. “Good data is essential, as are good monitoring systems and good case management tools. Ultimately, behavioural analytics can transform your ability to recognise threats earlier based on strong ML and guidance based on expert security analysis investigations. But what happens when a potential threat is detected is up to you.”
Likewise, in the view of Dr Stein, behavioural analytics is a useful tool if its application is limited to enhancing additional threat detection systems. “Companies must be alert to the fundamental if inconvenient fact that malicious human behaviour is a human problem,” he explains. “It frequently involves technology but is not a technology issue that can be solved technically.
“The actual causes of most institutional incidents are the confluence of a complex matrix of factors, not any single one,” continues Dr Stein. “Understanding and addressing who people are – not just controlling their functional competencies – and how they relate to and treat each other in the enterprise ecosystem, are in the aggregate more effective mitigants to misbehaviour than behavioural analytics.”
Whether or not the tool is the all-seeing, all-knowing solution some proponents claim, more companies and their CISOs are set to integrate behavioural analytics into their defence systems to help mitigate the risks posed by bad actors.
© Financier Worldwide