New research uses natural language processing to understand insider threats and breaches caused by internal actors

The study was published online in the journal HCI for Cybersecurity, Privacy and Trust.


Typically, efforts to prevent cyberattacks by internal actors combine persistent evaluation, an evolving approach to assessing current and future risks, and psychological insight into the insider's decision-making process.

While external threats to cybersecurity have been the subject of vigorous research, the rapidly shifting threat posed by internal actors still lacks the same depth of understanding and actionable insight.

In a recent study, published online in the journal HCI for Cybersecurity, Privacy and Trust, a research group at Cranfield University details its approach to confronting insider threat attacks using natural language processing (NLP).

“There are a number of Insider threat frameworks that bridge the gap between these views, creating a holistic view of insider threat,” Katie Paxton-Fear and her colleagues stated in their findings.

“These models can be difficult to contextualise within an organisation and hence developing actionable insight is challenging. An important task in understanding an insider attack is to gather a 360-degree understanding of the incident across multiple business areas: e.g. co-workers, HR, IT, etc. can be key to understanding the attack.”

The new research brings a fresh approach to light: one that gathers organic narratives of an insider threat case and then uses a computational approach to map those narratives to an existing insider threat framework.

“Leveraging Natural Language Processing (NLP) we exploit a large collection of insider threat reporting to create an understanding of insider threat. This understanding is then applied to a set of reports of a single attack to generate a computational representation of the attack,” Paxton-Fear and her colleagues explained in the report.

“This representation is then successfully mapped to an existing, manual insider threat framework.”
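The paper itself is not reproduced here, so the sketch below is only a rough illustration of the general idea the authors describe: representing free-text report sentences computationally and mapping them onto the closest category of an insider threat framework. The framework categories, keyword descriptions, and TF-IDF-plus-cosine-similarity approach are all assumptions made for illustration, not the authors' actual pipeline.

```python
import math
from collections import Counter

# Hypothetical framework categories with illustrative keyword descriptions.
# These are invented for this sketch and are not the framework from the paper.
FRAMEWORK = {
    "motivation": "disgruntled revenge financial gain ideology grievance",
    "opportunity": "privileged access credentials unmonitored systems permissions",
    "action": "exfiltrate copy delete sabotage leak download data",
}

def tokenize(text):
    return text.lower().split()

def build_idf(corpus_texts):
    """Smoothed inverse document frequency over the framework descriptions."""
    n = len(corpus_texts)
    df = Counter()
    for text in corpus_texts:
        df.update(set(tokenize(text)))
    return {t: math.log((1 + n) / (1 + df[t])) + 1.0 for t in df}

def vectorize(text, idf):
    """Sparse TF-IDF vector as a dict of term -> weight."""
    tf = Counter(tokenize(text))
    return {t: count * idf.get(t, 1.0) for t, count in tf.items()}

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def map_to_framework(sentence):
    """Return the framework category most similar to a report sentence."""
    idf = build_idf(list(FRAMEWORK.values()))
    category_vecs = {name: vectorize(desc, idf) for name, desc in FRAMEWORK.items()}
    sentence_vec = vectorize(sentence, idf)
    return max(category_vecs, key=lambda name: cosine(sentence_vec, category_vecs[name]))
```

In practice, a co-worker's statement such as "he planned to exfiltrate and leak the data" would land in the "action" category, while "a disgruntled employee seeking revenge" would land in "motivation", giving a simple computational representation of an incident across multiple viewpoints.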

The study arose as breaches involving internal actors have reportedly been on the rise, as highlighted in a 2019 report cited in the findings.

“Our latest research considers how reports and viewpoints from employees can be automatically mapped onto an understanding of insider threat. This increased level of knowledge can help organisations better manage security risks originating from insider attacks.”

“Insider threats are very difficult to detect and many companies use tools to try and monitor their systems and employees. However, these tools give degrees of access which can be a significant risk, from an insider threat perspective,” the co-authors concluded.

The paper, titled Understanding Insider Threat Attacks Using Natural Language Processing: Automatically Mapping Organic Narrative Reports to Existing Insider Threat Frameworks, was also authored by Duncan Hodges and Oliver Buckley. The findings will be presented at the HCI International Conference in late July.

Image courtesy of stock.adobe.com