Digital Safeguarding and Personal Security: A Talk With Nansen

News
POSTED ON 13-March-26

We recently sat down with Andrew Collins and Rose MacDonald, the Canberra-based founders of the digital forensics and personal protection firm Nansen Digital Forensic Services, to discuss the evolving landscape of digital safety. While much of the industry focuses on securing corporate and public infrastructure, Andrew and Rose work on something quite different, and their work offers some new and interesting perspectives. In many situations, individuals are controlled through, or made to fear for, their own personal digital lives. Rose, drawing on a twenty-year career in policing, digital forensics and investigations, and Andrew, bringing extensive experience in engineering and intelligence, shared their perspectives on how personal environments are increasingly targeted, and what individuals can do to protect themselves.

 

Digital Safety in the Domestic Environment

 

A core focus of Nansen's work is addressing technology-facilitated abuse, which is present in the vast majority of Domestic and Family Violence cases. Most cyber defence mechanisms are designed to protect the victim from external threats, but the picture changes dramatically when the perpetrator of violence is close to the victim. In these cases the perpetrator often has direct access to mobile phones and computers, as well as to online accounts and their associated passwords and authentication codes. As such, conventional defences are usually insufficient to protect those experiencing tech-facilitated abuse.

 

"[A phone] is lovely and secure against an external threat," Andrew explained regarding standard device security. "But it doesn't work if I'm in an intimate relationship with the person that's the threat. Suddenly all those controls actually become a liability, stop working, and become a tool for the perpetrator to use to monitor, survey, control."

 

Rose then noted that individuals rarely need to worry about highly sophisticated, expensive spyware. Instead, the threat usually comes from the systems we already use every day.

 

"Most of your personal threats come from misconfiguration or misuse of legitimate apps and services that you rely on every day," Rose explained. "There are a lot of legitimate uses for those apps and services, which people may consent to, but it's the context [the purpose of their use] that matters. The use of tracking apps without the knowledge and voluntary consent of the person being tracked is never okay."

 

The Artificial Intelligence Complication

 

The rapid integration of AI into daily life has introduced several new challenges for personal safeguarding. Andrew highlighted how Large Language Models (LLMs) have drastically lowered the barrier to entry for malicious actors.

 

"We are seeing increasing sophistication around the attacks as LLMs become more commonplace," Andrew observed. He shared a concerning example of a perpetrator with no IT background who successfully deployed elaborate JavaScript code for malicious purposes.

 

"[LLMs] take that complexity and the engineering knowledge away to make it more entry-level."

 

Equally dangerous is the reliance on AI by individuals trying to investigate their own security concerns. AI systems are biased toward pleasing the user, which can lead to severe hallucinations when they are prompted repeatedly and suggestively. Andrew recalled a case in which someone fed standard, encrypted network traffic into an AI tool and became convinced that they were being monitored and had been placed on a hit list.

 

"It started to hallucinate and it started to tell her things like... the rapid synchronisation between parties is an indication of organised crime," Andrew shared. "By the time she got to 100 instructions... it had come to the conclusion that she was under imminent lethal threat and there was a plan to assassinate her."

 

To prove the AI was merely hallucinating based on the user's anxiety, Andrew ran the exact same encrypted file through the system, this time suggesting his business partner was a 'Dalek' from Doctor Who. 

 

"It came up with 50,000 incidences of the word Dalek, 80,000 of the word exterminate," he said. "Yet it's an encrypted network file. None of this was actually there."

 

So although LLMs can be useful and can significantly support productivity in some instances, they are not foolproof. Whenever you use an LLM or consider its advice, ask yourself whether the output actually sounds plausible and how you could verify it independently. Make sure the information given by AI is not a hallucination.
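Andrew's Dalek demonstration hints at how such claims can be checked directly rather than taken on faith: search the raw bytes of the file for the claimed word, and measure how random the data looks. The sketch below is illustrative only; the seeded random byte stream is an assumed stand-in for an encrypted capture, since real ciphertext is statistically indistinguishable from random data.

```python
import math
import random
from collections import Counter

# Assumed stand-in for an encrypted network capture: 1 MB of seeded
# pseudo-random bytes (illustrative only, not a real packet capture).
random.seed(0)
data = bytes(random.getrandbits(8) for _ in range(1_000_000))

# Check 1: does the claimed word actually appear anywhere in the raw bytes?
count = data.count(b"Dalek")

# Check 2: Shannon entropy per byte. A value near 8 bits means the file is
# uniformly random-looking (encrypted or compressed), so it cannot contain
# thousands of readable plaintext words as the AI tool claimed.
freq = Counter(data)
entropy = -sum((n / len(data)) * math.log2(n / len(data)) for n in freq.values())

print(count)               # occurrences of "Dalek" in the raw bytes
print(round(entropy, 2))   # bits of entropy per byte
```

A five-byte word is vanishingly unlikely to occur in a megabyte of random data, and the entropy check confirms the file carries no readable text, which is exactly the kind of independent verification worth doing before trusting an AI's interpretation of a file.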

 

Acorn: A New Announcement from Nansen

 

Yesterday, Nansen officially announced its upcoming product, Acorn: a digital safety toolkit designed to protect victim-survivors from technology-facilitated abuse. Acorn empowers women to detect and manage tech abuse through a dedicated detection tool, and provides safe communications, safe cloud use and an immutable evidence diary for recording incidents of abuse, all within one platform.

 

“Acorn exists to give victim-survivors back control over technology when that control has been taken from them.”

 

We at the Canberra Cyber Hub are always proud and excited to hear of new Canberra innovations that actively contribute to securing, supporting and uplifting our local and national communities.

 

The insights from Nansen highlight that traditional security mechanisms are failing to protect those experiencing technology-facilitated abuse when that threat arises from someone who was once trusted. The insider threat is particularly pervasive and harmful, and with the rapid adoption of AI, the threat landscape only becomes more complex. Nansen has taken up the challenge to develop a solution to protect this vulnerable group.