Bringing lessons from cybersecurity to the fight against disinformation | MIT News

Mary Ellen Zurko remembers the feeling of disappointment. Not long after graduating from MIT, she took her first job evaluating secure computer systems for the US government. The goal was to determine whether the systems complied with the “Orange Book,” the government’s authoritative manual on cybersecurity at the time. Were the systems technically secure? Yes. In practice? Not so much.

“There was no consideration of whether the security demands placed on end users were realistic,” says Zurko. “The notion of a secure system was all about technology, and it presupposed perfect, obedient human beings.”

That discomfort set Zurko on a path that would define her career. In 1996, after returning to MIT for a master’s degree in computer science, she published an influential paper introducing the term “user-centered security.” The concept grew into a field of its own, concerned with keeping cybersecurity balanced against usability; otherwise, humans might bypass security protocols and give attackers a foothold. The lessons of usable security now surround us, influencing the design of the warnings we see when visiting an insecure website, or the invention of the “strength” bar that appears as we type a desired password.

Now a cybersecurity researcher at MIT Lincoln Laboratory, Zurko is still wrestling with humans’ relationship to computers. Her focus has shifted to technology for countering influence operations: attempts by foreign adversaries to deliberately spread false information (disinformation) on social media, with the intent of undermining US ideals.

In a recent editorial published in IEEE Security &amp; Privacy, Zurko argues that many of the “human problems” in usable security have parallels in the problem of combating disinformation. To some extent, she faces a challenge similar to the one from her early career: convincing colleagues that such human problems are also cybersecurity problems.

“In cybersecurity, attackers use humans as a means to subvert a technical system. Disinformation campaigns aim to influence human decision making; they’re sort of the ultimate use of computer technology to subvert humans,” she says. “Both use computer technology and humans to accomplish a goal. It’s just the goal that’s different.”

Anticipating influence operations

Research to counter online influence operations is still young. Three years ago, Lincoln Laboratory began a study on the topic to understand its implications for national security. The field has since exploded, particularly after the online spread of dangerous and misleading claims about Covid-19, perpetuated in some cases by China and Russia, as one RAND study found. There is now dedicated funding through the laboratory’s Technology Office to develop countermeasures against influence operations.

“It is important for us to strengthen our democracy and make all of our citizens resilient to the kinds of disinformation campaigns targeted at them by international adversaries, who seek to disrupt our internal processes,” Zurko says.

Like cyberattacks, influence operations often follow a multi-phase path, called a kill chain, to exploit predictable weaknesses. Studying and reinforcing those weaknesses can work in the fight against influence operations, just as it does in cyber defense. Lincoln Laboratory’s efforts focus on developing technology to support work at the source, strengthening the earliest stages of the kill chain, when adversaries begin scouting for opportunities for a divisive or misleading narrative and building accounts to amplify it. This source-focused research helps alert US intelligence personnel to a disinformation campaign that is brewing.
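The kill-chain framing above can be sketched as a simple data model. The stage names here are assumptions for illustration only; the article does not enumerate the phases of the model Lincoln Laboratory uses.

```python
from enum import Enum, auto

class KillChainStage(Enum):
    """Illustrative stages of an influence-operation kill chain (assumed names)."""
    RECON = auto()             # adversary scouts for divisive narratives
    ACCOUNT_BUILDING = auto()  # personas are created to amplify content
    SEEDING = auto()           # initial posts introduce the narrative
    AMPLIFICATION = auto()     # coordinated boosting: retweets, likes
    MAINSTREAM = auto()        # narrative crosses into organic discussion

# Defenses aimed at the source target the earliest stages,
# before a narrative gains organic traction.
EARLY_STAGES = {
    KillChainStage.RECON,
    KillChainStage.ACCOUNT_BUILDING,
    KillChainStage.SEEDING,
}

def is_early_stage(stage: KillChainStage) -> bool:
    """Return True if a defense at this stage counts as source-focused."""
    return stage in EARLY_STAGES
```

The point of such a model is simply to make “early” precise: interventions are evaluated by which stage of the chain they disrupt.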

A few of the laboratory’s approaches are aimed at finding these sources. One approach leverages machine learning to study digital personas, with the intent of identifying when the same person is behind multiple malicious accounts. Another area focuses on building computational models that can identify deepfakes, or AI-generated videos and photos created to mislead viewers. Researchers are also developing tools to automatically identify which accounts have the most influence over a narrative. First, the tools identify a narrative (in one paper, the researchers studied a disinformation campaign against French presidential candidate Emmanuel Macron) and collect data related to it, such as keywords, retweets, and likes. Then, they use an analytical technique called causal network analysis to define and rank the influence of specific accounts: which accounts often generate posts that go viral?
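To make the account-ranking idea concrete, here is a minimal sketch that scores accounts in a retweet network with a PageRank-style iteration. This is a stand-in for illustration only, not the causal network analysis the researchers actually use; the data format and parameters are assumptions.

```python
def influence_scores(retweets, damping=0.85, iterations=50):
    """Rank accounts by influence in a retweet network.

    `retweets` is a list of (retweeter, original_author) pairs. A retweet
    is treated as an endorsement edge from retweeter to author, and scores
    are computed with a PageRank-style power iteration: accounts whose
    posts are widely amplified accumulate the most score.
    """
    accounts = set()
    endorses = {}  # retweeter -> set of authors they amplified
    for retweeter, author in retweets:
        accounts.update((retweeter, author))
        endorses.setdefault(retweeter, set()).add(author)

    n = len(accounts)
    score = {a: 1.0 / n for a in accounts}
    for _ in range(iterations):
        new = {a: (1 - damping) / n for a in accounts}
        for retweeter, authors in endorses.items():
            # each retweeter splits its score among the authors it boosts
            share = damping * score[retweeter] / len(authors)
            for author in authors:
                new[author] += share
        # accounts that endorse nobody redistribute their score evenly
        dangling = sum(score[a] for a in accounts if not endorses.get(a))
        for a in accounts:
            new[a] += damping * dangling / n
        score = new
    return sorted(score.items(), key=lambda kv: -kv[1])
```

For example, if several amplifier accounts all retweet one seed account, the seed account rises to the top of the ranking, which is the kind of signal the laboratory’s tools surface for analysts.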

These technologies are fueling the work Zurko is conducting to develop a test bed for counter-influence operations. The goal is to create a safe space to simulate social media environments and test counter technologies. More importantly, the test bed will allow human operators to get involved to see how new technologies help them do their jobs.

“Our military intelligence personnel have no way to measure impact. By standing up a test bed, we can use multiple different technologies, in a repeatable fashion, to collect metrics that let us see whether these technologies actually make operators more effective at identifying a disinformation campaign and the actors behind it,” Zurko says.

This vision is still ambitious as the team builds out the test bed environment. Simulating social media users, including what Zurko calls the “gray cell” of unwitting participants in online influence, is one of the biggest challenges in emulating real-world conditions. Rebuilding social media platforms is also a challenge; each platform has its own policies for dealing with misinformation and its own proprietary algorithms that influence how far misinformation spreads. For instance, The Washington Post reported that Facebook’s algorithm gave “extra value” to content that received angry reactions, making it five times more likely to appear in a user’s news feed, and that such content is disproportionately likely to include misinformation. These often-hidden dynamics are important to replicate in a test bed, both to study the spread of fake news and to understand the impact of interventions.
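The reaction-weighting dynamic described above is the kind of hidden ranking rule a test bed would need to reproduce. Here is a minimal sketch: the five-times multiplier for “angry” reactions reflects the weighting reported by The Washington Post, while the scoring formula and the other weights are assumptions for illustration only.

```python
# Assumed weights: "angry" counts five times a "like", per the reported
# Facebook weighting; the rest of the scheme is illustrative.
REACTION_WEIGHTS = {"like": 1, "angry": 5}

def engagement_score(reactions):
    """Score a post from its reaction counts, e.g. {"like": 10, "angry": 3}."""
    return sum(REACTION_WEIGHTS.get(kind, 1) * count
               for kind, count in reactions.items())

def rank_feed(posts):
    """Order posts for a simulated feed by descending engagement score."""
    return sorted(posts,
                  key=lambda p: engagement_score(p["reactions"]),
                  reverse=True)
```

Under this weighting, a post with a handful of angry reactions can outrank a post with many more likes, which is exactly the amplification dynamic a test bed would want to study and intervene against.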

Taking a full-system approach

In addition to building a test bed for trying out new ideas, Zurko advocates for a unified space that disinformation researchers can call their own. Such a space would allow researchers in sociology, psychology, policy, and law to come together and share cross-cutting aspects of their work alongside cybersecurity experts. The best defenses against disinformation will require this breadth of expertise, Zurko says, and “a full-system approach of both human-centered and technical defenses.”

While this space does not yet exist, it is likely on the horizon as the field continues to grow. Influence operations research is gaining ground in the cybersecurity world. “Just recently, major conferences have begun to incorporate disinformation research into their calls for papers, which is a real indicator of where things are going,” Zurko says. “But some people still hold on to the old-school idea that messy humans have nothing to do with cybersecurity.”

Despite those sentiments, Zurko still trusts her early observation as a researcher: what computer technology can do effectively is moderated by how people use it. She wants to keep designing technology, and approaching problem-solving, in a way that places humans at the center. “From the beginning, what I loved about cybersecurity is that it’s part mathematical rigor and part sitting around the ‘campfire,’ telling stories and learning from one another,” Zurko reflects. Disinformation draws its power from humans’ ability to influence one another; that ability may also be the most powerful defense we have.
