Last week, FiveBy's Director of Risk Intelligence, Irene Kenyon, participated in a Thomson Reuters Institute event titled “Those Darkest Hours”: The Future of American Security. The conference marked the twentieth anniversary of the September 11, 2001, attacks on our nation, focusing on how the terrorism landscape has evolved over these two decades and how mitigation strategies have developed, and should continue to develop, to protect our nation from violent extremist attacks—whether from domestic extremist groups or foreign terrorist organizations (FTOs).
Key discussion issues included:
- The differences between individuals connected with FTOs and domestic violent extremists (DVEs).
- The role of professional money launderers and trusted networks in funding domestic terrorism, how domestic extremist financial transactions differ from those of FTOs, and how financial institutions and law enforcement can best follow the money.
- How social media and other tech platforms play a role in recruitment, radicalization, and retention of DVEs.
- Key allies for curtailing fringe behavior and mitigating its risks.
- The role corporate America can play in identifying extremist behavior and helping to mitigate it.
FiveBy Solutions analysts are experts in providing answers to many of these questions and performing in-depth due diligence research on clients’ business partners and customers to ensure that they do not have ties to terrorism—whether domestic or foreign—and that they are in compliance with US sanctions and other laws. This work helps clients mitigate reputational risks by providing background and history, any known links to extremist organizations or other questionable groups, research on leadership and on corporate ownership and control structures, and strategic glances around the corner at upcoming designations, regulations, and restrictions.
These assessments are more important than ever, as the world relies ever more heavily on artificial intelligence and machine learning to perform not only compliance and due diligence tasks but also content moderation. Human-driven analysis, which assesses tone, language, colloquialisms, and possible intent, is critical to challenging the individual perceptions of persecution and marginalization that may set people on a path to violence.
DVEs tend to meet on Internet chat forums and social media platforms. These people, who often feel disenchanted with the current national climate, disenfranchised, isolated, and targeted by Big Tech, find one another online and foster feelings of camaraderie among individuals with similar views. They are embraced as family because of their extremist views—a sense of belonging many of them lack in real life.
Individuals who hold extremist views may never embark on the path to violence, but censoring their ideas, regardless of how extreme they may sound, only reinforces their perception that they are being targeted and censored. Challenging those notions requires a human touch.
The National Strategy for Countering Domestic Terrorism published this year says that people should not be targeted based on their political views, but the challenge lies in how to determine whether unsavory talk is on its way to becoming a violent act.
It is critical that we condemn and confront domestic terrorism regardless of the particular ideology that motivates individuals to violence. The definition of “domestic terrorism” in our law makes no distinction based on political views – left, right, or center – and neither should we.
Much like messaging needs to be cautiously crafted to ensure it does not punish ideas and label them as “domestic terrorism,” content moderation should also be carefully performed to ensure that those individuals who already feel marginalized are not pushed into further isolation. Censorship will confirm their biases and perceptions of suppression, and AI—while a useful tool—cannot glean satire, understand cultural norms and references, comprehend colloquialisms, or assess language nuance.
Human-driven analysis can.
Expert analysts can help assess content to ensure that legitimate views—no matter how far outside the mainstream they seem—are not censored based on words detected by an algorithm. They can examine content, its source, tone, and subtleties. They are regional experts who understand context and culture and can help tech platforms determine whether posted content is simply satire or bluster or may possibly be more than that. They can help platforms ensure that they are not contributing to the further marginalization of individuals who may simply need a way to return to their community, instead of confirming their perception that the world is seeking to silence them.
Members of NGOs such as the Global Internet Forum to Counter Terrorism (GIFCT), which facilitates information sharing and technical collaboration among platforms, will also find human-driven analysis critical when examining hashes—digital signatures for images or videos that allow members to identify visually similar content—or other shared information. GIFCT members review content identified by the hashes and share their assessments with other members of the organization, facilitating dialogue and collaboration among tech platforms to ensure that terrorist content is removed, while working to preserve freedom of expression online.
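To make the hash-matching idea concrete, here is a minimal sketch of perceptual ("average") hashing, one common technique behind this kind of image matching: downscale an image to a small grayscale grid, threshold each cell against the mean, and compare the resulting bit strings by Hamming distance. The function names and toy pixel data below are illustrative only and do not represent GIFCT's actual implementation.

```python
def average_hash(pixels):
    """pixels: flat list of grayscale values (0-255) for a downscaled image.
    Returns a bit list: 1 where the pixel is brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count of differing bits; a small distance suggests visually similar images."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny 4x4 "image" and a slightly brightened copy of it.
original  = [10, 200, 30, 220, 15, 210, 25, 205,
             12, 198, 28, 215, 11, 202, 27, 208]
variant   = [p + 5 for p in original]   # minor edit: still visually similar
unrelated = [128] * 8 + [127] * 8       # different content entirely

h_orig = average_hash(original)
print(hamming_distance(h_orig, average_hash(variant)))    # 0: near-duplicate
print(hamming_distance(h_orig, average_hash(unrelated)))  # 8: unrelated
```

Note how the brightened copy hashes identically, which is the point of perceptual hashing: small edits that defeat exact file checksums still produce matching hashes. Human reviewers then judge whether a matched item is actually terrorist content or, say, news reporting that reuses the same image.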
On a more basic level, human-driven analysis can help make sense of the reams of data generated by tech tools. Whether it’s conflicting sanctions information, multiple adverse media articles, or contradictory ownership and control information, FiveBy’s expert analysts can help make sense of the data, find the “so what,” and help companies avoid reputational, regulatory, and ESG risk. Human-driven linguistic analysis can identify name variations in multiple languages and transliterations, regional experts can help detect jurisdictional risk, and policy specialists can help companies and financial institutions be proactive in ensuring they stay ahead of the game when regulations change and new policies are implemented.
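The name-variation problem above can be illustrated with a toy fuzzy-matching sketch: screening tools commonly score candidate names against a watchlist entry with an edit-distance measure, and human analysts adjudicate the hits. The names, threshold, and helper functions below are hypothetical examples, not FiveBy's or any vendor's actual methodology.

```python
def levenshtein(a, b):
    """Classic edit distance: minimum insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    """Normalized similarity in [0, 1]; 1.0 means an exact match."""
    a, b = a.lower(), b.lower()
    return 1 - levenshtein(a, b) / max(len(a), len(b))

# Hypothetical watchlist entry and candidate transliterations.
watchlist_entry = "Mohammed Al-Rashid"
for name in ["Muhammad al Rashid", "Maria Rossi"]:
    score = similarity(watchlist_entry, name)
    flagged = score >= 0.8  # illustrative threshold
    print(name, round(score, 2), flagged)
```

A transliteration variant scores well above the threshold while an unrelated name falls far below it; the interesting cases are the ones near the threshold, which is exactly where human linguistic and regional expertise earns its keep.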