Deploying Artificial Intelligence to Combat Disinformation Warfare

Disinformation attacks that exploit cloud-based social media
platforms, in particular the attacks orchestrated by the Russian “Internet Research Agency,”
before, during and after the 2016 U.S. Presidential election
campaign and the 2016 Brexit referendum in the U.K., have
led to increasing demands from governmental agencies for
technological tools that are capable of identifying such attacks
in their earliest stages, rather than identifying and responding
to them in retrospect. This paper reports on the interim results
of an ongoing research project sponsored by the
Canadian government’s Cyber Security Directorate. The
research is being conducted by the International CyberCrime
Research Centre (ICCRC) at Simon Fraser University
(Canada), in cooperation with the Department of Information
and Computer Sciences at the University of Strathclyde
(Scotland). Our ultimate objective is the development of a
“critical content toolkit,” which will mobilize artificial
intelligence to identify hostile disinformation activities in
“near-real-time.” Employing the ICCRC’s Dark Crawler,
Strathclyde’s Posit Toolkit, Google Brain’s TensorFlow, plus
SentiStrength and a short-text classification program known as
LibShortText, we have analyzed a broad sample of social media
posts that exemplify the “fake news” that was disseminated by
Russia’s Internet Research Agency, comparing them to “real
news” posts in order to develop an automated means of
classification. To date, we have been able to classify posts as
“real news” or “fake news” with accuracy rates of 90.7%,
90.12%, 89.5%, and 74.26% using LibShortText, Posit,
TensorFlow, and SentiStrength, respectively.
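As a rough illustration of the kind of automated real-versus-fake classification described above (not the authors' actual pipeline), the sketch below trains a LIBLINEAR-backed linear SVM on bag-of-words features using scikit-learn; LibShortText is built on the same LIBLINEAR model family, but the library choice, the tiny corpus, and the labels here are entirely hypothetical.

```python
# Minimal sketch of short-text classification in the spirit of the
# LibShortText/LIBLINEAR approach described in the abstract.
# Assumption: scikit-learn's LIBLINEAR-backed LinearSVC stands in for
# LibShortText itself, and the sample posts below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training posts labeled "fake" or "real".
posts = [
    "BREAKING: the shocking truth THEY don't want you to know!!!",
    "Share before it gets deleted: secret memo proves massive cover-up",
    "Senate committee releases report on election infrastructure security",
    "City council approves budget for new transit line after public hearing",
]
labels = ["fake", "fake", "real", "real"]

# Unigram/bigram TF-IDF features feeding a linear SVM -- the LIBLINEAR
# model family that LibShortText builds on.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(posts, labels)

# Classify a previously unseen post.
print(model.predict(["Officials confirm routine audit of county ballots"]))
```

In practice, LibShortText adds short-text-specific tokenization and feature handling on top of this model family, and the accuracy figures reported above come from the project's much larger annotated dataset, not a toy corpus like this one.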

Authors:

Barry Cartwright, George R. S. Weir, Richard Frank, Karmvir Padda

Published:

International Journal on Advances in Security, vol. 12, nos. 3 & 4, 2019

August 2019

http://www.iariajournals.org/security/sec_v12_n34_2019_paged.pdf