Large-scale manipulations on social media have two important characteristics: (i) they use propaganda to influence others, and (ii) they adopt coordinated behavior to spread propaganda and to amplify its impact. Despite the connection between them, these two characteristics have so far been considered in isolation. Here we aim to bridge this gap. In particular, we analyze the spread of propaganda and its interplay with coordinated behavior in a large Twitter dataset about the 2019 UK general election. We first propose and evaluate several measures for quantifying the use of propaganda on Twitter. Then, we investigate the use of propaganda by the different coordinated communities that participated in the online debate. The combined analysis of propaganda and coordination provides evidence about the harmfulness of coordinated communities that would not be available otherwise. For instance, it allows us to identify a harmful, politically oriented community as well as a harmless community of grassroots activists. Finally, we compare our measures of propaganda and coordination to automation scores (i.e., the use of bots) and Twitter suspensions, revealing interesting trends. From a theoretical viewpoint, we introduce a methodology for analyzing several important dimensions of online behavior that are seldom considered jointly. From a practical viewpoint, we provide new and nuanced insights into inauthentic and harmful online activities in the run-up to the 2019 UK general election.