For that work, we created a shared audience graph that revealed the underlying structure of those frame contests, and clearly showed two “sides” of the political conversation. Echo chambers.
When Twitter released the 1st batch of accounts related to the RU-IRA troll factories, we cross-referenced those with our #BlackLivesMatter & #BlueLivesMatter data and… some of the most active & most influential accounts ON BOTH SIDES were RU-IRA trolls. faculty.washington.edu/kstarbi/examin…
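The cross-referencing step is conceptually simple: check each account in the hashtag dataset for membership in the released troll-account list. A minimal Python sketch, with all handles and counts invented for illustration:

```python
# Hypothetical sketch of cross-referencing a released list of troll
# account handles against accounts observed in a hashtag collection.
# All handles and tweet counts below are invented for illustration.
ira_handles = {"troll_a", "troll_b", "troll_c"}

# (handle, tweet count) pairs from a hypothetical hashtag dataset
observed = [
    ("troll_a", 412),
    ("organic_user_1", 35),
    ("troll_b", 287),
    ("organic_user_2", 12),
]

# Accounts present in both lists, ranked by activity in the collection.
matches = sorted(
    ((handle, n) for handle, n in observed if handle in ira_handles),
    key=lambda pair: -pair[1],
)
print(matches)  # flagged accounts, most active first
```

The striking finding was how high in that activity ranking the flagged accounts sat, on both “sides” of the conversation.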
Here’s a similar graph, made from a slightly different network property (RTs rather than shared audience) that shows retweets of RU-IRA trolls (in orange). U.S. political left on the left, political right on the right.
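A retweet network like this can be modeled as a weighted directed graph, with edges pointing from retweeter to retweeted author. A minimal standard-library sketch, where the account names and the troll list are purely illustrative:

```python
from collections import Counter

# (retweeter, original author) pairs from a hypothetical collection
retweets = [
    ("left_user_1", "troll_left"),
    ("left_user_2", "troll_left"),
    ("left_user_1", "troll_left"),
    ("right_user_1", "troll_right"),
    ("right_user_2", "troll_right"),
]
troll_accounts = {"troll_left", "troll_right"}  # illustrative, not real handles

# Edge weights: how many times each account retweeted each author.
edge_weights = Counter(retweets)

# In-degree (total times retweeted) typically drives node size in a graph
# like this; the troll flag drives node color (orange vs. gray).
times_retweeted = Counter(author for _, author in retweets)
node_color = {a: ("orange" if a in troll_accounts else "gray")
              for a in times_retweeted}
print(edge_weights[("left_user_1", "troll_left")],
      times_retweeted["troll_left"])
```

Laying the nodes out with a force-directed algorithm is what separates the two audience clusters into the left and right halves of the figure.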
In other words, there are paid trolls sitting side by side somewhere in St. Petersburg hate-quoting each other’s troll accounts, helping to shape divisive attitudes in the U.S. among actual Americans who think of the other side as a caricature of itself.
Twitter has an opportunity to help people understand what is happening to us - not just to the “other side” - but to our side/ourselves. To help us become aware of HOW we’re being manipulated. Just telling us we’ve interacted w/ one of these accounts misses the opportunity.
Our upcoming #CSCW2018 paper examining information operations on Twitter in the context of the ongoing civil war in Syria. In this work, we look at how Russian, Syrian and Iranian info ops were integrated into an online community of information activists. faculty.washington.edu/kstarbi/cscw-W…
This paper focuses on the case of the “Aleppo Boy” (Omran Daqneesh) who was photographed after his family’s home was bombed in an airstrike conducted by Syrian or Russian forces in 2016 & who re-appeared on Syrian state television almost a year later. en.wikipedia.org/wiki/Omran_Daq…
The original (2016) photo of Omran served to garner attention to and sympathy for the Syrian people living in rebel held areas of the country, who were suffering impacts from military actions perpetrated by the Syrian government and their allies, including Russia.
Excited to release our upcoming #CSCW2018 paper describing how disinformation agents from the Internet Research Agency in Russia participated in online discourse about the BlackLivesMatter movement and police-related shootings in the U.S. during 2016: faculty.washington.edu/kstarbi/BLM-IR…
Following up on network analysis showing RU-IRA agents were active on both “sides” (pro- and anti-) of the politicized BLM conversation, we conducted a deep qualitative analysis, showing how RU accounts impersonated “authentic” voices.
The content on one “side” was typically very different from the other—(on the left) exhibiting pride in African American identity, (on the right) using racist memes targeting African American people. But it did converge around some themes, e.g. attacking the “mainstream media”.
News coverage yesterday noted research documenting how Russian trolls spread anti-vaccine content (online) in 2016. I think it’s important to step back and see the larger picture… this is one of MANY conversations they infiltrated. (Thread) theguardian.com/society/2018/a…
I won’t pretend to understand ALL of their motivations, but there are a few trends here. 1) Infiltrating and cultivating online activism. 2) Fostering an epistemology that “questions more”—questioning experts in science, journalists, etc.
They cultivate online activism communities, become part of them, and then work to bridge those communities into other topics that serve their strategic interests. We’ve seen this in diverse online activism that crosses the political spectrum.
New research shows how the Russian Internet Research Agency targeted African Americans before the 2016 election, first with a benign ad fostering solidarity, then right before the election with an ad encouraging them not to vote. nytimes.com/2018/08/16/tec…
This study was led by University of Wisconsin-Madison professor Young Mie Kim. This article about that study talks about how initial messaging made “identity” appeals, likely to gain trust. Later messaging aligned with RU’s strategic aims (in this case to reduce Dem votes).
In work led by PhD student Ahmer Arif, we saw similar trajectories in Twitter accounts associated with the IRA... lots of work to enact an identity within a targeted group, and then occasional moves to push certain RU narratives, including converging on anti-Hillary & anti-media messages.
This story has a huge impact on how we understand the “fake news” ecosystem... it is very hard to disentangle financial and political motivations for the different entities in this ecosystem, and there are efforts to hide political motives beneath financial ones.
It also provides some insight into the “why” of the content sharing practices that are rampant in this ecosystem - because it’s free content. That likely incentivizes those who wish to propagate certain narratives to make their content easily accessible to these sites.
Another interesting piece is how invisible partnerships between politically motivated operatives and these websites can shape the kinds of information flowing there. We’ve theorized about these in our research, but this provides evidence that they do indeed exist.
In our research on polarized discourse we’ve seen quoted tweets enabling a kind of back alley ambush... pulling a person (the quoted author) out of their bubble & into yours, so you & your friends can dogpile them, where their friends can’t see and therefore don’t know to help.
And by *you* here I mean anyone, all of us who use the feature to challenge someone else on Twitter. Not specifically Charlie.
Adding a link. Our first paper on this talks about “framing contests” w/in the Black Lives Matter conversation, and shows how the quoted tweet function was used to challenge and reframe the other side’s narrative. faculty.washington.edu/kstarbi/Stewar…