Excited to release our upcoming #CSCW2018 paper describing how disinformation agents from the Internet Research Agency in Russia participated in online discourse about the BlackLivesMatter movement and police-related shootings in the U.S. during 2016: faculty.washington.edu/kstarbi/BLM-IR…
Following up on network analysis showing RU-IRA agents were active on both “sides” (pro- and anti-) of the politicized BLM conversation, we conducted a deep qualitative analysis, showing how RU accounts impersonated “authentic” voices.
The content on one “side” was typically very different from the other—(on the left) exhibiting pride in African American identity, (on the right) using racist memes targeting African American people. But it did converge around some themes, e.g. attacking the “mainstream media”.
This work sheds light on how RU disinformation operations use fictitious identities to reflect and shape social divisions. Lead author & PhD Candidate Ahmer Arif reflects on how difficult it can be to recognize “disinformation” that aligns with our social & political identities.
This is one of two studies we did showing how RU disinfo operations infiltrate, integrate into, leverage, and shape “organic” online political activism. In this case, they targeted an authentic, organic political movement (and its countermovement) to further RU's political aims.
Thread got shredded a bit… Content from RU-IRA agents also converged in supporting Trump in the 2016 election, directly on the right through pro-Trump content, and indirectly on the left by urging supporters of BlackLivesMatter to not vote for Hillary Clinton.
This research was a follow-up study to an earlier paper, where we looked at “framing contests” within BlackLivesMatter discourse: faculty.washington.edu/kstarbi/Stewar…
That study showed two distinct communities of participating accounts (one on the political left, one on the political right). When Twitter released a list of RU-IRA accounts in Nov 2017, we cross-referenced those accounts and… found them on both sides of the conversation.
We had even featured some of the RU-IRA accounts in some of our tables and data excerpts—as being representative of the polarized content in that discourse.
I want to underscore that in this case (and others), legitimate and authentic online activism is targeted (and potentially undermined) by these tactics. A disinfo campaign like this RU-IRA one against BLM works both by leveraging and spoiling that activism. It's parasitic.
Our upcoming #CSCW2018 paper examining information operations on Twitter in the context of the ongoing civil war in Syria. In this work, we look at how Russian, Syrian and Iranian info ops were integrated into an online community of information activists. faculty.washington.edu/kstarbi/cscw-W…
This paper focuses on the case of the “Aleppo Boy” (Omran Daqneesh) who was photographed after his family’s home was bombed in an airstrike conducted by Syrian or Russian forces in 2016 & who re-appeared on Syrian state television almost a year later. en.wikipedia.org/wiki/Omran_Daq…
The original (2016) photo of Omran served to garner attention to and sympathy for the Syrian people living in rebel held areas of the country, who were suffering impacts from military actions perpetrated by the Syrian government and their allies, including Russia.
News coverage yesterday noted research documenting how Russian trolls spread anti-vaccine content (online) in 2016. I think it’s important to step back and see the larger picture… this is one of MANY conversations they infiltrated. (Thread) theguardian.com/society/2018/a…
I won’t pretend to understand ALL of their motivations, but there are a few trends here. 1) Infiltrating and cultivating online activism. 2) Fostering an epistemology that “questions more”—questioning experts in science, journalists, etc.
They cultivate online activism communities, become part of them, and then work to bridge those communities into other topics that serve their strategic interests. We’ve seen this in diverse online activism that crosses the political spectrum.
New research shows how the Russian Internet Research Agency targeted African Americans before the 2016 election, first with a benign ad fostering solidarity, then right before the election with an ad encouraging them not to vote. nytimes.com/2018/08/16/tec…
This study was led by University of Wisconsin-Madison professor Young Mie Kim. This article about that study talks about how initial messaging made “identity” appeals, likely to gain trust. Later messaging aligned with RU’s strategic aims (in this case to reduce Dem votes).
In work led by PhD candidate Ahmer Arif, we saw similar trajectories in Twitter accounts associated with the IRA... lots of work to enact an identity within a targeted group, and then occasional moves to push certain RU narratives, including converging on anti-Hillary & anti-media messages.
This story has a huge impact on how we understand the “fake news” ecosystem... it is very hard to disentangle financial and political motivations for the different entities in this ecosystem, and there are efforts to hide political motives beneath financial ones.
It also provides some insight into the “why” of the content sharing practices that are rampant in this ecosystem - because it’s free content. That likely incentivizes those who wish to propagate certain narratives to make their content easily accessible to these sites.
Another interesting piece is how invisible partnerships between politically motivated operatives and these websites can shape the kinds of information flowing there. We’ve theorized about these in our research, but this provides evidence that they do indeed exist.
In our research on polarized discourse we’ve seen quoted tweets enabling a kind of back alley ambush... pulling a person (the quoted author) out of their bubble & into yours, so you & your friends can dogpile them, where their friends can’t see and therefore don’t know to help.
And by *you* here I mean anyone, all of us who use the feature to challenge someone else on Twitter. Not specifically Charlie.
Adding a link. Our first paper on this talks about “framing contests” w/in the Black Lives Matter conversation, and shows how the quoted tweet function was used to challenge and reframe the other side’s narrative. faculty.washington.edu/kstarbi/Stewar…
This article on “flat earth” believers describes some of the “ways of knowing” that support this and other, related ideologies and highlights how social media platforms help draw people into these epistemic communities. newyorker.com/science/elemen…
"The conference audience was frequently encouraged to 'do your own research,' which mostly seemed to involve watching more YouTube videos and boning up on Scripture."
This idea of “doing your own research” is prevalent across the alternative media ecosystems that we’ve studied, including in our work exploring the phenomenon of denying mass shooting events and claiming those and other events are staged by “crisis actors”.