It's not hard to see how such a system could be weaponized via selective enforcement and categorization, and even used to "distribute the blame" by failing to control for malicious mass-misuse of reporting features by people inside their own epistemic bubbles.
Let's spell out a scenario. You start with any large, publicly-accessible system where people can offer commentary. Facebook, Twitter, YouTube, what have you.
Once any system gets large enough, there will be some who use it as a means of promulgating hatred and divisiveness.
If you're a provider, that's generally bad for whatever your business model happens to be.
There will be calls for action, and before long legal threats as well. So some system will need to be put in place to help calm things down. Some kind of "moderation" is needed, it seems.
Acceptable Use Policies and User Rules will be put in place, in an attempt to spell out to people that they shouldn't behave badly. Generally, some kind of reporting system will be put in place, to alert the system that someone isn't following the rules.
There's always someone.
OK, so now you've got a pile of "reports" about bad behavior in your user community. What do you do about it?
Generally, the most reliable way to deal with that is to have a human review those reports and take appropriate actions based on clearly-laid-out guidelines.
But...
Having a human deal with those reports tends to be a very expensive endeavor. It's a thankless, soul-wrecking job, and if you do it perfectly, the best-case scenario tends to be having marginally less vitriol hurled at you than at some of your coworkers.
And it doesn't scale.
With massive systems, it REALLY doesn't scale. So to keep it from destroying your business model, you need something that front-ends this, automates as many of these decisions as possible, and forwards them to a human only when the automation can't handle it.
And so they do.
The logical extreme for this is to have a huge amount of the "moderation" being enforced by algorithm, and a vanishingly small amount of it ever even being seen by a human... because the incremental cost of having an algorithm do this is more-or-less a rounding error vs a human.
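Here's a minimal sketch of what that kind of front-end might look like. Everything in it is hypothetical (the score_report scorer, the thresholds, the queue); the point is just the routing logic: the machine handles the clear cases, and only the ambiguous middle ever reaches a person.

```python
# Hypothetical triage front-end: act automatically on the clear-cut reports,
# and escalate only the ambiguous ones to a human moderator.

AUTO_ACTION_THRESHOLD = 0.95    # confidence above which the machine acts alone
AUTO_DISMISS_THRESHOLD = 0.05   # confidence below which the report is dropped

def triage(report, score_report, human_queue):
    """Route one abuse report based on a model's confidence score (0.0 - 1.0)."""
    score = score_report(report)        # 1.0 = clearly abusive, 0.0 = clearly fine
    if score >= AUTO_ACTION_THRESHOLD:
        return "auto_enforce"           # no human ever sees this one
    if score <= AUTO_DISMISS_THRESHOLD:
        return "auto_dismiss"           # nor this one
    human_queue.append(report)          # the expensive path, kept as small as possible
    return "escalated"
```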
But your first algorithms trying to do this are going to suck. They always do.
And customers are not happy when an algorithm interferes with them, because they can't mount an outrage campaign to get an algorithm fired.
Because, let's face it...
Algorithms are honey badgers.
So instead, customers complain about how bad the algorithms are, and how they need to be fixed, so the engineers need to go in and start adding additional layers of complexity to the algorithms, and exception cases, and exceptions to the exceptions...
It starts... growing.
Pretty soon, no one can really understand what the sets of algorithms are doing.
It becomes brittle, and weird corner cases start showing up that are worse than anything that was seen before.
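To make that concrete, here's an invented caricature of what those hand-patched rules end up looking like after a year or two. Every predicate, allowlist, and magic number below is made up, but the shape is the point: nobody can read this and say what the policy actually is anymore.

```python
# Invented caricature of accreted moderation rules. Every name and number is made up.
LEGACY_PARTNER_ALLOWLIST = {"acct_1001", "acct_1002"}   # nobody remembers why these exist

def should_suppress(post, author, contains_banned_phrase):
    if contains_banned_phrase(post["text"]):                      # the original rule
        if author["verified_journalist"] and post["has_link"]:
            # exception added after a press cycle about quoting news stories
            if author["account_age_days"] < 30:
                return True                                       # exception to the exception
            return False
        if author["id"] in LEGACY_PARTNER_ALLOWLIST and not post["is_reply"]:
            return False                                          # carve-out for an old partner deal
        return True
    return False
```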
And then someone gets a clever idea to just let the machine do it all instead.
Someone will pitch the idea of putting Machine Learning and Big Data on the problem, and let that come up with a set of heuristics that can get you much closer to the root goal with fewer false positives.
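If you want to picture the pitch, it's roughly the following (a toy sketch using scikit-learn; the features, data, and labels are all invented). The key property is that no engineer writes the rules anymore: whatever falls out of the historical training data *is* the rule.

```python
# Toy sketch of the "let the machine decide" pitch. Features and data are invented.
from sklearn.linear_model import LogisticRegression

# Each row: [reports_received, reports_upheld, follower_count, posts_per_day]
X_history = [[12, 9, 150, 40], [0, 0, 2000, 3], [30, 4, 90, 80], [1, 1, 500, 5]]
y_history = [1, 0, 1, 0]    # 1 = content was ultimately actioned, 0 = it was not

model = LogisticRegression().fit(X_history, y_history)

# Score a new account: nobody decided this number, it "emerged" from the history.
risk = model.predict_proba([[8, 2, 120, 60]])[0][1]
print(f"model risk score: {risk:.2f}")
```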
A side benefit is that you'll have plausible deniability now about "why".
Executives will start talking less about the decisions that went into the training data set, and how those led to an emergent set of results, and more about what "the system" does automatically... where no human can be blamed or fired for the results the system happens to create.
Because, at the end of the day, that's what the company really wants.
They want customers to complain less, and they want to minimize how many of them need to lose their jobs when the complaints go way in the opposite direction.
It's self-preservation. People like getting paid.
So, how can a system like that be perverted, to create emergent behaviors that some people will really like at the expense of others who are loathed?
By playing with the thresholds on the inputs to the system, and applying defensible "tweaks" to the system to "improve" things.
Let's say your system has started creating categories of behaviors that act as inputs into whose content is going to be largely suppressed from general view... because the system has shown that suppressing some people's content reduces overall customer complaints by A LOT.
As a company, it's pretty easy to sell the idea that you're really seeking to create the greatest good by restricting the smallest number of your customers that you can, in order to help "keep the peace" among the vast majority of your customer base.
And it's not a bad idea.
And restricting the REACH of that subset of disruptive customers tends to be the best answer anyone can come up with, because of how easy it is for them to evade attempts to ban them outright from the system.
So this gets pitched as a "compromise" solution to the problem.
The general idea would be to have the subset of disruptive users off in their own little partition, where they can talk amongst themselves without disrupting the rest of the user community.
That sounds like a win-win... except for ignoring the goals of those partitioned off.
The goal of a troll is to create a reaction. If you tell them you're going to be partitioning them off so that they can't get the reaction they want, you'll get the same reaction from them as you would if you'd just banned them outright.
They'll find a way. Life always does.
So it's key to keep a system like this a secret. You can't tell people they're being partitioned in this way, or it loses its effectiveness as a tool. So you'll need to deny that such a practice is used if it's going to be a useful tool.
So clever wording is needed here.
Remember the dichotomy from before, where it's less about what the engineers do, and more about what the system does?
When you deny this, you can very carefully say that "no one" in your organization is doing this partitioning, and still be technically correct when you say it.
Because there is now no individual you can point to and say, "that's the engineer who partitioned your account". That's true, but it ignores the much larger point that the system DID partition that account, for whatever reason.
A lie of omission, evading the blame for actions.
And that's understandable, because people like keeping their jobs.
And things might even be innocent enough that it could be left there... if it were not for the potential of weaponizing those system inputs to meet personal goals.
When that happens, it tends to get dark.
Because with a big system like that, you're likely to get all sorts of metrics popped out along the way about what the system is doing from a classification standpoint, and if you're a little diligent and pay attention, you can notice some things that can be leveraged.
Let's say you have a particular subset of your partitioned users that you happen to find personally vile. Maybe their politics are the polar opposite of your own.
You might notice that the patterns of interactions from this subset of users are different from those of the users you agree with.
By selectively "enhancing" the partitioning parameters (say, in the name of "conversational health work" or some ridiculous term like that), you might find that you can create a way to pull in a lot more people into the partitioning.
Ones that you just so happen to disagree with.
This can be weaponized externally by using triggers like, say, the number of times or frequency at which other users report an account... when you've noticed that people you agree with are much more likely to falsely report those you disagree with.
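As a purely invented illustration of how that works: a trigger keyed to how many distinct users report an account in a day looks perfectly neutral on paper, but it quietly hands the partitioning decision to whichever side is more willing to mass-report.

```python
# Invented example of a "neutral" trigger that can be gamed from the outside:
# partition any account reported by enough distinct users in a short window.
from collections import defaultdict

REPORTERS_REQUIRED = 20     # a tweakable "conversational health" parameter
WINDOW_HOURS = 24

def accounts_to_partition(reports, now):
    """reports: iterable of (reported_account, reporter_account, timestamp_in_hours)."""
    recent = defaultdict(set)
    for target, reporter, ts in reports:
        if now - ts <= WINDOW_HOURS:
            recent[target].add(reporter)    # count distinct reporters only
    # Nothing here asks whether the reports were honest -- a coordinated brigade
    # of 20 false reports has exactly the same effect as 20 legitimate ones.
    return {target for target, reporters in recent.items()
            if len(reporters) >= REPORTERS_REQUIRED}
```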
And it's all fully deniable.
Each individual decision that is made for those tweaks can be defended as an attempt to "improve" the system, for entirely noble reasons.
And if it just so happens to hurt those whose opinions you find to be deplorable along the way, well, you can't be blamed for that, right?
So please forgive me if I roll my eyes when folks at Twitter say "no one" is implementing shadow-bans against conservatives, and that it's all some big misunderstanding with no evil intent behind any of it... that just so happens to tilt egregiously towards one particular side.
What are the odds that it's entirely by accident that "the system" just so happens to adversely affect (by a huge margin) those whose political opinions are opposite those of the CEO of the company?
Why, it's almost like the system somehow "knows" who SHOULD be affected most...
Emergent systems are incredibly sensitive to very minor changes in the input rules, and given a particular input data set, can produce very different results when those changes are implemented.
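A toy demonstration of that sensitivity, with invented numbers: if the "risk" scores of your accounts cluster anywhere near the cutoff, sliding that cutoff by a few hundredths quietly moves tens of thousands of accounts into the partitioned bucket, and each step along the way is individually defensible as a "tweak".

```python
import random

random.seed(0)
# Invented "risk" scores for 100,000 accounts, clustered around 0.50.
scores = [random.gauss(0.50, 0.10) for _ in range(100_000)]

for threshold in (0.60, 0.58, 0.55):
    partitioned = sum(score >= threshold for score in scores)
    print(f"threshold {threshold:.2f}: {partitioned:,} accounts partitioned")

# Moving the cutoff from 0.60 to 0.55 roughly doubles how many accounts
# get partitioned, without anyone ever making a per-account decision.
```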
Sometimes, that's considered a feature.
And sometimes, it can be a weapon, too.
/end