After over five hours of the #Zuckerberg hearing today, here are EFF’s highlights: (1/11)
Mark Zuckerberg started Facebook in his dorm room in 2004.
Facebook seems to be moving toward AI solutions for all of its problems, including hate speech and fake news, without acknowledging the likelihood of overbroad censorship. eff.org/deeplinks/2018…
Several senators seemed to call for laws requiring web platforms to filter users’ behavior. Remember, when platforms over-rely on automated filtering, it results in some voices being pushed offline. eff.org/deeplinks/2017…
Zuckerberg said, “We have a whole AI ethics team that is working on developing basically the technology, it’s not just the philosophical principles.” But arm-waving and “nerding harder” will not resolve the fundamental problems that automated content filtering efforts face.
Zuckerberg repeatedly insisted users have “complete control” over their data, citing posts & photos as examples. That ignores the mountains of data Facebook collects from users’ behavior without their knowledge or consent, and certainly without any opportunity for meaningful control.
And as we expected, Zuckerberg also insisted that Facebook “does not sell user data.” While that may be technically true, it’s beside the point. No matter how the CEO slices it, Facebook’s business revolves around monetizing user data and attention.
Zuckerberg struggled to name a direct competitor or alternative to Facebook. Without viable alternatives, many concerned users cannot feasibly leave the platform and support more privacy-protective services. eff.org/deeplinks/2018…
Zuckerberg agreed with some senators that regulation could benefit the social media space. But any new law must not limit competition. Heavy-handed requirements could snuff out tomorrow’s social network before it even gets started. eff.org/deeplinks/2018…
Many unanswered questions remain, but don’t worry. Zuckerberg’s team will get back to us on that.
We’ll be back tomorrow morning live-tweeting Zuckerberg’s hearing in the House. Follow us @EFFLive starting at 7AM Pacific.
A whopping 800,000 people registered to vote on National Voter Registration Day.
But Congress and state governments still have not taken the recommended measures to increase security in the midterm elections. <thread> eff.org/deeplinks/2018…
At this year’s @defcon, researchers evaluated a voting machine that’s used in 18 different states. They demonstrated how easy it is to gain admin access, which lets someone change settings—or even the ballot—in under two minutes. defcon.org/images/defcon-…
A Congressional working group concluded that election infrastructure is largely insecure across the country, with 42 states using machines susceptible to vote flipping, and at least ten states using machines that provide no paper record or receipt. documentcloud.org/documents/4379…
When @mcsweeneys editors approached EFF earlier this year about collaborating on a surveillance & privacy-themed essay collection, we jumped at the opportunity.
The first all non-fiction issue of Timothy McSweeney’s Quarterly Concern debuts this November: eff.org/deeplinks/2018…
“The End of Trust” features writing by EFF’s team, including Executive Director Cindy Cohn, @maassive, Soraya Okuda, @doctorow, and board member @schneierblog, exploring issues related to surveillance, freedom of information, and encryption. eff.org/deeplinks/2018…
A group of organizations, advocates, and academics—including @EFF—came together in February to create the Santa Clara Principles on Transparency and Accountability in Content Moderation. We're happy to announce that the Principles now have a permanent home: santaclaraprinciples.org
The Principles set a minimum standard for transparency and accountability for communications platforms, and should serve as a basis for more in-depth dialogue and activism going forward. santaclaraprinciples.org
The Principles ask that companies be transparent to the public and their users about content takedowns and account suspensions, and provide opportunity for timely, meaningful appeals to their users. santaclaraprinciples.org
Anyone looking to make changes to how online platforms police speech should learn lessons from the failures of using copyright to do the same. Here are five major takeaways from the copyright wars: eff.org/deeplinks/2018…
1. Mistakes will be made. The law gives platforms huge incentives to take things down after getting a complaint, leading to people seeing their work disappear due to fraudulent takedown notices. Content moderation policies have made, and will make, similar errors. eff.org/takedowns
2. Robots are not the answer. We've seen the mess that automated filters like YouTube's cause.
On Monday, a federal court dismissed our lawsuit against the Justice Department to block enforcement of #FOSTA. (1/5) eff.org/deeplinks/2018…
The case was filed on behalf of two human rights organizations, a digital library, an activist for sex workers, and a certified massage therapist. The court did not reach the merits of any of the constitutional issues, but instead found the plaintiffs did not have standing. (2/5)
We’re disappointed & believe the decision is wrong. For example, the court failed to apply the standing principles that are usually applied in 1st Amendment cases in which the plaintiffs’ speech is chilled. The plaintiffs are considering their options for their next steps. (3/5)
In 2014, we launched Onlinecensorship.org (@censored) to collect reports from users who had experienced content takedowns on social media, in an effort to encourage companies to operate with greater transparency and accountability as they make decisions that regulate speech.
Today, we're relaunching the site with a fresh new look! We're still collecting reports from users, but going forward, @censored will be home to more resources for users, journalists reporting on content moderation, and companies.
Our journalist toolkit offers insight into a set of diverse issues and is a one-stop resource for information related to content moderation policies: onlinecensorship.org/content/a-reso…