"just collect data and figure out later how you'll use it" doesn't work any more. #VelocityConf
We used to be optimistic before we ruined everything.
Mozilla also used to collect no data; all it knew was the number of downloads. Its market share went down because it wasn't measuring user satisfaction and actual usage. #VelocityConf
Privacy and security are a fundamental right and shouldn't be treated as optional, according to Mozilla's principles.
They're in tension with collecting data to make the web better. So how does Mozilla navigate the challenge? #VelocityConf
[ed: full disclosure: this talk I'm transcribing represents the speaker's opinions and not mine, and not @invinciblehymn's/Chrome's either]
There's a lot of cynicism around data collection, especially when it comes to Chrome, etc. #VelocityConf
.@lxt will discuss how they collect data, how they've messed up, and how they've recovered from mistakes.
"Part of ethics is admitting when you mess it up and fixing it." #VelocityConf
Disclaimers: this is what they do, it's not perfect. It's open source so it can be cloned and made better.
Lean data practices: (1) collect only what you need to answer your questions (2) keep it for the minimum amount of time (3) don't violate user expectations. #VelocityConf
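To make the lean-data idea concrete, here's a minimal sketch (not Mozilla's actual pipeline; the field names and retention window are invented) of how "collect only what you need, keep it only as long as needed" can be enforced in code:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical illustration of lean data practices: an explicit
# allow-list of fields answers one specific question, and a retention
# window bounds how long records are kept. All names are invented.
ALLOWED_FIELDS = {"os", "app_version", "session_length_s"}
RETENTION = timedelta(days=90)

def collect(raw_event: dict) -> dict:
    """Keep only the fields needed to answer the current question."""
    event = {k: v for k, v in raw_event.items() if k in ALLOWED_FIELDS}
    event["collected_at"] = datetime.now(timezone.utc)
    return event

def purge_expired(events: list[dict]) -> list[dict]:
    """Drop anything older than the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [e for e in events if e["collected_at"] >= cutoff]
```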
4 different kinds of data: (1) technical data: OS, memory, version number. Opt-out. [ed: this potentially becomes fingerprintable] (2) interaction/usage data: number of tabs, session length, configs. Opt-out. (3) activity data, e.g. browsing history. #VelocityConf
Mozilla considers this very sensitive, and many people don't want it collected. Remember the AOL and Uber "anonymized" datasets that weren't.
Some URLs are magic access keys that allow mutation :/
Mozilla rarely collects, and only for specific cases. #VelocityConf
(4) highly sensitive data: email, username, identifiers. Opt-in with advance notice, user consent, and secondary ???.
So you want to collect data at Mozilla. Steps: file a request for collection in GitHub, then review by a data steward. #VelocityConf
Data Stewards are like lawyers: they look for ways to say yes rather than acting as adversaries. They pattern-match against known precedents/case law to find a way for you to do it safely. An example: #VelocityConf
Suppose you want to find slow URLs so you can debug them. Treat it like a crash report: ask the user if they want to report it to Mozilla and show them the URL that would be sent. If it's an "embarrassing health URL", they'll say no and decline to send. #VelocityConf
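As a hypothetical sketch of that pattern (prompt_user and send_report are stand-ins, not real Firefox APIs): the user sees exactly what would leave the machine, and nothing is sent without an explicit yes.

```python
# Hypothetical sketch of the "treat it like a crash report" pattern.
# prompt_user and send_report are injected stand-ins for illustration.

def maybe_report_slow_url(url: str, load_time_ms: int, prompt_user, send_report) -> bool:
    payload = {"url": url, "load_time_ms": load_time_ms}
    # The prompt displays the exact URL, so the user can decline
    # anything they consider sensitive.
    if prompt_user(f"Report this slow page to Mozilla? It will send: {payload}"):
        send_report(payload)
        return True
    return False

# e.g. maybe_report_slow_url("https://example.com", 9000,
#          prompt_user=lambda msg: input(msg + " [y/N] ") == "y",
#          send_report=print)
```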
Privacy preserving data collection: add randomness to make data not identifiable, or mix the data using mixnets. Mozilla is investing in this approach.
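One well-known instance of "add randomness" is randomized response, the building block behind local differential privacy. A minimal sketch (illustrative only; not necessarily the exact mechanism Mozilla uses):

```python
import random

def randomized_response(truth: bool, p_honest: float = 0.5) -> bool:
    """Report the true bit with probability p_honest; otherwise report
    a uniformly random bit. No single response reveals the truth."""
    if random.random() < p_honest:
        return truth
    return random.random() < 0.5

def estimate_true_rate(responses: list[bool], p_honest: float = 0.5) -> float:
    """Invert the noise: observed = p_honest * true + (1 - p_honest) * 0.5."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p_honest) * 0.5) / p_honest
```

Any single reported bit is deniable, but across many users the aggregate rate is recoverable, which is exactly the trade the talk is pointing at.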
With experimentation, there are ethics as well. #VelocityConf
But if you don't perform tests before releasing products, you're performing a massive uncontrolled experiment. "we're giving everyone in the country a non-fda approved medication, if you die, let us know." This is not what we want. #VelocityConf
A product hypothesis document describes the purpose of your experiment and your approach. There are data reviews (what's collected and who will have access) and a science review to check that the experimental design is correct. #VelocityConf
Most experiments are opt-in, but some are opt-out. It's harder to get approval for opt-out experiments, unless it's something that must eventually roll out to all users (e.g. TLS 1.3), where you're testing maturity rather than product direction. #VelocityConf
There's also testpilot.firefox.com that people can opt into, with informed consent. Even if biased towards early adopters, it still provides useful data.
And there's Firefox Pioneer that allows people to donate their data to Mozilla. #VelocityConf
1000 people installed it even before the blog post went up.
Some case studies where Firefox messed up: #VelocityConf
Case study 1:
Mr. Robot promo. The experiment system was used to push an opt-out "experiment" to all US users as part of an alternate-reality marketing promotion/game. It violated user trust and expectations. They wouldn't do it again.
The road to hell is paved with good intentions. It was meant to be fun: no money or user data changed hands. Mozilla employees thought the TV show was cool and the add-on would be an easter egg. But it wound up looking like malware to users.
"It doesn't collect any data, so I guess it's okay."
"You're not doing any science, so it looks okay..."
People had unease that it wasn't right but nobody felt empowered to speak up and stop it. #VelocityConf
Action items: Don't do things in secret. Have more formal process, define red flags (such as "things that are done in secret", "things done in partnership", "we weren't trying to learn anything"). Document escalation paths. #VelocityConf
Second case: the crash reporting system for single-tab crashes.
There was a bug where, if someone submitted one tab crash, all future crashes would be submitted without asking, and without an opt-out. Uh oh. #VelocityConf
"We can't tell what data was submitted with consent and what was fruit of the poisoned tree..."
They met on Dec 26, and the VP's immediate response was "burn it to the ground". So they deleted 1PB of crash data, without question. #VelocityConf
And there was a moment of silence for the lost crash data.
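For illustration, here's a hypothetical reconstruction of that kind of consent bug (all names invented; not Mozilla's actual code): a one-time "yes" gets persisted as a standing opt-in, so later crashes are submitted without asking.

```python
# Hypothetical reconstruction of the consent-persistence failure mode.
prefs = {}

def send(crash):
    print("submitting", crash)

def submit_crash_buggy(crash, user_said_yes: bool):
    if user_said_yes:
        prefs["always_submit"] = True   # BUG: one consent becomes permanent
    if prefs.get("always_submit"):
        send(crash)                     # later crashes go out unasked

def submit_crash_fixed(crash, user_said_yes: bool):
    # Consent applies to this crash only; nothing is persisted.
    if user_said_yes:
        send(crash)
```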
We can always do better. Learn from mistakes, steal ideas, steward users' data wisely, and always feel free to ask questions.
Systems are people. It's easy to mess up. Always strive for better. [fin] #VelocityConf
Final talk I'll be getting to at #VelocityConf before I dash to Toronto: @IanColdwater on improving container security on k8s.
@IanColdwater She focuses on hardening her employer's cloud container infrastructure, including doing work on k8s.
She also was an ethical hacker before she went into DevOps and DevSecOps. #VelocityConf
She travels around doing competitive hacking at CTFs. It's important to think like an attacker rather than assuming good intent and nice user personas that use our features the way the devs intended. #VelocityConf
My colleague @sethvargo on microservice security at #VelocityConf: we've traditionally thought of security as all-or-nothing -- you put the biggest possible padlock on your perimeter, and you have a secure zone and an untrusted zone.
@sethvargo We know that monoliths don't actually work, so we're moving towards microservices. But how does this change your security model?
You might have a loadbalancer that has software-defined rules. And you have a variety of compartmentalized networks. #VelocityConf
You might also be communicating with managed services such as Cloud SQL that are outside of your security perimeter.
You no longer have one resource, firewall, loadbalancer, and security team. You have many. Including "Chris." #VelocityConf
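One way to picture "many firewalls instead of one perimeter" is default-deny, per-edge policy: every service-to-service call is checked against an explicit allow-list, so trust is per-connection rather than zone-wide. A hypothetical sketch (service names and the policy table are invented):

```python
# Hypothetical sketch: instead of one perimeter ("inside = trusted"),
# each service-to-service call is checked against an explicit policy.
POLICY = {
    ("frontend", "orders"): True,
    ("orders", "cloud-sql"): True,    # managed service outside the old perimeter
    ("frontend", "cloud-sql"): False,
}

def allow(caller: str, callee: str) -> bool:
    """Default-deny: a call is permitted only if explicitly allowed."""
    return POLICY.get((caller, callee), False)

assert allow("orders", "cloud-sql")
assert not allow("frontend", "cloud-sql")
```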
The problems we're solving: (1) why are monoliths harder to migrate? (2) Should you? (3) How do I start? (4) Best practices #VelocityConf
.@krisnova is a Gaypher (gay gopher), a k8s maintainer, and involved in two k8s SIGs (cluster lifecycle & AWS, though she likes all the clouds, depending on the day). And she did SRE before becoming a Dev Advocate! #VelocityConf