In my opening remarks at #NDSS18 today, I took a few minutes to talk about the process we followed as a Program Committee to build the program. My goal is to increase transparency and improve the academic reviewing experience. (long thread) 1/
These efforts and initiatives were the joint work of my co-chair, @AlinaMOprea, and me. However, any problems with phrasing or opinions on Twitter are solely my fault. 2/
We faced three main challenges this year: 1) an ever-expanding community; 2) changes by other conferences to their submission practices; and 3) an almost universal sentiment that the peer-review process in security has become unhelpful and unfriendly. 3/
1) Expansion of our community is a great thing. It means that our work is relevant and in demand. It also means that we need to do a better job of ensuring that we capture wide representation. 4/
One of our major efforts this year was to aim for balanced gender representation on the PC. Accordingly, we strove for women to make up between 40% and 50% of our membership. Unfortunately, we were only able to achieve 20%. 5/
Note that while this is a record for NDSS, we need to do better. Our search extended well beyond the walls of academia, and we have since had numerous discussions on how to continue improving this in the future. 6/
We were able to include many new voices on the PC this year, including multiple senior PhD students and post-docs. While some were concerned about putting junior people on the PC, we viewed this as an opportunity for mentorship. 7/
Anyone who believes that graduate students are not already writing reviews is not paying attention. Having them on the PC meant that we could teach them how to be good reviewers/community members early. They were all excellent. 8/
2) We also had to consider IEEE S&P's move to monthly submissions. We received fewer papers this year than last. The temptation to simply maintain acceptance rates was significant, but ultimately unhelpful. After all, Issue 1 was that our community continues to expand. 9/
We decided to take no fewer papers than last year, even though it would result in a higher acceptance rate. We believed that we would not only accept more good papers, but also give more room to new voices. 10/
Besides, security is notoriously bad at metrics. Taking fewer papers is not only not prestigious, it’s counter-productive and (as the now-famous NIPS experiment shows) not at all scientific. NDSS is great because of the quality of ideas and their execution. 11/
It’s also great because of the people it brings together. If you think there’s a meaningful difference between conferences based solely on a few points difference in the acceptance rate, you’ve missed the point. 12/
3) We’ve all received less-than-helpful reviews. It’s maddening to have a paper locked up for months, only to have it returned without actionable comments for improving it. This is a huge contributor to papers being resubmitted with minimal changes. 13/
We selected PC members using many metrics, including reputation. Folks with a reputation for writing short, unhelpful or highly opinionated reviews were generally excluded from the PC. We tried to be very careful in our selection process. 14/
Roughly 50% of papers were still rejected in the first round. We make no claims that all reviews achieved our goals, and I received many angry (and some personally insulting) emails reminding me of this. 15/
The problem of unhelpful reviews is bigger than any one PC. #NDSS18 is a first push-back against this toxic behavior. We will only solve this problem when enough of us vow to address it. Explicit tips for reviewers are offered below. 16/
Papers that made it to Round 2 were allowed to submit a response. This was intentionally different from previous rebuttal processes. Our experience had been that too many reviewers ignored rebuttals in the past, and we hoped that a smaller number of responses would get real attention. 17/
By returning Round 1 results to authors within a month, those whose papers did not make Round 2 did not have to wait the full three months. The best thing for many of these papers was likely additional time and a new, different set of reviewers. 18/
We also assembled a “Review Task Force” for Round 2. This set of senior reviewers could challenge reviews, force conversation and even demand fixes. We found this to be enormously helpful because these hand-selected members were very active. 19/
We set the tone very early. Killing papers is easy, and quite frankly unimpressive. Every paper has flaws, but being able to identify the interesting ideas and help the authors is our real job. Being a PC member is about being a mentor, not an assassin. 20/
Finally, we gave the PC an ultimatum. We set the minimum number of accepted papers to last year’s total (67), and told PC members that if they failed to meet this number, the chairs would pick the remainder of the program. 21/
The choice was simple: understand the challenges of peer reviewing (see the NIPS experiment) and be democratic, or do what is always done and force the PC Chairs to be autocrats. Democracy won, and the PC selected 71 papers. 22/
The process wasn’t perfect, and I am certain that I will continue to hear about where mistakes were made. That’s ok - we are committed to improving the process. 23/
But we can’t do it without you and the rest of the community, so let’s do this together. Make your reviews helpful, and provide actionable feedback. Separate technical from philosophical problems. 24/
Just because you wouldn’t have done it that way doesn’t mean it’s wrong. 25/
Don’t say something is “not novel” without providing strong evidence and specific citations. Even if that’s the case, remember that measurement and confirmation are valuable contributions. Nobody won a Nobel for the Higgs boson until it was measured. 26/
Lastly, accept that you may be wrong. Unpopular, for sure, but it should take more than one loud reviewer to kill a paper. 27/
My hope is that the community builds on our efforts. Chairs, don’t be afraid to try new things. PC members, be mentors! Community, volunteer to serve! 28/28