Published: Sat 07 December 2019
By Mathias Payer
In Academia.
tags: NDSS SEC review
Yesterday we concluded the NDSS20 PC meeting. In total, 12% of papers were accepted and 6% received a short-fuse major revision opportunity, in line with other top-tier conferences.
The PC chairs handled the meeting well, striving to keep the discussion positive and to provide constructive feedback to the authors.
Overall, this was a great experience with lots of interesting discussions and arguments.
Looking at the subset of systems/software security papers, I'm a little worried.
In total, I positively bid on 53 papers (during the bidding phase of the review process, reviewers indicate which papers they would like to review).
Note that this set does not include papers I have a conflict of interest with.
Also note that this selection is highly biased towards what I am interested in.
I'm therefore focusing only on the subset of papers I am a) interested in and b) not conflicted with. Your mileage will vary.
Of the 53 papers in my bid, only six received an accept or major revision decision.
This results in a sad acceptance rate of at most 11%, and that assumes the unlikely event that we accept all revisions.
Of the six papers moving forward, I reviewed all of them (in total, I reviewed 17 papers this round). Three of the six I championed; for the others, I argued at least for a major revision.
Only one of the six was a straight accept; the other five received major revisions. If all five major revisions are accepted, this will result in, at best, an 11% acceptance rate in my field. At worst, we will have a 2% acceptance rate.
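For the curious, the arithmetic behind these percentages is trivial; here is a minimal sketch (assuming, as the numbers above imply, one straight accept and five major revisions among the six papers moving forward):

```python
# Back-of-the-envelope acceptance rates for the 53 papers in my bid.
# Numbers from above: one straight accept and five major revisions,
# i.e., six papers moving forward in total.
bids = 53
accepts = 1
major_revisions = 5

best_case = (accepts + major_revisions) / bids  # every revision makes it
worst_case = accepts / bids                     # no revision makes it

print(f"best case:  {best_case:.0%}")   # 11%
print(f"worst case: {worst_case:.0%}")  # 2%
```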
I strongly believe that there are more good papers in this community.
Let us find ways to identify these good papers and bring them to light!
Many of my reviewer peers are complaining about the low acceptance rate in software/systems security.
Unfortunately, it is extremely easy to reject papers as a reviewer.
Assume you are a reviewer. If, every time you review a set of papers for a conference, only two papers out of 20 are acceptable, then you may want to adjust your approach.
I'm not talking about the occasional batch of bad papers. If this happens every time you review, you may want to reconsider either your bidding strategy (your area may not be of interest to others) or your review scores (you are too tough on papers).
At top-tier conferences, the program is made up of papers from many different subareas.
These areas change over time: as new areas become dominant, others lose importance.
My (informal) observation is that systems/software people are too tough on papers in their area.
As systems people, we assess not only the idea and the design of the system but also how well it is implemented and evaluated.
We get satisfaction by finding flaws in the design ("ah, you did not consider that the blubb bit remains writable"), by asking for massively more evaluation ("well, they only tested their system on 23597 applications and 253 million lines of code"), or by comparing to marginally related work whose underlying assumptions may no longer be valid ("In 1983 there was a paper on information flow control that solved the problem for programs up to 200 lines of code").
While pointing out such issues is great (and they should be clearly discussed in the paper), they can often be handled through a major revision.
I've been guilty of all of these fallacies myself. When considering such flaws, assess whether they are fixable and remain positive.
Being part of the review task force at Usenix SEC20 (the review task force supports the PC chairs by reading reviews of a large chunk of papers and guiding the online discussions), I saw how people in other communities fight for acceptance of papers in their area.
The general vibe was much more positive and reviewers were looking for reasons to accept a paper, not to reject it.
So, software/systems security folks: find reasons to accept the papers you review.
Let's turn our systems skills into an advantage that makes our field stronger.
We can shape the program of conferences and accept more papers that are closer to our interests.
Stop worrying about the occasional false positive where a bad paper is accepted, and focus instead on the broader picture of what is interesting in our area.
As we carefully review papers, we can guide authors on how to improve their papers.
Augment your review with the aspects you liked about the paper, letting the authors know its strengths along with its weaknesses.
When you make your final judgment, consider the full set of pros and cons. For the weaknesses, consider if they are fixable.
Give the authors clear instructions on what you think the weaknesses are and point out how they can be fixed.
Then, fight for acceptance instead of rejection. The next deadline is coming up soon!