Mark Allman
International Computer Science Institute
When the Internet was initially constructed it was a de facto playground for researchers. The research community built the Internet and refined it by monitoring traffic and tinkering with protocols and applications. The days of the Internet being a research sandbox are clearly over. Today, society at large depends on the Internet as a crucial piece of infrastructure for communication and commerce. As such, experimenting with and monitoring today's Internet is a much thornier endeavor than in the past because researchers must consider the impact of their experiments on users. The community has started to face questions about the propriety of certain experiments and will no doubt continue to struggle with such issues going forward. In this note, we ponder the role of a program committee when faced with submissions that involve a potentially unacceptable experimental methodology.
We note that the community already charges program committees with dealing with a number of ethical issues such as plagiarism, simultaneous submissions and conflicts of interest. Should dealing with questions about the ethics of a particular experimental methodology also be on the PC's plate? Certainly we are aware of PCs that have taken it upon themselves to include such issues in the debate over particular submissions. Are such considerations ``in bounds'' for a PC? Or, should PCs stick to the technical aspects of submissions?
To give the reader a concrete flavor of the sorts of questions that might arise during PC deliberations, we sketch possible reactions to several recently published papers as examples.
The above questions are all items that the research community will have to (formally or informally) puzzle through in some fashion. These questions are not posed to lay blame on the papers cited above. Nor are we posing these questions in an attempt to answer them in this note. Rather, we pose the above questions and cite the papers to give a flavor of the work that program committees have grappled with recently. The question we ask in this short note is: what is a program committee's role when faced with submissions that touch on the thorny issue of whether a particular experimental methodology is acceptable or a breach of etiquette or ethics?
The discussion in this paper is in terms of ``etiquette'' and ``ethics''. Another aspect of the questions posed, however, is that of ``legal'' concerns. We largely ignore this final aspect, but do not wish to diminish its importance. We do not consider legal aspects here because (1) PCs for systems venues are not comprised of lawyers and therefore applying legal constraints to submissions is not likely to be accurate or useful, and (2) the global nature of our conferences and workshops means that the submitted work is subject to myriad laws and precedents that PCs cannot be expected to understand or cope with. Researchers are encouraged to take steps to ensure they understand the legal implications of their work. Two recent papers written by legal scholars provide legal background within United States law (however, these are likely not a replacement for consulting a lawyer about specific projects) [13,9].
The ACM Code of Ethics and Professional Conduct [6] can be read to address some of the issues we discuss in this note. (For brevity we address only the ACM Code here, but other professional societies have similar codes.) For instance, the Code indicates that professionals should ``avoid harm'' and ``respect the privacy of others''. As a broad framework the Code is reasonable and should provide researchers with a basis for thinking through their experimental methodologies. However, the lack of specifics means that the Code can be interpreted in a multitude of ways and therefore offers little help to PCs when considering thorny ethical issues (except, perhaps, in particularly egregious cases). In addition, reading the Code from a reviewer's standpoint might cause the reviewer to think that ``avoid harm'' means that a paper should be rejected to provide a disincentive to some particular behavior. Given these ambiguities it seems difficult to strongly lean on the Code for guidance on the sorts of specific questions that often face program committees.
Finally, while admitting similarity, we draw a distinction between research that is focused on users and networks and research that is focused on algorithms. Therefore, we do not consider research such as that outlined in [8], which discusses the security problems in 802.11 networks protected by WEP (and, in fact, how to compromise WEP). While such work can have concrete implications for users, the research is fundamentally focused on the algorithms.
A program committee's overall task is obviously to decide which papers to include in a conference program. How much should these issues of etiquette and ethics play into the decision to accept, reject or shepherd a paper?
A simple approach would be for PCs to not bother with issues of etiquette and ethics at all and consider only the technical contribution of a particular paper. This option might be seen as reasonable because PCs are generally comprised of people with technical expertise, but not necessarily a broad grasp of the potential ethical issues involved in conducting various research projects. (This is different from PC members being unethical. It is possible for a given researcher to understand well the issues involved in their own work, but have little or no understanding of the sensitivities involved in work on a different topic. E.g., someone may understand the sensitivities involved in passive measurements, but not appreciate the issues involved with active probing.) This approach would leave judgments about the acceptability of particular techniques to the overall community's public scrutiny. For instance, the use of the LBNL data discussed in § 1 and published in [10] led to a rebuttal and a call for more explicit guidance from data providers [5]. Perhaps a system whereby the community-at-large polices behavior in such a fashion is best.
On the other hand, there are a number of reasons why a PC might want to consider the etiquette involved in a particular submission as part of its decision process, such as:
While there are reasons a PC may want to consider breaches of etiquette in its decision-making process, such a path is not without problems. First, in the absence of a set of community norms, each PC will have to reach its own consensus about the acceptability of a particular experimental technique. This will ultimately lead to uneven results and unfairness to authors across venues. Further, rejecting a paper does not necessarily discourage what a PC considers to be inappropriate behavior since such decisions are not public. Finally, by rejecting questionable papers the community may lose out on some key new understanding that came from the research--which in turn raises the question of whether the ends justify the means.
In the normal course of business, submissions are expected to be held in confidence by a PC. However, another question that has come up is to what degree a PC should violate this expectation when the committee finds the experiments presented in a submission to violate some form of etiquette. We know of cases in which reviewers and PC members have alerted PC chairs about a possible simultaneous submission, and the chairs of the two venues have violated the expectation of confidentiality and shared the two papers to investigate the claims. Further, in cases of suspected plagiarism the ACM has a well-established policy for investigating such allegations that is beyond the scope of the normal PC process and includes additional people [4]. Thus, there is precedent for PCs to involve external parties under exceptional circumstances. (In addition, outside of computer science there is a history of revealing otherwise private conversations in extreme cases. E.g., a lawyer is required to reveal any knowledge of a planned crime by a client--even though conversations between attorneys and clients are generally private and not revealed.)
A broad question about whether there should be a body charged with investigatory power for certain unacceptable experimental behavior--as there is for plagiarism--is certainly something the community could puzzle through. However, that requires a set of ethical standards as a first step. Further, this question is somewhat beyond the scope of this note (i.e., what a PC can or should do when encountering inappropriate experimental techniques).
A more near-term question pertains to the use of a shared community infrastructure, such as PlanetLab, or some released dataset. When a PC encounters an experiment it ultimately considers inappropriate, is the PC in some way duty-bound to share the submission and the concerns with the stewards of the community resource in the name of protecting the resource for the good of the entire community? As discussed above, PCs have a unique vantage point and therefore can raise concerns before a particular paper reaches the public. This can be important in cases such as de-anonymization of data, where the results of the research may, for instance, have an impact on a network's security posture. In addition, if the administrators of some platform concur that unacceptable behavior is occurring, they can sever a user's access. On the other hand, this violates the expectation of a PC holding submissions in confidence. Is protecting our community's resources a big enough concern to violate this expectation?
A similar situation would arise if a PC thought the privacy of a group of users was being undermined in a submission. Does the PC have a duty to report such activity to the group of users (if possible) or to the organization providing the data used in the submission?
Many organizations have processes for conducting research that involves human subjects, and these often center on an Institutional Review Board (IRB). Computer scientists have started using their institution's IRB processes for studies involving traffic monitoring. Anecdotally, we find that some of this is driven by institutions becoming more aware of the implications of networking research and some is driven by researchers seeking to ``cover themselves''. Whatever the impetus for using the IRB process, a natural question is how it pertains to a PC's deliberations. If a submission notes that the experimental methodology has been vetted by the submitting institution's IRB, is that enough to allay any concerns a PC might have about etiquette? While our intention is not to dissuade the use of IRBs, they are not a panacea. We note several issues:
All that said, it seems clear that in some cases IRB approval can be used by PCs as an indication that an experiment is acceptable. For instance, if an experiment involves monitoring a campus wireless network and the appropriate IRB approves of the procedures for monitoring, storing the data, ensuring user privacy, etc., then shouldn't a PC respect that board's findings for the given setting described in a submission?
In many ways this note contributes nothing toward resolving what PCs ought to do in cases where they discover what they believe to be inappropriate experimental behavior. Our goal in writing it is not to speculate on an answer to this question, but rather to start a discussion within the community about these issues. One avenue that the community may pursue is to develop a set of standard practices and/or a set of practices that are considered out-of-bounds. (A BOF was held at last year's Internet Measurement Conference as an initial discussion of whether such a set of norms would be useful. A possible workshop adjunct with the Passive and Active Measurement Conference in April 2008 may attempt to take the next step.) If a set of community norms were to be developed, what should a PC's role be in enforcing those norms? In the absence of such a set of community standards, what role should a PC take? Our hope is that rather than positing what a PC's role should be, we can first start with a community discussion of these issues.
This note benefits from discussions with a great many people including Ethan Blanton, Aaron Burstein, kc claffy, Ted Faber, Mark Claypool, Shawn Ostermann, Vern Paxson, Colleen Shannon and the anonymous WOWCS reviewers. My thanks to all!