Moshe Vardi makes an excellent point in his January 2020 column, noting that we, as a community, should do more to reduce carbon emissions and suggesting that ACM conferences do more to support remote participation. While I share his concern about carbon emissions, I have several concerns about his proposals for conferences.
First, time zones often make it difficult to participate in remote events, a problem also often faced by members of a distributed development team. At home, I'm nine hours behind Western Europe and about 12.5 hours behind India, so I would have to join late at night in both cases. That is simply not workable for a multiday conference.
Second, my own teaching experience during the past 15 years (plus countless faculty meetings) has repeatedly demonstrated that remote participants are less involved. Maybe they are trying (unsuccessfully) to multitask, but it is simply more difficult for remote attendees to ask questions or join a discussion unless it is a virtual event where everyone is remote and there is a moderator who recognizes participants in turn.
Third, the experience with online courses (Udacity, edX, among others) suggests material should be presented differently to a remote audience than to a local one. Khan Academy has long taught in 10-minute snippets, perhaps in recognition of the shorter attention spans of its audience. In my own case, a brief illness last year forced me to deliver a keynote address remotely. Even though I cut my talk down to half of its original length and used slides, there were fewer questions and less discussion than I would have expected.
Fourth, it's important for aspiring and junior faculty to meet the senior faculty in their specialty face to face. Not only are they colleagues, but their support is often valuable in academic promotions. A connection over LinkedIn, even if accepted, falls well short of a personal connection. Vardi recognizes, and I agree, that there is an important social-networking aspect to conferences that cannot be satisfied by remote participation.
Finally, conferences need to build their own community to assure their long-term success, including the leadership of future years of the conference. While it's easy to join a program committee remotely, conference and program chairs, as well as other members of the organizing committees, are more likely to come from repeat attendees who have developed personal relationships with conference organizers.
In summary, I'm trying to do my part (home solar panels, electric car) to reduce my carbon impact, but I think there are some difficult issues with Vardi's proposal. I hope that we can continue the important discussion about our impact on the environment and find some alternative solutions that can address the issues raised here.
Anthony I. Wasserman, Moffett Field, CA, USA
Author's response
Quoting from my column: "Of course, conferences are more than a paper-publishing system. First and foremost, they are vehicles for information sharing, community building, and networking. But these can be decoupled from research publishing, and other disciplines are able to achieve them with much less travel, usually with one major conference per year. Can we reduce the carbon footprint of computing-research publishing?"
Reducing our carbon footprint is an existential imperative. We cannot blindly cling to the way we have been doing things. For some fresh thinking, see, for example, http://uist.acm.org/uist2019/online/
Moshe Y. Vardi, Houston, TX, USA
Response from the Editor-in-Chief
The idea that the field of computing could reduce its carbon impact by reducing the prominence of conferences and adopting practices from a number of other scientific fields is a good one, and I applaud Vardi's column, Wasserman's response, and other efforts recently highlighted in Communications (for example, see Pierce et al. on p. 35 of this issue).
But if the cause of reducing computing's carbon footprint excites you, recognize that conference travel is a pittance compared to the negative climate impact of computing's power consumption. Our research collaborators' 2019 estimates of global datacenter power consumption are nearly double earlier estimates: now 400 TWh! These numbers are a large multiple higher than the best projections based on 2013 data.3 Something important has changed. These numbers are shockingly large and, worse, they are growing fast. Recent press coverage of the hyperscale cloud reveals growth rates of perhaps 40% per year.2
For more, see my broader call to action1 for computing professionals to address computing's growing and problematic direct environmental impact. Let's all get moving on this!
Andrew A. Chien, Chicago, IL, USA
Reducing Biases in Clinical Prediction Modeling
In "Algorithms, Platforms, and Ethnic Bias" (Nov. 2019), Selena Silva and Martin Kenney visualized a chain of major potential biases. The nine biases, which are not mutually exclusive, indeed must be considered in the design of any data-driven application that may affect individuals, especially if the biases have the potential to negatively affect a person's health condition.
Users may be slightly affected if they are exposed to irrelevant online advertisements, or more seriously affected if they are unjustifiably refused a loan at the bank. Even worse, a poorly designed algorithm can lead a physician to a decision that harms patients. An outdated risk-assessment algorithm can significantly affect many individuals, especially if broadly used. One example is the Model for End-Stage Liver Disease (MELD) score, a liver risk-assessment algorithm in worldwide use since 2002. The score was designed from data captured on an extremely small group of patients and used only three manually selected laboratory covariates, excluding other potentially predictive covariates, such as age and additional laboratory values, that were later incorporated into the MELD-Plus score in 2017.
Reducing biases in the design of clinical prediction models is crucial. To achieve such a reduction, it is necessary to define the predicted outcome precisely; when pinpointing the exact occurrence of a diagnosis or the exacerbation of a condition, relying on diagnosis codes alone may be inaccurate, as has been widely discussed in the medical literature. The date of a heart-failure exacerbation, for example, should be defined by at least two independent data elements captured close together in time, such as a diagnosis code date and a diuretic prescription, rather than merely an admission associated with the condition with no clear evidence that the primary reason for admission was the patient's worsening heart failure. To avoid such biases, for example, Khurshid et al.4 combined multiple data elements to identify the onset of atrial fibrillation.
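As a minimal sketch of this two-element rule, consider requiring a diagnosis code and a corroborating prescription within a short window; the function name and the seven-day window here are my own illustrative choices, not from the letter:

```python
from datetime import date, timedelta

def exacerbation_onset(diagnosis_dates, diuretic_dates, window_days=7):
    """Return the earliest diagnosis date corroborated by a diuretic
    prescription within window_days, or None if no date qualifies."""
    window = timedelta(days=window_days)
    for dx in sorted(diagnosis_dates):
        # Require a second, independent data element close in time.
        if any(abs(dx - rx) <= window for rx in diuretic_dates):
            return dx
    return None

# A diagnosis code alone, with no nearby prescription, is not enough:
print(exacerbation_onset([date(2019, 3, 1)], []))                   # None
# A diagnosis code plus a diuretic prescription three days later qualifies:
print(exacerbation_onset([date(2019, 3, 1)], [date(2019, 3, 4)]))   # 2019-03-01
```

The window length is a design choice; too wide a window reintroduces the very ambiguity the two-element rule is meant to remove.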
To reduce biases further, another approach is to avoid subjectively selected data elements. For example, physicians vary greatly in how they use diagnosis codes to document conditions such as hypertension and type-2 diabetes; such conditions can be defined more precisely from actual laboratory values (for example, A1C and blood pressure) rather than from diagnosis codes alone. Furthermore, although genetic and behavioral variability across ethnicities and regions of residence is well known, such data elements must be used with caution in predictive risk scores: they are not measured as objectively as laboratory values, and they may be coincidental to a medical outcome rather than reliable predictors.
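A lab-based definition can be sketched as follows; the thresholds shown are conventional clinical cutoffs (A1C of 6.5% for diabetes, 140/90 mmHg for hypertension) used here purely for illustration:

```python
def has_type2_diabetes(a1c_percent):
    """Define type-2 diabetes from a measured lab value rather than
    a diagnosis code. 6.5% is a conventional cutoff; illustrative only."""
    return a1c_percent >= 6.5

def has_hypertension(systolic_mmhg, diastolic_mmhg):
    """Define hypertension from measured blood pressure.
    140/90 mmHg is a conventional cutoff; illustrative only."""
    return systolic_mmhg >= 140 or diastolic_mmhg >= 90

print(has_type2_diabetes(7.1))       # True
print(has_type2_diabetes(5.4))       # False
print(has_hypertension(152, 84))     # True
```

The point is not the particular thresholds but that the inputs are objectively measured values, removing the documentation variability of physician-assigned codes.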
Uri Kartoun, Cambridge, MA, USA
Where Good Software Management Begins
Bertrand Meyer's critique of a project's critical path and Brooks' The Mythical Man-Month (Blog@CACM, Jan. 2019) is so laced with pejorative themes that his basic point, that heuristics and mathematical models should always be tailored to the situational context, is only laboriously revealed. Mocking and ridiculing the work of earlier practitioners undermines one's own ideas, as we all build on yesterday's results.
Brooks' insight is not a law but a heuristic based on the simple mathematical formula for the possible number of communication channels (edges) among a given number of people (nodes): C = N(N-1)/2.
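As a quick illustration of why adding people to a late project is risky, the formula can be computed directly; note that doubling the team from 5 to 10 people more than quadruples the communication channels:

```python
def channels(n):
    """Possible pairwise communication channels among n people:
    C = N(N-1)/2, i.e. the number of edges in a complete graph."""
    return n * (n - 1) // 2

print(channels(5))   # 10
print(channels(10))  # 45
```

This quadratic growth in coordination overhead is the arithmetic behind Brooks' heuristic.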
That ineffective managers blindly throw additional money and resources at a project ("crashing" a project, in project-management nomenclature) is not a fault of Brooks' insight, but a misapplication of the principle.
The Project Management Institute (PMI) has a well-documented Body of Knowledge (PMBOK), including earned value management (EVM), a suite of simple formulas that use a common cost unit of measure (dollars) across both time and cost.
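The core EVM formulas are simple enough to sketch. The function below shows the standard variance and performance-index definitions; the dollar figures in the usage example are invented for illustration:

```python
def earned_value_metrics(pv, ev, ac):
    """Standard EVM formulas, all in a common $ unit of measure.
    pv: planned value, ev: earned value, ac: actual cost."""
    return {
        "CV": ev - ac,    # cost variance (negative means over budget)
        "SV": ev - pv,    # schedule variance (negative means behind schedule)
        "CPI": ev / ac,   # cost performance index (< 1 means over budget)
        "SPI": ev / pv,   # schedule performance index (< 1 means behind)
    }

m = earned_value_metrics(pv=100_000, ev=80_000, ac=90_000)
print(m["CPI"] < 1 and m["SPI"] < 1)   # True: over budget and behind schedule
```

Expressing both schedule and cost deviations in dollars is what lets a manager compare them on one scale.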
One of the initial guidance principles of PMI and systems engineering is that the rigor and scope of the use of the tools should always be tailored to the particular effort; in other words, you don't need a shotgun when going to an arm-wrestling contest.
Good software engineering management always calls for intelligent application and balance of cost, scope, and time. If you constrain any one side of this triple constraint, the other two will flex. It's not rocket science. And if anything 40 years old is obsolete, we may as well drop Euclidean geometry, given Einstein's work and non-Euclidean geometry.
Michael Ayres, San Francisco, CA, USA
Author's response
I am not sure Ayres paid enough attention to what my blog actually says. It is not a "critique" and does not mock anyone. It is the reverse of "pejorative," that is to say, it is actually laudatory: it brings to the attention of the Communications readership, particularly software project managers, the importance of a key result reported in Steve McConnell's 2006 book, pointing out it deserves to be better known. This is its plain goal, not "that heuristics and mathematical models should always be tailored to the situational context" (which, if I understand this sentence correctly, is probably true but not particularly striking and not what I wrote).
"Brooks' insight is not a law:" True, that's indeed what my article says, but "Brooks' Law" is what Brooks himself called it when he introduced it in The Mythical Man-Month.
"Anything 40 years old is obsolete:" Of course not, nor did I imply anything like this. Same thing for the blaming of Brooks' Law for ineffective managers; my article makes no such representation.
I guess Ayres's main goal is to highlight the value of the PMBOK, a recommendation that I am happy to endorse.
Bertrand Meyer, Zürich, Switzerland