As I read the special section on the China Region (Nov. 2018), I thought privacy in China deserved better treatment than was expressed in the section's foreword "Welcome to the China Region Special Section" by co-organizers Wenguang Chen and Xiang-Yang Li, that "People in China seem less sensitive about privacy." It sounded almost identical to what Robin Li, CEO and co-founder of Baidu, said in a talk at the March 2018 China Development Forum that was not well received by China's Internet users.2
A March 2018 survey of 100,000 Chinese households by CCTV and Tencent Research reported 76.3% of participants view AI as a threat to privacy.1 Other global privacy surveys, including one by KPMG, reported privacy awareness in China as far more prevalent than the authors seemed to imply.
One of the few critical notes in the special section came near the end of Elliott Zaagman's article "China's Computing Ambitions," which called the lack of (Western-style) legal protections and transparency "a real concern." This was followed by a quote on the weaknesses of more-open digital societies. When the lack of privacy rights was mentioned elsewhere in the special section, it was described as "an accepted observation."
Feng Chucheng of risk-analysis firm Blackpeak said, "Rather than simply reflecting [the status quo] that privacy protections are not well-developed in this society, [Baidu] should be leading the charge to improve privacy rights."2 Perhaps the professors and analysts who contributed articles to the section should have tried to do the same. It would not have detracted from the quality of their articles.
The "West" itself shows signs of moving toward being a surveillance society, and no amount of "privacy rights" will change that historical direction. More than a few Western governments are actually envious of China's unique applications of technology in society. We should be suspicious of government agencies and regulators redefining privacy or downgrading it or citing national security to make such applications fit their agenda. A similar observation can be made about privately run corporations as well, especially social networks.
Articles and columns in Communications should include, alongside each technological achievement, consideration of how it might be abused and the lessons to be learned when it is. It would mean extra work for every author, as well as increased reader skepticism, but would surely increase awareness.
As a New Year's resolution, I respectfully invite everyone to read or reread the ACM Code of Ethics and Professional Conduct (https://www.acm.org/code-of-ethics), especially sections 1.1, 1.2, and 1.6, and incorporate it into their research and professional practice, especially those with authority and influence, or who publish in ACM's leading publication.
Vincent Van Den Berghe, Leuven, Belgium
Van Den Berghe's letter raises a good point—that articles discussing technology can and should be enriched by discussion of their societal context, including potential abuses. I am pleased to see this topic being raised in the context of the China Region special section and believe it applies much more broadly, both globally and across a variety of topics. This is an important challenge to Communications authors. I am sure they will rise to it.
Andrew A. Chien, Chicago, IL, USA
In their Viewpoint "Learning Machine Learning" (Dec. 2018), Ted G. Lewis and Peter J. Denning used a Q&A format to address machine learning and neural nets but, in my view, omitted two fundamental and important questions. The first is:
Q. Is machine learning the best way to get the most reliable and efficient solution to a problem?
A. Not generally.
To explain my answer, I need a definition of "machine learning." Machine learning is a machine collecting data while providing service and using the data to improve the speed or accuracy of the service. This is neither new nor unusual. For example, a search program can reorder its search list to move the most frequently requested items toward the top of the list. This improves performance until there is a major change in the probability of the items being requested. When this happens, performance may degrade until the machine "learns" the new probabilities. Suggestions offered by a search engine are also based on data collected while serving users; the search engine uses the data to "learn" what users are likely to ask.
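The self-adjusting search list described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not code from the Viewpoint; the class name and its interface are invented for the example:

```python
from collections import defaultdict

class AdaptiveSearchList:
    """A linear search list that 'learns' from usage: each successful
    lookup is recorded, and the list is reordered so the most
    frequently requested items are scanned first."""

    def __init__(self, items):
        self.items = list(items)
        self.hits = defaultdict(int)

    def find(self, key):
        # Linear scan; the cost of a lookup is the key's position.
        for pos, item in enumerate(self.items):
            if item == key:
                self.hits[key] += 1
                # Reorder: frequently requested items move toward the
                # front, speeding up future lookups for popular keys.
                self.items.sort(key=lambda x: -self.hits[x])
                return pos  # comparisons needed for this lookup
        return -1  # not found

lst = AdaptiveSearchList(["alpha", "beta", "gamma", "delta"])
first = lst.find("delta")   # "delta" is last: found at position 3
second = lst.find("delta")  # after reordering it is first: position 0
```

As the letter notes, this "learning" helps only while the request distribution is stable; a shift in which items are popular degrades performance until the counts catch up.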
When machine learning is used to "discover" an algorithm, it may find a local optimum, or an algorithm that is better than similar algorithms but very different from a much better one. A human who took the time to understand the situation might find that algorithm. Machine learning is often a lazy programmer's way to solve a problem. Using machine learning may save the programmer time but fail to find the best solution. Further, the trained network may fail unexpectedly when it encounters data radically different from its training set.
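The local-optimum failure described above is easy to reproduce with the simplest of learners, greedy hill climbing. The function `f` below is a contrived toy, chosen so that a much better solution exists but is invisible to a purely local search:

```python
def hill_climb(f, x, step=1):
    """Greedy search: repeatedly move to the best neighbor.
    Stops at the first peak found, which may be only a local one."""
    while True:
        best = max((x - step, x, x + step), key=f)
        if best == x:
            return x
        x = best

def f(x):
    # A smooth hill peaking at x = 2, plus an isolated, much higher
    # spike at x = 10 that greedy search from x = 0 never reaches.
    return -(x - 2) ** 2 + 4 + (96 if x == 10 else 0)

found = hill_climb(f, 0)  # stops at x = 2, the local optimum
```

Here f(10) = 36 is far better than f(2) = 4, but every neighbor of the climber's path looks worse, so the search never sees it; a human who studied `f` would spot the spike at once.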
The second Q&A pair Lewis and Denning should have addressed concerns "neural networks":
Q. If developers have constructed (or simulated) a physical neural network and trained it to have the behavior they want, is it possible to replace it with more conventional hardware and software with the same behavior?
A. Yes. In other words, there is no problem that can be solved using neural nets that could not be solved using more conventional hardware and programming languages. Some claim the neural net will be faster (or more efficient in some sense), an assertion that remains to be proved. Any performance advantage observed today can be attributed to the highly parallel specialized processors used to implement the nets. Better performance can often be obtained by programming that hardware directly.
David Lorge Parnas, Ottawa, Canada
Given the space, we would have answered Parnas's provocative questions much the same way he did. We would have added how difficult it is to beat the performance of neural networks on special-purpose hardware. Nor can we ignore AlphaGo, the machine that played against itself for several days, with no outside information, and became a grandmaster at Go. IBM's earlier chess supercomputer, by contrast, was carefully designed by industrious programmers over many years. Speed to solution is a powerful motivator, even if the solution may not be understandable.
Ted G. Lewis and Peter J. Denning, Monterey, CA, USA
Jordi Cabot et al. first outlined their hypothesis about lack of "newcomer" authors being accepted at computer science conferences in their Viewpoint "Are CS Conferences (Too) Closed Communities?" (Oct. 2018) and then, seeking data to evaluate it, succumbed to confirmation bias, unintentionally undermining their own hypothesis. Their stated objective of "opening up" computer science conferences may be a laudable social goal, but they presented no evidence that the technical quality of conferences would be enhanced by doing so. Moreover, they presented little, if any, compelling evidence that the claimed lack of newcomer submissions is due to any reason beyond the standard criterion—technical merit of the papers.
Although the title of the Viewpoint referred specifically to computer science conferences, Cabot et al. pointed out that the database of papers they included in their survey was limited to the area of computer software. They should thus have limited any conclusions to conferences likewise devoted to computer software.
They defined newcomer papers as "... research papers where all authors are new to the conference; that is, none of the authors has ever published a paper of any kind in that same conference." This brings up two problematic analytical issues. First, is newcomer status binary? That is, does publication of a single paper in a conference render a newcomer author (to use their phrase) a "member of the community?" Second, how different would their statistics have been if they had used a data-collection period different from the seven years on which they based their analysis? These questions went unanswered.
Moreover, they said, "... analysis suggests that newcomer paper submissions represent at least one-third of the total number of submissions" based on the data of one of the Viewpoint authors as a member of the program committee of four software conferences. We cannot ignore the potential correlation among the conferences where he was a committee member. It thus seems unreasonable to conclude the data suggests anything about the set of 65 conferences covered in the study survey. Further, their suggestion that at least one-third of submissions are from newcomer authors was weakened by their later conjecture that "some potential newcomers refrain from submitting in the first place," saying, "[t]he overall presence of newcomers decreases over time." This suggests that either newcomers are becoming "established members of the conference community" or the field itself is shrinking. The possibility of computer software research shrinking is unlikely.
It is thus not apparent there is a "problem" involving lack of newcomers submitting papers to computer science conferences or that Cabot et al.'s suggestions are supported by relevant data and would contribute to the health of the field of computer science.
Paul B. Schneck, Bala Cynwyd, PA, USA
We agree there is no evidence that opening up conferences increases their technical quality, at least not right away, but believe it is still an important goal for the community and one that will prove beneficial in the long term. We also agree an extended data analysis would be beneficial to continue the discussion. We hope the column triggers it and generates replication studies and some pressure on conference managements to release additional (anonymized) data.
Jordi Cabot, Barcelona, Spain, Javier Luis Cánovas Izquierdo, Barcelona, Spain, and Valerio Cosentino, Madrid, Spain
Near the end of Leah Hoffmann's interview with Dina Katabi "Reaping the Benefits of a Diverse Background" (Oct. 2018), Katabi said, "I couldn't tell you if . . . we should change the dose of her Parkinson's medication." In fact, the winner of the 2018 Human-Competitive Award at the ACM Genetic and Evolutionary Computation Conference in Kyoto, Japan (see http://www.human-competitive.org/awards) has already done just that.
The prize went to Stephen L. Smith, a senior lecturer in the Department of Electronics at the University of York, York, U.K., for a home-monitoring device for Parkinson's dyskinesia (involuntary muscle movement).3 ClearSky's LID-Monitor, which includes novel signal processing developed through Cartesian genetic programming, reports the severity of shaking associated with the disease to the patient's medical team, assisting in setting the correct dose of Levodopa.
W.B. Langdon, London, U.K.
1. Hersey, F. Almost 80% of Chinese concerned about AI threat to privacy, 32% already feel a threat to their work. TechNode (Mar. 2, 2018); https://technode.com/2018/03/02/almost-80-chinese-concerned-ai-threat-privacy-32-already-feel-threat-work/
2. Li, R. Are Chinese people 'less sensitive' about privacy? Sixth Tone (Mar. 27, 2018); http://www.sixthtone.com/news/1001996/are-chinese-people-less-sensitive-about-privacy%3F
3. Lones, M.A. et al. A new evolutionary algorithm-based home-monitoring device for Parkinson's dyskinesia. Journal of Medical Systems 41, 11 (Nov. 2017), article 176; http://doi.org/10.1007/s10916-017-0811-7
Communications welcomes your opinion. To submit a Letter to the Editor, please limit yourself to 500 words or less, and send to [email protected].
©2019 ACM 0001-0782/19/02
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from [email protected] or fax (212) 869-0481.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2019 ACM, Inc.