Forum

Solving the Plagiarism Problem

Ned Kock’s article, "A Case of Academic Plagiarism" (July 1999, p. 96), was an interesting detective story that also shed some light on copyright laws, electronic publishing, and the frustration and aggravation encountered by authors who believe their work has been plagiarized. Kock ends his article by calling for a wider debate on academic plagiarism. One cannot argue with that call. But while worrying about ethics in the area of plagiarism, Kock, and those he dealt with, failed the ethical test in another area: the ethics of professional journal refereeing.

It is dismaying to learn that Kock’s friend, who first alerted him that he was refereeing a paper that heavily cited Kock’s work, eventually let him see the referees’ reports, the journal’s associate editor’s report, the identity of the referees, and the submission with the author’s identity. Up to that point, neither Kock nor his referee friend had any indication that plagiarism had occurred. I find such action quite unethical and against the stated and implied rules of professional journal refereeing.

The alleged plagiarist’s submission was rejected by the journal’s associate editor. That should have been the end of it. I am not suggesting that plagiarism, if suspected, should not be pursued. But had the paper eventually been published, Kock would certainly have become aware of it and would have had, in my mind, a stronger case.

There are worrying implications stemming from Kock’s article. The first is his suggestion that professional societies establish ethics committees "…to bring some measure of justice to the people involved." Such volunteer committees would, I believe, require Solomonic decisions from ordinary people and could lead to false results and the possible ruination of careers. I say, leave it to the people involved and the courts. The second implication is that referees may be asked to become plagiarism police, endangering the refereeing process and the structure of peer review. We must enter into the peer review process under the assumption that all involved work under personal and professional codes of ethics that advance the objectives of the profession as a whole (see p. 102 of this issue). To do otherwise would bring us back to the future, Orwell’s future, that is.

We cannot chase away one ethical failure by committing another.

Saul I. Gass
College Park, MD

Kock shows that the present refereeing process cannot determine the quality of empirical data. This problem is even more serious than plagiarism. If referees cannot tell whether data pertain to a New Zealand university or a U.S. apparel manufacturer, they will not be able to distinguish fictitious data either. Writing an article based on false data may take more time than plagiarizing, but the probability of detection is far lower. This could be prevented by supplementing current refereeing with an audit of the empirical data by an auditor who has full access to all underlying data and is sworn to confidentiality.

Rommert J. Casimir
Tilburg, The Netherlands

Kock’s article is as alarming as it is unsurprising. There is a closely related, much more widespread, and only marginally less-serious plagiarism problem that Kock doesn’t mention: inclusion of verbatim Web material by undergraduates in assessment essays. For several years I’ve advised colleagues to take key phrases from student writings and check them with a search engine. I also warn students that such a check will be done. Prompted by Kock’s article, I decided to do a tiny statistical test on the effectiveness of my advice.

I gathered 10 paragraphs from Web sites with URLs of the form www.random-noun.com. The random nouns were generated using an ordinary nonlinear additive feedback generator acting on a machine-readable dictionary available from the Oxford Text Archive (ftp://ota.ox.ac.uk/pub/ota).

The hit rate of valid URLs was surprisingly high. I then took what seemed to be a distinguishing key phrase from each paragraph and had AltaVista search for it as a quoted string. Only four of the 10 paragraphs were found.

Ten tests is far too small a statistical sample to be conclusive. But six of the 10 paragraphs escaped detection, and to me a 60% chance of getting a high mark for a piece of assessed coursework that took no effort seems like a good bet from a time-pressed student’s point of view.
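For readers who want to repeat the experiment, the following is a minimal sketch of the procedure in Python, not Bowyer’s actual script. The word-list path and the search_quoted stub are assumptions: he used a machine-readable dictionary from the Oxford Text Archive and AltaVista, but any dictionary file and search engine will do.

```python
import random

WORDLIST = "/usr/share/dict/words"  # stand-in for any machine-readable dictionary

def random_noun_url(rng: random.Random) -> str:
    """Build a candidate URL of the form www.<random-noun>.com."""
    with open(WORDLIST) as f:
        words = [w.strip().lower() for w in f if w.strip().isalpha()]
    return "http://www." + rng.choice(words) + ".com"

def key_phrase(paragraph: str, length: int = 8) -> str:
    """Pull a distinctive-looking run of words to search as a quoted string."""
    words = paragraph.split()
    start = max(0, (len(words) - length) // 2)  # take a run from the middle
    return " ".join(words[start:start + length])

def search_quoted(phrase: str) -> bool:
    """Stub: submit the phrase, in quotes, to a search engine; True on a hit."""
    raise NotImplementedError("wire this to the search engine of your choice")

if __name__ == "__main__":
    rng = random.Random()
    print("Candidate site:", random_noun_url(rng))
    # For each paragraph gathered from such a site:
    #     found = search_quoted(key_phrase(paragraph))
    # The fraction of paragraphs *not* found estimates a copying student's
    # odds of escaping a key-phrase spot-check.
```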

Adrian Bowyer
Bath, U.K.

I found Kock’s article very interesting. I am happy that "Plag’s" university took some action to clarify the case, and sorry that Kock obtained no financial compensation for his efforts against the form of plagiarism described in his article.

A long time ago, Mediterranean fishermen used an anchor-like mooring called a "corpo morto" that behaves much like most modern systems. With its polyp-like tentacles, a corpo morto resists stormy motion very well. It moves only if a constant force is applied, and the movement is very slow; more tentacles mean more resistance. All legal systems behave in a similar fashion: like all other systems, they resist abrupt moves. That might be one reason the academic world comes into conflict with the legal system. In the long run, Kock’s work was successful; "Plag’s" probably was not. Plag’s environment corrected the system’s behavior.

My family once lived in a European country whose legal system wasn’t as favorable as the U.S. system Kock encountered. Every day, local announcements asserted the country would be among the first included in the EU. Yet we moved to nearby Italy because the legal system threatened to punish me for lowering our national wall. The architect (the person who won the suit) had direct connections to the country’s minister of culture. This example shows that, at least in Europe, there are individual legal systems that can withstand stormy changes. One example is local war.

As a Communications reader for about 20 years, I find the move from a pure research/scientific review to a more consumer-oriented magazine very interesting. Keep publishing articles that open onto law, ethics, and other worlds beyond computing.

Anonymous

The Limits of LEGOs

I enjoyed the article "Using Autonomous Robotics to Teach Science and Engineering" (July 1999, p. 85).

The class the authors describe, which entails multidisciplinary student teams designing, constructing, and testing LEGO-based autonomous mobile robots, is an admirable attempt to change the way technology is taught at universities. This interdisciplinary approach is sorely needed in technology education in general, and robotics in particular. Similar classes are being taught at many universities. Unfortunately, while they are great in principle, they often have implementation problems.

First, these classes often fail to address certain topics vital to the development of modern robotics. In particular, while they usually have lectures devoted to sensors, actuators, gear systems, and so on, and cover AI algorithms and biological connections in some detail, they usually do not address dynamics, kinematics, or control theory. This is unfortunate since these fields are critical to robotics, having contributed optimal solutions to the problems of robot locomotion in the presence of environmental uncertainty and noisy sensors and actuators.

In my experience, students in these classes must spend many hours developing ad-hoc solutions to these problems, when simple, elegant solutions already exist and can be covered in just a few lectures. Nearly every real-world mechanical system uses these results, yet most students exit these classes not even knowing they exist. While robotics researchers do not necessarily need to become control theorists, anyone participating in robotics research or any kind of commercial technology design should at least understand the vital (and astoundingly overlooked) part that dynamics, kinematics, and controls play in robotics.
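To make the point concrete, the following is a minimal sketch of one such result: a proportional-integral-derivative (PID) controller, the workhorse feedback law that a few lectures on control theory would cover. It is an illustration only, not code from the class described in the article; the gains and the heading-control usage are assumed for the example.

```python
class PID:
    """Textbook proportional-integral-derivative feedback controller."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd  # illustrative gains, tuned per robot
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        """Turn the current tracking error into an actuator command."""
        self.integral += error * dt                  # integral term removes steady-state error
        derivative = (error - self.prev_error) / dt  # derivative term damps overshoot
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical usage: steer toward a target heading once per 20-ms control tick.
# pid = PID(kp=2.0, ki=0.1, kd=0.5)
# command = pid.update(target_heading - measured_heading, dt=0.02)
```

Even this one-screen controller handles the disturbance rejection and setpoint tracking that students otherwise spend many hours approximating with ad hoc code.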

Also, while LEGOs are a wonderful tool for encouraging creative play in children, they fall flat as a university educational or research tool. LEGOs have so many structural limitations, and LEGO geartrains have such poor friction and backlash characteristics, that the techniques required to build a functioning LEGO robot are completely different from those needed to design a real-world machine. In my experience with such classes, the vast majority of student time is devoted to solving problems endemic to LEGOs instead of learning about robots. The same criticism also applies, to a lesser degree, to the kits’ sensors and actuators, whose poor noise and accuracy characteristics make it difficult to ascertain the performance of any higher-level algorithms built on them.

There are many commercially available kits designed for robotics prototyping. Most are just as flexible, reusable, and cost-effective as LEGOs, and have much better mechanical properties. Some even provide the parts necessary to build complex robotic manipulators. They allow a broader range of solutions than the relatively constrained LEGO world permits, and there’s minimal worry about bits of a robot ending up all over the lab if dropped. Why not use one of them instead?

Glen Henshaw
College Park, MD

What About Information Dominance Warfare?

Thank you for devoting a section of the July issue of Communications to defensive information warfare (DIW). The very real threat to information systems has heretofore been viewed largely as the work of rogue hackers and teenage techno-geeks.

As Neil Munro’s "From Washington" column (p. 19) points out, information operations, which subsume information warfare and information-in-warfare, are being conducted not only by hackers but by organized groups, such as governments and the media. These operations affect information at all levels. U.S. Air Force doctrine states: "The two pillars of information operations, information-in-warfare and information warfare, though separate and distinct, must be closely integrated with each other."

As George Stein, a professor at the USAF Air War College, says, "Information warfare is about the way humans think and, more importantly, the way humans make decisions. The target of information warfare, then, is the human…" Information warfare really comprises two components: information systems warfare, aimed at the cabling, computers, databases, software, the Internet, and so forth; and information dominance warfare, aimed at the human element that leverages data, information, and knowledge for tactical and strategic advantage.

With the exception of Munro’s column, the DIW special section focused almost exclusively on information systems warfare. While this focus is a necessary aspect of information operations, it is not sufficient. To support the human decision maker, DIW system designers should address human-centered issues, such as system credibility (as discussed by Tseng and Fogg in the persuasive technologies special section of the May 1999 Communications); data aggregation (finding evidence of an event’s occurrence among data from multiple sources that differ in means, point of origin, and time of occurrence); the effects of information dissonance (incomplete, inconsistent, and/or incorrect data) on decision making; information pedigree (the origins of the information); and information visualization.

I am sure the guest editors and writers are aware of such issues; space restrictions, however, did not allow for detailed discussion of the information-dominance-warfare aspect of DIW. It would be valuable to see a future issue of Communications devoted to such research within information-intensive domains, such as DIW, information retrieval, and decision-support systems. Research being conducted at the Air Force Research Laboratory, Wright Research Site, Wright-Patterson AFB, Ohio, in conjunction with Joseph Giordano’s branch (as well as by other researchers at the Rome Research Site), would be very applicable to such an issue.

Scott M. Brown
Dayton, OH

The Licensing Issue

In his "Forum" letter (July 1999, p. 11), Tom DeMarco responds to Donald Bagert’s April 1999 "Viewpoint" (p. 27), saying, without proof, that licensing software engineers is a dimwitted idea.

I would like to know whether DeMarco believes licensing civil engineers is also dimwitted, or does he think any low bidder is qualified to build a bridge? If we built bridges the way Microsoft builds software, would he be willing to use them?

How about registered nurses and other state-regulated titles? Does DeMarco believe an MCSE is a better replacement for a college degree and a state-issued license? We don’t elect representatives to Microsoft. Why should Microsoft become a de facto replacement for the state, over which we at least have some oversight?

Personally, I do not want my doctor’s nurses to have a "certificate" from the local drug store. RN is a prestigious title that ensures competence and comforts the patient precisely because it is a legally valid title issued through the state examination and licensing process.

State licensing of professionals has a sound basis; extending it to software engineers would force vendors to provide better, more reliable products instead of shipping garbage for purchasers to debug.

As to the decertification DeMarco fears, it may be a good thing, since many so-called software engineers are simply not competent. Too much software is built by people other than qualified programmers, including junior analysts and vacuum tube-type hardware engineers. As with all licensing, genuinely qualified current practitioners should be grandfathered in, with new licensees required to jump through all the hoops. A suitable college degree is a reasonable requirement. If this drives the Bill Gates-types out of the industry, then all the better. This would only improve the quality and reliability of our software. For those who learn on the job, the traditional engineering approach of substituting "x" years of experience for a college degree is a suitable alternative.

The two-step approach to professional engineering licensure has proven to be an excellent system: applicants pass a test to enter the process as qualified trainees, then complete four years of experience under qualified engineers who can vouch for the quality of their work, and finally sit for the professional exam itself.

DeMarco finds it morally repugnant to prevent charlatans from selling their services to unsuspecting members of the public. People cannot protect themselves against such persons, and with no licensing laws, the public would have no recourse after the fact.

We should require all software engineers to be licensed and the states to enforce their exclusive right to issue engineer titles by penalizing Microsoft, Novell, et al., who have usurped the states’ legal rights as sole issuers of professional licenses and associated titles, such as engineer and RN.

William Adams
Springfield, VA
