November 1990 - Vol. 33 No. 11


Features

Opinion

Personal computing: Windows, DOS and the Mac

Direct-manipulation or graphical user interfaces (GUIs) are nearly as old as command-line interfaces. At the ACM Conference on the History of Personal Workstations, Doug Ross told of drawing on an oscilloscope screen in 1954 by using his finger to move a spot of light. Graphics software has been a bastion of direct manipulation since the 1950s, and Douglas Engelbart demonstrated direct manipulation of text to large audiences in the 1960s. The style of contemporary direct-manipulation interfaces evolved largely from prototypes developed at the Xerox Palo Alto Research Center (PARC) in the 1970s. The Xerox Star offered a commercial GUI in 1981 (see Figure 1), and several early GUIs, such as VisiOn, TopView, and Windows version 1, failed on underpowered PCs. The Macintosh, introduced in 1984, was a major commercial success. Although GUIs have been in use for years, the hardware to support them is expensive, so the vast majority of personal computer users still control their software by typing commands. With the introduction of Windows version 3, Microsoft hopes to move DOS users away from their command-line interface to a direct-manipulation interface. Let us take a quick look at Windows, then compare it to DOS and the Mac.
Opinion

Legally speaking: how to interpret the Lotus decision (and how not to)

On June 28, 1990, a federal court judge in Boston made public his decision in favor of Lotus Development Corporation in its software copyright lawsuit against Paperback Software. People in the software industry had been waiting for this decision since the lawsuit was first filed in January 1987, certain that it would be a landmark case and would resolve many vexing questions about copyright protection for user interfaces.

The trade press has abounded with varying interpretations of Judge Keeton's opinion in the Lotus case: some have said the decision is a narrow one, making illegal only the direct copying of another firm's interface [9]; some have seen it as a much broader ruling, one that will have a chilling effect on the development of competitive software products [5]; others have asserted the case draws a reasonable line and will have a positive effect overall [4]; several have argued the ruling will be harmful because it ignores the interests of users of software and will make standardization of user interfaces impossible to achieve [3]. Still others perceive the opinion as only setting the stage for a new confrontation over the issues in the appellate courts [1]. Lotus has given some indication of how broadly it interprets the Paperback decision by filing a new round of user interface copyright lawsuits against two of its other spreadsheet competitors.

This column, rather than just adding one more interpretation of the Lotus decision to the bin of those already expressed, will give the reader a glimpse of the nature of the legal process and of judicial opinions so he or she can see why people can interpret the Lotus opinion differently. Three factors make it difficult to know what the Lotus decision means: 1) the legal process is not yet over, and the meaning of the case will depend in part on the outcome of this further process; 2) while Judge Keeton makes some statements that seem to suggest his ruling is a narrow one, some of his other statements could be interpreted much more broadly; and 3) even from unambiguous statements Judge Keeton makes, different people can draw reasonable but nonetheless differing inferences about what the judge would do in similar (though somewhat different) cases. For these reasons, it is impossible to know with any certainty what the law concerning copyright protection for user interfaces is in the aftermath of the Lotus decision.
Research and Advances

Women and computing

There is mounting evidence that many women opting for careers in computing either drop out of the academic pipeline or choose not to get advanced degrees and enter industry instead. Consequently, there are disproportionately low numbers of women in academic computer science and the computer industry. The situation may be perpetuated for several generations, since studies show that girls from grade school to high school are losing interest in computing.

Statistics, descriptions offered by women in academic and industrial computing, and the research findings reported later in this article indicate that much is amiss. But the point of what follows is not to place blame; rather, it is to foster serious reflection and possibly instigate action. It behooves the computer community to consider whether the experiences of women in training are unique to computer science. We must ask why the computer science laboratory or classroom is “chilly” for women and girls. If it is demonstrated that the problems are particular to the field, it is crucial to understand their origins; the field is young and flexible enough to modify itself. These women are, of course, open to the charge that they describe the problems of professional women everywhere. But even if the juggling acts of female computer scientists in both academia and industry are not particular to computing, American society cannot afford to ignore or dismiss their experiences; there is an indisputable brain drain from this leading-edge discipline.

A look at the statistics reveals a disquieting situation. According to Betty M. Vetter, executive director of the Commission on Professionals in Science and Technology in Washington, DC, while the numbers of bachelor's and master's degrees in computer science are dropping steadily for both men and women, degrees awarded to women are dropping faster, so women are becoming a smaller proportion of the total. Bachelor's degrees awarded to women peaked at 35.7% in 1986; master's degrees also peaked that year, at 29.9%; and both are expected to continue to decline. “We have expected the numbers to drop for both, due to demographics such as fewer college students,” says Vetter, “but degrees awarded women are declining long before reaching parity.” (See Table I.) Vetter also would have expected computer science to be “a great field for women,” as undergraduate mathematics has been; female math majors have earned 45% of bachelor's degrees during the 1980s. On the other hand, math Ph.D.'s awarded to women have gone from only 15.5% to 18.1% in this decade, which is more in line with the computer science Ph.D.'s earned by women. In 1987, 14.4% of all computer science Ph.D.'s went to women; this number declined to 10.9% the following year. Although the number almost doubled between 1988 and 1989, with women receiving 17.5% of Ph.D.'s, Vetter points out that the count remains very small, at 107. Since these figures include foreign students, who are principally male, women constitute a smaller percentage of that total than they do of Ph.D.'s awarded to Americans. But even though American women received 21.4% of the Ph.D.'s awarded to Americans, that is not encouraging either, says Vetter: the number of American women awarded computer science Ph.D.'s was minuscule, at 72. And taking a longer view, the awarding of significantly fewer bachelor's and master's degrees to women in the late 1980s will be felt in seven to eight years, when those women would be expected to receive their Ph.D.'s.

How do these figures compare with those of other sciences and engineering?
In her 1989 report to the National Science Foundation, “Women and Computer Science,” Nancy Leveson, associate professor of information and computer science at the University of California at Irvine, reports that in 1986 women earned only 12% of computer science doctorates, compared with 30% of all science doctorates. Leveson notes, however, that the latter figure includes the social sciences and psychology, where the percentages run as high as 32 to 50. The breakout for other fields is as follows: physical sciences, 16.4%; math, 16.6%; electrical engineering, 4.9%; and other engineering, ranging from 0.8% for aeronautical to 13.9% for industrial.

Those women who do get computer science degrees are not pursuing careers in academic computer science. Leveson says women are either not being offered or not accepting faculty positions, or are dropping out of the faculty ranks. Looking at data taken from the 1988-1989 Taulbee Survey, which appeared in Communications in September, Leveson points out that of the 158 computer science and computer engineering departments in that survey, only 6.5% of the faculty are female, and one third of the departments have no female faculty at all. (See Tables III and IV.)

Regarding women in the computing labor force, Vetter comments that the statistics are very soft. The Bureau of Labor Statistics asks companies for information on their workforces, while the NSF asks individuals for their professional identification; therefore estimates vary. Table II shows that this year women comprise about 35% of computer scientists in industry. And according to a 1988 NSF report on women and minorities, although women represent 49% of all professionals, they make up only 30% of employed computer scientists. “There is no reason why women should not make up half the labor force in computing,” Betty Vetter says. “It's not as if computing involves lifting 125-pound weights.”

The sense of isolation and the need for a community were so keen among women in computing that in 1987 several specialists in operating systems created their own private forum and electronic mailing list called “Systers.” Founded and operated by Anita Borg, a member of the research staff at DEC's Western Research Lab, Systers consists of over 350 women representing many fields within computing. They come from 43 companies and 55 universities, primarily in the United States but with a few in Canada, the United Kingdom, and France. Industry members are senior level and come from every major research lab; university members range from computer science undergraduates to department chairs. Says Borg, “Systers' purpose is to be a forum for discussion of both the problems and joys of women in our field and to provide a medium for networking and mentoring.” The network keeps these women, who are few and dispersed, from feeling that they alone experience certain problems. Says Borg, “You can spit out what you want with this group and get women's perspectives back. You get a sense of community.” Is it sexist to have an all-women's forum? “Absolutely not,” says Borg. “It's absolutely necessary. We didn't want to include men because there is a different way that women talk when they're talking with other women, whether it be in person or over the net. Knowing that we are all women is very important.” (Professional women in computer science who are interested in the Systers mailing list may send email to systers-request@decwrl.dec.com.)

The burden on women in computing seems to be very heavy indeed.
Investigators in gender-related research, and women themselves, say females experience cumulative disadvantages from grade school through graduate school and beyond. Because statistical studies frequently come under fire and do not always explain the entire picture, it is important to listen to how women themselves tell their story. In the sidebar entitled “Graduate School in the Early '80s,” women describe experiences of invisibility, patronizing behavior, doubted qualifications, and so on. Given these experiences, it is not surprising that many women find the academic climate inclement. But while more women may choose to contribute to research in industry, is the computer business really a haven for women, or just the only alternative? In the sidebar entitled “The Workplace in the Late '80s,” women in industry also tell their story and describe their dilemmas in a dialogue on academia versus industry; this discussion erupted freely last spring on Systers. In addition, the findings of scholars conducting gender-related research are presented in a report of a workshop on women and computing. Finally, Communications presents “Becoming a Computer Scientist: A Report by the ACM Committee on the Status of Women in Computer Science.” A draft was presented at the workshop, and the report appears in its entirety in this issue.
Research and Advances

Becoming a computer scientist

It is well known that women are significantly underrepresented in scientific fields in the United States, and computer science is no exception. As of 1987-1988, women constituted slightly more than half of the U.S. population and 45% of employed workers in the U.S., but they made up only 30% of employed computer scientists. Moreover, they constituted only 10% of employed doctoral-level computer scientists. During the same time period, women made up 20% of physicians and, at the doctoral level, 35% of psychologists, 22% of life scientists, and 10% of mathematicians employed in the U.S. On the other hand, there are some disciplines in which women represent an even smaller proportion at the doctoral level: in 1987-1988, 8% of physical scientists and only 2.5% of engineers were women [21].

The underrepresentation of women in computer science is alarming for at least two reasons. First, it raises the disturbing possibility that the field of computer science functions in ways that prevent or hinder women from becoming part of it. If this is so, those in the discipline need to evaluate their practices to ensure that fair and equal treatment is being provided to all potential and current computer scientists. Practices that exclude women are not only unethical, but they are likely to thwart the discipline's progress, as potential contributors to the field are discouraged from participation. The second reason for concern relates to demographic trends in the U.S., which suggest a significant decrease in the number of white males entering college during the next decade. At the same time, the number of jobs requiring scientific or engineering training will continue to increase. Because white males have traditionally constituted the vast majority of trained scientists and engineers in this country, experts have predicted that a critical labor shortage is likely early in the next century [4, 25]. To confront this possibility, the federal government has begun to expend resources to study the problem further. A notable example is the establishment of a National Task Force on Women, Minorities, and the Handicapped in Science and Technology, whose final report, issued in December of 1989, lists a number of government and industrial programs aimed at preventing a labor shortage by increasing the number of women and minorities trained as scientists and engineers [5].

In light of these facts, the Committee on the Status of Women in Computer Science, a subcommittee of the ACM's Committee on Scientific Freedom and Human Rights, was established with the goal of studying the causes of women's continued underrepresentation in the field and developing proposed solutions to the problems found. It is the committee's belief that the low number of women working as computer scientists is inextricably tied to the particular difficulties that women face in becoming computer scientists. Studies show that women in computer science programs in U.S. universities terminate their training earlier than men do. Between 1983 and 1986 (the latest year for which we have such figures), the percentage of bachelor's degrees in computer science awarded to women was in the range of 36-37%, while the percentage of master's degrees was in the range of 28-30%. During the same time span, the percentage of doctoral degrees awarded to women has been only in the range of 10-12%, and it has remained at that level, with the exception of a slight increase in 1989 [16, 21].
Moreover, the discrepancy between the numbers of men and women continues to increase when we look at the people who are training the future computer scientists: women currently hold only 6.5% of the faculty positions in the computer science and computer engineering departments of the 158 Ph.D.-granting institutions included in the 1988-1989 Taulbee Survey (see Communications, September 1990). In fact, a third of these departments have no female faculty members at all [16]. This pattern of decreasing representation is generally consistent with that of other scientific and engineering fields [4, 25]. It is often described as “pipeline shrinkage”: as women move along the academic pipeline, their percentages continue to shrink.

The focus of this report is pipeline shrinkage for women in computer science. We describe the situation for women at all stages of training in computer science, from the precollege level through graduate school. Because many of the problems discussed are related to the lack of role models for women who are in the process of becoming computer scientists, we also concern ourselves with the status of women faculty members. We not only describe the problems, but also make specific recommendations for change and encourage further study of those problems whose solutions are not yet well understood.

Of course, our focus on computer science in the university by no means exhausts the set of issues that are relevant to an investigation of women in computer science. Most notably, we do not directly address issues that are of concern exclusively or primarily to women in industry. Although some of the problems we discuss are common to all women computer scientists, there are, without doubt, other problems that are unique to one group or the other. Nonetheless, the committee felt that an examination of the process of becoming a computer scientist provided a good starting point for a wider investigation of women in the field. Clearly, to increase the number of women in industrial computer science, one must first increase the number of women trained in the discipline. Thus, we need to consider why women stop their training earlier than men: too few women with bachelor's degrees in computer science translates into too few women in both industry and academia. Moreover, because of the documented positive effects of same-sex role models [12], it is also important to consider why women drop out in higher numbers than men do even later in their academic training: too few women with doctoral degrees results in too few women faculty members, which in turn means inadequate numbers of role models for younger women in the process of becoming computer scientists.
Research and Advances

Connectionist ideas and algorithms

In our quest to build intelligent machines, we have but one naturally occurring model: the human brain. It follows that one natural idea for artificial intelligence (AI) is to simulate the functioning of the brain directly on a computer. Indeed, the idea of building an intelligent machine out of artificial neurons has been around for quite some time. Some early results on brain-like mechanisms were achieved in [18], and other researchers pursued this notion through the next two decades, e.g., [1, 4, 19, 21, 24]. Research in neural networks came to a virtual halt in the 1970s, however, when the networks under study were shown to be very weak computationally.

Recently, there has been a resurgence of interest in neural networks. There are several reasons for this, including the appearance of faster digital computers on which to simulate larger networks, interest in building massively parallel computers, and, most importantly, the discovery of powerful network learning algorithms. The new neural network architectures have been dubbed connectionist architectures. For the most part, these architectures are not meant to duplicate the operation of the human brain, but rather to draw inspiration from known facts about how the brain works. They are characterized by large numbers of very simple, neuron-like processing elements; large numbers of weighted connections between the elements, where the weights on the connections encode the knowledge of a network; highly parallel, distributed control; and an emphasis on learning internal representations automatically. Connectionist researchers conjecture that thinking about computation in terms of the brain metaphor, rather than the digital computer metaphor, will lead to insights into the nature of intelligent behavior.

Computers are capable of amazing feats. They can effortlessly store vast quantities of information. Their circuits operate in nanoseconds. They can perform extensive arithmetic calculations without error. Humans cannot approach these capabilities. On the other hand, humans routinely perform simple tasks such as walking, talking, and commonsense reasoning, and current AI systems cannot do any of these things as well as humans do. Why not? Perhaps the structure of the brain is somehow suited to these tasks, and not suited to tasks like high-speed arithmetic calculation. Working under constraints suggested by the brain may make traditional computation more difficult, but it may lead to solutions to AI problems that would otherwise be overlooked.

What constraints, then, does the brain offer us? First of all, individual neurons are extremely slow devices when compared to their counterparts in digital computers. Neurons operate in the millisecond range, an eternity to a VLSI designer. Yet humans can perform extremely complex tasks, like interpreting a visual scene or understanding a sentence, in just a tenth of a second. In other words, we do in about a hundred steps what current computers cannot do in ten million steps. How can this be possible? Unlike a conventional computer, the brain contains a huge number of processing elements that act in parallel. This suggests that in our search for solutions, we look for massively parallel algorithms that require no more than 100 processing steps [9]. Also, neurons are failure-prone devices. They are constantly dying (you have certainly lost a few since you began reading this article), and their firing patterns are irregular. Components in digital computers, on the other hand, must operate perfectly. Why?
Such components store bits of information that are available nowhere else in the computer: the failure of one component means a loss of information. Suppose, instead, that we built AI programs that were not sensitive to the failure of a few components, perhaps by using redundancy and distributing information across a wide range of components. This would open the possibility of very large-scale implementations: with current technology, it is far easier to build a billion-component integrated circuit in which 95 percent of the components work correctly than it is to build a perfectly functioning million-component machine [8].

Another thing people seem to do better than computers is handle fuzzy situations. We have very large memories of visual, auditory, and problem-solving episodes, and one key operation in solving new problems is finding the closest matches to old situations. Inexact matching is something brain-style models seem to be good at, because of the diffuse and fluid way in which knowledge is represented. The idea behind connectionism, then, is that we may see significant advances in AI if we approach problems from the point of view of brain-style computation rather than rule-based symbol manipulation. At the end of this article, we will look more closely at the relationship between connectionist and symbolic AI.
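To make those characteristics concrete, here is a minimal sketch in Python (our illustration, not code from the article) of a two-layer network of sigmoid units: each unit is a very simple processing element, and all of the network's "knowledge" lives in its weighted connections. The weights here are random rather than learned; the final lines knock out two hidden units to show the graceful degradation argued for above.

    import math, random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def layer(inputs, weights, biases):
        # Each unit is a simple processing element: a weighted sum of
        # its inputs pushed through a smooth threshold.
        return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
                for row, b in zip(weights, biases)]

    random.seed(0)
    n_in, n_hidden, n_out = 4, 8, 2
    w1 = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [[random.gauss(0, 1) for _ in range(n_hidden)] for _ in range(n_out)]
    b1, b2 = [0.0] * n_hidden, [0.0] * n_out

    x = [0.5, -1.0, 0.25, 0.0]
    hidden = layer(x, w1, b1)
    print("intact: ", layer(hidden, w2, b2))

    # Kill two "neurons". The output shifts but does not vanish, because
    # the network's knowledge is spread across many weighted connections.
    hidden[1] = hidden[4] = 0.0
    print("damaged:", layer(hidden, w2, b2))

Run intact and then damaged, the two outputs differ only modestly: unlike a bit in a conventional memory, no single unit here holds information that exists nowhere else.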
Research and Advances

Transaction processing monitors

A transaction processing (TP) application is a program that performs an administrative function by accessing a shared database on behalf of an on-line user. A TP system is an integrated set of products that supports TP applications. These products include both hardware, such as processors, memories, disks, and communications controllers, and software, such as operating systems (OSs), database management systems (DBMSs), computer networks, and TP monitors. Much of the integration of these products is provided by TP monitors, which coordinate the flow of transaction requests between terminals that issue requests and TP applications that can process them (a toy sketch of this routing role follows below).

Today, TP represents over 25 percent of the computer systems market and is one of the growing segments of the computer business. TP applications appear in most sectors of large-scale enterprise, such as airline reservations, electronic banking, securities trading, inventory and production control, communications switching, videotex, sales management, military command and control, and government services.
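The following Python sketch is our own toy illustration of that coordinating role, not a description of how any real TP monitor is built; the names TPMonitor and Request are invented for the example. It queues requests arriving from terminals and routes each to the application program registered for its transaction type.

    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class Request:
        terminal: str   # terminal that issued the request
        kind: str       # transaction type, e.g. "debit"
        payload: dict

    class TPMonitor:
        """Toy router: queues requests from terminals and hands each
        one to the application registered for its kind."""
        def __init__(self):
            self.apps = {}        # kind -> application function
            self.inbox = Queue()  # requests awaiting dispatch

        def register(self, kind, app):
            self.apps[kind] = app

        def submit(self, request):
            self.inbox.put(request)

        def run(self):
            while not self.inbox.empty():
                req = self.inbox.get()
                reply = self.apps[req.kind](req)  # route to the right application
                print(f"to {req.terminal}: {reply}")

    monitor = TPMonitor()
    monitor.register("debit", lambda r: f"debited {r.payload['amount']}")
    monitor.register("reserve", lambda r: f"seat {r.payload['seat']} reserved")
    monitor.submit(Request("T1", "debit", {"amount": 100}))
    monitor.submit(Request("T2", "reserve", {"seat": "14C"}))
    monitor.run()

A production TP monitor adds what the toy omits: transactional guarantees, concurrency, recovery, and integration with the DBMS and network layers described above.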
Research and Advances

Likelihood ratio gradient estimation

Consider a computer system having a CPU that feeds jobs to two input/output (I/O) devices having different speeds. Let θ be the fraction of jobs routed to the first I/O device, so that 1 - θ is the fraction routed to the second. Suppose that α = α(θ) is the steady-state amount of time that a job spends in the system. Given that θ is a decision variable, a designer might wish to minimize α(θ) over θ. Since α(·) is typically difficult to evaluate analytically, Monte Carlo optimization is an attractive methodology. By analogy with deterministic mathematical programming, efficient Monte Carlo gradient estimation is an important ingredient of simulation-based optimization algorithms. As a consequence, gradient estimation has recently attracted considerable attention in the simulation community.

It is our goal in this article to describe one efficient method for estimating gradients in the Monte Carlo setting, namely the likelihood ratio method (also known as the efficient score method). This technique has been described previously (in less general settings than those developed in this article) in [6, 16, 18, 21]. An alternative gradient estimation procedure is infinitesimal perturbation analysis; see [11, 12] for an introduction. While it is typically more difficult to apply to a given application than the likelihood ratio technique of interest here, it often turns out to be statistically more accurate.

In this article, we first describe two important problems which motivate our study of efficient gradient estimation algorithms. Next, we present the likelihood ratio gradient estimator in a general setting in which the essential idea is most transparent. The section that follows specializes the estimator to discrete-time stochastic processes, deriving likelihood ratio gradient estimators for both time-homogeneous and non-time-homogeneous discrete-time Markov chains. Later, we discuss likelihood ratio gradient estimation in continuous time; as examples, we present the gradient estimators for time-homogeneous continuous-time Markov chains, non-time-homogeneous continuous-time Markov chains, semi-Markov processes, and generalized semi-Markov processes. (The analysis throughout these sections assumes that the performance measure defining α(θ) corresponds to a terminating simulation.) Finally, we conclude the article with a brief discussion of the basic issues that arise in extending the likelihood ratio gradient estimator to steady-state performance measures.
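To make the likelihood ratio idea concrete, here is a minimal Python sketch (our example, not the article's; the fixed service times are assumptions) for a simplified version of the routing problem above: each of n jobs goes to the fast device with probability θ, independently, and the performance measure f is the average service time. Writing s(x) = x/θ - (1 - x)/(1 - θ) for the score of a Bernoulli(θ) routing decision, the estimator averages f times the accumulated score of each sample path, since d/dθ E[f] = E[f · Σ s(X_i)].

    import random

    def score(x, theta):
        # d/dtheta log p_theta(x) for x ~ Bernoulli(theta)
        return x / theta - (1 - x) / (1 - theta)

    def replicate(theta, n_jobs, rng):
        """One terminating simulation: route n_jobs jobs; return the
        average service time f and the sample path's accumulated score."""
        total_time = total_score = 0.0
        for _ in range(n_jobs):
            x = 1 if rng.random() < theta else 0
            total_time += 1.0 if x else 3.0   # assumed fast/slow service times
            total_score += score(x, theta)
        return total_time / n_jobs, total_score

    def lr_gradient(theta, n_jobs=10, n_reps=200_000, seed=1):
        # Likelihood ratio estimator of d/dtheta E[f]: average f * score.
        rng = random.Random(seed)
        reps = (replicate(theta, n_jobs, rng) for _ in range(n_reps))
        return sum(f * s for f, s in reps) / n_reps

    # E[f] = 1.0*theta + 3.0*(1 - theta), so the exact derivative is -2.0.
    print(lr_gradient(0.3))   # prints roughly -2.0

For this toy model the answer can be checked by hand, which is exactly what makes it a useful sanity test: the estimator differentiates the routing distribution, not the (here trivial) performance function, so the same recipe carries over to models where f has no closed form.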
Opinion

Inside risks: risks in computerized elections

Background: Errors and alleged fraud in computer-based elections have been recurring Risks Forum themes, and the state of the computing art continues to be primitive. Punch-card systems are seriously flawed and easily tampered with, yet they are still in widespread use. Direct recording equipment is also suspect: it offers no physical ballots, no guaranteed audit trails, and no real assurance that votes cast are properly recorded and processed.
