Research and Advances

An empirical study of the impact of user involvement on system usage and information satisfaction

"User involvement" in information system development is generally considered an important mechanism for improving system quality and ensuring successful system implementation. The common assumption that user involvement leads to system usage and/or information satisfaction is examined in a survey of 200 production managers. Alternative models exploring the causal ordering of the three variables are developed and tested via path analysis. The results demonstrate that user involvement in the development of information systems will enhance both system usage and the user's satisfaction with the system. Further, the study provides evidence that the user's satisfaction with the system will lead to greater system usage.

Organizational power and the information services department: a reexamination

In a recent application of the theory of strategic contingencies in three large multinational firms, Lucas found that information services departments were perceived by others as having low levels of power and influence and suggested a variety of reasons for the results. This note continues the application of the theory of strategic contingencies to the information services department by describing a study of intraorganizational power that uses basically the same procedures as the Lucas study and obtains similar results. In an effort to stimulate future power-related research in the information systems area, this note concludes by suggesting several reasons, beyond those given by Lucas, for the levels of power attributed to information services departments.

Complexity and parallel processing: an interview with Richard Karp

In the following interview, which took place at ACM 85 in Denver, Karp discusses the relation of his work to leading-edge computing topics like parallel processing and artificial intelligence. Tracing his experience as a pioneer in highly theoretical computer science, Karp describes how the decision to go against established wisdom led to the work for which he is best known and how a colleague's findings led him to see links between two previously unrelated areas. Throughout, he stresses the exchange of ideas with colleagues that helped yield fundamental insights.

Piecing together complexity

To illustrate the "remarkable extent to which complexity theory operates by means of analogs from computability theory," Richard Karp created this conceptual map or jigsaw puzzle. To lay out the puzzle in the plane, he used a "graph planarity algorithm." The more distantly placed parts might not at first seem related, "but in the end, the theory of NP-completeness does bring them all together," Karp says. The upper right portion of the puzzle shows concepts related to combinatorial explosions and the notion of a "good" or "efficient" algorithm. In turn, "Complexity" connects these concepts to the upper left portion, which represents the concerns of early computability theorists. The traveling salesman problem is closer to the upper right corner because it is probably intractable. It therefore borders on "NP-completeness" and "Combinatorial explosion."

To some extent, however, certain divisions blur. "Linear programming," for example, has an anomalous status: the most widely used algorithms for solving such problems in practice are not good in the theoretical sense, and those that are good in the theoretical sense are often not good in practice. One example is the ellipsoid method that was the object of so much attention six years ago. It ran in polynomial time, but the polynomial was of such a high degree that the method proved good in the technical sense but ineffective in practice. "The reason is that our notion of polynomial-time algorithms doesn't exactly capture the notion of an intuitively efficient algorithm," Karp explains. "When you get up to n⁵ or n⁶, then it's hard to justify saying that it is really efficient. So Edmonds's concept of a good algorithm isn't quite a perfect formal counterpart of good in the intuitive sense." Further, the simplex algorithm is good in every practical sense, Karp says, but not good according to the standard paradigm of complexity theory.
The most recent addition to linear programming solutions, an algorithm devised by Narendra Karmarkar that some think challenges the simplex algorithm, is good in the technical sense and also appears to be quite effective in practice, says Karp. The good algorithm segment is adjacent to "Heuristics" because a heuristic algorithm may work well but lack a theoretical pedigree. Some heuristic algorithms are always fast but sometimes fail to give good solutions. Others always give an optimal solution but are not guaranteed to be fast. The simplex algorithm is of the latter type. "Undecidability," "Combinatorial explosion," and "Complexity" are on the same plane because they are analogs of one another; undecidability involves unbounded search, whereas combinatorial explosions are by definition very long but not unbounded searches. Complexity theory bridges the gap because, instead of asking whether a problem can be solved at all, it poses questions about the resources needed to solve a problem.

The lower left region contains the segments Karp has been concerned with most recently and that contain open-ended questions. "Randomized algorithm," for example, is situated opposite "Probabilistic analysis" because both are alternatives to worst-case analyses of deterministic algorithms. Randomized algorithms might be able to solve problems in polynomial time that deterministic ones cannot, and that could mean an extension of the notion of good algorithms. Perhaps through software designs for non-von Neumann machines, algorithms can be made more efficient in practice through parallelism. Finally, some parts of the puzzle are not yet defined. Says Karp, "They correspond to the unknown territory that remains to be explored in the future."
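The combinatorial explosion that places the traveling salesman problem next to "NP-completeness" on the puzzle is easy to see concretely: a brute-force solver must examine (n−1)! tours, so adding one city multiplies the work by roughly n. A minimal sketch in Python (the distance matrix is invented purely for illustration):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustive search over all tours starting and ending at city 0.

    Examines (n-1)! permutations -- the combinatorial explosion that
    makes this approach hopeless beyond tiny inputs.
    """
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):       # fix city 0 as the start
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# A tiny symmetric distance matrix for 4 cities (made up for the example).
d = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

length, tour = tsp_brute_force(d)
print(length)   # → 18
```

At n = 4 there are only 6 tours to check, but at n = 20 there are about 1.2 × 10¹⁷, which is why the problem "borders on" intractability in Karp's map even though each individual tour is trivial to evaluate.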

Dynamic initial allocation and local reallocation procedures for multiple stacks

Two new procedures for manipulating multiple stacks that share sequential memory locations are discussed. The first is the dynamic initial allocation procedure, in which each stack is allocated when its first element arrives rather than having every stack preallocated at the very beginning of the entire process. The second is the local reallocation procedure; in this scheme, when a stack overflows, only its neighboring stacks, rather than the entire memory area, are reorganized, provided that a certain condition is satisfied. Simulation results suggest that these new approaches improve operational performance in many applications. With appropriate modifications, these concepts may also be applied to other types of multiple linear lists (e.g., multiple queues) sharing sequential memory locations.
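The two ideas can be sketched roughly as follows. This is a simplified illustration of the concepts named in the abstract, not the paper's actual algorithms: the class and its allocation rules (claiming the midpoint of the free gap on a stack's first push, and sliding an overflowing stack one slot leftward before any global repacking) are invented for the example.

```python
class MultiStack:
    """k stacks sharing one array of m slots (illustrative sketch only).

    Dynamic initial allocation: a stack claims space only on its first
    push.  Local reallocation: on overflow, a stack first tries to slide
    itself one slot left instead of repacking the whole memory area.
    """

    def __init__(self, k, m):
        self.k, self.m = k, m
        self.mem = [None] * m
        self.base = [None] * k     # first slot of stack i; None = unallocated
        self.top = [None] * k      # one past stack i's last used slot

    def _limit(self, i):
        """First slot stack i may NOT use (base of next allocated stack)."""
        for j in range(i + 1, self.k):
            if self.base[j] is not None:
                return self.base[j]
        return self.m

    def _floor(self, i):
        """One past the last slot of the previous allocated stack."""
        for j in range(i - 1, -1, -1):
            if self.base[j] is not None:
                return self.top[j]
        return 0

    def push(self, i, x):
        if self.base[i] is None:                    # dynamic initial allocation:
            lo, hi = self._floor(i), self._limit(i)  # claim the midpoint of the
            if lo >= hi:                             # free gap between neighbors
                raise MemoryError("no free slot for stack %d" % i)
            self.base[i] = self.top[i] = (lo + hi) // 2
        if self.top[i] == self._limit(i):            # overflow
            if self.base[i] > self._floor(i):        # local reallocation: slide
                b, t = self.base[i], self.top[i]     # this stack one slot left
                self.mem[b - 1:t - 1] = self.mem[b:t]
                self.base[i] -= 1
                self.top[i] -= 1
            else:
                raise MemoryError("local move failed; full repack needed")
        self.mem[self.top[i]] = x
        self.top[i] += 1

    def pop(self, i):
        if self.base[i] is None or self.top[i] == self.base[i]:
            raise IndexError("pop from empty stack %d" % i)
        self.top[i] -= 1
        return self.mem[self.top[i]]
```

In a full implementation, the final `MemoryError` branch would instead trigger a global reorganization (for example, a Garwick-style repacking that redistributes free space among all the stacks); it is elided here to keep the sketch short.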

Structured tools and conditional logic: an empirical investigation

Prior research has identified two psychological processes that appear to be used by programmers when they perform design and coding tasks: (a) taxonomizing—identifying the conditions that evoke particular actions; and (b) sequencing—converting the taxa into a linear sequence of program code. Three structured tools—structured English, decision tables, and decision trees—were investigated in a laboratory experiment to determine how they facilitated these two processes. When taxonomizing had to be undertaken, structured English outperformed decision tables, and decision trees outperformed structured English. When sequencing had to be undertaken, decision trees and structured English outperformed decision tables, but decision trees and structured English evoked the same level of performance.
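For readers unfamiliar with the tools the experiment compares, the contrast between a decision table and a decision tree can be sketched in code. The discount policy below is invented purely for illustration; the point is that the table enumerates every combination of conditions up front (the taxonomizing step is done for the reader), while the tree tests conditions one at a time, mirroring the sequencing step of turning taxa into linear code.

```python
# Decision table: every (condition, condition) combination is listed
# explicitly, mapping directly to an action.
DISCOUNT_TABLE = {
    # (is_member, order_over_50): discount percent
    (True,  True):  15,
    (True,  False): 10,
    (False, True):   5,
    (False, False):  0,
}

def discount_from_table(is_member, order_over_50):
    return DISCOUNT_TABLE[(is_member, order_over_50)]

# Decision tree: the same policy as a linear sequence of nested tests.
def discount_from_tree(is_member, order_over_50):
    if is_member:
        if order_over_50:
            return 15
        return 10
    if order_over_50:
        return 5
    return 0
```

Both functions encode identical logic; which form helps a programmer more depends, as the study examines, on whether the task at hand is enumerating condition combinations or linearizing them into code.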
