Research and Advances

Volume rendering

Volumetric rendering speaks volumes for data 20 orders of magnitude apart—from human anatomy to neuroanatomy, and from electrostatic charges of macromolecules to failure analysis of manufactured parts.

Parallel thinning with two-subiteration algorithms

Two parallel thinning algorithms are presented and evaluated in this article. Both use two-subiteration approaches: (1) alternately deleting north-and-east and then south-and-west boundary pixels, and (2) alternately applying a thinning operator to one of two subfields. The algorithms are proven to preserve image connectivity, and their speed and medial-curve thinness are compared with other two-subiteration approaches and with a fully parallel approach. Both algorithms produce very thin medial curves, and the second achieves the fastest overall parallel thinning.
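The alternation in approach (1) can be sketched in a few lines of Python. This is an illustrative skeleton only, not the article's exact operator: the deletability test used here is the common crossing-number condition, and the article's algorithms add further conditions to guarantee connectivity preservation under parallel deletion.

```python
# Illustrative two-subiteration thinning in the style of approach (1):
# one subiteration deletes north/east boundary pixels, the next deletes
# south/west ones. The simplicity test (neighbour count and crossing
# number) is a generic assumption, not the article's precise rules.

def neighbors(img, r, c):
    """8-neighbours of (r, c), clockwise starting from north."""
    offs = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    return [img[r + dr][c + dc] for dr, dc in offs]

def transitions(n):
    """Number of 0->1 transitions in the circular neighbour sequence."""
    return sum((n[i] == 0) and (n[(i + 1) % 8] == 1) for i in range(8))

def thin_pass(img, directions):
    """Mark, then delete, simple boundary pixels whose background
    neighbour lies in one of the given directions ('N', 'E', 'S', 'W')."""
    side = {'N': 0, 'E': 2, 'S': 4, 'W': 6}  # index into the neighbour order
    to_delete = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            if img[r][c] != 1:
                continue
            n = neighbors(img, r, c)
            if not any(n[side[d]] == 0 for d in directions):
                continue  # not a boundary pixel on the requested sides
            if 2 <= sum(n) <= 6 and transitions(n) == 1:
                to_delete.append((r, c))  # simple pixel: safe to remove
    for r, c in to_delete:
        img[r][c] = 0
    return len(to_delete)

def thin(img):
    """Alternate N/E and S/W subiterations until no pixel changes."""
    while thin_pass(img, "NE") + thin_pass(img, "SW"):
        pass
    return img
```

Because each subiteration only touches boundary pixels on two opposite-free sides, all deletions within one subiteration could be applied in parallel, which is the point of the two-subiteration design.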

Concurrent operations on priority queues

Among the fastest priority queue implementations, skew heaps have properties that make them particularly well suited to concurrent manipulation of shared queues. A concurrent version of the top-down implementation of skew heaps can be derived from previously published sequential versions by almost mechanical transformations. This implementation requires O(log n) time to enqueue or dequeue an item, but on a MIMD machine it allows new operations to begin after only O(1) time. Thus, there is potential for significant concurrency when multiple processes share a queue. Applications to problems in graph theory and simulation are discussed in this article.
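As a point of reference for the top-down implementation mentioned above, here is a minimal sequential sketch of a top-down skew heap (names are illustrative; the article's concurrent version is obtained from sequential code like this by the transformations it describes):

```python
# Minimal top-down skew heap (min-heap). The key property: meld walks the
# rightmost paths top-down, swapping children at every step, which keeps
# the amortized cost of meld, enqueue, and dequeue at O(log n).

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def meld(a, b):
    """Merge two skew heaps, returning the new root."""
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a  # keep the smaller root on top
    # Meld b into a's right subtree, then unconditionally swap children.
    a.left, a.right = meld(a.right, b), a.left
    return a

def enqueue(root, key):
    return meld(root, Node(key))

def dequeue(root):
    """Return (minimum key, new root)."""
    return root.key, meld(root.left, root.right)
```

Since both enqueue and dequeue reduce to meld, and meld descends the heap top-down, a concurrent version can release the lock on each node after its children are fixed up, which is what lets a new operation start after only O(1) time.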

Fast parallel thinning algorithms: parallel speed and connectivity preservation

A recently published parallel thinning approach [4] is evaluated, and an improvement is suggested that preserves certain diagonal lines which the original algorithm does not. A unified notion of an iteration (or subiteration) and of parallel speed is presented, and in terms of parallel speed the algorithm is argued to be comparable to other two-subiteration algorithms. Its parallel speed is compared experimentally with that of the original algorithm it improves [12], showing that, unlike execution time on a specific machine, parallel speed is not improved. Finally, a more complete connectivity analysis illustrates additional conditions sufficient for applying the basic thinning operator used in both algorithms fully in parallel while maintaining all image connectivity properties.

High level knowledge sources in usable speech recognition systems

The authors detail an integrated system which combines natural language processing with speech understanding in the context of a problem solving dialogue. The MINDS system uses a variety of pragmatic knowledge sources to dynamically generate expectations of what a user is likely to say.

Efficient distributed event-driven simulations of multiple-loop networks

Simulating asynchronous multiple-loop networks is commonly considered a difficult task for parallel programming. Two examples of asynchronous multiple-loop networks are presented in this article: a stylized queuing system and an Ising model. In both cases, the network is an n × n grid on a torus and includes at least on the order of n² feedback loops. A new distributed simulation algorithm is demonstrated on these two examples. The algorithm combines three elements: (1) the bounded lag restriction, (2) minimum propagation delays, and (3) the so-called opaque periods. We prove that if N processing elements (PEs) execute the algorithm in parallel and the simulated system exhibits sufficient density of events, then, on average, processing one event requires O(log N) instructions of one PE. Experiments on a shared-memory MIMD bus computer (Sequent's Balance) and on a SIMD computer (Connection Machine) show speedups greater than 16 on 25 PEs of a Balance and greater than 1,900 on 2¹⁴ PEs of a Connection Machine.
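The bounded lag restriction, element (1), can be illustrated with a small sequential sketch. This is a deliberate simplification, not the article's algorithm: it ignores propagation delays and opaque periods and only shows the core rule that events whose timestamps fall within a fixed lag bound B of the earliest pending timestamp may be processed concurrently in one round.

```python
# Sketch of the bounded-lag rule: group timestamped events into rounds
# such that every event in a round lies within B of the round's earliest
# timestamp. All events in one round could be dispatched to PEs in
# parallel; later events must wait for a subsequent round.

def bounded_lag_rounds(events, B):
    """events: list of (timestamp, name) pairs. Returns the events
    grouped into rounds of concurrently processable events."""
    pending = sorted(events)
    rounds = []
    i = 0
    while i < len(pending):
        floor = pending[i][0]  # earliest unprocessed timestamp
        batch = []
        while i < len(pending) and pending[i][0] < floor + B:
            batch.append(pending[i])
            i += 1
        rounds.append(batch)
    return rounds
```

In the full algorithm, minimum propagation delays and opaque periods let PEs prove that no earlier event can still arrive, widening what each round may safely process.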

The use of memory in text processing

The performance of a natural language processing system should improve as it reads more and more texts. This is true both for systems intended as cognitive models and for practical text processing systems. Permanent long-term memory should be useful during all stages of text understanding. For example, if, while reading a patent abstract about a new disk drive, a system can retrieve information about similar objects from memory, processing should be simplified. However, most natural language programs do not exhibit such learning behavior. We describe in this article how RESEARCHER, a program that reads, remembers, and generalizes from patent abstracts, makes use of its automatically generated memory to assist in low-level text processing, primarily disambiguation that could be accomplished in no other way. We describe both RESEARCHER's basic understanding methods and the integration of memory access, and we include an extended example of RESEARCHER processing a patent abstract using information about several other abstracts already in memory.