
Technical Perspective: An Experiment in Determinism


The advent of multicore processors means that complex software is becoming more complex: developers must turn to multiple threads to exploit hardware parallelism, yet it is widely held that parallel programming is far more difficult and error-prone than writing sequential code. In particular, the myriad allowable interleavings of thread execution mean that different runs of a program can produce different behaviors, including different outputs or—worse—bugs!

To address this concern, the recent past has seen considerable interest in deterministic parallelism: techniques to ensure that multiple executions of a program observe the same interleavings, regardless of variations in the hardware, the vagaries of the scheduler, the use of synchronization primitives, or the presence or absence of external workloads. The hope is that deterministic parallelism will make it easier to gain confidence in software quality through testing: for example, if a bug can manifest due to a particular ordering of reads and writes to objects in memory, then deterministic parallelism will ensure that this either always happens—thus making it easier to debug—or never happens, avoiding the bug altogether.

Some previous work has investigated providing deterministic parallelism in the language (Deterministic Parallel Java), at the level of the compiler and/or runtime system (Kendo, CoreDet, dThreads), or within the hardware (DMP, RCDC). Most of these schemes do little to make parallel programming inherently simpler, and most are limited to multithreaded (rather than multiprocess) systems. In the following paper, however, Aviram et al. explore the idea of building an operating system that inherently enforces deterministic parallelism both within and across all hosted programs.

Most traditional operating systems provide interfaces that are nondeterministic; for example, creating a new process with fork() or opening a file with open() returns a handle whose value depends on the set of prior operations performed, either systemwide or within the current process; similar behavior can be seen with memory allocations. These values can directly or indirectly affect control flow and general computation, thus leading to different behaviors even if there are no explicit 'race conditions.' And, of course, explicit nondeterminism is often exposed via operations such as wait() on a process or condition variable, or invoking select() on a set of file descriptors.
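The handle problem is easy to see in miniature. The following toy sketch (a hypothetical ToyKernel class, not real syscalls) mimics the POSIX rule that open() returns the lowest unused descriptor, so the value a program receives depends on its entire prior history of opens and closes:

```python
class ToyKernel:
    """Toy model of POSIX-style handle allocation (illustration only)."""

    def __init__(self):
        self.table = {}  # descriptor -> resource name

    def open(self, name):
        fd = 0
        while fd in self.table:  # lowest unused slot, as in POSIX
            fd += 1
        self.table[fd] = name
        return fd

    def close(self, fd):
        del self.table[fd]

k1 = ToyKernel()
k1.open("a")
fd1 = k1.open("log")       # slot 0 is taken, so "log" gets descriptor 1

k2 = ToyKernel()
k2.close(k2.open("a"))     # open then close frees slot 0 again
fd2 = k2.open("log")       # same file, but descriptor 0 this time
```

Two runs that differ only in an intervening close receive different handles for the same logical resource; if the program ever branches on such a value, its behavior diverges.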

To avoid such issues, the authors take a clean-slate approach with their Determinator operating system. In Determinator, rather than threads sharing an address space, each independent thread of execution gets a private workspace. This is a copy of the nominal global state, and all of the thread’s read and write operations are performed directly on the copy. At thread exit—or certain other well-defined points—the effects of the thread’s execution are reconciled with the global state. This can be considered analogous to how modern revision control systems operate: a person "checks out" a private copy of the data, makes whatever changes he or she wishes, and then "checks in" the changes. The use of private workspaces eliminates any nondeterminism introduced by fine-grained interleavings; in essence, read/write data races are impossible, since any thread will only ever read values that are canonical, or that it has itself written.
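The check-out/check-in cycle can be sketched as follows. This is a toy, sequential simulation of the model—not Determinator's implementation—in which each task gets a private dictionary copy of the shared state, and only its writes (the diff against the base snapshot) are merged back at the reconciliation point:

```python
def run_deterministically(base, tasks):
    """Run each task on a private snapshot of `base`, then merge the
    write sets back. Toy model of Determinator-style workspaces."""
    write_sets = []
    for task in tasks:
        workspace = dict(base)        # "check out" a private copy
        task(workspace)               # reads/writes hit the copy only
        # record only what this task changed relative to the snapshot
        diff = {k: v for k, v in workspace.items() if base.get(k) != v}
        write_sets.append(diff)
    merged = dict(base)
    for diff in write_sets:           # "check in" at reconciliation
        merged.update(diff)
    return merged

state = run_deterministically(
    {"x": 1, "y": 2},
    [lambda ws: ws.update(x=ws["x"] + 10),   # sees only canonical values
     lambda ws: ws.update(y=ws["y"] * 3)])   # and its own writes
# state == {"x": 11, "y": 6}, independent of any "scheduling" order
```

Because each task reads only the canonical snapshot plus its own writes, the outcome is the same no matter how the tasks would have interleaved on real hardware.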

Much like with revision control systems, however, write/write conflicts are possible: two child threads could modify the same (logically shared) location during their execution. In Determinator this is detected during the reconciliation phase and causes a runtime exception. The authors argue that turning write/write conflicts into readily reproducible bugs simplifies the programming model, and avoids any reliance on "benign" data races.
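Extending the toy workspace model above, conflict detection amounts to checking, at merge time, whether two write sets touch the same location. This sketch (a hypothetical reconcile helper; the policy of tolerating identical values is my simplification, not necessarily Determinator's) raises an exception on conflicting writes:

```python
def reconcile(base, write_sets):
    """Merge per-workspace write sets into `base`; conflicting writes
    to the same key raise, turning the race into a reproducible error.
    Toy model only—Determinator works at the memory-page level."""
    merged = dict(base)
    writer = {}  # key -> index of the workspace that last wrote it
    for i, diff in enumerate(write_sets):
        for key, value in diff.items():
            if key in writer and merged[key] != value:
                raise RuntimeError(
                    f"write/write conflict on {key!r} between "
                    f"workspaces {writer[key]} and {i}")
            writer[key] = i
            merged[key] = value
    return merged

ok = reconcile({"x": 0, "y": 0}, [{"x": 1}, {"y": 2}])  # disjoint: fine
# reconcile({"x": 0}, [{"x": 1}, {"x": 2}])  -> RuntimeError, every run
```

Crucially, the exception fires on every execution with these inputs, not just on the unlucky interleavings, which is exactly the reproducibility argument the authors make.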

Determinator also controls other aspects of nondeterminism. For example, identifiers returned by the kernel are thread-private rather than global, and all synchronization operations are deterministic, rather than first-come-first-served (like many mutual exclusion locks). This leads to a system with quite a different "feel" from traditional operating systems, yet the authors demonstrate it is possible to run a number of parallel benchmark programs without any ill effect. They also describe an emulation layer that provides some familiar Unix APIs, albeit with occasionally different semantics. A good example is the implementation of process synchronization: while waitpid() can easily be supported, the nondeterminism of wait() cannot be. Instead, wait() becomes "wait for the child process with the lowest (thread-local) process id," which, in many cases, suffices.
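The deterministic wait() emulation can be sketched in a few lines. This is a hypothetical illustration using Python threads as stand-ins for Determinator child processes (the Child class and deterministic_wait helper are my inventions, not the system's API): rather than reaping whichever child exits first, it always blocks on the child with the lowest thread-local id.

```python
import threading

class Child:
    """Toy stand-in for a child process: runs fn on its own thread."""
    def __init__(self, fn):
        self.result = None
        self._t = threading.Thread(
            target=lambda: setattr(self, "result", fn()))
        self._t.start()

    def join(self):
        self._t.join()
        return self.result

def deterministic_wait(children):
    """Emulated wait(): reap the child with the lowest thread-local id,
    regardless of which child actually finishes first."""
    lowest = min(children)            # deterministic choice
    child = children.pop(lowest)
    return lowest, child.join()       # block until *that* child exits

children = {3: Child(lambda: "late"), 1: Child(lambda: "early")}
pid, value = deterministic_wait(children)  # always reaps child 1
```

Even if child 3 happens to finish first on a given run, every execution reaps child 1 first, so the caller's control flow cannot depend on scheduling luck.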

Determinator is an interesting and thought-provoking piece of research. However, it is unclear whether future operating systems will be built this way. There are a number of challenges in supporting I/O operations, particularly those that interact with users or remote systems, and in supporting inherently nondeterministic computation, such as generating cryptographic keys.

This is not just a limitation of Determinator, but rather of most work in the field. Although deterministic parallelism has the potential to reduce the number of allowable behaviors a program (or system) can express, which should increase the efficacy of testing, it does not help with the fact that most useful programs behave differently given different inputs. Relying on extensive testing to validate the correctness of programs is intellectually unsatisfactory, particularly given the woefully inadequate coverage obtained, and the general lack of any specification. Nonetheless, it is possible that deterministic parallelism will make it all a smidge easier—and that can’t be a bad thing, can it?
