
Technical Perspective: Transactions Are Tomorrow’s Loads and Stores


In computer science, when we say "time is money," we typically refer to two types of time that determine the costs and benefits of a given computer program: the time it takes the program to run, and the time it takes to write and maintain it. There is a delicate trade-off between these two types of time: the faster we want a program to run, the more time we need to spend when writing and maintaining it, and vice versa.

Until very recently, it seemed this trade-off could be mostly ignored. The job of making programs run faster fell into the lap of the hardware architects, who continued to deliver advances in single CPU clock speed at a reliable pace. These reliable speed increases allowed software engineers and programming language designers to focus on adding software constructs that offered substantial reductions in the time it takes to write and maintain code. How was this done? The terms that come to mind are abstraction, modularity, and compositionality.

Unfortunately, as we have all heard, things are about to change dramatically. Moore’s Law has not been repealed—each year more and more transistors are being fit into the same space—but CPU clock speeds can no longer be effectively increased. Instead, hardware designers have turned to multicore architectures, in which multiple computing cores are included on each processor chip. The switch to multicore architectures promises increased parallelism, but not increased single-thread performance. Even if this increased parallelism is delivered at a reliable pace, the hardware designers cannot by themselves deliver reliable increases in the speed at which programs run. This job will fall into the laps of software engineers.

The main tools for handling concurrency in today’s programming languages are locks—software constructs that allow sequences of loads and stores to access data in a mutually exclusive manner. Indeed, lock-based programs have been known to deliver amazing performance on multicore architectures. However, it is becoming clear that, while locks will allow us to continue to reduce the time it takes programs to run, they will cause the time it takes us to write and maintain those programs to shoot back up.
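To make the contrast concrete, here is a minimal sketch of the lock-based style, written in Haskell (chosen because it is the setting of the work discussed below); the Account type and deposit function are illustrative only, with an MVar standing in as the lock that guards the balance.

    import Control.Concurrent.MVar

    -- An account protected by a lock: the MVar doubles as a mutex
    -- whose contents are the balance itself.
    type Account = MVar Int

    -- Take the lock, update the balance, release the lock.
    deposit :: Account -> Int -> IO ()
    deposit acc amount = modifyMVar_ acc (\bal -> return (bal + amount))

Moving money between two such accounts already requires holding two locks at once, in an order that every caller must somehow agree on.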

The heart of the problem is, perhaps, that no one really knows how to organize and maintain large systems that rely on locking. Locks are not modular and do not compose, and the association between locks and data is established mostly by convention. Ultimately, the association exists only in the mind of the programmer, and may be documented only in comments.

A promising solution to this problem is the introduction of atomic memory transactions as a multicore programming abstraction. While transactions have been used for years in the database community, they are now being proposed as an equal partner to, and perhaps even a replacement for, the loads and stores we typically use in our programs. The idea is simple: encapsulate a sequence of loads and stores within a transaction, with the guarantee that either all of the operations take place or none do, and that when they do, they appear to other threads to happen atomically, as one indivisible operation.
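As a rough illustration, the sketch below uses Haskell’s software transactional memory interface (which grew out of the work discussed below); the account example is hypothetical. The loads and stores are readTVar and writeTVar, and atomically runs the enclosed sequence as one indivisible transaction.

    import Control.Concurrent.STM

    -- A transactional variable holding an account balance.
    type Account = TVar Int

    -- The reads and writes inside this block either all take effect
    -- or none do, and other threads see them as a single step.
    deposit :: Account -> Int -> STM ()
    deposit acc amount = do
      bal <- readTVar acc
      writeTVar acc (bal + amount)

    main :: IO ()
    main = do
      acc <- atomically (newTVar 0)
      atomically (deposit acc 10)
      atomically (readTVar acc) >>= print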

Work on the design of efficient transactional memory implementations has been proceeding in earnest. However, there is a missing element: a framework of transactional memory-based concurrency that would provide the modularity and compositionality necessary when designing and maintaining large-scale concurrent systems.

This is where the breakthrough work on composable memory transactions by Tim Harris, Simon Marlow, Simon Peyton Jones, and Maurice Herlihy takes center stage. They have provided, for the first time, a concurrent language model and a set of constructs that allow true simplicity in transactional programming.

One source of difficulty they had to overcome was that transactions are optimistic: they are tried but may fail and have to be rolled back, so one cannot allow operations with irrevocable side effects, such as I/O, to take place within transactions.

The authors’ solution was to use transactions within a purely declarative language, which, as it turns out, is a perfect match for transactions since the type system explicitly separates computations that may have side effects from effect-free ones.
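To see how the types enforce this separation, consider a minimal sketch in Haskell, the purely declarative language of the authors’ work (the audit and report functions are hypothetical): transactional code lives in the STM type, irrevocable side effects live in IO, and atomically is the only bridge from one to the other.

    import Control.Concurrent.STM

    -- readTVar   :: TVar a -> STM a    (transactional memory access)
    -- putStrLn   :: String -> IO ()    (irrevocable side effect)
    -- atomically :: STM a -> IO a      (the only bridge between the two)

    audit :: TVar Int -> STM Int
    audit acc = readTVar acc
      -- putStrLn "auditing"   -- would be rejected: IO is not allowed in STM

    report :: TVar Int -> IO ()
    report acc = do
      bal <- atomically (audit acc)   -- the side effect happens outside
      putStrLn ("balance: " ++ show bal)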

Another big problem was that concurrent programming requires threads to wait for events caused by other threads. Waiting on a condition outside a transaction greatly limits what the transaction can do, while waiting inside a transaction can leave it stuck.

The authors solved the problem by introducing new transactional constructs, among them an elegant retry command that allows a transaction to effectively abort, then restart only after a potentially good event has increased the likelihood of the condition being met. Quite surprisingly, retry is a key factor in allowing sequences of transactional actions to be composed so they all take effect together.
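A minimal sketch of retry in that setting, again using a hypothetical account: a withdrawal that finds too little money abandons the transaction, and the runtime reruns it only after another thread has updated one of the variables it read.

    import Control.Concurrent.STM

    type Account = TVar Int

    -- If the balance is insufficient, retry rolls the transaction back;
    -- it is re-executed only after some other thread writes to a TVar
    -- this transaction has read (here, the account itself).
    withdraw :: Account -> Int -> STM ()
    withdraw acc amount = do
      bal <- readTVar acc
      if bal < amount
        then retry
        else writeTVar acc (bal - amount)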

The language structure, together with the added transactional composition constructs, provides a clean framework that allows transactions to compose without giving up their natural advantages over locks: concurrent programmers can program using atomic operations that span multiple objects in memory, without having to break abstraction barriers.
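Continuing the hypothetical account sketch, the point is visible in a few lines: withdraw and deposit are ordinary transactional actions that could be written in separate modules, yet they compose into a single atomic transfer, and orElse composes alternatives in the same way.

    import Control.Concurrent.STM

    type Account = TVar Int

    deposit :: Account -> Int -> STM ()
    deposit acc amount = readTVar acc >>= writeTVar acc . (+ amount)

    withdraw :: Account -> Int -> STM ()
    withdraw acc amount = do
      bal <- readTVar acc
      if bal < amount then retry else writeTVar acc (bal - amount)

    -- Two independently written actions compose into one atomic step
    -- spanning two accounts; no thread ever sees the money in flight.
    transfer :: Account -> Account -> Int -> IO ()
    transfer from to amount =
      atomically (withdraw from amount >> deposit to amount)

    -- orElse composes alternatives: if the first withdraw retries,
    -- the second is tried instead, all within the same transaction.
    transferEither :: Account -> Account -> Account -> Int -> IO ()
    transferEither from1 from2 to amount =
      atomically ((withdraw from1 amount `orElse` withdraw from2 amount)
                  >> deposit to amount)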

If multicore architectures are the way to continue improving the time it takes programs to run, then perhaps composable memory transactions are the way to improve the time it takes us to write and maintain them.
