Architecture and Hardware News

Containers Push Toward the Mayfly Server

The container revolution represents a large-scale shift in thinking about multitasking systems.

The drive for efficiency in servers is changing the way applications and operating systems interact. The process has accelerated in just the past five years, as server-farm operators have moved on from virtual-machine technology as a way of improving hardware utilization toward even more streamlined options. The work has gone as far as compiling the operating system and application into a single block of software, stripping out any unused services to reduce both memory footprint and startup times.

Speaking about a project he and fellow researcher Anil Madhavapeddy worked on to pursue more efficient server software, Richard Mortier, University Lecturer in the University of Cambridge’s Computer Laboratory, says: “The original motivation that Anil and I had was that you should be able to write software for the cloud, particularly for network-connected services. But if we were to do that, what would it look like? Related to that was the idea that it should be possible to build software without having to worry about what platform it was targeted for.”

To a limited extent, the move to virtualization provided an answer for the second problem. Virtualization lets completely different operating systems and their associated applications share the same processors on a server blade. A hypervisor manages and schedules the operating systems running within each virtual machine (VM).

The problem with virtualization is that each VM partition calls for a complete installation of the operating system and its support software, even if those partitions run the same versions and differ only in terms of the applications they run or the users who own them. The container, an approach popularized by companies such as Docker, removes much of this overhead by sharing one operating-system image among multiple partitions. Each container stores only the additional services and tasks required by the applications it holds, which can greatly reduce the memory footprint. Runtime performance also improves because full virtualization demands multiple context switches whenever I/O calls are made: not only does the operating system need to switch into a supervisor mode to handle I/O, the hypervisor itself forces a switch to a more heavily protected mode in order to service the I/O request.
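
The memory saving is easy to see with rough numbers. A back-of-the-envelope sketch, assuming a hypothetical 1GB guest-OS image per VM and 50MB of per-container services (both figures are illustrative, not from the article):

```python
# Back-of-the-envelope comparison of memory footprint for N workloads
# hosted as full VMs versus containers sharing one OS image.
# All sizes are illustrative assumptions, in megabytes.

GUEST_OS_MB = 1024     # full OS install duplicated inside every VM
SHARED_OS_MB = 1024    # single OS image shared by all containers
PER_APP_MB = 50        # services/tasks unique to each workload

def vm_footprint(n_workloads):
    # Every VM carries its own complete OS plus the application.
    return n_workloads * (GUEST_OS_MB + PER_APP_MB)

def container_footprint(n_workloads):
    # One OS image is shared; each container adds only its own services.
    return SHARED_OS_MB + n_workloads * PER_APP_MB

for n in (1, 10, 100):
    print(n, vm_footprint(n), container_footprint(n))
```

Under these assumptions, 100 workloads cost roughly 107GB as VMs but only about 6GB as containers; the duplicated guest OS dominates the VM figure.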

Studies by Ericsson and IBM have found containers to have little more overhead than a conventional operating system running on bare metal. Virtualized installations imposed a performance penalty for I/O-intensive applications, although improvements in hardware support for virtualization have narrowed the performance gap.

Even as the performance gap has reduced, the growing base of support software that has emerged around Docker and its competitors has bolstered market acceptance of containers. Orchestration software, such as Google’s Kubernetes or Apache’s Mesos, has given large users of server farms the ability to quickly start containers and to delete them just as rapidly. Chris Aniszczyk, interim executive director of the Cloud Native Computing Foundation and former engineering manager at Twitter, a major user of Mesos, says the average container-based workload at the social-media company ran for just 10 minutes of execution time.

A variety of open source projects have emerged that build on top of orchestration. Services will find the best mixture of hardware for a given group of containers and link them to data stores. Monitoring and logging services ensure the containers run correctly and trigger remedial action if things go wrong. But as the layers of software around orchestration build up, they cause a divergence between development and deployment.
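The placement service described above is, at its core, a bin-packing problem. A minimal first-fit sketch of the idea (real orchestrators such as Kubernetes use far richer scoring; the node capacities and container demands here are hypothetical):

```python
# First-fit placement of containers onto nodes by free memory.
# A toy stand-in for an orchestrator's scheduler; real systems also
# weigh CPU, affinity, data locality, and spread constraints.

def place(containers, nodes):
    """containers: dict name -> MB required; nodes: dict name -> MB free.
    Returns a dict mapping container -> node, or raises if one cannot fit."""
    free = dict(nodes)
    placement = {}
    for name, need in containers.items():
        for node, avail in free.items():
            if avail >= need:
                free[node] = avail - need
                placement[name] = node
                break
        else:
            raise RuntimeError(f"no node can fit {name} ({need} MB)")
    return placement

demo = place({"web": 200, "cache": 600, "db": 900},
             {"node-a": 1000, "node-b": 1000})
print(demo)
```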

Casey Bisson, director of product management at cloud computing service provider Joyent, says it has become more difficult for developers to emulate an orchestration environment that can greatly affect how containerized applications run when deployed to servers. “We have to make the orchestration software more laptop friendly,” he says.

The unikernel architecture developed at the University of Cambridge aims to help solve the problem of achieving platform independence. “Today, we write software that embeds assumptions about the execution platform. If you need to change the platform, at best you need to recompile; worst case, it calls for a rewrite,” Mortier says. “One of the concepts behind unikernels is that you are pushing these things into the toolchain.”

Mortier says the unikernel borrows from the library operating system and exokernel research of the 1990s. “It should be possible to do better than we are now. If you look at how hardware resources are handled today, you have hardware that’s abstracted through a virtual-machine hypervisor. Then it goes through the operating system kernel, the language runtime, and then more libraries on top. You have four or five layers of scheduling all trying to do the same thing. It seems a bit ridiculous,” he argues.

The unikernel bakes the application and the operating system into one executable image, removing most of the layers between them. To prepare the unikernel, a compiler analyzes the application for its dependencies so only those parts of the operating system that are needed are incorporated into the image. Mortier says the model provides better security because the unikernel has a much smaller attack surface than a full operating system and its attendant libraries.
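
Conceptually, that dependency analysis is a transitive-closure computation over the operating system's module graph. A hedged sketch with an invented module graph (the real analysis happens inside the unikernel toolchain's build system; this only illustrates the idea):

```python
# Compute which OS modules must be linked into a unikernel image:
# the transitive closure of the application's direct dependencies.
# The module graph below is invented for illustration.

OS_MODULES = {
    "tcpip":      {"ethernet", "timer"},
    "ethernet":   {"dma"},
    "timer":      set(),
    "dma":        set(),
    "filesystem": {"blockdev"},   # unused by this app -> left out
    "blockdev":   set(),
}

def needed_modules(app_deps):
    needed, stack = set(), list(app_deps)
    while stack:
        mod = stack.pop()
        if mod not in needed:
            needed.add(mod)
            stack.extend(OS_MODULES[mod])
    return needed

image = needed_modules({"tcpip"})
print(sorted(image))  # filesystem/blockdev never enter the image
```

A networking application pulls in only the network stack and its transitive dependencies; the file-system modules, and their attack surface, stay out of the image entirely.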

Although in many cases the unikernel will run directly on the host processor with no intervening software, proponents envision implementations where multiple unikernels share one processor using hardware-assisted virtualization. The lack of layering and software duplication should make the installation more efficient than traditional techniques.

Bryan Cantrill, chief technology officer of Joyent, argues the restrictions of unikernels are too great to bear, decrying the idea as a move back to the days of single-tasking operating systems such as DOS. The lack of multitasking within the unikernel makes it difficult to run standard debug tools, he says. Mortier points out it is possible to link debug and trace libraries into the executable, and additional tooling is likely to develop to support unikernels.

A further apparent downside of the first generation of unikernels is that they are designed around single, strongly typed languages such as Haskell and OCaml. Yet unikernel projects such as RumpRun have opened the field to a wider range of languages and software by supporting the same POSIX interfaces as those provided by operating systems such as BSD. The Cambridge group favored OCaml because it made sense to them to focus on more modern languages for something that could potentially reshape how server-based computing is done. Docker has signaled its willingness to investigate the wider adoption of unikernels through its purchase of Unikernel Systems, a spinout from research at the University of Cambridge.

Mortier says unikernels are unlikely to be used in isolation, but will be aimed at particular jobs where security or performance are most important. “There is an assumption here that everything is networked,” he adds.

An experimental installation at the university is divided into micro-services provided by a group of networked processors. Conventional containers host services such as MediaWiki, with unikernels used to handle redirection to HTTPS addresses and the transport-layer security (TLS) protocol itself. The relatively low overhead of the unikernel software makes it possible to create them for single transactions.
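
The HTTPS-redirection job delegated to a unikernel in that installation is logic simple enough to fit in a few lines. A sketch of the core rewrite (the Cambridge unikernels are written in OCaml; this Python version only shows the behavior, and the hostname is hypothetical):

```python
# Minimal HTTP->HTTPS redirect logic of the kind the article describes
# handing to a small unikernel: answer every plain-HTTP request with
# a 301 pointing at the same path on the HTTPS origin.

def redirect_response(host, path):
    """Build the status line and headers for a 301 redirect."""
    location = f"https://{host}{path}"
    return (
        "HTTP/1.1 301 Moved Permanently\r\n"
        f"Location: {location}\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )

print(redirect_response("wiki.example.org", "/index.php"))
```

Because the job is this small and stateless, an image that does nothing else can be booted per transaction and discarded.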

One 2015 experiment on a system called Jitsu reduced boot time to 20ms on an Intel server processor, compared to five seconds for a conventional web server running in a VM. Mortier says the rapid creation and deletion of unikernel-based services could enhance security by moving resources around the network. “There is no stable machine that can be targeted,” he claims.
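
The Jitsu figures make the per-transaction model concrete. A quick calculation using only the boot times reported above:

```python
# Relative cost of booting a fresh service instance per request, using
# the boot times reported in the article: 20 ms for a Jitsu unikernel
# versus five seconds for a conventional VM-hosted web server.

UNIKERNEL_BOOT_MS = 20
VM_BOOT_MS = 5000

speedup = VM_BOOT_MS // UNIKERNEL_BOOT_MS
print(speedup)  # the unikernel boots 250x faster

# At 20 ms, booting on demand fits inside a normal page-load budget,
# so a unikernel can exist only for the transaction it serves; a 5 s
# VM boot forces the server to stay resident between requests.
```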

The rapid movement of software and microservices around the network creates its own problems. Midokura systems engineer Cynthia Thomas points to issues such as “traffic tromboning,” in which traffic between micro-services and their data stores crisscrosses the network many times, making the connections look like the folded pipes in a brass instrument.

The tromboning effect not only increases the response time as perceived by the user, but can cause the address tables in networking equipment to run out of space because of the larger number of live connections they need to maintain between services that previously would have been hosted on a single machine. The support software for orchestration software is evolving hand-in-hand with virtual networking software to create dynamic clusters of microservices that make better use of the underlying network hardware.
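
The pressure on address tables is straightforward to quantify. A sketch assuming each microservice keeps a connection open to every other one (the all-pairs worst case; real topologies are sparser):

```python
# Worst-case growth in live network connections when a monolith is
# split into n microservices that all talk to each other. On a single
# machine these were in-process calls; spread across hosts, each pair
# becomes a tracked flow in switch and address tables.

def all_pairs_connections(n_services):
    return n_services * (n_services - 1) // 2

for n in (1, 4, 16, 64):
    print(n, all_pairs_connections(n))
# 1 service -> 0 network flows; 64 services -> 2,016 flows to track.
```

The quadratic growth is why orchestration and virtual-networking software work to cluster chatty services together rather than scatter them across the fabric.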

Although unikernels could improve security at several levels, one potential disadvantage of the containers that continue to run alongside those unikernels is the weakening of security compared to traditional VM environments. Most container platforms today use only software protection for isolation, and do not have recourse to the hardware enforcement available with a hypervisor-based VM environment.

Environments such as Docker use kernel-provided namespaces to provide software in each container with the illusion it is the sole inhabitant of the Linux system it sees. In principle, and as long as there are no security vulnerabilities in the underlying container software or operating system to exploit, the containerized application has no way to alter data in other containers running alongside it on the processor blade. However, researchers have published proof-of-concept attacks that use side-channel techniques to eavesdrop on neighboring containers. An attack published by researchers at the University of North Carolina in 2014 monitored contention in the cache to listen in on applications running in other containers.
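
The prime-and-probe idea behind such cache attacks can be illustrated with a deterministic toy model (a simulated direct-mapped cache shared by two tenants; real attacks measure access timing on actual shared hardware, and every name here is invented):

```python
# Toy prime+probe against a simulated direct-mapped cache shared by
# two tenants. The attacker fills every set, lets the victim run, then
# checks which of its own lines were evicted -- revealing which cache
# set (and hence which secret-dependent address) the victim touched.

N_SETS = 8

class SharedCache:
    def __init__(self):
        # Which tenant's line currently occupies each cache set.
        self.owner = [None] * N_SETS

    def access(self, tenant, addr):
        self.owner[addr % N_SETS] = tenant

cache = SharedCache()

# 1. Prime: the attacker touches one address per set.
for s in range(N_SETS):
    cache.access("attacker", s)

# 2. The victim makes one secret-dependent memory access.
secret = 5
cache.access("victim", secret)

# 3. Probe: the set no longer holding the attacker's line leaks
#    which address the victim touched.
leaked = [s for s in range(N_SETS) if cache.owner[s] != "attacker"]
print(leaked)
```

No data crosses the namespace boundary; the leak rides entirely on contention for the shared cache, which is why software-only isolation cannot close it.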

The Cloud Native Computing Foundation’s Aniszczyk points out that the rapid creation and deletion of services that containers encourage make it more feasible to run them exclusively on their target hardware. As a result, timeslicing moves from being performed on the order of tens of milliseconds to that of minutes. This supports a model where server-farm operators can dynamically allocate entire blades to an application for the seconds or minutes it needs to run.

“With availability comes flexibility and dynamism,” says Mortier. “You can scale up and down quickly.”

In this way, the container revolution represents a large-scale shift in thinking about multitasking systems—one that treats compute as a resource made abundant by Moore’s Law, rather than the traditional view that processor capacity is scarce.

Further Reading

Morabito, R.
Power Consumption of Virtualization Technologies: An Empirical Investigation, Proceedings of the 8th IEEE/ACM International Conference on Utility and Cloud Computing (2015).

Madhavapeddy, A., et al.
Jitsu: Just-in-Time Summoning of Unikernels, Proceedings of the 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI ’15), 559–573 (2015).

Engler, D.R., Kaashoek, M.F., and O’Toole, Jr., J.
Exokernel: An Operating System Architecture for Application-Level Resource Management, Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles, 251–266 (1995).

Zhang, Y., Juels, A., Reiter, M.K., and Ristenpart, T.
Cross-Tenant Side-Channel Attacks in PaaS Clouds, Proceedings of CCS ’14, 990 (2014).

UF1 Figure. A container-based virtualization architecture.
