March 1972 - Vol. 15 No. 3


Features

Research and Advances

TENEX, a paged time sharing system for the PDP-10

TENEX is a new time sharing system implemented on a DEC PDP-10 augmented by special paging hardware developed at BBN. This report specifies a set of goals which are important for any time sharing system. It describes how the TENEX design and implementation achieve these goals. These include specifications for a powerful multiprocess large memory virtual machine, intimate terminal interaction, comprehensive uniform file and I/O capabilities, and clean flexible system structure. Although the implementation described here required some compromise to achieve a system operational within six months of hardware checkout, TENEX has met its major goals and provided reliable service at several sites and through the ARPA network.
Research and Advances

The design of the Venus operating system

The Venus Operating System is an experimental multiprogramming system which supports five or six concurrent users on a small computer. The system was produced to test the effect of machine architecture on complexity of software. The system is defined by a combination of microprograms and software. The microprogram defines a machine with some unusual architectural features; the software exploits these features to define the operating system as simply as possible. In this paper the development of the system is described, with particular emphasis on the principles which guided the design.
Research and Advances

An operating system based on the concept of a supervisory computer

An operating system which is organized as a small supervisor and a set of independent processes is described. The supervisor handles I/O with external devices and the file and directory system, schedules active processes and manages memory, handles errors, and provides a small set of primitive functions which it will execute for a process. A process is able to specify a request for a complicated action on the part of the supervisor (usually a wait on the occurrence of a compound event in the system) by combining these primitives into a “supervisory computer program.” The part of the supervisor which executes these programs may be viewed as a software implemented “supervisory computer.” The paper develops these concepts in detail, outlines the remainder of the supervisor, and discusses some of the advantages of this approach.
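The notion of a “supervisory computer program” can be pictured with a small sketch. The Python fragment below is only an illustration under assumed primitives (IO_DONE, MSG_WAITING) and an assumed list encoding; the paper's actual primitive set, encoding, and execution machinery are not reproduced here.

    # Hypothetical sketch: a process hands the supervisor a small "program"
    # built from primitive event tests; the supervisor re-evaluates it and
    # resumes the process only when the compound condition holds.
    # Primitive names and the program encoding are illustrative, not the
    # paper's actual interface.

    def io_done(state, device):
        return device in state["completed_io"]

    def message_waiting(state, queue):
        return len(state["queues"].get(queue, [])) > 0

    PRIMITIVES = {"IO_DONE": io_done, "MSG_WAITING": message_waiting}

    def run_supervisory_program(program, state):
        """Return True when every primitive in the program is satisfied
        (a compound 'wait' condition), False otherwise."""
        return all(PRIMITIVES[op](state, arg) for op, arg in program)

    # Example: wait until the disk transfer finishes AND a message arrives.
    state = {"completed_io": {"disk0"}, "queues": {"mailbox": ["hello"]}}
    program = [("IO_DONE", "disk0"), ("MSG_WAITING", "mailbox")]
    print(run_supervisory_program(program, state))  # True -> process may resume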
Research and Advances

A hardware architecture for implementing protection rings

Protection of computations and information is an important aspect of a computer utility. In a system which uses segmentation as a memory addressing scheme, protection can be achieved in part by associating concentric rings of decreasing access privilege with a computation. This paper describes hardware processor mechanisms for implementing these rings of protection. The mechanisms allow cross-ring calls and subsequent returns to occur without trapping to the supervisor. Automatic hardware validation of references across ring boundaries is also performed. Thus, a call by a user procedure to a protected subsystem (including the supervisor) is identical to a call to a companion user procedure. The mechanisms of passing and referencing arguments are the same in both cases as well.
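The ring validation described above can be sketched as a simple comparison of the executing ring number against per-segment brackets. The Python below is a rough illustration with invented field names (read_bracket, write_bracket, call_limit); the actual hardware rules for gates, argument validation, and returns are considerably richer.

    # Illustrative sketch of ring-based access validation, loosely following
    # the ring idea in the abstract; the bracket fields and rules here are
    # simplifications, not the paper's hardware specification.

    from dataclasses import dataclass

    @dataclass
    class SegmentDescriptor:
        write_bracket: int   # writes allowed from rings 0..write_bracket
        read_bracket: int    # reads allowed from rings 0..read_bracket
        call_limit: int      # outer rings up to call_limit may call via a gate

    def can_read(current_ring: int, seg: SegmentDescriptor) -> bool:
        # Lower ring numbers are more privileged; a reference is valid only
        # if the executing ring lies inside the segment's read bracket.
        return current_ring <= seg.read_bracket

    def can_write(current_ring: int, seg: SegmentDescriptor) -> bool:
        return current_ring <= seg.write_bracket

    def can_call_gate(current_ring: int, seg: SegmentDescriptor) -> bool:
        # A cross-ring call from a less privileged ring is permitted, without
        # trapping to the supervisor, if the caller is within the call limit.
        return current_ring <= seg.call_limit

    # Example: a user procedure in ring 4 calling a protected subsystem whose
    # gate admits callers from rings 0 through 5.
    protected = SegmentDescriptor(write_bracket=1, read_bracket=1, call_limit=5)
    print(can_call_gate(4, protected))  # True: the call proceeds through the gate
    print(can_write(4, protected))      # False: the reference would be faulted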
Research and Advances

Synchronization of communicating processes

Formalization of a well-defined synchronization mechanism can be used to prove that concurrently running processes of a system communicate correctly. This is demonstrated for a system consisting of many sending processes which deposit messages in a buffer and many receiving processes which remove messages from that buffer. The formal description of the synchronization mechanism makes it very easy to prove that the buffer will neither overflow nor underflow, that senders and receivers will never operate on the same message frame in the buffer, and that they will not run into a deadlock.
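The system being reasoned about is the classic bounded-buffer arrangement. A runnable Python model using counting semaphores is sketched below; it exhibits the properties the paper proves (no overflow, no underflow, no two processes on one frame, no deadlock), but it is a conventional semaphore solution, not the paper's formal synchronization mechanism.

    # Many senders deposit messages into a bounded buffer and many receivers
    # remove them. Counting semaphores keep the buffer from overflowing or
    # underflowing, and the lock keeps two processes off the same frame.

    import threading

    N = 8                            # number of message frames in the buffer
    buffer = [None] * N
    in_ptr, out_ptr = 0, 0

    empty = threading.Semaphore(N)   # free frames: senders block at 0 (no overflow)
    full = threading.Semaphore(0)    # filled frames: receivers block at 0 (no underflow)
    frame_lock = threading.Lock()    # serializes frame-pointer updates

    def send(msg):
        global in_ptr
        empty.acquire()
        with frame_lock:
            buffer[in_ptr] = msg
            in_ptr = (in_ptr + 1) % N
        full.release()

    def receive():
        global out_ptr
        full.acquire()
        with frame_lock:
            msg = buffer[out_ptr]
            out_ptr = (out_ptr + 1) % N
        empty.release()
        return msg

    # Example: several senders and receivers running concurrently.
    results = []
    senders = [threading.Thread(target=lambda i=i: [send((i, k)) for k in range(100)])
               for i in range(3)]
    receivers = [threading.Thread(target=lambda: [results.append(receive()) for _ in range(100)])
                 for _ in range(3)]
    for t in senders + receivers: t.start()
    for t in senders + receivers: t.join()
    print(len(results))  # 300: every deposited message was removed exactly once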
Research and Advances

A comparative analysis of disk scheduling policies

Five well-known scheduling policies for movable head disks are compared using the performance criteria of expected seek time (system oriented) and expected waiting time (individual I/O request oriented). Both analytical and simulation results are obtained. The variance of waiting time is introduced as another meaningful measure of performance, showing possible discrimination against individual requests. Then the choice of a utility function to measure total performance including system oriented and individual request oriented measures is described. Such a function allows one to differentiate among the scheduling policies over a wide range of input loading conditions. The selection and implementation of a maximum performance two-policy algorithm are discussed.
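For flavor, the sketch below compares three of the usual candidate policies (FCFS, SSTF, SCAN) by total head movement on one randomly generated batch of requests. The paper's analysis covers five policies and focuses on expected seek time, expected waiting time, and waiting-time variance under varying load, which this toy comparison does not reproduce.

    # Contrast three classic movable-head disk scheduling policies by total
    # head movement on a single batch of requests (cylinders 0..199 assumed).

    import random

    def fcfs(start, requests):
        # Serve requests strictly in arrival order.
        seek, head = 0, start
        for r in requests:
            seek += abs(r - head)
            head = r
        return seek

    def sstf(start, requests):
        # Always serve the pending request closest to the current head position.
        pending, seek, head = list(requests), 0, start
        while pending:
            r = min(pending, key=lambda x: abs(x - head))
            pending.remove(r)
            seek += abs(r - head)
            head = r
        return seek

    def scan(start, requests):
        # Sweep toward the high cylinders, then reverse (the "elevator" policy).
        up = sorted(r for r in requests if r >= start)
        down = sorted((r for r in requests if r < start), reverse=True)
        seek, head = 0, start
        for r in up + down:
            seek += abs(r - head)
            head = r
        return seek

    random.seed(1)
    reqs = [random.randint(0, 199) for _ in range(20)]
    for name, policy in [("FCFS", fcfs), ("SSTF", sstf), ("SCAN", scan)]:
        print(name, policy(100, reqs))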
Research and Advances

A study of storage partitioning using a mathematical model of locality

Both fixed and dynamic storage partitioning procedures are examined for use in multiprogramming systems. The storage requirement of programs is modeled as a stationary Gaussian process. Experiments justifying this model are described. By means of this model dynamic storage partitioning is shown to provide substantial increases in storage utilization and operating efficiency over fixed partitioning.
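The intuition can be illustrated with a toy Monte Carlo experiment (invented parameters, and independent Gaussian draws rather than the paper's stationary Gaussian process): fixed partitioning must give each program headroom for its own peak demand, whereas dynamic partitioning only needs headroom for the peak of the pooled demand, whose relative spread shrinks as programs are added.

    # Toy illustration, not the paper's model or numbers: each of k concurrent
    # programs has a storage demand drawn from a Gaussian, truncated at zero.
    # Fixed partitioning sizes every partition for an individual program's
    # likely peak; dynamic partitioning only covers the likely peak of the sum.

    import random, math

    k, mu, sigma, trials = 6, 100.0, 30.0, 10_000
    z = 2.0  # cover demand up to mean + 2 standard deviations

    fixed_mem = k * (mu + z * sigma)                 # per-program headroom
    dynamic_mem = k * mu + z * sigma * math.sqrt(k)  # headroom on the sum only

    fixed_util = dynamic_util = 0.0
    for _ in range(trials):
        demands = [max(0.0, random.gauss(mu, sigma)) for _ in range(k)]
        total = sum(demands)
        # Fixed: a program can use at most its own partition.
        used_fixed = sum(min(d, mu + z * sigma) for d in demands)
        fixed_util += used_fixed / fixed_mem
        # Dynamic: storage is pooled, capped by the (smaller) total memory.
        dynamic_util += min(total, dynamic_mem) / dynamic_mem

    print(f"fixed partitioning utilization:   {fixed_util / trials:.2f}")
    print(f"dynamic partitioning utilization: {dynamic_util / trials:.2f}")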
Research and Advances

Properties of the working-set model

A program's working set W(t, T) at time t is the set of distinct pages among the T most recently referenced pages. Relations between the average working-set size, the missing-page rate, and the interreference-interval distribution may be derived both from time-average definitions and from ensemble-average (statistical) definitions. An efficient algorithm for estimating these quantities is given. The relation to LRU (least recently used) paging is characterized. The independent-reference model, in which page references are statistically independent, is used to assess the effects of interpage dependencies on working-set size observations. Under general assumptions, working-set size is shown to be normally distributed.
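A direct, unoptimized computation of these time-average quantities is easy to write down. The sketch below (Python, illustrative reference string) computes the average working-set size and a missing-page rate for several window sizes T; the paper gives a considerably more efficient estimation algorithm.

    # W(t, T) is the set of distinct pages among the T most recently referenced
    # pages. Averaging |W(t, T)| over time gives the mean working-set size, and
    # counting references to pages not already in the working set gives a
    # missing-page rate, per the time-average definitions.

    def working_set_stats(refs, T):
        sizes, misses = [], 0
        for t in range(len(refs)):
            window = refs[max(0, t - T + 1): t + 1]   # T most recent references
            prev_ws = set(refs[max(0, t - T): t])     # working set just before t
            if refs[t] not in prev_ws:
                misses += 1
            sizes.append(len(set(window)))
        return sum(sizes) / len(sizes), misses / len(refs)

    # Example reference string with some locality.
    refs = [1, 2, 1, 3, 1, 2, 4, 4, 1, 2, 5, 1, 2, 3, 3, 2]
    for T in (2, 4, 8):
        s, m = working_set_stats(refs, T)
        print(f"T={T}: average working-set size {s:.2f}, missing-page rate {m:.2f}")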
