
Lower Bounds for External Memory Integer Sorting via Network Coding

Sorting extremely large datasets is a frequently occurring task in practice. These datasets are usually much larger than the computer’s main memory; thus, external memory sorting algorithms, first introduced by Aggarwal and Vitter, are often used. The complexity of comparison-based external memory sorting has been understood for decades by now; however, the situation remains elusive when the keys to be sorted are integers. In internal memory, one can sort a set of n integer keys of Θ(lg n) bits each in O(n) time using the classic Radix Sort algorithm; however, in external memory, no integer sorting algorithms are known that are faster than the simple comparison-based ones. Whether such algorithms exist has remained a central open problem in external memory algorithms for more than three decades.

In this paper, we present a tight conditional lower bound on the complexity of external memory sorting of integers. Our lower bound is based on a famous conjecture in network coding by Li and Li, who conjectured that network coding cannot achieve anything beyond the standard multicommodity flow rate in undirected graphs.

The only previous work connecting the Li and Li conjecture to lower bounds for algorithms is due to Adler et al., who obtain relatively simple lower bounds for oblivious algorithms (algorithms whose memory access pattern is fixed and independent of the input data). Unfortunately, obliviousness is a strong limitation, especially for integer sorting: we show that the Li and Li conjecture implies an Ω(n lg n) lower bound for internal memory oblivious sorting when the keys are Θ(lg n) bits. This is in sharp contrast to the classic (nonoblivious) Radix Sort algorithm. Indeed, going beyond obliviousness is highly nontrivial; we need to introduce several new methods and involved techniques, which are of independent interest, to obtain our tight lower bound for external memory integer sorting.


1. Introduction

Sorting is one of the most basic algorithmic primitives and has attracted lots of attention from the beginning of the computing era. Many classical algorithms have been designed for this problem, such as Merge Sort, Bubble Sort, Insertion Sort, etc. As sorting extremely large data has become essential for many applications, there has been a strong focus on designing more efficient algorithms for sorting big datasets.2 These datasets are often much larger than the computer’s main memory, and the performance bottleneck changes from being the number of CPU instructions executed to being the number of accesses to slow secondary storage. In this external memory setting, one usually uses the external memory model to analyze the performance of algorithms. External memory algorithms are designed to minimize the number of input/output (I/O)s between the internal memory and external memory (e.g., hard drives and cloud storage), and we measure the complexity of an algorithm in terms of the number of I/Os it performs.

Formally, the external memory model consists of a main memory that can hold M words of w bits each (the memory has a total of m = Mw bits), and an infinite (random access) disk partitioned into blocks of B consecutive words of w bits each (a block has a total of b = Bw bits). The input to an external memory algorithm is initially stored on disk and is assumed to be much larger than M. An algorithm can then read blocks into memory or write blocks to disk. We refer jointly to these two operations as an I/O. The complexity of an algorithm is measured solely in terms of the number of I/Os it makes.

Aggarwal and Vitter2 considered the sorting problem in the external memory model. A simple modification to the classic Merge Sort algorithm yields a comparison-based sorting algorithm that makes O((n/B) lg_{M/B}(n/B)) I/Os for sorting an array of n comparable records (each storable in a word of w bits). Notice that O(n/B) would correspond to linear I/Os, as this is the number of I/Os needed to read/write the input/output. Aggarwal and Vitter2 complemented their upper bound with a matching lower bound, showing that comparison-based external memory sorting algorithms must make Ω((n/B) lg_{M/B}(n/B)) I/Os. In the same paper, Aggarwal and Vitter also showed that any algorithm treating the keys as indivisible atoms, meaning that keys are copied to and from disk blocks, but never reconstructed via bit tricks and the like, must make Ω(min{n, (n/B) lg_{M/B}(n/B)}) I/Os. This lower bound does not assume a comparison-based algorithm, but instead makes an indivisibility assumption. Notice that the lower bound matches the comparison-based lower bound for large enough B (B > lg n suffices). The comparison and indivisibility settings have thus been (almost) fully understood for more than three decades.
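To get a sense of the magnitudes involved, here is a small sketch (ours; the machine parameters are illustrative choices of our own, not from the paper) that evaluates these bounds:

```python
import math

# Illustrative machine (our own choice of parameters): 1 TiB of 64-bit
# records, 16 GiB of main memory, 1 MiB disk blocks.
w = 64                         # bits per record/word
n = (2**40 * 8) // w           # records on disk        (2^37)
M = (2**34 * 8) // w           # words of main memory   (2^31)
B = (2**20 * 8) // w           # words per disk block   (2^17)

scan_ios = n / B                              # linear I/Os: one pass over the input
sort_ios = (n / B) * math.log(n / B, M / B)   # (n/B) * lg_{M/B}(n/B)

print(f"scan: {scan_ios:.3e} I/Os")
print(f"sort: {sort_ios:.3e} I/Os = {sort_ios / scan_ios:.2f} scans")
```

For these values, lg_{M/B}(n/B) = 20/14 < 2, so sorting costs less than two scans of the input; this is why external Merge Sort is considered close to linear cost in practice.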

However, if the input to the sorting problem is assumed to be w-bit integers and we allow arbitrary manipulations of the integers (hashing, XOR tricks, etc.), then the situation is completely different. In the standard internal memory computational model, known as the word-RAM, one can design integer sorting algorithms that far outperform comparison-based algorithms regardless of w. More concretely, if the word and key size is w = Θ(lg n), then Radix Sort solves the problem in O(n) time, and for arbitrary w, one can design sorting algorithms with a running time of O(n √(lg lg n)) in the randomized case6 and O(n lg lg n) in the deterministic case5 (both bounds assume that the word size and key size are within constant factors of each other). In external memory, no integer sorting algorithms faster than the comparison-based O((n/B) lg_{M/B}(n/B)) bound are known! Whether faster integer sorting algorithms exist was posed as an important open problem in the original paper by Aggarwal and Vitter2 that introduced the external memory model. Three decades later, we still do not know the answer to this question.
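To make the internal memory contrast concrete, here is a minimal least-significant-digit Radix Sort sketch (ours, a standard textbook variant, not code from the paper):

```python
def radix_sort(keys, key_bits, digit_bits):
    """Stable LSD radix sort: key_bits/digit_bits passes, each O(n + 2^digit_bits)."""
    mask = (1 << digit_bits) - 1
    for shift in range(0, key_bits, digit_bits):
        buckets = [[] for _ in range(1 << digit_bits)]
        for k in keys:                        # stable distribution by current digit
            buckets[(k >> shift) & mask].append(k)
        keys = [k for b in buckets for k in b]  # concatenate buckets in order
    return keys

# With w = Theta(lg n)-bit keys, pick digit_bits ~ lg n: then 2^digit_bits = O(n)
# buckets and w/digit_bits = O(1) passes, giving O(n) total time.
assert radix_sort([5, 3, 9, 0, 7], key_bits=4, digit_bits=2) == [0, 3, 5, 7, 9]
```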

In this paper, we present tight conditional lower bounds for external memory integer sorting via a central conjecture by Li and Li7 in the area of network coding. Our conditional lower bounds show that it is impossible to design integer sorting algorithms that outperform the optimal comparison-based algorithms, thus settling the complexity of integer sorting under the conjecture by Li and Li.

*  1.1. Network coding

The field of network coding studies the following communication problem over a network: Given a graph G with capacity constraints on the edges and k data streams, each with a designated source-sink pair of nodes (si, ti) in G, what is the maximum rate at which data can be transmitted concurrently between the source-sink pairs? A simple solution is to forward the data as indivisible packages, effectively reducing the problem to multicommodity flow (MCF). The key question in network coding is whether one can achieve a higher rate by using coding/bit-tricks. This question is known to have a positive answer in directed graphs, where the rate increase may be as high as a factor Ω(|G|) (by sending XORs of carefully chosen input bits); see, for example, Adler et al.1 However, the question remains wide open for undirected graphs, where there are no known examples in which network coding can do anything better than the multicommodity flow rate. The lack of such examples resulted in the following central conjecture in network coding:7

CONJECTURE 1 (UNDIRECTED k-PAIRS CONJECTURE). The coding rate is equal to the multicommodity flow rate in undirected graphs.

Despite the centrality of this conjecture, it has so far resisted all attempts at either proving or refuting it. Adler et al.1 made an exciting connection between the conjecture and lower bounds for algorithms. More concretely, they proved that if Conjecture 1 is true, then one immediately obtains nontrivial lower bounds for all of the following:

  • Oblivious external memory algorithms
  • Oblivious word-RAM algorithms
  • Oblivious two-tape Turing machines

In the above, oblivious means that the memory access pattern of the algorithm (or tape moves of the Turing machine) is fixed and independent of the input data. Thus proving Conjecture 1 would also give the first nontrivial lower bounds for all these classes of algorithms. One can view this connection in two ways: Either as exciting conditional lower bounds for (restricted) algorithms, or as a strong signal that proving Conjecture 1 will be very difficult.

In this paper, we revisit these complexity theoretic implications of Conjecture 1. Our results show that the restriction to oblivious algorithms is unnecessary. In more detail, we show that Conjecture 1 implies nontrivial (and in fact tight) lower bounds for external memory sorting of integers and for external memory matrix transpose algorithms. We also obtain tight lower bounds for word-RAM sorting algorithms when the word size is much larger than the key size, as well as tight lower bounds for transposing a b × b matrix on a word-RAM with word size b bits. The striking thing is that our lower bounds hold without any extra assumptions such as obliviousness, indivisibility, or being comparison-based. Thus, proving Conjecture 1 is as hard as proving super-linear algorithm lower bounds in the full-generality word-RAM model, a barrier far beyond current lower bound techniques! Moreover, we show that the assumption from previous papers about algorithms being oblivious makes a huge difference for integer sorting: We prove an Ω(n lg n) lower bound for sorting Θ(lg n) bit integers using an oblivious word-RAM algorithm with word size Θ(lg n) bits. This is in sharp contrast to the classic (nonoblivious) Radix Sort algorithm, which solves the problem in O(n) time. Thus, the previous restriction to oblivious algorithms may be very severe for some problems.

*  1.2. Lower bounds for sorting

Our main result for external memory integer sorting is the following connection to Conjecture 1:

THEOREM 2. Assuming Conjecture 1, any randomized algorithm for the external memory sorting problem with w = Ω(lg n) bit integers, having error probability at most 1/3, must make an expected

Ω(min{n, (n/B) · lg_{M/B}(n/B)})

I/Os.

Thus if we believe Conjecture 1, then even for randomized algorithms, there is no hope of exploiting integer input to improve over the simple external memory comparison-based algorithms (when B ≥ lg n such that the latter term in the lower bound is the min).

Now observe that because our lower bound only counts I/Os, the lower bound immediately holds for word-RAM algorithms when the word size is some b = Ω(lg n) by setting m = O(b) and B = b/w in the above lower bound (the CPU’s internal state, i.e., registers, can hold only a constant number of words). Thus, we get the following lower bound:

COROLLARY 3. Assuming Conjecture 1, any randomized word-RAM algorithm for sorting w = Ω(lg n) bit integers, having error probability at most 1/3 and word size b ≥ w bits, must spend

Ω(min{n, (nw/b) · lg(nw/b)})

time.

We note that a standard assumption in the word-RAM is a word size and key size of b, w = Θ(lg n) bits. For that choice of parameters, our lower bound degenerates to the trivial t = Ω(n). This has to be the case, as Radix Sort gives a matching upper bound. Nonetheless, our lower bound shows that when the key size is much smaller than the word size, one cannot sort integers in linear time (recall linear is O(nw/b) as this is the time to read/write the input/output).
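The parameter choices above can be checked directly (our worked substitution):

```latex
% Substituting b, w = \Theta(\lg n) into the bound of Corollary 3:
t \;=\; \Omega\Bigl(\min\Bigl\{n,\ \frac{nw}{b}\,\lg\frac{nw}{b}\Bigr\}\Bigr)
  \;=\; \Omega\bigl(\min\{n,\ \Theta(n)\,\lg\Theta(n)\}\bigr)
  \;=\; \Omega(n).
% If instead w = \lg n and b = \lg^2 n (keys much smaller than words),
% both terms are \omega(nw/b) = \omega(n/\lg n): no linear-time sort exists.
```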

Finally, we show that the obliviousness assumption made in the previous paper by Adler et al.1 allows one to prove very strong sorting lower bounds that even surpass the known (nonoblivious) Radix Sort upper bound:

THEOREM 4. Assuming Conjecture 1, any oblivious randomized word-RAM algorithm for sorting Θ(lg n) bit integers, having error probability at most 1/3 and word size Θ(lg n), must spend Ω(n lg n) time.

Thus, at least for the natural problem of integer sorting, being oblivious has a huge impact on the possible performance of algorithms. Our results are therefore not just an application of the previous technique to a new problem, but a great strengthening. Moreover, as we discuss in Section 3, removing the obliviousness assumption requires new and deep ideas that result in significantly more challenging lower bound proofs.

*  1.3. Lower bounds for matrix transpose

We also reprove an analog of the lower bounds by Adler et al.1 for the matrix transpose problem, this time without any assumptions of obliviousness. In the matrix transpose problem, the input is an n × n matrix A with w-bit integer entries. The matrix is given in row-major order, meaning that each row of A is stored in n/B blocks of B consecutive entries each. The goal is to compute A^T, that is, to output the column-major representation of A that stores n/B disk blocks for each column of A, each containing a consecutive range of B entries from the column.

THEOREM 5. Assuming Conjecture 1, any randomized algorithm for the external memory matrix transpose problem with w-bit integer entries, having error probability at most 1/3, must make an expected

[equation omitted]

I/Os.

Consider now the matrix transpose problem on the word-RAM with word size b bits (and thus memory size m = O(b)). Given an n × n matrix A with w-bit integer entries, the lower bound in Theorem 5 implies (by setting B = b/w):

COROLLARY 6. Assuming Conjecture 1, any randomized word-RAM algorithm for computing the transpose of an n × n matrix with w-bit integer entries, having error probability at most 1/3 and word size b bits, must spend

[equation omitted]

time.


2. Preliminaries

We now give a formal definition of the k-pairs communication problem and the multicommodity flow problem.

k-pairs communication problem. To keep the definition as simple as possible, we restrict ourselves to directed acyclic communication networks/graphs and we assume that the demand between every source-sink pair is the same. This will be sufficient for our proofs. For a more general definition, we refer the reader to Adler et al.1

The input to the k-pairs communication problem is a directed acyclic graph G = (V, E) where each edge e ∈ E has a capacity c(e) ∈ R+. There are k sources s1, …, sk ∈ V and k sinks t1, …, tk ∈ V. Typically, there is also a demand di between each source-sink pair, but for simplicity, we assume di = 1 for all pairs. This is again sufficient for our purposes.

Each source si receives a message Ai from a predefined set of messages A(i). It will be convenient to think of this message as arriving on an in-edge. Hence, we add an extra node Si for each source, which has a single out-edge to si. The edge has infinite capacity.

A network coding solution specifies for each edge e ∈ E an alphabet Γ(e) representing the set of possible messages that can be sent along the edge. For a node u ∈ V, define In(u) as the set of in-edges at u. A network coding solution also specifies, for each edge e = (u, v) ∈ E, a function f_e: ∏_{e′∈In(u)} Γ(e′) → Γ(e) that determines the message to be sent along the edge e as a function of all incoming messages at node u. Finally, a network coding solution specifies for each sink ti a decoding function σi: ∏_{e∈In(ti)} Γ(e) → A(i). The network coding solution is correct if, for all inputs A1, …, Ak ∈ ∏i A(i), it holds that σi applied to the incoming messages at ti equals Ai, that is, each sink must recover the intended message.

In an execution of a network coding solution, each of the extra nodes Si starts by transmitting the message Ai to si along the edge (Si, si). Then, whenever a node u has received a message a_e along every in-edge e ∈ In(u), it evaluates f_{e′}((a_e)_{e∈In(u)}) for every out-edge e′ and forwards the resulting message along e′.
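As a toy illustration of such an execution, consider the classic butterfly network, the standard directed example (alluded to in Section 1.1) in which coding beats plain forwarding. The sketch below is our own illustrative rendering, not code from the paper:

```python
def butterfly(b1: int, b2: int) -> tuple[int, int]:
    """One execution of a network coding solution on the directed butterfly.

    Source s1 must deliver bit b1 to sink t1, and s2 must deliver b2 to t2.
    Each sink also has a direct side edge from the *other* source, and a
    single middle edge of capacity one bit joins the two sides. Forwarding
    indivisible packets could serve only one pair per use of the middle
    edge; sending the XOR serves both pairs simultaneously.
    """
    middle = b1 ^ b2          # the one coded bit crossing the bottleneck
    at_t1 = middle ^ b2       # t1 combines it with b2 from its side edge
    at_t2 = middle ^ b1       # t2 combines it with b1 from its side edge
    return at_t1, at_t2

# Both sinks decode correctly on all four inputs.
assert all(butterfly(x, y) == (x, y) for x in (0, 1) for y in (0, 1))
```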

Following Adler et al.1 (and simplified a bit), we define the rate of a network coding solution as follows: Let each source receive a uniform random and independently chosen message Ai from A(i). For each edge e, let Ae denote the random variable giving the message sent on the edge e when executing the network coding solution with the given inputs. The network coding solution achieves rate r if:

  • H(Ai) ≥ r · di = r for all i.
  • For each edge e ∈ E, we have H(Ae) ≤ c(e).

Here H(x) denotes binary Shannon entropy. The intuition is that the rate is r, if the solution can handle upscaling the entropy of all messages by a factor r compared to the demands.

Multicommodity flow. A multicommodity flow problem in an undirected graph G = (V, E) is specified by a set of k source-sink pairs (si, ti) of nodes in G. We say that si is the source of commodity i and ti is the sink of commodity i. Each edge e ∈ E has an associated capacity c(e) ∈ R+. In addition, there is a demand di between every source-sink pair. For simplicity, we assume di = 1 for all i as this is sufficient for our needs.

A (fractional) solution to the multicommodity flow problem specifies for each pair of nodes (u, v) and commodity i, a flow fi(u, v) ∈ [0, 1]. Intuitively, fi(u, v) specifies how much of commodity i is to be sent from u to v. The flow satisfies flow conservation, meaning that:

  • For all nodes u that are neither a source nor a sink, we have ∑_{w∈V} f_i(u, w) − ∑_{w∈V} f_i(w, u) = 0.
  • For all sources s_i, we have ∑_{w∈V} f_i(s_i, w) − ∑_{w∈V} f_i(w, s_i) = 1.
  • For all sinks t_i, we have ∑_{w∈V} f_i(w, t_i) − ∑_{w∈V} f_i(t_i, w) = 1.

The flow also satisfies that for any pair of nodes (u, v) and commodity i, there is only flow in one direction, that is, either fi(u, v) = 0 or fi(v, u) = 0. Furthermore, if (u, v) is not an edge in E, then fi(u, v) = fi(v, u) = 0. A solution to the multicommodity flow problem achieves a rate of r if:

  • For all edges e = (u, v) ∈ E, we have r · ∑_i d_i(f_i(u, v) + f_i(v, u)) = r · ∑_i (f_i(u, v) + f_i(v, u)) ≤ c(e).

Intuitively, the rate is r if we can upscale the demands by a factor r without violating the capacity constraints.
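As a toy sanity check of these definitions, the following sketch (our own illustrative instance) verifies flow conservation and computes the best achievable rate for a single commodity routed along a path:

```python
# Toy multicommodity flow instance (ours): one commodity (s, t) with
# demand d_1 = 1, routed along the path s - v - t with unit capacities.
capacity = {("s", "v"): 1.0, ("v", "t"): 1.0}   # c(e)
flow = {("s", "v"): 1.0, ("v", "t"): 1.0}       # f_1(u, v)

def net_out(node):
    """Outgoing minus incoming flow of commodity 1 at a node."""
    out = sum(f for (u, _), f in flow.items() if u == node)
    inc = sum(f for (_, v), f in flow.items() if v == node)
    return out - inc

assert net_out("v") == 0      # conservation at intermediate nodes
assert net_out("s") == 1      # the source emits one unit (demand 1)
assert net_out("t") == -1     # the sink absorbs one unit

# The solution achieves rate r if r * (total flow on e) <= c(e) for all e.
r = min(c / flow[e] for e, c in capacity.items() if flow[e] > 0)
print("achievable rate r =", r)   # 1.0: demands can be scaled by a factor 1
```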

The undirected k-pairs conjecture. Conjecture 1 implies the following for our setting: Given an input to the k-pairs communication problem, specified by a directed acyclic graph G with edge capacities and a set of k source-sink pairs with a demand of 1 for every pair, let r be the best achievable network coding rate for G. Similarly, let G′ denote the undirected graph resulting from making each directed edge in G undirected (and keeping the capacities, source-sink pairs, and a demand of 1 between every pair). Let r′ be the best achievable flow rate in G′. Conjecture 1 implies that r ≤ r′.

Having defined coding rate and flow rate formally, we also mention that the result of Braverman et al.4 implies that if there exists a graph G where the network coding rate r and the flow rate r′ in the corresponding undirected graph G′ satisfy r ≥ (1 + ε)r′ for a constant ε > 0, then there exists an infinite family of graphs {G*} for which the corresponding gap is at least (lg |G*|)^c for a constant c > 0. So far, all evidence suggests that no such gap exists, as formalized in Conjecture 1.


3. Proof Overview

In this section, we give an overview of the main ideas in our proof and explain the barriers we overcome in order to remove the assumption of obliviousness. To prove our lower bound for external memory sorting, we focus on the easier problem of permuting. In the permutation problem, we are given an array A of n entries. The i‘th entry of A stores a w-bit data item di and a destination π(i). The destinations π(i) form a permutation π of {1, …, n}. The goal is to produce the output array C where di is stored in entry C[π(i)]. The arrays A and C are both stored in disk blocks, such that each disk block of A stores b/(lg n + w) entries, and each disk block of C stores b/w entries (the maximum number of entries that can be packed in a block). A sorting algorithm that can sort (lg n + w)-bit integer keys can be used to solve the permutation problem by replacing each entry (π(i), di) with the integer π(i) · 2^w + di (in the addition, we think of di as an integer in [2^w]). Thus, it suffices to prove lower bounds for permuting.
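A minimal sketch (ours) of this reduction; Python's built-in sort stands in for the integer sorting algorithm:

```python
def permute_via_sort(pi, data, w):
    """Solve the permutation problem with one integer sort.

    Packs each entry as pi[i] * 2^w + d_i, so sorting by the packed
    (lg n + w)-bit keys places d_i at output position pi[i].
    pi is a permutation of 0..n-1 and data holds w-bit integers.
    """
    packed = [(pi[i] << w) | data[i] for i in range(len(data))]
    packed.sort()                            # stand-in for the integer sort
    mask = (1 << w) - 1
    return [key & mask for key in packed]    # C[pi[i]] = d_i

assert permute_via_sort([2, 0, 1], [10, 11, 12], w=4) == [11, 12, 10]
```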

Consider now an algorithm A for permuting, and assume for simplicity that it is deterministic and always correct. As in the previous work by Adler et al.1, we define a graph G(A) that captures the memory accesses of A on an input array A. The graph G has a node for every block in the input array, a node for every block in the output, and a node for every intermediate block written/read by A. We call these block nodes. Moreover, the graph has a memory node that represents the memory state of A. The idea is that whenever A reads a block into memory, then we add a directed edge from the corresponding block node to the memory node. When A writes to a block, we create a new node (that replaces the previous version of the block) and add a directed edge from the memory node to the new node. The algorithm A can now be used to send messages between input and output block nodes as follows: Given messages X1, …, Xn of w bits each and an intended output block node (storing C[π(i)]) for each message i, we can transmit the message Xi from the input block node representing the array entry A[i] to the output block node representing the array entry C[π(i)] simply by simulating the algorithm A: Each block node of the network always forwards any incoming message to the memory node along its outgoing edge. The memory node thus receives the contents of all blocks that it ever reads. It can therefore simulate A. Whenever it performs a write operation, it sends the contents along the edge to the designated block node. By the correctness of A, this results in every output block node knowing the contents of all array entries C[π(i)] that should be stored in that output block. Examining this simulation, we see that we need a capacity of b bits on all edges for the simulation to satisfy capacity constraints. Moreover, by the definition of network coding rate (Section 2), we see that the coding rate is w bits.

The idea is that we want to use Conjecture 1 to argue that the graph G must be large (i.e., there must be many I/Os). To do so, we would like to argue that if we undirect G, then there is a permutation π such that for many pairs A[i] and C[π(i)], there are no short paths between the block nodes storing A[i] and C[π(i)]. If we could argue that for n/2 pairs (A[i], C[π(i)]), there must be a distance of at least ℓ steps in the undirected version of G, then to achieve a flow rate of w, it must be the case that the sum of capacities in G is at least ℓwn/2. But each I/O adds only 2b bits of capacity to G. Thus, if A makes t I/Os, then it must be the case that tb = Ω(ℓwn) ⇒ t = Ω((nw/b) · ℓ) = Ω((n/B) · ℓ).

Unfortunately, we cannot argue that there must be a long path between many pairs in the graph G we defined above. The problem is that the memory node is connected to all block nodes and thus the distance is never more than 2. To fix this, we change the definition of G slightly: After every m/b I/Os, we deactivate the memory node and create a new memory node to replace it. Further I/Os insert edges to and from this new memory node. In order for the new memory node to continue the simulation of A, the new memory node needs to know the memory state of A. Hence, we insert a directed edge from the old deactivated memory node to the new memory node. The edge has capacity m bits. Thus, in the simulation, when the current memory node has performed m/b I/Os, it forwards the memory state of A to the next memory node, who continues the simulation. The interval of m/b I/Os between the creation of new memory nodes has been chosen such that the amortized increase in capacity per I/O remains O(b).

We have now obtained a graph G where the degrees of all nodes are bounded by 2m/b. Thus, for every node of G, there are at most (2m/b)^ℓ nodes within a distance of ℓ. Thus, intuitively, a random permutation π should have the property that for most pairs (A[i], C[π(i)]), there will be a distance of ℓ = Ω(lg_{2m/b}(n/B)) between the corresponding block nodes. This gives the desired lower bound of t = Ω((n/B) · ℓ) = Ω((n/B) · lg_{2m/b}(n/B)).
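The counting behind this step, in symbols (our rendering):

```latex
% Every node of G has degree at most 2m/b, so at most (2m/b)^{\ell} nodes
% lie within distance \ell of any fixed node. Choosing
\ell \;=\; \tfrac{1}{2}\,\lg_{2m/b}(n/B)
% leaves only (2m/b)^{\ell} = \sqrt{n/B} = o(n/B) reachable block nodes, so
% for a random \pi most pairs (A[i], C[\pi(i)]) are farther apart than \ell.
% Plugging \ell into the capacity argument gives
t\,b \;=\; \Omega(\ell\, w\, n)
\;\Longrightarrow\;
t \;=\; \Omega\Bigl(\frac{n}{B}\,\lg_{2m/b}\frac{n}{B}\Bigr).
```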

If we had assumed that the algorithm A was oblivious as in previous work, we would actually be done by now. This is because, under the obliviousness assumption, the graph G will be the same for all input arrays. Thus, one can indeed find the desired permutation π where there is a large distance between most pairs (A[i], C[π(i)]). Moreover, all inputs corresponding to that permutation π and data bit strings d1, …, dn can be simulated correctly using A and the graph G. Hence, one immediately obtains a network coding solution. However, when A is not constrained to be oblivious, there can be a large number of distinct graphs G resulting from the execution of A.

To overcome this barrier, we first argue that even though there can be many distinct graphs, the number of such graphs is still bounded by roughly (nw/b + t)^t (each I/O chooses a block to either read or write, and there are t I/Os). This means that for t = o(n), one can still find a graph G that is the result of running A on many different input arrays A. We can then argue that among all those inputs, there are many that all correspond to the same permutation π, and that this permutation π has the property from before: for most pairs (A[i], C[π(i)]), there will be a distance of ℓ = Ω(lg_{2m/b}(n/B)) between the corresponding block nodes. Thus, we would like to fix such a permutation and use A to obtain a network coding solution. The problem is that we can only argue that there are many data bit strings d1, …, dn that together with π result in an array A for which A uses the graph G. Thus, we can only correctly transmit a large collection of messages, not all messages. Let us call this collection F ⊆ {{0, 1}^w}^n and let us assume |F| ≥ 2^{nw−o(nw)}. Intuitively, if we draw a uniform random input from F, then we should have a network coding solution with a rate of w − o(w). The problem is that the definition of network coding requires the inputs to the nodes to be independent. Thus, we cannot immediately say that we have a network coding solution with rate w − o(w) by solving a uniform random input from F. To remedy this, we instead take the following approach: We let each data bit string di be a uniform random and independently chosen w-bit string. Thus, if we can solve the network coding problem with these inputs, then we indeed have a network coding solution. We would now like to find an efficient way of translating the bit strings d1, …, dn to new bit strings d′1, …, d′n with (d′1, …, d′n) ∈ F. The translation should be such that each input block node can locally compute the d′i, and the output block nodes should be able to revert the transformation, that is, compute from d′i the original bit string di. To achieve this, we need to modify G a bit. Our idea is to introduce a coordinator node that can send short descriptions of the mappings between the di's and d′i's. We accomplish this via the following lemma:

LEMMA 7. Consider a communication game with a coordinator u, a set F ⊆ {0, 1}^{nw}, and n players. Assume |F| ≥ 2^{nw−r} for some r. The coordinator receives as input n uniform random bit strings Xi of w bits each, chosen independently of the other Xj. The coordinator then sends a prefix-free message Ri to the i’th player for each i. From the message Ri alone (i.e., without knowing Xi), the i’th player can then compute a vector τi ∈ {0, 1}^w with the property that the concatenation q := (τ1 ⊕ X1) ∘ (τ2 ⊕ X2) ∘ … ∘ (τn ⊕ Xn) satisfies q ∈ F, where ⊕ denotes bitwise XOR. There exists such a protocol where

∑_{i=1}^{n} E[|R_i|] = O(√(r · nw) · lg(nw/r) + n).

In particular, if r = o(nw) and w = ω(1), then the communication satisfies ∑_i E[|R_i|] = o(nw).

We use the lemma as follows: We create a coordinator node u that is connected to all input block nodes and all output block nodes. In a simulation of A, the input block nodes start by transmitting their inputs to the coordinator node u. The coordinator then computes the messages in the lemma and sends Ri back to the input block node storing A[i] as well as to the output block node storing the array entry C[π(i)]. The input block nodes can now compute d′i = τi ⊕ di to obtain an input (d′1, …, d′n) ∈ F. We can then run algorithm A because this is an input that actually results in the graph G. Finally, the output block nodes can revert the mapping by computing di = τi ⊕ d′i. Thus, what the lemma achieves is an efficient way of locally modifying the inputs of the nodes, so as to obtain an input for which the algorithm A works. We find this contribution very novel and suspect it might have applications in other lower bound proofs.
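The following toy sketch (ours) shows the mechanics of the translation, with a brute-force nearest-neighbor search standing in for the efficient protocol of Lemma 7:

```python
def translate(xs, F):
    """Toy version of the Lemma 7 translation (brute force, ours).

    xs: the independent w-bit inputs X_1, ..., X_n (as ints).
    F:  the set of allowed inputs, each an n-tuple of w-bit ints.
    Returns the masks tau_i. The translated input (tau_i ^ X_i)_i lies
    in F, and each receiver recovers X_i as tau_i ^ (tau_i ^ X_i).
    """
    def hamming(a, b):  # Hamming distance between two n-tuples of ints
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    nearest = min(F, key=lambda f: hamming(f, xs))  # sparse XOR => short R_i
    return [x ^ f for x, f in zip(xs, nearest)]

F = {(0b0000, 0b0000), (0b1111, 0b0001)}     # toy F with n = 2, w = 4
xs = (0b1110, 0b0001)
tau = translate(xs, F)
assert tuple(t ^ x for t, x in zip(tau, xs)) in F         # translated input is in F
assert tuple(t ^ (t ^ x) for t, x in zip(tau, xs)) == xs  # receivers invert it
```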

The introduction of the node u of course allows some flow to traverse paths not in the original graph G. Thus, we have to be careful with how we set the capacities on the edges to and from u. We notice that edges from the input nodes to u need only a capacity of w bits per array entry (they send the inputs), and edges out of u need E[|Ri|] capacity for an input di (one such edge to the input block node for array entry A[i] and one such edge to the output block node for array entry C[π(i)]). The crucial observation is that any flow using the node u as an intermediate node must traverse at least two edges incident to u. Hence, only (nw + 2∑_i E[|R_i|])/2 flow can traverse such paths. If |F| ≥ 2^{nw−o(nw)}, then Lemma 7 says that this is no more than nw/2 + o(nw) flow. There therefore remains nw/2 − o(nw) flow that has to traverse the original paths of length ℓ = Ω(lg_{2m/b}(n/B)), and the lower bound follows.

One may observe that our proof uses the fact that the network coding rate is at most the flow rate in a strong sense. Indeed, the introduction of the node u allows a constant fraction of the flow to potentially use a constant length path. Thus, it is crucial that the network coding rate r and flow rate r′ are conjectured to satisfy r ≤ r′ and not, for example, r ≤ 3r′. Indeed, we can only argue that a too-good-to-be-true permutation algorithm yields a graph in which r ≥ a · r′ for some constant a > 1. However, Braverman et al.4 recently proved that if there is a graph where r ≥ (1 + ε)r′ for a constant ε > 0, then there is an infinite family of graphs {G′} where the gap is Ω((lg |G′|)^c) for a constant c > 0. Thus, a too-good-to-be-true permutation algorithm will indeed give a strong counterexample to Conjecture 1.

Our proof of Lemma 7 is highly nontrivial and is based on the elegant proof of the Õ(√(C · I)) bound by Barak et al.3 for compressing interactive communication under nonproduct distributions (here C is the communication and I the information cost of the protocol being compressed). Our main idea is to argue that for a uniform random bit string in {0, 1}^{nw} (corresponding to the concatenation X = X1 ∘ … ∘ Xn of the Xi's in the lemma), it must be the case that the expected Hamming distance to the nearest bit string Y in F is O(√(r · nw)). The coordinator thus finds Y and transmits the XOR X ⊕ Y to the players. The XOR is sparse, and thus the message can be made short by specifying only the nonzero entries. Proving that the expected distance to the nearest vector is O(√(r · nw)) is the main technical difficulty and is the part that uses ideas from protocol compression.


4. External Memory Lower Bounds

As mentioned in the proof overview in Section 3, we prove our lower bound for external memory sorting via a lower bound for the easier problem of permuting: An input to the permutation problem is specified by a permutation π of {1, 2, …, n} as well as n bit strings d1, …, dn ∈ {0, 1}^w. We assume w ≥ lg n such that all bit strings may be distinct. The input is given in the form of an array A where the i‘th entry A[i] stores the tuple (π(i), di). We assume the input is given in the following natural way: Each π(i) is encoded as a ⌈lg n⌉-bit integer, and the di‘s are given as they are, using w bits for each.

The array A is presented to an external memory algorithm as a sequence of blocks, where each block contains ⌊b/(w+lg n)⌋ consecutive entries of A (the blocks have b = Bw bits). For simplicity, we henceforth assume (w + lg n) divides b.

The algorithm is also given an initially empty output array C. The array C is represented as a sequence of n words of w bits each, and these are packed into blocks containing b/w words each. The goal is to store d_{π^{−1}(i)} in C[i]. That is, the goal is to copy the bit string di from A[i] to C[π(i)]. We say that an algorithm A has an error of ε for the permutation problem if, for every input to the problem, it produces the correct output with probability at least 1 − ε.

The best known upper bounds for the permutation problem also work under the indivisibility assumption. These algorithms solve the permutation problem in

O(min{n, (n/B) · lg_{M/B}(n/B)})

I/Os.2 Moreover, this can easily be shown to be optimal under the indivisibility assumption by using a counting argument.2 The n bound is the bound obtained by running the naive “internal memory” algorithm that simply puts each element into its correct position, one at a time. The other term is equivalent to the optimal comparison-based sorting bound (one thinks of di as an integer in [2^w], concatenates π(i) ∘ di = π(i) · 2^w + di, and sorts the sequence). Thus, any sorting algorithm that handles (lg n + w)-bit keys immediately yields a permutation algorithm with the same number of I/Os. We thus prove lower bounds for the permutation problem and immediately obtain the sorting lower bounds as corollaries.

We thus set out to use Conjecture 1 to provide a lower bound for the permutation problem in the external memory model. Throughout the proof, we assume that nw/b = n/B is at least some large constant. This is safe to assume, as otherwise we only claim a trivial lower bound of Ω(1).

Let A be a randomized external memory algorithm for the permutation problem on n integers of w bits each. Assume A has error probability at most 1/3 and let b denote the disk block size in number of bits. Let m denote the memory size measured in number of bits. Finally, let t denote the expected number of I/Os made by A (on the worst input).

I/O-graphs. For an input array A representing a permutation π and bit strings d1, …, dn, and an output array C, define the (random) I/O-graph G of A as follows: Initialize G to have one node per disk block in A and one node per disk block in C. Also, add one node to G representing the initial memory of A. We think of the nodes representing the disk blocks of A and C as block nodes and the node representing the memory as a memory node (see Figure 1a). We will add more nodes and edges to G by observing the execution of A on A. To simplify the description, we will call nodes of G either dead or live. We will always have at most one live memory node. Initially, all nodes are live. We use 0 to label the memory node. Moreover, we label the block nodes by consecutive integers starting at 1. Thus, the block nodes in the initial graph are labeled 1, 2, …, n(w + lg n)/b + nw/b.

Now, run algorithm A on A. Whenever it makes an I/O, do as follows: If this is the first time the block is being accessed and it is not part of the input or output (a write operation to an untouched block), then create a new live block node in G and add a directed edge from the current live memory node to the new block node (see Figure 1e). Label the new node by the next unused integer label. Otherwise, let v be the live node in G corresponding to the last time the disk block was accessed. We add a directed edge from v to the live memory node, mark v as dead, create a new live block node v′, and add a directed edge from the live memory node to v′. We give the new node the same label as v (Figure 1b and c). Finally, after every m/b I/Os, we mark the memory node as dead, create a new live memory node, and add a directed edge from the old memory node to the new live memory node (Figure 1d).
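A compact sketch (ours) of this construction, driven by the sequence of block labels an algorithm touches:

```python
def build_io_graph(trace, initial, m, b):
    """Sketch (ours) of the I/O-graph construction for a deterministic run.

    trace:   block labels in the order the algorithm touches them.
    initial: labels of the input/output blocks present before any I/O.
    Returns the directed edge list; node 0 is the initial memory node.
    """
    mem = 0
    live = {lbl: i + 1 for i, lbl in enumerate(initial)}  # live node per label
    next_node = len(initial) + 1
    edges, ios = [], 0
    for label in trace:
        if label in live:                    # read the current version ...
            edges.append((live[label], mem))
        v, next_node = next_node, next_node + 1
        edges.append((mem, v))               # ... and write a new live version
        live[label] = v
        ios += 1
        if ios % (m // b) == 0:              # retire memory node every m/b I/Os;
            edges.append((mem, next_node))   # the m-bit edge carries its state
            mem, next_node = next_node, next_node + 1
    return edges

# Mirrors Figure 1: blocks A1..A4 and C1..C4, with m/b = 3.
blocks = [f"A{i}" for i in range(1, 5)] + [f"C{i}" for i in range(1, 5)]
print(build_io_graph(["A1", "C1", "A1"], blocks, m=18, b=6))
```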

Figure 1. I/O-graph for an array A consisting of 3-bit strings d1, …, d8. In this example, each disk block contains two words of w = 3 bits, that is, B = 2 (and b = Bw = 6). Also, the main memory holds M = 6 words (m = 18). Figure (a) shows the initial I/O-graph. For each disk block, there is initially one block node, illustrated underneath it. Black nodes are dead, and white nodes are live. Figure (b) shows the updated I/O-graph after making an I/O to access the first disk block. Figure (c) is the I/O-graph after accessing the block containing C[1] and C[2]. Figure (d) shows the graph after making another I/O on the first disk block. Also, we create a new memory node after every m/b = M/B = 3 I/Os and mark the old memory node as dead. Figure (e) shows the updated graph after accessing some block other than the input or output.

To better understand the definition of G, observe that all the nodes with the same label represent the different versions of a disk block that existed throughout the execution of the algorithm. Moreover, there is always exactly one live node with any fixed label, representing the current version of the disk block. Also, observe that at the end of the execution, there must be a live disk block node in G representing each of the output blocks in C, and these have the same labels as the original nodes representing the empty disk blocks of C before the execution of A.

Fixing the randomness of A. Consider the execution of A on an input A representing a uniform random permutation π as well as independent and uniform random bit strings d1, …, dn ∈ {0, 1}^w. Because A makes an expected t I/Os, it follows by Markov’s inequality that A makes more than 6t I/Os with probability less than 1/6. If we simply abort in such cases, we obtain an algorithm with worst case O(t) I/Os and error probability at most 1/3 + 1/6 = 1/2. Now fix the random choices of A to obtain a deterministic algorithm A* with error probability 1/2 over the random choice of π and d1, …, dn. A* makes at most t* = 6t I/Os in the worst case. Observe that for A*, we get a fixed I/O-graph G(A) for every input array A because A* is deterministic.
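Spelled out (our rendering), the two failure sources combine as follows:

```latex
% One run uses at most 6t I/Os except with probability
\Pr[\#\mathrm{I/Os} > 6t] \;\le\; \frac{\mathbb{E}[\#\mathrm{I/Os}]}{6t}
\;=\; \frac{t}{6t} \;=\; \frac{1}{6} \qquad \text{(Markov's inequality)},
% so aborting after 6t I/Os adds at most 1/6 to the error probability:
\Pr[\mathrm{error}] \;\le\; \frac{1}{3} + \frac{1}{6} \;=\; \frac{1}{2}.
```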

Finding a popular I/O-graph. We now find an I/O-graph G that is the result of running A* on a large number of different inputs. For notational convenience, let t denote the worst case number of I/Os made by A* (instead of using t* or 6t). Observe that the total number of different I/O-graphs one can obtain as the result of running A* is small:

LEMMA 8. There are no more than

(t + n(w + lg n)/b + nw/b + 1)^{t+1}

I/O-graphs that may result from the execution of A*.

This means that we can find an I/O-graph, which corresponds to the execution of A* on many different inputs, and moreover, we can even assume that A* is correct on many such inputs:

LEMMA 9. There exists a set Γ containing at least (n! · 2^{nw}) / (2(t + n(w + lg n)/b + nw/b + 1)^{t+1}) different input arrays A, such that A* is correct on all inputs A ∈ Γ and the I/O-graph is the same for all A ∈ Γ.

Data must travel far. The key idea in our lower bound proof is to argue that there is a permutation for which most data bit strings di are very far away from output entry C[π(i)] in the corresponding I/O-graph. This would require the data to “travel” far. By Conjecture 1, this is impossible unless the I/O-graph is large. Thus, we start by arguing that there is a fixed permutation where data has to travel far on the average, and where it also holds that there are many different data values that can be sent using the same I/O-graph. To make this formal, let dist(π, i, G) denote the distance between the block node in G representing the input block storing A[i] (the initial node, before any I/Os were performed) and the node in G representing the output block storing C[π(i)] in the undirected version of G (undirect all edges).

We prove the following:

LEMMA 10. If (t + n(w + lg n)/b + nw/b + 1)^{t+1} ≤ (nw/b)^{n/30}, then there exists a permutation π, a collection of values F ⊆ {{0, 1}^w}^n, and an I/O-graph G such that the following holds:

  1. For all (d1, …, dn) ∈ F, it holds that the algorithm A* executed on the input array A corresponding to inputs π and d1, …, dn results in the I/O-graph G, and A* is correct on A.
  2. |F| ≥ 2^{nw} / (4(t + n(w + lg n)/b + nw/b + 1)^{t+1}).
  3. There are at least (4/5)n indices i ∈ {1, …, n} for which dist(π, i, G) > (1/2) lg_{2m/b}(nw/b).

Reduction to network coding. We are now ready to make our reduction to network coding. The basic idea in our proof is to use Lemma 10 to obtain an I/O-graph G and permutation π with a large distance between the node containing A[i] and the node containing C[π(i)] for many i. We will then create a source si at the node representing A[i] and a corresponding sink ti at the node representing C[π(i)]. These nodes are far apart, but using the external memory permutation algorithm A*, there is an algorithm for transmitting di from si to ti. Because the distance between si and ti is at least (1/2) lg_{2m/b}(nw/b) for (4/5)n of the pairs (si, ti), it follows from Conjecture 1 that the sum of capacities in the network must be at least Ω(nw lg_{2m/b}(nw/b)) (we can transmit w bits between each of the pairs). However, running the external memory algorithm results in a network/graph G with only O(t) edges, each needing to transmit only b bits (corresponding to the contents of a block on a read or write). Thus, each edge needs a capacity of only b bits for the reduction to go through. Hence, the sum of capacities in the network is O(tb). This means that t = Ω((nw/b) lg_{2m/b}(nw/b)) as desired.
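In summary (our rendering of the counting above):

```latex
% At least (4/5)n pairs each receive w bits of flow across paths of length
% at least (1/2) \lg_{2m/b}(nw/b), so the total capacity must satisfy
\sum_{e} c(e) \;=\; \Omega\Bigl(nw\,\lg_{2m/b}\frac{nw}{b}\Bigr).
% Each I/O contributes O(b) bits of capacity, so t I/Os give O(tb); hence
t\,b \;=\; \Omega\Bigl(nw\,\lg_{2m/b}\frac{nw}{b}\Bigr)
\;\Longrightarrow\;
t \;=\; \Omega\Bigl(\frac{nw}{b}\,\lg_{2m/b}\frac{nw}{b}\Bigr)
  \;=\; \Omega\Bigl(\frac{n}{B}\,\lg_{2m/b}\frac{n}{B}\Bigr).
```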

However, the reduction is not as straightforward as that. The problem is that Lemma 10 leaves us only with a subset F of all the possible values d1, …, dn that one wants to transmit. For other values of d1, …, dn, we cannot use the algorithm A* to transmit the data via the network/graph G. We could of course try to sample (d1, …, dn) uniformly from F and then have a network coding solution only for such inputs. The problem is that for such a uniform (d1, …, dn) ∈ F, it no longer holds that the inputs to the sources in the coding network are independent! Network coding rate only speaks of independent sources; hence, we need a way to break this dependency. We do this by adding an extra node u and some edges to the coding network. This extra node u serves as a coordinator that takes the independent sources X1, …, Xn and replaces them with an input (d1, …, dn) ∈ F in such a way that running A* on (d1, …, dn) and using a little extra communication from u allow the sinks to recover X_{π^{−1}(i)} from d_{π^{−1}(i)}. We proceed to give the formal construction. Let G be the I/O-graph, π the permutation, and F ⊆ {{0, 1}^w}^n the values promised by Lemma 10. From G, construct a coding network G* as follows:

  1. Add source and sink nodes s1, …, sn and t1, …, tn to G*.
  2. For each source si, add an additional node pi.
  3. Add all nodes of G to G*.
  4. Add all edges of G to G*. Edges between a block node and a memory node have capacity b bits. Edges between two memory nodes have capacity m bits.
  5. Remove all block nodes that have an incoming and outgoing edge to the same memory node (this makes the graph acyclic).
  6. Add a directed edge with capacity w bits from each source si to pi, and add a directed edge with capacity w bits from each pi to the input block node containing A[i].
  7. Add an edge with capacity w bits from the output block node containing C[π(i)] to the sink ti.
  8. Add a special node u to G*. Add an edge of capacity w bits from each source si to u. Also, add a directed edge from u to each pi having capacity ρi for parameters ρi > 0 to be fixed later. Also, add an edge from u to sink ti with capacity ρi.

We argue that for sufficiently large choices of ρi, one can use A* to efficiently transmit w bits of information between every source-sink pair (si, ti). Our protocol for this problem uses Lemma 7 from Section 3 as a subroutine (whose proof we defer). One can show that there exists a protocol for transmitting X1, …, Xn that satisfies all capacity constraints of the network G*; the exact protocol can be found in the full version of the paper.

Deriving the lower bound. We observe that for all edges, except those with capacity ρi, our protocol sends a fixed number of bits. Thus, messages on such edges are prefix-free. For the edges with capacity ρi, the protocol sends a prefix-free message with expected length ρi. Because all messages on all edges are prefix-free, it follows from Shannon’s Source Coding theorem that the expected length of each message is an upper bound on its entropy. Because the expected lengths are at most the capacities of the corresponding edges, we get, by the definition of network coding rate from Section 2, that the above solution achieves a rate of w bits. Hence, from Conjecture 1, it follows that if we undirect G*, then the multicommodity flow rate must be at least w bits. From the definition of multicommodity flow rate in Section 2, we see that this implies that there is a (possibly fractional) way of sending w units of flow between each source-sink pair.

We first examine the amount of flow that can be transported between pairs (si, ti) along paths that visit u. We observe that any such flow must use at least two edges incident to u. But the sum of capacities of edges incident to u is nw + 2∑_i ρ_i. Hence, the amount of flow that can be transmitted along paths using u as an intermediate node is no more than (nw + 2∑_i ρ_i)/2 = nw/2 + ∑_i ρ_i. If |F| ≥ 2^{nw−o(nw)}, then this is no more than nw/2 + o(nw). From Lemma 10, we know that there are at least (4/5)n indices i for which dist(π, i, G) ≥ (1/2) lg_{2m/b}(nw/b), provided that (t + n(w + lg n)/b + nw/b + 1)^{t+1} ≤ (nw/b)^{n/30}. The total flow that must be sent between such pairs is (4/5)nw. This means that there is at least (4/5)nw − nw/2 − o(nw) = Ω(nw) flow that has to traverse (1/2) lg_{2m/b}(nw/b) = Ω(lg_{2m/b}(nw/b)) edges of G* (the flow must use a path in the undirected version of G, as it cannot shortcut via u). Hence, the sum of capacities of edges in G must be Ω(nw lg_{2m/b}(nw/b)), assuming that |F| ≥ 2^{nw−o(nw)}. Every I/O made by A* increases the capacity of the edges by O(b) bits (two edges of b-bit capacity when a new block node is added to G, and an amortized b bits of capacity to pay for the m-bit edge between memory nodes after every m/b I/Os). Thus, if A* makes at most t I/Os, it must be the case that tb = Ω(nw lg_{2m/b}(nw/b)) if |F| ≥ 2^{nw−o(nw)}. But |F| ≥ 2^{nw} / (4(t + n(w + lg n)/b + nw/b + 1)^{t+1}). Therefore, we must have either t = Ω((nw/b) lg_{2m/b}(nw/b)) or t · lg(t + n(w + lg n)/b) = Ω(nw). Finally, Lemma 10 also requires (t + n(w + lg n)/b + nw/b + 1)^{t+1} ≤ (nw/b)^{n/30}. Combining all of this means that either t = Ω((nw/b) lg_{2m/b}(nw/b)), or t = Ω(nw/lg(nw)), or t = Ω(n lg(nw/b)/lg(n lg(nw/b))) = Ω(n).

Thus, we have proved the following lower bound on the expected number of I/Os t for the permutation problem:

t = Ω(min{n, (nw/b) · lg_{2m/b}(nw/b)})

For w = Ω(lg n), we may use the reduction to sorting and we immediately obtain Theorem 2 as a corollary.

THEOREM 2. Assuming Conjecture 1, any randomized algorithm for the external memory sorting problem with w = Ω(lg n) bit integers, having error probability at most 1/3, must make an expected

Ω(min{n, (n/B) · lg_{M/B}(n/B)})

I/Os.

    1. Adler, M., Harvey, N.J.A., Jain, K., Kleinberg, R., Lehman, A.R. On the capacity of information networks. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithm, SODA '06 (2006), 241–250.

    2. Aggarwal, A., Vitter, J. The input/output complexity of sorting and related problems. Commun. ACM 31, 9 (1988), 1116–1127.

    3. Barak, B., Braverman, M., Chen, X., Rao, A. How to compress interactive communication. In Proceedings of the Forty-Second ACM Symposium on Theory of Computing, STOC '10 (2010), 67–76.

    4. Braverman, M., Garg, S., Schvartzman, A. Coding in undirected graphs is either very helpful or not helpful at all. In 8th Innovations in Theoretical Computer Science Conference, ITCS 2017, January 9–11, 2017, Berkeley, CA, USA (2017), 18:1–18:18.

    5. Han, Y. Deterministic sorting in O(n lg lg n) time and linear space. In Proceedings of the Thirty-Fourth Annual ACM Symposium on Theory of Computing (2002), ACM, New York, 602–608.

    6. Han, Y., Thorup, M. Integer sorting in O(n √(lg lg n)) expected time and linear space. In Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science (2002), IEEE, 135–144.

    7. Li, Z., Li, B. Network coding: the case of multiple unicast sessions. In Proceedings of the 42nd Allerton Annual Conference on Communication, Control and Computing, Allerton '04 (2004).

    The original version of this paper is entitled "Lower Bounds for External Memory Integer Sorting via Network Coding" and was published in STOC 2019.
