The maximum flow problem and its dual, the minimum cut problem, are classical combinatorial optimization problems with many applications in science and engineering; see, for example, Ahuja et al.1 The problem is a special case of linear programming and can be solved using general linear programming techniques or their specializations (such as the network simplex method9). However, special-purpose algorithms are more efficient. Moreover, algorithm design techniques and data structures developed to compute maximum flows are useful for other problems as well. Although a special case of linear programming, the maximum flow problem is general enough that several important problems (such as the maximum bipartite matching problem) reduce to it.
Here, we survey basic techniques behind efficient maximum flow algorithms, starting with the history and basic ideas behind the fundamental maximum flow algorithms and then exploring the algorithms in more detail. We restrict ourselves to basic maximum flow algorithms and do not cover interesting special cases (such as undirected graphs, planar graphs, and bipartite matchings) or generalizations (such as minimum-cost and multi-commodity flow problems).
Before formally defining the maximum flow and the minimum cut problems, we give a simple example of each problem: For the maximum flow example, suppose we have a graph that represents an oil pipeline network from an oil well to an oil depot. Each arc has a capacity, or maximum number of liters per second that can flow through the corresponding pipe. The goal is to find the maximum number of liters per second (maximum flow) that can be shipped from well to depot. For the minimum cut problem, we want to find the set of pipes of the smallest total capacity such that removing the pipes disconnects the oil well from the oil depot (minimum cut).
The maximum flow, minimum cut theorem says the maximum flow value is equal to the minimum cut capacity. This fundamental theorem has many applications, particularly in the design of maximum flow algorithms.
We distinguish between flow algorithms that are polynomial or strongly polynomial. We denote the number of vertices and arcs in the input network by n and m, respectively. For polynomial algorithms, we assume the arc capacities are integral, with U denoting the largest capacity; capacities can be represented by O(log U)-bit integers. (In practice, it is reasonable to assume capacities are integral; as implemented by computer hardware, even floating point numbers are represented as integers.) A polynomial algorithm is one with a worst-case time bound polynomial in n, m, and log U. For many applications, algorithms manipulate numbers that fit in a machine word, and elementary arithmetic operations take unit time. We assume this is the case when stating polynomial bounds. A strongly polynomial algorithm is one with a worst-case time bound polynomial in n and m, even if capacities are arbitrary real numbers, assuming a computational model in which elementary arithmetic operations on real numbers take unit time. Strongly polynomial algorithms are more natural from a combinatorial point of view: the number of operations they perform depends only on n and m, and only the cost of individual arithmetic operations can depend on the sizes of the input numbers.
The first special-purpose algorithm for the maximum flow problem was the augmenting path method developed by Ford and Fulkerson.14 This method is, in general, not polynomial time but can be made so. One way to do this is through scaling, as introduced by Dinic.11
Edmonds and Karp12 introduced the shortest augmenting path method, making the Ford-Fulkerson method strongly polynomial. To define path lengths, the Edmonds-Karp method uses the unit length function, which sets the length of each arc to one. Edmonds and Karp note that other length functions can be used but do not seem to lead to better time bounds. The key to the analysis is the observation that the shortest augmenting path length is non-decreasing and must eventually increase.
The blocking flow method augments along a maximal set of shortest paths by finding a blocking flow. Such augmentation increases the shortest augmenting path length and thereby speeds up the shortest augmenting path method. Blocking flows are implicit in Dinic’s algorithm10 and made explicit by Karzanov,23 who also introduced a relaxation of flow called a “preflow” that allows an algorithm to change the flow on a single arc instead of on an entire augmenting path. Arc flow is updated through a push operation. Preflows allow faster algorithms for finding blocking flows.
An interesting special case of the maximum flow problem involves all arcs having unit capacities. As shown independently by Karzanov22 and Even and Tarjan,13 the blocking flow algorithm in this case achieves better time bounds than in the general case for two reasons: the number of blocking flow computations is reduced, and the computations are faster—linear time in the graph size.
The operations of a blocking flow algorithm can be divided into two parts: those that manipulate distances and those that manipulate flows. In theory, the latter dominate, motivating development of data structures that allow changing flow values on a path more efficiently than one arc at a time. The first such data structure was developed by Galil and Naamad.15 A few years later, Sleator and Tarjan29,30 introduced the dynamic tree data structure, allowing changing flow values on a path with k arcs in O(log k) time. This led to improvement in the theoretical time bound for finding a blocking flow, making it almost linear.
Goldberg and Tarjan18 developed the push-relabel method as an alternative to the blocking flow method. It maintains a preflow and updates it through push operations. It introduces the relabel operation to perform fine-grain updates of the vertex distances. Push and relabel operations are local; that is, they apply to a single arc and vertex, respectively. These fine-grain operations provide additional flexibility that can be used to design faster algorithms. The fastest general-purpose maximum flow codes are based on the push-relabel method.7,16
For arbitrary real-valued capacities, the blocking flow problem can be solved in O(m log(n²/m)) time,19 giving an O(nm log(n²/m)) bound for the maximum flow algorithm. Note a decomposition of flows into paths can have a size of Ω(nm). This makes O(nm) a natural target bound. In 2013, Orlin27 developed an algorithm that achieves this bound.
The flow decomposition size is not a lower bound for computing maximum flows. A flow can be represented in O(m) space, and dynamic trees can be used to augment flow on a path in logarithmic time. Furthermore, the unit capacity problem on a graph with no parallel arcs can be solved in O(min(n^{2/3}, √m) m) time,13,22 which is much better than O(nm). For a quarter century, there was a big gap between the unit capacity case and the general case. The gap was narrowed by Goldberg and Rao,17 who obtained an O(min(n^{2/3}, √m) m log(n²/m) log U)-time algorithm for the problem with integral capacities.
To achieve this bound, Goldberg and Rao used a non-unit length function. In combination with new design and analysis techniques, this leads to the binary blocking flow algorithm that achieves the bound mentioned earlier. As the name implies, the algorithm is based on blocking flows. No comparable bound for the push-relabel method is known. This fact revives the theoretical importance of the blocking flow method.
Here, we assume familiarity with basic graph algorithms, including breadth- and depth-first search, and have organized the article as follows: After introducing basic definitions, we discuss the algorithms. Our presentation is informal, including intuitive algorithm descriptions and the corresponding time bounds, but omits technical details, which can be found in the references.
Background
The input to the maximum flow problem is (G, s, t, u), where G = (V, A) is a directed graph with vertex set V and arc set A, s ∈ V is the source, t ∈ V is the sink (with s ≠ t), and u: A → R+ is the strictly positive capacity function. We sometimes assume capacities are integers and denote the largest capacity by U.
A flow f is a function on A that satisfies capacity constraints on all arcs and conservation constraints at all vertices except s and t. The capacity constraint for a ∈ A is 0 ≤ f(a) ≤ u(a) (flow does not exceed capacity). The conservation constraint for v is Σ_{(u,v)∈A} f(u,v) = Σ_{(v,w)∈A} f(v,w) (the incoming flow is equal to the outgoing flow). The flow value is the net flow into the sink: |f| = Σ_{(v,t)∈A} f(v,t) – Σ_{(t,v)∈A} f(t,v). If |f| is as large as possible, f is a maximum flow. A cut is a bipartition of the vertices S ∪ T = V with s ∈ S, t ∈ T. The capacity of a cut is defined by u(S,T) = Σ_{v∈S, w∈T, (v,w)∈A} u(v,w) (the sum of capacities of arcs from S to T). The max-flow/min-cut theorem14 says the maximum flow value is equal to the minimum cut capacity. Figures 1 and 2 give an input network and a maximum flow on it, respectively.
Without loss of generality, we assume G is connected. Then m ≥ n – 1 and therefore n + m = O(m). We can also assume the graph has no parallel arcs, since we can combine parallel arcs and add their capacities.
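To make these definitions concrete, the following small Python sketch checks the capacity and conservation constraints for a candidate flow and evaluates the flow value and the capacity of a cut. The dictionary-based representation, the function names, and the tiny example network are ours, chosen only for illustration.

def is_feasible(cap, flow, s, t):
    # Capacity constraints: 0 <= f(a) <= u(a) on every arc.
    if any(not 0 <= flow.get(a, 0) <= cap[a] for a in cap):
        return False
    # Conservation constraints at every vertex other than s and t.
    vertices = {v for arc in cap for v in arc}
    for v in vertices - {s, t}:
        inflow = sum(flow.get((u, w), 0) for (u, w) in cap if w == v)
        outflow = sum(flow.get((u, w), 0) for (u, w) in cap if u == v)
        if inflow != outflow:
            return False
    return True

def flow_value(flow, t):
    # |f|: the net flow into the sink t.
    return (sum(f for (v, w), f in flow.items() if w == t)
            - sum(f for (v, w), f in flow.items() if v == t))

def cut_capacity(cap, S):
    # Capacity of the cut (S, V - S): total capacity of arcs from S to V - S.
    return sum(u for (v, w), u in cap.items() if v in S and w not in S)

# Hypothetical three-arc network with a flow of value 3 and a cut of capacity 3.
cap = {('s', 'a'): 3, ('a', 't'): 2, ('s', 't'): 1}
flow = {('s', 'a'): 2, ('a', 't'): 2, ('s', 't'): 1}
print(is_feasible(cap, flow, 's', 't'))                       # True
print(flow_value(flow, 't'), cut_capacity(cap, {'s', 'a'}))   # 3 3: flow value = cut capacity

Here the flow is in fact maximum, and its value matches the capacity of the cut, as the theorem asserts.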
Residual Graph and Augmenting Paths
An important notion for flow algorithms is a residual graph, encoding the possible changes of flow on arcs in a way that facilitates algorithm design. Suppose we have an arc a = (v, w) with u(a) = 9 and f(a) = 4. We can then increase the flow on a by up to five units without violating the capacity constraint. Furthermore, we can decrease the flow on a by up to four units. We would like to interpret decreasing flow on an arc a = (v, w) as increasing flow on the reverse arc aR = (w, v).
Given a flow f in G, we define the residual graph Gf = (V, Af) as follows: Af contains the arcs a ∈ A such that f(a) < u(a) and the arcs aR for a ∈ A such that f(a) > 0. We call these forward and reverse residual arcs, respectively. We define the residual capacity uf by uf(a) = u(a) – f(a) for a forward arc a and uf(aR) = f(a) for a reverse arc aR. For every arc a ∈ A, Gf contains the forward arc, the reverse arc, or both. Figure 3 gives the residual graph for the flow in Figure 2. Note the residual graph can have parallel arcs even if the input graph is simple, as it can contain both an arc and its reversal.
A flow g in Gf defines a flow f′ in G as follows: for a forward residual arc a, f′(a) = f(a) + g(a); for a reverse residual arc aR, f′(a) = f(a) – g(aR). Seeing that f′ is a valid flow is straightforward.
An augmenting path is a path from s to t in Gf. Given an augmenting path P, we can augment f as follows: Let δ be the minimum residual capacity of the arcs on P and g be the flow of value δ along P. The corresponding flow f′ on G has |f′| = |f| + δ > |f|. An augmenting path can be found in O(m) time (such as by using breadth- or depth-first search).
Note that during an augmentation, at least one arc of P has residual capacity δ before the augmentation and zero after the augmentation. We say such an arc is saturated by the augmentation. Saturated arcs are deleted from Gf. An arc a is added to Gf if uf (a) is zero before the augmentation, and the augmentation increases the flow on aR.
Using the max-flow/min-cut theorem, one can show a flow f has maximum value if and only if Gf does not contain an augmenting path. This motivates the augmenting path algorithm: while Gf contains an augmenting path, find such a path and augment the flow on it.
If capacities are integral, the augmenting path algorithm always terminates, since each augmentation increases the flow value by at least one. This observation, and the fact that the capacity of the cut ({s}, V – {s}) is O(nU), gives a pseudo-polynomial bound of O(nmU) on the algorithm’s running time. The bound is not polynomial because U can be exponential in the size of the problem input. If the capacities are real-valued, the algorithm need not terminate. As we shall see later, variants of this algorithm do run in polynomial time.
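Below is a minimal Python sketch of the augmenting path algorithm under the conventions above. It stores residual capacities directly (combining parallel arcs), finds an augmenting path by depth-first search, and augments until no path remains; the representation and names are ours, and with integral capacities the loop terminates as argued above.

from collections import defaultdict

def max_flow_augmenting(arcs, s, t):
    """Ford-Fulkerson augmenting path sketch.

    arcs maps each arc (v, w) to its capacity u(v, w) > 0; parallel arcs are
    combined. Residual capacities are kept directly, so augmenting along a path
    only shifts capacity between each arc and its reversal. Returns the flow value.
    """
    r = defaultdict(int)              # residual capacities u_f
    adj = defaultdict(set)            # residual adjacency (both directions)
    for (v, w), u in arcs.items():
        r[(v, w)] += u
        adj[v].add(w)
        adj[w].add(v)                 # room for the reverse residual arc

    def find_augmenting_path():       # depth-first search in the residual graph
        stack, parent = [s], {s: None}
        while stack:
            v = stack.pop()
            if v == t:
                path = []
                while parent[v] is not None:
                    path.append((parent[v], v))
                    v = parent[v]
                return list(reversed(path))
            for w in adj[v]:
                if w not in parent and r[(v, w)] > 0:
                    parent[w] = v
                    stack.append(w)
        return None                   # no augmenting path: the flow is maximum

    value = 0
    path = find_augmenting_path()
    while path is not None:
        delta = min(r[a] for a in path)          # bottleneck residual capacity
        for (v, w) in path:
            r[(v, w)] -= delta                   # push delta along the path ...
            r[(w, v)] += delta                   # ... creating reverse residual capacity
        value += delta
        path = find_augmenting_path()
    return value

# Hypothetical example; the maximum flow value here is 6.
print(max_flow_augmenting({('s', 'a'): 4, ('s', 'b'): 2, ('a', 'b'): 3,
                           ('a', 't'): 1, ('b', 't'): 6}, 's', 't'))  # 6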
Scaling
Scaling is one way to make the augmenting path algorithm polynomial-time if the capacities are integral.
Recall that U is the largest arc capacity and let k = ⌈log₂ U⌉ + 1, the number of bits needed to represent capacities. For i = 0, …, k, define ui(a) = ⌊u(a)/2^{k–i}⌋. Note u0 ≡ 0, and for i > 0, ui is defined by the i most significant bits of u. The zero flow is maximum for u0.
Given a maximum flow fi for capacities ui (0 ≤ i < k), the algorithm computes a maximum flow fi+1 for capacities ui+1 as follows. Note ui+1 = 2ui + bi+1, where bi+1(a) is the (i + 1)-st most significant bit of u(a). Thus f = 2fi is a feasible flow for capacities ui+1. We start with f and apply the augmenting path algorithm to compute fi+1.
To bound the number of augmentations, consider a minimum cut (S, T) for capacities ui. Since fi is a maximum flow, for every arc a from S to T we have ui(a) = fi(a), and thus for the initial flow f, we have ui+1(a) – f(a) ≤ 1. Therefore |f| is within m of the maximum, and we need at most m augmentations to compute fi+1 from fi. The running time of the scaling algorithm is thus O(m² log U).
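The following Python sketch, with our own names and representation, illustrates the scaling scheme for integral capacities. Working directly with residual capacities makes the phase transition simple: doubling every residual capacity corresponds to doubling both the capacities and the current flow, after which the newly revealed bit of each capacity is added to its forward arc.

from collections import defaultdict

def max_flow_scaling(arcs, s, t):
    """Capacity-scaling sketch (integral capacities, illustrative code only)."""
    k = max(arcs.values()).bit_length()          # number of bits of U
    r = defaultdict(int)                         # residual capacities (start: all zero)
    adj = defaultdict(set)
    for (v, w) in arcs:
        adj[v].add(w)
        adj[w].add(v)

    def augment_all():                           # plain augmenting path loop
        total = 0
        while True:
            stack, parent = [s], {s: None}
            while stack and t not in parent:
                v = stack.pop()
                for w in adj[v]:
                    if w not in parent and r[(v, w)] > 0:
                        parent[w] = v
                        stack.append(w)
            if t not in parent:
                return total
            path, v = [], t
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            delta = min(r[a] for a in path)
            for (v, w) in path:
                r[(v, w)] -= delta
                r[(w, v)] += delta
            total += delta

    value = 0
    for i in range(1, k + 1):                    # phases for capacities u_1, ..., u_k = u
        for a in list(r):
            r[a] *= 2                            # capacities and flow both double
        for a, u in arcs.items():
            r[a] += (u >> (k - i)) & 1           # add the i-th most significant bit of u(a)
        value = 2 * value + augment_all()        # at most m augmentations per phase
    return value

print(max_flow_scaling({('s', 'a'): 5, ('a', 't'): 3, ('s', 't'): 2}, 's', 't'))  # 5

Each of the O(log U) phases performs at most m augmentations of O(m) time each, matching the O(m² log U) bound derived above.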
Shortest Augmenting Paths
Define the length of every arc in Gf to be one, and suppose we always choose a shortest augmenting path. This is natural, since breadth-first search finds shortest augmenting paths and takes linear time.
Consider a shortest-path augmentation. Let d(v) denote the distance from a vertex v to t in Gf, and let k = d(s). For an arc (v, w) on the augmenting path, we have d(v) = d(w) + 1. Therefore the reverse arc (w, v) is not on a path from s to t of length k or less. The augmentation deletes at least one arc on a path from s to t of length k and does not add any arcs on paths from s to t of length k or less. This observation leads to the key monotonicity property of the shortest augmenting path algorithm: For every vertex v, residual graph distances from s to v and from v to t are non-decreasing.
The monotonicity property yields a strongly polynomial time bound. Each augmentation saturates an arc on a path of the current shortest length. Therefore, after at most m augmentations, the distance from s to t must increase. Initially, the distance is at least one, and if t is reachable from s, the distance is at most n – 1. The total number of augmentations is thus O(nm). The time for one augmentation is O(m) to find the augmenting path and proportional to the path length, that is, O(n) = O(m), to modify the flow. This gives an O(nm²) bound on the running time.
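In the augmenting path sketch shown earlier, obtaining the shortest augmenting path rule only requires replacing depth-first search by breadth-first search; a sketch of such a path finder (our names, operating on the same residual-capacity dictionary) follows. Substituting it into the earlier code yields the Edmonds-Karp O(nm²) algorithm.

from collections import deque

def shortest_augmenting_path(adj, r, s, t):
    """Breadth-first search for a shortest s-to-t path of positive-residual arcs.

    adj maps each vertex to its residual neighbors and r maps arcs to residual
    capacities (illustrative names). Returns the path as a list of arcs, or None.
    """
    parent = {s: None}
    queue = deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            path = []
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            return list(reversed(path))
        for w in adj[v]:
            if w not in parent and r.get((v, w), 0) > 0:
                parent[w] = v
                queue.append(w)
    return None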
Blocking Flow Method
Given a network G with arc capacities, a flow f in G is blocking if every s-to-t path in G contains a saturated arc. Note f need not be a maximum flow, as there can be an s-to-t path in Gf that contains the reverse of an arc of G (see Figure 4). But a maximum flow is always a blocking flow. As we shall see later, in an acyclic graph, blocking flows can be found more quickly than maximum flows.
The blocking flow algorithm constructs an auxiliary network G′f = (V, A′f) where A′f contains all residual arcs belonging to some shortest s-to-t path. Note if (v, w) ∈ A′f, then d(v) = d(w) + 1, so A′f is acyclic. A′f can be constructed in O(m) time using breadth-first search. Suppose we compute a blocking flow g in A′f. Then f + g is a feasible flow in G. Furthermore, one can show the s-to-t distance in Gf+g is greater than that in Gf. It follows that a maximum flow can be computed in at most n – 1 iterations, where the time for an iteration is dominated by the blocking flow computation.
Dinic10 introduced an algorithm for finding blocking flows in acyclic graphs. It uses depth-first search to find an augmenting path in A′f, augments along the path, and deletes the saturated arcs from A′f. The key to the analysis is the observation that if depth-first search retreats from a vertex, there is no path from the vertex to t in A′f and the vertex can be deleted. One can use this observation to show the running time of the algorithm is proportional to n plus the total length of the augmenting paths found; the total length term dominates. As an augmenting path has O(n) arcs and each augmentation saturates an arc, the running time of the blocking flow algorithm is O(nm). This gives an O(n²m) bound for Dinic's maximum flow algorithm.
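Here is a compact Python sketch of Dinic's method using the same conventions as the earlier fragments (names and representation ours). For convenience it measures levels from s rather than distances to t; the two views are equivalent for building the level graph. Each round computes levels by breadth-first search and then finds a blocking flow by depth-first search; the per-vertex "current arc" pointer implements the retreat rule, so arcs from which t cannot be reached are never re-examined within a round.

from collections import defaultdict, deque

def dinic_max_flow(arcs, s, t):
    """Sketch of Dinic's algorithm (blocking flows on the level graph)."""
    r = defaultdict(int)                    # residual capacities
    adj = defaultdict(list)
    for (v, w), u in arcs.items():
        if w not in adj[v]:
            adj[v].append(w)
        if v not in adj[w]:
            adj[w].append(v)                # room for reverse residual arcs
        r[(v, w)] += u

    def bfs_levels():
        level = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in level and r[(v, w)] > 0:
                    level[w] = level[v] + 1
                    queue.append(w)
        return level if t in level else None

    def blocking_dfs(v, limit, level, it):
        # Push up to `limit` units from v to t along level-increasing residual arcs.
        if v == t:
            return limit
        while it[v] < len(adj[v]):
            w = adj[v][it[v]]
            if r[(v, w)] > 0 and level.get(w, -1) == level[v] + 1:
                pushed = blocking_dfs(w, min(limit, r[(v, w)]), level, it)
                if pushed > 0:
                    r[(v, w)] -= pushed
                    r[(w, v)] += pushed
                    return pushed
            it[v] += 1                      # retreat: this arc is useless in this round
        return 0

    value = 0
    level = bfs_levels()
    while level is not None:                # at most n - 1 rounds
        it = defaultdict(int)               # per-vertex current-arc pointers
        pushed = blocking_dfs(s, float('inf'), level, it)
        while pushed > 0:
            value += pushed
            pushed = blocking_dfs(s, float('inf'), level, it)
        level = bfs_levels()
    return value

print(dinic_max_flow({('s', 'a'): 4, ('s', 'b'): 2, ('a', 'b'): 3,
                      ('a', 't'): 1, ('b', 't'): 6}, 's', 't'))  # 6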
Using Dynamic Trees
The running time of the blocking flow algorithm is dominated by changes of arc flows that do not saturate the arc. A natural approach to improving the running time bound is to use a data structure that allows one to make several such changes in one data structure operation. This can be achieved by using a data structure to remember non-saturated portions of the augmenting paths. The dynamic tree data structure29,30 was developed for this purpose.
Intuitively, the dynamic tree blocking flow algorithm uses the data structure to remember non-saturated portions of augmenting paths that may be reused later. In particular, the saved paths are linked during the augmenting path search, work that is amortized over the search for augmenting paths. Flow augmentation is performed using dynamic tree operations, at the cost of the logarithm of the corresponding augmenting path length. This application of dynamic trees reduces the running time of the blocking flow algorithm from O(nm) to O(m log n). By restricting the maximum tree size and using additional data structures, this bound can be further improved to O(m log(n²/m)),19 yielding an O(nm log(n²/m)) maximum flow algorithm.
Although dynamic trees yield the best worst-case bounds, they have so far not been used in practical implementations because most practical instances are relatively easy, and the constant factors in dynamic tree implementations are relatively large.
Push-Relabel Method
The blocking flow algorithm uses global operations (such as building the auxiliary network and augmenting along a path). The push-relabel method uses local operations. These fine-grain operations give the method more flexibility, which can be used to make the method faster in practice.
Following Karzanov,23 the push-relabel method uses preflows. Preflows are like flows, but the conservation constraints are relaxed: Σ_{(u,v)∈A} f(u,v) ≥ Σ_{(v,w)∈A} f(v,w) for all v ∈ V – {s, t} (the incoming flow is at least the outgoing flow). We define the excess by ef(v) = Σ_{(u,v)∈A} f(u,v) – Σ_{(v,w)∈A} f(v,w). A vertex with excess can push some of it to its residual neighbor. Intuitively, we want to push only to a neighbor that is closer to the sink. Karzanov23 uses distances in the auxiliary network to determine where to push flow.
The push-relabel method replaces the distances by a valid labeling, a relaxation of distances that can be updated locally. Given a flow f, we say a function d: V → N is a valid labeling if d(t) = 0 and for every (v, w) ∈ Af, we have d(v) ≤ d(w) + 1. One can show a valid labeling gives lower bounds on distances to t. In particular, if d(v) ≥ n, then there is no path from v to t in Gf, meaning v is on the source side of some minimum cut.
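Checking validity of a labeling is straightforward; a tiny sketch with our own names follows.

def is_valid_labeling(d, residual_arcs, t):
    # Valid labeling: d(t) = 0 and d(v) <= d(w) + 1 for every residual arc (v, w).
    return d[t] == 0 and all(d[v] <= d[w] + 1 for (v, w) in residual_arcs)

print(is_valid_labeling({'s': 2, 'v': 1, 't': 0}, [('s', 'v'), ('v', 't')], 't'))  # True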
The push-relabel method maintains a preflow f and a valid distance labeling d and updates them using operations push and relabel, respectively. We describe these operations next; a full description of the algorithm can be found in Goldberg and Tarjan.18
One way to start the push-relabel method is to saturate all arcs out of the source by setting their flow values to the corresponding capacity values, and to set d(s) = n (t is not reachable from s) and d(v) = 0 for v ≠ s. This creates initial flow excesses on vertices adjacent to the source. Intuitively, the method pushes flow excesses toward the sink and relabels vertices with excess if the excess cannot be pushed toward the sink. We say a vertex v is active if v ≠ t and ef (v) > 0.
The push operation applies to an arc (v, w) if v is active and d(w) < d(v); that is, w is closer to t according to d. The operation determines the maximum amount of flow that can be pushed, δ = min(ef(v), uf(v, w)), and pushes this amount of flow along (v, w) by setting uf(v, w) = uf(v, w) – δ, uf(w, v) = uf(w, v) + δ, ef(v) = ef(v) – δ, and ef(w) = ef(w) + δ. We say a push is saturating if δ = uf(v, w) and non-saturating otherwise. Note that a non-saturating push gets rid of all the excess of v; see Figure 5 for an example of a non-saturating push operation.
The relabel operation applies to an active vertex v to which no push operation applies; that is, for all (v, w) ∈ Af, d(v) ≤ d(w). The operation sets d(v) = min{n, 1 + min_{(v,w)∈Af} d(w)}. (A vertex with excess always has an outgoing residual arc.) Note the relabel operation always increases d(v); see Figure 6 for an example of a relabel operation.
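Putting the operations together, the following Python sketch implements a generic push-relabel algorithm under our earlier conventions. For simplicity it lets labels grow beyond n, so leftover excess eventually returns to the source and the final preflow is a maximum flow, rather than capping labels at n and post-processing as described above; active vertices are processed in first-in, first-out order, but any selection rule is allowed.

from collections import defaultdict, deque

def push_relabel_max_flow(arcs, s, t):
    """Generic push-relabel sketch: maintain a preflow and a valid labeling d."""
    r = defaultdict(int)                       # residual capacities
    adj = defaultdict(list)                    # residual adjacency lists
    vertices = {s, t}
    for (v, w), u in arcs.items():
        if w not in adj[v]:
            adj[v].append(w)
        if v not in adj[w]:
            adj[w].append(v)                   # room for the reverse residual arc
        r[(v, w)] += u
        vertices.update((v, w))
    n = len(vertices)

    d = {v: 0 for v in vertices}               # labels; d(t) stays 0
    d[s] = n
    excess = defaultdict(int)
    active = deque()
    for w in adj[s]:                           # initialization: saturate arcs out of s
        delta = r[(s, w)]
        if delta > 0:
            r[(s, w)] -= delta
            r[(w, s)] += delta
            excess[w] += delta
            if w != t:
                active.append(w)

    while active:
        v = active.popleft()                   # process some active vertex
        while excess[v] > 0:
            pushed = False
            for w in adj[v]:
                # push applies: residual arc (v, w) with d(w) < d(v), i.e. d(v) = d(w) + 1
                if r[(v, w)] > 0 and d[v] == d[w] + 1:
                    delta = min(excess[v], r[(v, w)])
                    r[(v, w)] -= delta
                    r[(w, v)] += delta
                    excess[v] -= delta
                    excess[w] += delta
                    if w not in (s, t) and excess[w] == delta:
                        active.append(w)       # w just became active
                    pushed = True
                    if excess[v] == 0:
                        break
            if not pushed:                     # relabel: no admissible arc leaves v
                d[v] = 1 + min(d[w] for w in adj[v] if r[(v, w)] > 0)
    return excess[t]                           # all remaining excess sits at s or t

# Hypothetical example (same network as before); expected maximum flow value 6.
print(push_relabel_max_flow({('s', 'a'): 4, ('s', 'b'): 2, ('a', 'b'): 3,
                             ('a', 't'): 1, ('b', 't'): 6}, 's', 't'))  # 6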
The time complexity of the push-relabel method is as follows. The total time for relabeling operations is O(nm), and the time for saturating pushes is O(nm) as well. The time for non-saturating pushes is O(n²m); these operations dominate the running time, so the overall bound is also O(n²m).
Note our description of the push-relabel method is generic; we have not specified the rule to select the next active vertex to process. Some operation orderings lead to better bounds. In particular, for the highest-label push-relabel algorithm, which always selects an active vertex with the highest distance label to process next, the time for non-saturating pushes and the overall time bound are O(n²√m).6 Using dynamic trees, one can get an O(nm log(n²/m)) bound18 more simply than through the blocking flow method.
The highest-label algorithm is also one of the most practical variants of the push-relabel method. However, robust practical performance requires additional heuristics. The push-relabel method is very flexible, making it easy for the algorithm designer to add heuristics. For example, one can restrict active vertices to those with d(v) < n and do post-processing to compute the final flow. One can also do periodic backward breadth-first searches to update the d(v) values to their maximum valid values. See, for example, Cherkassky and Goldberg7 and Goldberg.16
Unit Capacities
Now consider the special case of the maximum flow problem in which all input arc capacities are one. Since merging parallel arcs results in non-unit capacities, we cannot assume the graph has no parallel arcs, so we consider two cases—parallel arcs and no parallel arcs—in both of which one obtains better bounds for Dinic’s algorithm.
First, note that after an augmentation, all arcs on the augmenting path are saturated. Therefore, an arc participates in at most one augmentation per blocking flow, and the blocking flow algorithm runs in O(m) time.
Moreover, one can show the number of blocking flow computations is O(√m). To prove this bound, we divide the maximum flow computation into two phases. In the first phase, the s-to-t distance is less than √m. Since an augmentation by a blocking flow increases the distance, the first phase consists of at most √m augmentations. One can show that if the s-to-t distance is at least √m, the residual flow value is O(√m). Since an augmentation decreases the value, the number of augmentations in the second phase is O(√m).
This analysis implies an O(m^{3/2}) bound for the unit capacity problem. If G has no parallel arcs, one can also obtain an O(n^{2/3} m) bound, which is better for dense graphs.
Binary Blocking Flow Algorithm
The time bounds for the blocking flow and push-relabel algorithms are Ω(nm) in the general case, while for unit capacities, the algorithm of Dinic runs in O(min(n^{2/3}, √m) m) time. Here, we discuss the intuition behind the binary blocking flow algorithm of Goldberg and Rao,17 which narrows the gap for the integral capacity case.
Instead of assigning unit length to every residual arc, the binary blocking flow algorithm uses a zero-one length function, assigning zero length to the arcs with large residual capacity and unit length to the arcs with small residual capacity. The fact that arcs with unit length have small residual capacity allows the algorithm to come close to the unit capacity time bound.
The algorithm maintains a flow f and an upper bound F on the difference between the maximum flow value and the current flow value |f|. The algorithm proceeds in phases; each phase decreases F by a factor of two. In a phase, the value of F remains constant except for the very end of the phase, when it is decreased. A threshold parameter Δ, which is a function of F and thus remains constant during a phase, determines whether residual arcs are large or small; large arcs have a residual capacity of at least 3Δ, and the remaining ones are small.
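In code, the length assignment is a one-line test; the fragment below uses our own names, with Δ supplied by the current phase.

def binary_length(residual_capacity, delta):
    # Zero-one length function: large arcs (residual capacity >= 3 * delta)
    # get length 0, the remaining (small) arcs get length 1.
    return 0 if residual_capacity >= 3 * delta else 1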
As in the case of the unit length function, we define the auxiliary network A′f to be the graph induced by the arcs on shortest s-to-t paths. The algorithm repeatedly computes a blocking flow in A′f, updating A′f before each computation, until F decreases by a factor of two. F is decreased when either the flow value has increased by F/2 or the s-to-t distance has become sufficiently large. As in the unit capacity case, a large s-to-t distance implies a bound on the residual flow value.
This use of the binary length function leads to two problems that must be addressed: First, G′f need not be acyclic (it can contain cycles of zero-length arcs), and, second, an arc length can decrease from one to zero, and, as a side effect, the s-to-t distance may fail to increase after a blocking flow augmentation.
To deal with the first problem, the algorithm contracts strongly connected components of A′f, looks for a blocking flow in the resulting acyclic graph, and stops the blocking flow computation if the value of the flow being computed reaches Δ. One can show that since each strongly connected component is induced by large-capacity arcs, one can always route a flow of value Δ through it. At the end of each iteration, we expand A′f and extend the flow we found to Gf. One can show that a blocking flow in A′f extends to a blocking flow in Gf. This gives us two types of iterations: ones that find a blocking flow and ones that find a flow of value Δ. The s-to-t distance increases in the former case and does not decrease in the latter case.
To deal with the second problem, one can show the arcs (v, w) that may have their length decrease (the special arcs) have the property that the residual capacity of (v, w) is at least 3Δ and the residual capacity of (w, v) at least 2Δ. The algorithm contracts such arcs, assuring that after an augmentation by a blocking flow, the s-to-t distance increases even if these arc lengths decrease.
Using the dynamic tree data structure, the binary blocking flow algorithm runs in O(min(n^{2/3}, √m) m log(n²/m) log U) time, which is within a log(n²/m) log U factor of the best known upper bound for the unit capacity problem with no parallel arcs.
Conclusion
As mentioned earlier, a 2013 algorithm of Orlin27 achieves an O(nm) strongly polynomial bound for the maximum flow problem, as well as an O(n²/log n) bound for m = O(n). This result is quite sophisticated and uses a combination of ideas from maximum flow, minimum-cost flow, and dynamic connectivity algorithms. In particular, Orlin uses the binary blocking flow algorithm as a subroutine. His result closes a longstanding open problem of the existence of an O(nm) maximum flow algorithm. However, the binary blocking flow algorithm bounds suggest an O(nm/n^ε) strongly polynomial algorithm, for some constant ε > 0, may exist.
The maximum flow problem is far from being completely understood, and new and improved algorithms continue to be discovered. We would like to mention four intriguing directions that have yielded new results: The first is to generalize the push-relabel approach to allow the flow excess (incoming minus outgoing flow) at a vertex to be arbitrary—positive, negative, or zero. Given a residual arc (v, w) such that the excess at v exceeds the excess at w, one can balance the arc by increasing its flow to either saturate the arc or equalize the excesses at v and w. The flow balancing algorithm31 starts with some initial flow (such as the zero flow), dummy excesses of plus infinity at s and minus infinity at t, and repeats arc-balancing steps until all such steps move a sufficiently small amount of flow, then rounds the flow to obtain an exact maximum flow. Although the running time of this algorithm (O(n²m log U)) is not competitive with that of the best algorithms, the method is simple and extends to give a very simple and practical algorithm for a parametric version of the maximum flow problem.2,31
Another approach that yields a fast practical algorithm for maximum flow problems in computer-vision applications is that of Boykov and Kolmogorov,5 improving the basic augmenting path method by using bidirectional search to find augmenting paths, in combination with a clever method for retaining information from previous searches to speed up future ones. The Boykov-Kolmogorov method does not augment on shortest paths and has not been proved to be polynomial but can be modified to find exact shortest paths and to be polynomial without sacrificing its practical performance, indeed improving it in many cases. The resulting algorithm20 computes shortest augmenting paths incrementally, using information from previous searches. Special techniques have yielded fast maximum flow algorithms for planar graphs and for undirected graphs; for the latest results, see Borradaile and Klein,3 Borradaile et al.,4 and Karger and Levine.21
A recent series of papers, including Christiano et al.,8 Kelner et al.,24 Lee et al.,25 and Sherman,28 has studied the problem of finding an approximately maximum flow (within a factor of 1 + ε of maximum) in undirected graphs, culminating in a near-linear-time algorithm. These papers use linear algebraic techniques and electrical flows. Building on this work, Madry26 in 2013 obtained a breakthrough result, an exact algorithm for unit capacity flows in directed graphs running in Õ(m^{10/7}) time. This improves the classical O(min(n^{2/3}, √m) m) bound for the problem, suggesting that better bounds for the exact capacitated maximum flow problem in directed graphs may be possible. Whether these ideas can be used to find exact maximum flows in directed graphs with integral capacities is an intriguing open question. In summary, progress on maximum flow algorithms has been made for more than half a century, and continues.
Acknowledgment
Robert E. Tarjan was supported by National Science Foundation grant CCF-0830676 and U.S.-Israel Binational Science Foundation Grant 2006204. Some of his work was done while visiting Stanford University and supported by an Air Force Office of Scientific Research Multidisciplinary University Research Initiative grant.
Figures
Figure 2. Maximum flow (capacity/flow) and minimum cut.
Figure 3. Residual graph and residual capacities corresponding to Figure 2.
Figure 4. Example of a blocking flow that is not a maximum flow.
Figure 5. Push operation example: before (left) and after (right); zero excesses and non-residual arcs not shown.
Figure 6. Relabel operation example: before (left) and after (right).