
Local Laplacian Filters: Edge-Aware Image Processing with a Laplacian Pyramid


The Laplacian pyramid is ubiquitous for decomposing images into multiple scales and is widely used for image analysis. However, because it is constructed with spatially invariant Gaussian kernels, the Laplacian pyramid is widely believed to be ill-suited for representing edges, as well as for edge-aware operations such as edge-preserving smoothing and tone mapping. To tackle these tasks, a wealth of alternative techniques and representations have been proposed, for example, anisotropic diffusion, neighborhood filtering, and specialized wavelet bases. While these methods have demonstrated successful results, they come at the price of additional complexity, often accompanied by higher computational cost or the need to postprocess the generated results. In this paper, we show state-of-the-art edge-aware processing using standard Laplacian pyramids. We characterize edges with a simple threshold on pixel values that allows us to differentiate large-scale edges from small-scale details. Building upon this result, we propose a set of image filters to achieve edge-preserving smoothing, detail enhancement, tone mapping, and inverse tone mapping. The advantage of our approach is its simplicity and flexibility, relying only on simple point-wise nonlinearities and small Gaussian convolutions; no optimization or postprocessing is required. As we demonstrate, our method produces consistently high-quality results, without degrading edges or introducing halos.


1. Introduction

Laplacian pyramids have been used to analyze images at multiple scales for a broad range of applications such as compression,6 texture synthesis,18 and harmonization.32 However, these pyramids are commonly regarded as a poor choice for applications in which image edges play an important role, for example, edge-preserving smoothing or tone mapping. The isotropic, spatially invariant, smooth Gaussian kernels on which the pyramids are built are considered almost antithetical to edge discontinuities, which are precisely located and anisotropic by nature. Further, the decimation of the levels, that is, the successive reduction by a factor of 2 of the resolution, is often criticized for introducing aliasing artifacts, leading some researchers (e.g., Li et al.21) to recommend its omission. These arguments are often cited as a motivation for more sophisticated schemes such as anisotropic diffusion,1, 29 neighborhood filters,19, 34 edge-preserving optimization,4, 11 and edge-aware wavelets.12

While Laplacian pyramids can be implemented using simple image-resizing routines, other methods rely on more sophisticated techniques. For instance, the bilateral filter relies on a spatially varying kernel,34 optimization-based methods (e.g., Fattal et al.,13 Farbman et al.,11 Subr et al.,31 and Bhat et al.4) minimize a spatially inhomogeneous energy, and other approaches build dedicated basis functions for each new image (e.g., Szeliski,33 Fattal,12 and Fattal et al.15). This additional level of sophistication is also often associated with practical shortcomings. The parameters of anisotropic diffusion are difficult to set because of the iterative nature of the process, neighborhood filters tend to oversharpen edges,5 and methods based on optimization do not scale well due to the algorithmic complexity of the solvers. While some of these shortcomings can be alleviated in postprocessing, for example, bilateral filtered edges can be smoothed,3, 10, 19 this induces additional computation and parameter setting, and a method producing good results directly is preferable. In this paper, we demonstrate that state-of-the-art edge-aware filters can be achieved with standard Laplacian pyramids. We formulate our approach as the construction of the Laplacian pyramid of the filtered output. For each output pyramid coefficient, we render a filtered version of the full-resolution image, processed to have the desired properties according to the corresponding local image value at the same scale, build a new Laplacian pyramid from the filtered image, and then copy the corresponding coefficient to the output pyramid. The advantage of this approach is that while it may be nontrivial to produce an image with the desired property everywhere, it is often easier to obtain the property locally. For instance, global detail enhancement typically requires a nonlinear image decomposition (e.g., Fattal et al.,14 Farbman et al.,11 and Subr et al.31), but enhancing details in the vicinity of a pixel can be done with a simple S-shaped contrast curve centered on the pixel intensity. This local transformation only achieves the desired effect in the neighborhood of a pixel, but is sufficient to estimate the fine-scale Laplacian coefficient of the output. We repeat this process for each coefficient independently and collapse the pyramid to produce the final output.

We motivate this approach by analyzing its effect on step edges and show that edges can be differentiated from small-scale details with a simple threshold on color differences. We propose an algorithm that has O(N log N) complexity for an image with N pixels. While our algorithm is not as fast as other techniques, it can achieve visually compelling results that are hard to obtain with previous work. We demonstrate our approach by implementing a series of edge-aware filters such as edge-preserving smoothing, detail enhancement, tone mapping, and inverse tone mapping. We provide numerous results, including large-amplitude image transformations. None of them exhibit halos, thereby showing that high-quality halo-free results can indeed be obtained using only the Laplacian pyramid, which was previously thought impossible.

Contributions. The main contribution of this work is a flexible approach to achieve edge-aware image processing through simple point-wise manipulation of Laplacian pyramids. Our approach builds upon a new understanding of how image edges are represented in Laplacian pyramids and how to manipulate them in a local fashion. Based on this, we design a set of edge-aware filters that produce high-quality halo-free results (Figure 1).


2. Related Work

Edge-aware image processing. Edge-aware image manipulation has already received a great deal of attention and we refer to books and surveys for an in-depth presentation.1, 20, 27 Recently, several methods have demonstrated satisfying results with good performance (e.g., Chen et al.,7 Farbman et al.,11 Fattal,12 Subr et al.,31 Criminisi et al.,8 He et al.,17 and Kass and Solomon19). Our practical contribution is to provide filters that consistently achieve results at least as good, have easy-to-set parameters, can be implemented with only basic image-resizing routines, are noniterative, and do not rely on optimization or postprocessing. In particular, unlike gradient-domain methods (e.g., Fattal et al.13), we do not need to solve the Poisson equation which may introduce artifacts with nonintegrable gradient fields. From a conceptual standpoint, our approach is based on image pyramids and is inherently multiscale, which differentiates it from methods that are expressed as a two-scale decomposition (e.g., Chen et al.,7 Subr et al.,31 and He et al.17).

Pyramid-based edge-aware filtering. As described earlier, pyramids are not the typical representation of choice for filtering an image in an edge-preserving way, and only a few techniques along these lines have been proposed. A first approach is to directly rescale the coefficients of a Laplacian pyramid; however, this typically produces halos.21 While halos may be tolerable in the context of medical imaging (e.g., Vuylsteke and Schoeters,36 and Dippel et al.9), they are unacceptable in photography.

Fattal et al.13 avoid halos by using a Gaussian pyramid to compute scaling factors applied to the image gradients. They reconstruct the final image by solving the Poisson equation. In comparison, our approach directly manipulates the Laplacian pyramid of the image and does not require global optimization. Fattal et al.14 use a multiscale image decomposition to combine several images for detail enhancement. Their decomposition is based on repeated applications of the bilateral filter. Their approach is akin to building a Laplacian pyramid but without decimating the levels and with a spatially varying kernel instead of a Gaussian kernel. However, their study is significantly different from ours because it focuses on multi-image combination and speed. In a similar spirit, Farbman et al.11 compute a multiscale edge-preserving decomposition with a least-squares scheme instead of bilateral filtering. This work also differs from ours since its main concern is the definition and application of a new optimization-based filter. In the context of tone mapping, Mantiuk et al.23 model human perception with a Gaussian pyramid. The final image is the result of an optimization process, which departs from our goal of working only with pyramids.

Fattal12 describes wavelet bases that are specific to each image. He takes edges explicitly into account to define the basis functions, thereby reducing the correlation between pyramid levels. From a conceptual point of view, our work and Fattal’s are complementary. Whereas he designed pyramids in which edges do not generate correlated coefficients, we seek to better understand this correlation to preserve it during filtering.

Li et al.21 demonstrate a tone-mapping operator based on a generic set of spatially invariant wavelets, countering the popular belief that such wavelets are not appropriate for edge-aware processing. Their method relies on a corrective scheme to preserve the spatial and intrascale correlation between coefficients, and they also advocate computing each level of the pyramid at full resolution to prevent aliasing. However, when applied to Laplacian pyramids, strong corrections are required to avoid halos, which prevents a large increase of the local contrast. In comparison, in this work, we show that Laplacian pyramids can produce a wide range of edge-aware effects, including extreme detail amplification, without introducing halos.

Gaussian pyramids are closely related to the concept of Gaussian scale-space defined by filtering an image with a series of Gaussian kernels of increasing size. While these approaches are also concerned with the correlation between scales created by edges, they are used mostly for purposes of analysis (e.g., Witkin37 and Witkin et al.38).

Background on Gaussian and Laplacian pyramids. Our approach is based on standard image pyramids, whose construction we summarize briefly (for more detail, see Burt and Adelson6). Given an image I, its Gaussian pyramid is a set of images {Gl} called levels, representing progressively lower resolution versions of the image, in which high-frequency details progressively disappear. In the Gaussian pyramid, the bottom-most level is the original image, G0 = I, and Gl+1 = downsample(Gl) is a low-pass version of Gl with half the width and height. The filtering and decimation process is iterated n times, typically until the level Gn has only a few pixels. The Laplacian pyramid is a closely related construct, whose levels {Ll} represent details at different spatial scales, decomposing the image into roughly separate frequency bands. Levels of the Laplacian pyramid are defined by the details that distinguish successive levels of the Gaussian pyramid, Ll = Gl − upsample(Gl+1), where upsample(·) is an operator that doubles the image size in each dimension using a smooth kernel. The top-most level of the Laplacian pyramid, also called the residual, is defined as Ln = Gn and corresponds to a tiny version of the image. A Laplacian pyramid can be collapsed to reconstruct the original image by recursively applying Gl = Ll + upsample(Gl+1) until G0 = I is recovered.
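
To make this background concrete, here is a minimal numpy sketch of the construction and reconstruction (our illustration, not code from the original paper); it assumes the separable 5-tap binomial kernel of Burt and Adelson6 and reflective boundary handling.

```python
import numpy as np
from scipy.ndimage import convolve1d

# 5-tap binomial generating kernel of Burt and Adelson (a = 0.4).
KERNEL = np.array([0.05, 0.25, 0.4, 0.25, 0.05])

def blur(img):
    # Separable low-pass filter applied along both axes.
    tmp = convolve1d(img, KERNEL, axis=0, mode='reflect')
    return convolve1d(tmp, KERNEL, axis=1, mode='reflect')

def downsample(img):
    # Low-pass, then drop every other row and column.
    return blur(img)[::2, ::2]

def upsample(img, shape):
    # Zero-interleave to the target shape, then low-pass; the factor 4
    # compensates for the inserted zeros so the DC gain stays 1.
    up = np.zeros(shape)
    up[::2, ::2] = img
    return 4.0 * blur(up)

def gaussian_pyramid(img, n_levels):
    # G[0] is the image itself; each level halves the resolution.
    G = [img]
    for _ in range(n_levels):
        G.append(downsample(G[-1]))
    return G

def laplacian_pyramid(img, n_levels):
    # L[0..n-1] hold band-pass details; the last entry is the residual G[n].
    G = gaussian_pyramid(img, n_levels)
    L = [G[l] - upsample(G[l + 1], G[l].shape) for l in range(n_levels)]
    L.append(G[-1])
    return L

def collapse(L):
    # Invert the decomposition: G[l] = L[l] + upsample(G[l+1]).
    out = L[-1]
    for l in range(len(L) - 2, -1, -1):
        out = L[l] + upsample(out, L[l].shape)
    return out
```

By construction, collapse(laplacian_pyramid(I, n)) reproduces I exactly, whatever kernel is used.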


3. Dealing with Edges in Laplacian Pyramids

The goal of edge-aware processing is to modify an input signal I to create an output I′, such that the large discontinuities of I, that is, its edges, remain in place, and such that their profiles retain the same overall shape. For example, the amplitude of significant edges may be increased or reduced, but the edge transitions should not become smoother or sharper. The ability to process images in this edge-aware fashion is particularly important for techniques that manipulate the image in a spatially varying way, such as image enhancement or tone mapping. Failure to account for edges in these applications leads to distracting visual artifacts such as halos, shifted edges, or reversals of gradients. In the following discussion, for the sake of illustration, we focus on the case where we seek to reduce the edge amplitude—the argument when increasing the edge amplitude is symmetric.

In this work, we characterize edges by the magnitude of the corresponding discontinuity in a color space that depends on the application; we assume that variations due to edges are larger than those produced by texture. This model is similar to many existing edge-aware filtering techniques (e.g., Aubert and Kornprobst1 and Paris et al.27); we will discuss later the influence that this assumption has on our results. Because of this difference in magnitude, Laplacian coefficients representing an edge also tend to be larger than those due to texture. A naive approach to decrease the edge amplitude while preserving the texture is to truncate these large coefficients. While this creates an edge of smaller amplitude, it ignores the actual “shape” of these large coefficients and assigns the same lower value to all of them. This produces an overly smooth edge, as shown in Figure 2.

Intuitively, a better solution is to scale down the coefficients that correspond to edges, to preserve their profile, and to keep the other coefficients unchanged, so that only the edges are altered. However, it is unclear how to separate these two kinds of coefficients since edges with different profiles generate different coefficients across scales. On the other hand, according to our model, edges are easy to identify in image space; a threshold on color differences suffices to differentiate edges from variations due to texture. This is a key aspect of our approach: we generate new pyramid coefficients by working primarily on the input image itself, rather than altering the pyramid coefficients directly.

The overall design of our algorithm derives from this insight: we build an approximation of the desired output image specific to each pyramid coefficient. This is a major difference from the existing literature. Whereas previous techniques are formulated in terms of optimization (e.g., Farbman et al.11), PDEs (e.g., Perona and Malik29), or local averaging (e.g., Tomasi and Manduchi34), we express our filter through the computation of these local image approximations together with standard image pyramid manipulations. In practice, we use locally processed versions of the input to recompute values for each pyramid coefficient, and combine all of these new coefficient values into the final result. For each coefficient at location (x, y) and level l, we first determine the region in the input image on which this coefficient depends. To reduce the amplitude of edges, for example, we clamp all the pixel values in that region so that the difference from the average value does not exceed a user-provided threshold. This processed image has the desired property that edges are now limited in amplitude, to at most twice the threshold. This also has the side effect of flattening the details across the edge. As we discuss below, these details are not lost; they are actually captured by pyramid coefficients centered on the other side of the edge, as illustrated in Figure 3. Then, we compute the Laplacian pyramid of this processed image to create coefficients that capture this property. In particular, this gives us the value of the coefficient (x, y, l) that we seek. Another way of interpreting our method is that we locally filter the image, for example, through a local contrast decrease, and then determine the corresponding coefficient in the Laplacian pyramid. We repeat this process, such that each coefficient in the pyramid is computed.
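
The clamping step itself is a single point-wise operation on the region; a minimal numpy sketch, where g stands for the local reference value and sigma for the user-provided threshold described above:

```python
import numpy as np

def clamp_region(R, g, sigma):
    # Limit each pixel's deviation from the reference value g to sigma,
    # so any edge inside the region R ends up with amplitude at most 2 * sigma.
    return g + np.clip(R - g, -sigma, sigma)
```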

Detail preservation. As mentioned earlier, a reasonable concern at this point is that the clamped image has lost details in the thresholded regions, which in turn could induce a loss in the final output. However, the loss of details does not transfer to our final result. Intuitively, the clamped details are on “the other side of the edge” and are represented by other coefficients. Applying this scheme to all pyramid coefficients accurately represents the texture on each side of the edge, while capturing the reduction in edge amplitude (Figure 3). Further, clamping affects only half of the edge and, by combining coefficients on “both sides of the edge,” our approach reconstructs an edge profile that closely resembles the input image, that is, the output profiles do not suffer from oversmoothing. Examining the pyramid coefficients reveals that our scheme fulfills our initial objective, that is, that the edge coefficients are scaled down while the other coefficients representing the texture are preserved (Figure 2).


4. Local Laplacian Filtering

We now formalize the intuition gained in the previous section and introduce Local Laplacian Filtering, our new method for edge-aware image processing based on the Laplacian pyramid. A visual overview is given in Figure 4 and the pseudo-code is provided in Algorithm 1.

In Local Laplacian Filtering, an input image is processed by constructing the Laplacian pyramid {L[I′]} of the output, one coefficient at a time. For each coefficient (x, y, l), we generate an intermediate image Ĩ by applying a point-wise monotonic remapping function rg(·) to the original full-resolution image. This remapping function, whose design we discuss later, depends on the local image value from the Gaussian pyramid g = Gl(x, y) and the user parameter σ which is used to distinguish edges from details. We compute the pyramid for the intermediate image, {L[Ĩ]}, and copy the corresponding coefficient to the output {L[I′]}. After all coefficients of the output pyramid have been computed, we collapse the output pyramid to get the final result.

A direct implementation of this algorithm yields a complexity in O(N^2) with N being the number of pixels in the image, since each coefficient entails the construction of another pyramid with O(N) pixels. However, this cost can be reduced in a straightforward way by processing only the subpyramid needed to evaluate Ll[Ĩ](x, y), illustrated in Figure 4. The base of this subpyramid lies within a K × K subregion R of the input image I, where K = O(2^l); for Laplacian pyramids built using a standard 5-tap interpolation filter, it can be shown that K = 3(2^(l+2) − 1). Put together with the fact that level l contains O(N/4^l) coefficients, each level requires the manipulation of O(N) coefficients in total. Since there are O(log N) levels in the pyramid, the overall complexity of our algorithm is O(N log N). Later we will see that some applications only require a fixed number of levels to be processed or limit the depth of the subpyramids to a fixed value, reducing the complexity of our algorithm further.
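
For reference, the footprint formula translates directly into code; a small helper of our own (not part of the original algorithm):

```python
def footprint_size(l):
    # Side length K of the full-resolution subregion R that a single
    # Laplacian coefficient at level l depends on, for the standard 5-tap
    # filter: l = 0, 1, 2, 3 gives K = 9, 21, 45, 93, ...
    return 3 * (2 ** (l + 2) - 1)
```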

Remapping function for gray-scale images. We assume the user has provided a parameter σ such that intensity variations smaller than σ should be considered fine-scale details and larger variations are edges. As a center point for this function we use g = Gl(x, y), which represents the image intensity at the location and scale where we compute the output pyramid coefficient. Intuitively, pixels closer than σ to g should be processed as details and those farther than σ away should be processed as edges. We differentiate their treatment by defining two functions rd and re, such that r(i) = rd(i) if |i − g| ≤ σ and r(i) = re(i) otherwise. Since we require r to be monotonically increasing, rd and re must have this property as well. Furthermore, to avoid the creation of spurious discontinuities, we constrain rd and re to be continuous by requiring that rd(g ± σ) = re(g ± σ).

Algorithm 1. Local Laplacian filtering. Input: image I, parameter σ, remapping function r; output: image I′. Compute the input Gaussian pyramid {Gl}; for each coefficient (x, y, l), set g ← Gl(x, y), compute the remapped image Ĩ = r(I), build the Laplacian pyramid of Ĩ (or only the needed subpyramid), and copy Ll[Ĩ](x, y) into the output pyramid; finally, collapse the output pyramid to obtain I′. A code sketch follows.
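
In code, the algorithm amounts to a few lines on top of the pyramid helpers sketched in Section 2. The direct sketch below rebuilds a full pyramid per coefficient for clarity, so it runs in O(N^2) rather than exploiting the subpyramid optimization; remap stands for the point-wise function r defined in this section.

```python
def local_laplacian_filter(I, remap, sigma, n_levels):
    # remap(I, g, sigma) applies the point-wise function r to the whole
    # image, centered on the Gaussian coefficient g.
    G = gaussian_pyramid(I, n_levels)
    L_out = [np.zeros_like(Gl) for Gl in G]
    L_out[-1] = G[-1]  # the residual is copied from the input pyramid
    for l in range(n_levels):
        for y in range(G[l].shape[0]):
            for x in range(G[l].shape[1]):
                g = G[l][y, x]
                I_tilde = remap(I, g, sigma)          # intermediate image
                L_tilde = laplacian_pyramid(I_tilde, n_levels)
                L_out[l][y, x] = L_tilde[l][y, x]     # copy one coefficient
    return collapse(L_out)
```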

The function rd modifies the fine-scale details by altering the oscillations around the value g. In our applications we process positive and negative details symmetrically, letting us write:

rd(i) = g + sign(i − g) σ fd(|i − g| / σ)     (1)

where fd is a smooth function mapping from [0, 1] to [0, 1] that controls how details are modified. The advantage of this formulation is that it depends only on the amplitude of the detail |i − g| relative to the parameter σ, that is, |i − g|/σ = 1 corresponds to a detail of maximum amplitude according to the user-defined parameter. Analogously, re is a function that modifies the amplitude of edges, which we again formulate in a symmetric way:

re(i) = g + sign(i − g) (fe(|i − g| − σ) + σ)     (2)

where fe is a smooth nonnegative function defined over [0, ∞). In this formulation, re depends only on the amplitude above the user threshold σ, that is, |i − g| − σ. The function fe controls how the edge amplitude is modified, since an edge of amplitude σ + a becomes an edge with amplitude σ + fe(a). For our previous 1D range compression example, clipping edges corresponds to fe = 0, which limits the amplitude of all edges to σ. Useful specific choices for rd and re are described in the next section and are illustrated in Figure 5.

The advantage of the functional forms defined in Equations (1) and (2) is that they ensure that r is continuous and increasing, and the design of a specific filter boils down to defining the two point-wise functions fd and fe that each have clear roles: fd controls the amplification or attenuation of details while fe controls the amplification or attenuation of edges.
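
A direct transcription of Equations (1) and (2), assuming fd maps [0, 1] to [0, 1] with fd(1) = 1 and fe(0) = 0, which makes r continuous at |i − g| = σ:

```python
import numpy as np

def make_remap(f_d, f_e):
    # Assemble r from a detail curve f_d and an edge curve f_e.
    def remap(i, g, sigma):
        d = i - g
        a = np.abs(d)
        # Equation (1): details, |i - g| <= sigma.
        detail = g + np.sign(d) * sigma * f_d(np.minimum(a, sigma) / sigma)
        # Equation (2): edges, |i - g| > sigma.
        edge = g + np.sign(d) * (f_e(np.maximum(a - sigma, 0.0)) + sigma)
        return np.where(a <= sigma, detail, edge)
    return remap

# Example: the power curve of Section 5.2 with edges left unchanged.
remap = make_remap(f_d=lambda x: x ** 0.5, f_e=lambda a: a)
```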

Extension to color images. To handle color, it is possible to treat only the luminance channel and reintroduce chrominance after image processing (Section 5.3). However, our approach extends naturally to color images as well, letting us deal directly with 3D vectors representing, for example, the RGB or CIE-Lab channels. Algorithm 1 still applies, and we need only to update rd and re, using bold typeface to indicate vectors:

rd(i) = g + unit(i − g) σ fd(||i − g|| / σ)     (3)

re(i) = g + unit(i − g) (fe(||i − g|| − σ) + σ)     (4)

with unit(v) = v/||v|| if v ≠ 0, and 0 otherwise. These equations define details as colors within a ball of radius σ centered at g and edges as the colors outside it. They also do not change the roles of fd and fe, letting the same 1D functions that modify detail and edges in the gray-scale case be applied to generate similar effects in color. For images whose color channels are all equal, these formulas reduce to the gray-scale formulas of Equations (1) and (2).
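
The vector form of Equations (3) and (4) is a small variation on the gray-scale sketch above; here i and g are arrays whose last axis holds the color channels:

```python
def make_remap_color(f_d, f_e):
    # Vector remapping: details are colors within a ball of radius sigma
    # around g; unit(v) = v / ||v|| for v != 0, and 0 otherwise.
    def remap(i, g, sigma):
        d = i - g
        norm = np.linalg.norm(d, axis=-1, keepdims=True)
        safe = np.where(norm > 0, norm, 1.0)   # avoid division by zero
        unit = np.where(norm > 0, d / safe, 0.0)
        detail = g + unit * sigma * f_d(np.minimum(norm, sigma) / sigma)
        edge = g + unit * (f_e(np.maximum(norm - sigma, 0.0)) + sigma)
        return np.where(norm <= sigma, detail, edge)
    return remap
```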


5. Applications and Results

We now demonstrate how to realize practical image processing applications using our approach and discuss implementation details. First we address edge-preserving smoothing and detail enhancement, followed by tone mapping and related tools. We validate our method with images used previously in the literature10–13, 27 and demonstrate that our method produces artifact-free results.

*  5.1. Implementation

We use the pyramids defined by Burt and Adelson,6 based on 5 × 5 kernels. On a 2.26 GHz Intel Xeon CPU, we process a one-megapixel image in about a minute using a single thread. This time can be halved by limiting the depth of the intermediate pyramids to at most five levels, that is, by applying the remapping to level max(0, l − 3) rather than always starting from the full-resolution image. This amounts to applying the remapping to a downsampled version of the image when processing coarse pyramid levels. The resulting images are visually indistinguishable from the full-pyramid process, with a PSNR on the order of 30 to 40 dB. While this performance is slower than previous work, our algorithm is highly data parallel and can easily exploit a multicore architecture. Using OpenMP, we obtain an 8× speed-up on an 8-core machine, bringing the running time down to 4 seconds.

*  5.2. Detail manipulation

To modify the details of an image, we define an S-shaped point-wise function as is classically used for the local manipulation of contrast. For this purpose, we use a power curve fd(Δ) = Δ^α, where α > 0 is a user-defined parameter. Values of α larger than 1 smooth the details out, while values smaller than 1 increase their contrast (Figures 5 and 6). To restrict our attention to the details of an image, we set the edge-modifying function to the identity, fe(a) = a.

In the context of detail manipulation, the parameter σ controls at what magnitude signal variations are considered edges and therefore preserved. Large values allow the filter to alter larger portions of the signal and yield larger visual changes (Figure 7). In its basic form, detail manipulation is applied at all scales, but one can also control which scales are affected by limiting processing to a subset of the pyramid levels (Figure 6c, d, e). While this control is discrete, the changes are gradual, and one can interpolate between the results from two subsets of levels if continuous control is desired. Our results from Figures 6 and 7 are comparable to results of Farbman et al.11; however, we do not require the complex machinery of a multiresolution preconditioned conjugate gradient solver. Note that our particular extension to color images allows us to boost the color contrast as well (Figures 6, 7, and 8).

Reducing noise amplification. As in other techniques for texture enhancement, increasing the contrast of the details may make noise and artifacts from lossy image compression more visible. We mitigate this issue by limiting the amplification of the smallest values of Δ. In our implementation, when α < 1, we compute fd(Δ) = τΔ^α + (1 − τ)Δ, where τ is a smooth step function equal to 0 if Δ is less than 1% of the maximum intensity, 1 if it is more than 2%, with a smooth transition in between. All the results in this paper and supplemental material are computed with this function.
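
A sketch of this blended curve; we read the text as evaluating the step on the absolute detail amplitude σΔ (our interpretation), with a standard Hermite smooth step between the 1% and 2% thresholds:

```python
def smooth_step(x, lo, hi):
    # Hermite smooth step: 0 below lo, 1 above hi, smooth and monotone between.
    t = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def make_noise_aware_fd(alpha, sigma, max_intensity=1.0):
    # For alpha < 1, fade from the identity (no amplification) to the power
    # curve as the absolute amplitude sigma * delta crosses 1%-2% of the
    # maximum intensity, so compression noise is not boosted.
    def f_d(delta):
        tau = smooth_step(sigma * delta,
                          0.01 * max_intensity, 0.02 * max_intensity)
        return tau * delta ** alpha + (1.0 - tau) * delta
    return f_d
```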

*  5.3. Tone manipulation

Our approach can also be used for reducing the intensity range of a high-dynamic-range (HDR) image, according to the standard tone mapping strategy of compressing the large-scale variations while preserving (or enhancing) the details.35 In our framework, we manipulate large-scale variations by defining a point-wise function modifying the edge amplitude, fe(a) = βa, where β ≥ 0 is a user-defined parameter (Figure 5).

In our implementation of tone manipulation, we process the image intensity channel only and keep the color unchanged.10 We compute an intensity image Ii as a fixed weighted average of the RGB channels Ir, Ig, and Ib, and color ratios (ρr, ρg, ρb) = (Ir, Ig, Ib)/Ii. We apply our filter on the log intensities log(Ii),35 using the natural logarithm. For tone mapping, we set our filter with α ≤ 1 so that details are preserved or enhanced, and β < 1 so that edges are compressed. This produces new values log(I′i), which we must then map to the displayable range of [0, 1]. We remap log(I′i) by first offsetting its values to make its maximum 0, then scaling them so that they cover a user-defined range.10, 21 In our implementation, we estimate a robust maximum and minimum with the 99.5th and 0.5th percentiles, and we set the scale factor so that the output dynamic range is 100:1 for the linear intensities. Finally, we multiply the intensity by the color ratios (ρr, ρg, ρb) to obtain the output RGB channels, then gamma correct with an exponent of 1/2.2 for display. We found that fixing the output dynamic range not only makes it easy to achieve a consistent look but also constrains the system. As a result, the σ and β parameters have similar effects, both controlling the balance between local and global contrast in the rendered image (Figure 9). From a practical standpoint, we advise keeping σ fixed and varying the slope β between 0, where the local contrast is responsible for most of the dynamic range, and 1, where the global contrast dominates. Unless otherwise specified, we use σ = log(2.5), which gave consistently good results in our experiments. Since we work in the log domain, this value corresponds to a ratio between pixel intensities. It does not depend on the dynamic range of the scene, and assumes only that the input HDR image measures radiance up to scale.
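
Putting the pieces together, a sketch of this pipeline built on the helpers above (the 20/40/1 intensity weights are our assumption; any fixed weighted average of the RGB channels fits the description, while the percentiles, the 100:1 target range, and the 1/2.2 gamma follow the text):

```python
def tone_map(rgb, alpha=0.75, beta=0.3, sigma=np.log(2.5), n_levels=8):
    # Intensity / color-ratio decomposition, keeping color unchanged.
    Ii = np.maximum((20.0 * rgb[..., 0] + 40.0 * rgb[..., 1]
                     + rgb[..., 2]) / 61.0, 1e-12)
    ratios = rgb / Ii[..., None]

    # Filter the log intensities: compress edges (beta < 1), keep details.
    remap = make_remap(f_d=lambda x: x ** alpha, f_e=lambda a: beta * a)
    log_out = local_laplacian_filter(np.log(Ii), remap, sigma, n_levels)

    # Robust remapping: offset the 99.5th percentile to 0, then scale so
    # the linear output covers a 100:1 dynamic range.
    hi, lo = np.percentile(log_out, [99.5, 0.5])
    log_out = (log_out - hi) * (np.log(100.0) / max(hi - lo, 1e-12))

    out = np.exp(log_out)[..., None] * ratios     # reapply the color ratios
    return np.clip(out, 0.0, 1.0) ** (1.0 / 2.2)  # gamma correct for display
```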

Our tone mapping operator builds upon standard elements from previous work that could be substituted for others. For instance, one could instead use a sigmoid to remap the intensities to the display range30 or use a different color management method (e.g., Mantiuk et al.24). Also, we did not apply any additional “beautifying curve” or increased saturation as is commonly done in photo editing software. Our approach produces a clean output image that can be post-processed in this way if desired.

Range compression is a good test case to demonstrate the abilities of our pyramid-based filters because of the large modification involved. For high compression, even subtle inaccuracies can become visible, especially at high-contrast edges. In our experiments, we did not observe aliasing or oversharpening artifacts, even in cases where other methods suffer from them (Figures 10 and 11). We also stress-tested our operator by producing results with a low global contrast (β = 0) and high local details (α = 0.25). In general, the results produced by our method did not exhibit any particular problems (Figure 12). We compare exaggerated renditions of our method with Farbman et al.11 and Li et al.21 Our method produces consistent results without halos, whereas the other methods either create halos or fail to exaggerate detail (Figure 13).

One typical difficulty we encountered is that sometimes the sky interacts with other elements to form high-frequency textures that undesirably get amplified by our detail-enhancing filter (Figures 8b and 14). Such “misinterpretation” is common to all low-level filters without semantic understanding of the scene, and typically requires user feedback to correct.22

We also experimented with inverse tone mapping, using slope values β larger than 1 to increase the dynamic range of a normal image. Since we operate on log intensities, roughly speaking, the linear dynamic range gets exponentiated by β. Applying our tone-mapping operator on these range-expanded results gives images close to the originals, typically with a PSNR between 25 and 30 dB for β = 2.5. This shows that our inverse tone mapping preserves the image content well. While a full-scale study on an HDR monitor is beyond the scope of this paper, we believe that our simple approach can complement other relevant techniques (e.g., Masia et al.25). Sample HDR results are provided in supplemental material.

*  5.4. Discussion

While our method can fail in the presence of excessive noise or when extreme parameter settings are used (e.g., the lenna picture in supplemental material has a high level of noise), we found that our filters are very robust and behave well over a broad range of settings. Figure 15 shows a variety of parameter values applied to the same image, and the results are consistently satisfying, high-quality, and halo-free; many more such examples are provided in supplemental material. While the goal of edge-aware processing can be ill-defined, the results that we obtain show that our approach allows us to realize many edge-aware effects with intuitive parameters and a simple implementation. The current shortcoming of our approach is its running time. We can mitigate this issue thanks to the multiscale nature of our algorithm, which allows us to generate quick previews that are faithful to the full-resolution results (Figure 16). Furthermore, the algorithm is highly parallelizable and should lend itself to a fast GPU implementation. Beyond these practical aspects, our main contribution is a better characterization of the multiscale properties of images. Many problems related to photo editing are grounded in these properties and we believe that a better understanding can have benefits beyond the applications demonstrated in this paper.


6. Conclusion

Link to recent work. We first presented this work at the ACM SIGGRAPH conference in 2011. The main difference from our original article is Section 3, which now focuses on qualitative properties of edges; a formal discussion of these properties can be found in Paris et al.28 Since then, we have also extended this work with a fast algorithm that makes Local Laplacian Filters practical, an analysis that shows their relationship to the bilateral filter, an application to photographic style transfer via the transfer of gradient histograms, and additional comparisons with existing techniques such as the Guided Filter.17 These results are described in Aubry et al.2

Although Local Laplacian Filters can reduce image details, Xu et al.39, 40 have shown that they do not fully remove them and have proposed filters that completely suppress details for applications such as cartoon rendering and mosaic texture removal. By addressing the extreme detail removal problem, this work is complementary to Local Laplacian Filters, which perform well at extreme detail increase. Hadwiger et al.16 have introduced a dedicated data structure to process very large images efficiently and have demonstrated its application to Local Laplacian Filtering.

Closing note. We have presented a new technique for edge-aware image processing based solely on the Laplacian pyramid. It is conceptually simple, allows for a wide range of edge-aware filters, and consistently produces artifact-free images. We demonstrate high-quality results over a large variety of images and parameter settings, confirming the method’s robustness. Our results open new perspectives on multiscale image analysis and editing since Laplacian pyramids were previously considered as ill-suited for manipulating edges. Given the wide use of pyramids and the need for edge-aware processing, we believe our new insights can have a broad impact in the domain of image editing and its related applications.


Acknowledgments

We thank Ted Adelson, Bill Freeman, and Frédo Durand for inspiring discussions and encouragement; Alan Erickson for the Orion image; and the anonymous reviewers for their constructive comments. This work was supported in part by an NSERC Postdoctoral Fellowship, the Quanta T-Party, NGA NEGI-1582-04-0004, MURI Grant N00014-06-1-0734, and gifts from Microsoft, Google, and Adobe. We thank Farbman et al. and Li et al. for their help with comparisons.


Figures

F1 Figure 1. We demonstrate edge-aware image filters based on the manipulation of Laplacian pyramids. Our approach produces high-quality results, without degrading edges or introducing halos, even at extreme settings. Our approach builds upon standard image pyramids and enables a broad range of effects via simple point-wise nonlinearities (shown in corners). For an example image (a), we show results of tone mapping using our method, creating a natural rendition (b) and a more exaggerated look that enhances details as well (c). Laplacian pyramids have previously been considered unsuitable for such tasks, but our approach shows otherwise.

F2 Figure 2. Range compression applied to a step edge with fine details (a). The different versions of the edge are offset vertically so that their profiles are clearly visible. Truncating the Laplacian coefficients smooths the edge (red), an issue which Li et al.21 have identified as a source of artifacts in tone mapping. In comparison, our approach (blue) preserves the edge sharpness and very closely reproduces the desired result (black). Observing the shape of the first two levels (b, c) shows that clipping the coefficients significantly alters the shape of the signal (red vs. orange). The truncated coefficients form wider lobes whereas our approach produces profiles nearly identical to the input (blue vs. orange).

F3 Figure 3. Simple view of our range compression approach, which is based on thresholding and local processing. For a step-like signal similar to the one in Figure 2, our method effectively builds two Laplacian pyramids, corresponding to clipping the input based on the signal value to the left and right of the step edge, then merging their coefficients as indicated by the color coding.

F4 Figure 4. Overview of the basic idea of our approach. For each pixel in the Gaussian pyramid of the input (red dot), we look up its value g. Based on g, we remap the input image using a point-wise function, build a Laplacian pyramid from this intermediate result, then copy the appropriate pixel into the output Laplacian pyramid. This process is repeated for each pixel over all scales until the output pyramid is filled, which is then collapsed to give the final result. For more efficient computation, only parts of the intermediate pyramid need to be generated.

F5 Figure 5. Family of point-wise functions for edge-aware manipulation described in Sections 5.2 and 5.3. The parameters α and β let us control how detail and tone are processed respectively. To compute a given Laplacian coefficient in the output, we filter the original image point-wise using a nonlinear function r(i) of the form shown. This remapping function is parametrized by the Gaussian pyramid coefficient g, describing the local image content, and a threshold σ used to distinguish fine details (red) from larger edges (blue).

F6 Figure 6. Smoothing and enhancement of detail, while preserving edges (σ = 0.3). Processing only a subset of the levels controls the frequency of the details that are manipulated (c, d, e). The images have been cropped to make the flower bigger and its details more visible.

F7 Figure 7. Effect of the σ parameter for detail enhancement (α = 0.25). Same input as Figure 6.

F8 Figure 8. Filtering only the luminance (b) preserves the original colors in (a), while filtering the RGB channels (c) also modifies the color contrast (α = 0.25, β = 1, σ = 0.4).

F9 Figure 9. β and σ have similar effects on tone mapping results; they control the balance between global and local contrast. α is set to 1 in all three images.

F10 Figure 10. The extreme contrast near the light bulb is particularly challenging. Images (a) and (b) are reproduced from Fattal.12 The edge-aware wavelets suffer from aliasing and generate an irregular edge (b). In comparison, our approach (d) produces a clean edge. We set our method to approximately achieve the same level of details (σ = log(3.5), α = 0.5, β = 0).

F11 Figure 11. The bilateral filter sometimes oversharpens edges, which can lead to artifacts (b). We used code provided by Paris and Durand26 and multiplied the detail layer by 2.5 to generate these results. Although such artifacts can be fixed in postprocessing, this introduces more complexity to the system and requires new parameters. Our approach produces clean edges directly (d). We set our method to achieve approximately the same visual result (σ = log(2.5), α = 0.5, β = 0).

F12 Figure 12. We stressed our approach by applying a strong range compression coupled with a large detail increase (α = 0.25, β = 0, σ = log(2.5)). The results are dominated by local contrast and are reminiscent of the popular, exaggerated “HDR look” but without the unsightly halos associated with it. In terms of image quality, our results remain artifact-free in most cases. We explore further parameter variations in the supplemental material.

F13 Figure 13. We compare exaggerated, tone-mapped renditions of an HDR image. The wavelet-based method by Li et al.21 is best suited for neutral renditions and generates halos when one increases the level of detail (a). The multiscale method by Farbman et al.11 performs better and produces satisfying results for intermediate levels of detail (b), but halos and edge artifacts sometimes appear for a larger increase, as in this image for instance; see the edge of the white square on the blue book cover and the edge of the open book (c). In comparison, our approach achieves highly detailed renditions without artifacts (d). These results as well as many others may be better seen in the supplemental material.

F14 Figure 14. Our approach is purely signal-based and its ignorance of scene semantics can lead to artifacts. For a large increase in local contrast (b), at a level similar to Figure 12, the sky gets locally darker behind clouds, because it forms a blue-and-white texture amplified by our filter. Our result for this example is good elsewhere, and this issue does not appear with a more classical rendition (a).

F15 Figure 15. Our filter to enhance and reduce details covers a large space of possible outputs without creating halos.

F16 Figure 16. Our approach generates faithful previews when applied to a low-resolution version of an image (β = 0, σ = log(2.5)).



    1. Aubert, G. and Kornprobst, P. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations. Vol. 147 of Applied Mathematical Sciences. Springer, 2002.

    2. Aubry, M., Paris, S., Hasinoff, S.W., Kautz, J., and Durand, F. Fast and Robust Pyramid-based Image Processing. Tech. Rep. MIT-CSAIL-TR-2011-049. MIT, 2011.

    3. Bae, S., Paris, S., and Durand, F. Two-scale tone management for photographic look. ACM Trans. Graph. (Proc. SIGGRAPH) 25, 3 (2006), 637–645.

    4. Bhat, P., Zitnick, C.L., Cohen, M., and Curless, B. Gradientshop: A gradient-domain optimization framework for image and video filtering. ACM Trans. Graph. 29 (2010), 2.

    5. Buades, A., Coll, B., and Morel, J.-M. The staircasing effect in neighborhood filters and its solution. IEEE Trans. Image Process. 15 (2006), 6.

    6. Burt, P.J. and Adelson, E.H. The Laplacian pyramid as a compact image code. IEEE Trans. Commun. 31 (1983), 4.

    7. Chen, J., Paris, S., and Durand, F. Real-time edge-aware image processing with the bilateral grid. ACM Trans. Graph. (Proc. SIGGRAPH) 26 (2007), 3.

    8. Criminisi, A., Sharp, T., Rother, C., and Perez, P. Geodesic image and video editing. ACM Trans. Graph. 29 (2010), 5.

    9. Dippel, S., Stahl, M., Wiemker, R., and Blaffert, T. Multiscale contrast enhancement for radiographies: Laplacian pyramid versus fast wavelet transform. IEEE Trans. Med. Imaging 21 (2002), 4.

    10. Durand, F. and Dorsey, J. Fast bilateral filtering for the display of high-dynamic-range images. ACM Trans. Graph. (Proc. SIGGRAPH) 21 (2002), 3.

    11. Farbman, Z., Fattal, R., Lischinski, D., and Szeliski, R. Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Trans. Graph. (Proc. SIGGRAPH) 27 (2008), 3.

    12. Fattal, R. Edge-avoiding wavelets and their applications. ACM Trans. Graph. (Proc. SIGGRAPH) 28 (2009), 3.

    13. Fattal, R., Lischinski, D., and Werman, M. Gradient domain high dynamic range compression. ACM Trans. Graph. (Proc. SIGGRAPH) 21 (2002), 3.

    14. Fattal, R., Agrawala, M., and Rusinkiewicz, S. Multiscale shape and detail enhancement from multi-light image collections. ACM Trans. Graph. (Proc. SIGGRAPH) 26 (2007), 3.

    15. Fattal, R., Carroll, R., and Agrawala, M. Edge-based image coarsening. ACM Trans. Graph. 29 (2009), 1.

    16. Hadwiger, M., Sicat, R., Beyer, J., Krüger, J., and Möller, T. Sparse PDF maps for non-linear multiresolution image operations. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 31 (2012), 5.

    17. He, K., Sun, J., and Tang, X. Guided image filtering. In Proceedings of European Conference on Computer Vision (Proc. ECCV) (2010).

    18. Heeger, D.J. and Bergen, J.R. Pyramid-based texture analysis/synthesis. In Proceedings of the ACM SIGGRAPH Conference (Proc. SIGGRAPH) (1995).

    19. Kass, M. and Solomon, J. Smoothed local histogram filters. ACM Trans. Graph. (Proc. SIGGRAPH) 29 (2010), 3.

    20. Kimmel, R. Numerical Geometry of Images: Theory, Algorithms, and Applications. Springer, 2003.

    21. Li, Y., Sharan, L., and Adelson, E.H. Compressing and companding high dynamic range images with subband architectures. ACM Trans. Graph. (Proc. SIGGRAPH) 24 (2005), 3.

    22. Lischinski, D., Farbman, Z., Uyttendaele, M., and Szeliski, R. Interactive local adjustment of tonal values. ACM Trans. Graph. (Proc. SIGGRAPH) 25 (2006), 3.

    23. Mantiuk, R., Myszkowski, K., and Seidel, H.-P. A perceptual framework for contrast processing of high dynamic range images. ACM Trans. Appl. Percept. 3 (2006), 3.

    24. Mantiuk, R., Mantiuk, R., Tomaszewska, A., and Heidrich, W. Color correction for tone mapping. Comput. Graph. Forum (Proc. Eurographics) 28 (2009), 2.

    25. Masia, B., Agustin, S., Fleming, R.W., Sorkine, O., and Gutierrez, D. Evaluation of reverse tone mapping through varying exposure conditions. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 28 (2009), 5.

    26. Paris, S. and Durand, F. Tone-mapping code. http://people.csail.mit.edu/sparis/code/src/tone_mapping.zip.

    27. Paris, S., Kornprobst, P., Tumblin, J., and Durand, F. Bilateral filtering: Theory and applications. Found. Trends Comput. Graph. Vision 4, 1 (2009), 1–74.

    28. Paris, S., Hasinoff, S.W., and Kautz, J. Local Laplacian Filters: Edge-aware image processing with a Laplacian pyramid. ACM Trans. Graph. (Proc. SIGGRAPH) 30 (2011), 4.

    29. Perona, P. and Malik, J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12 (1990), 7.

    30. Reinhard, E., Stark, M., Shirley, P., and Ferwerda, J. Photographic tone reproduction for digital images. ACM Trans. Graph. (Proc. SIGGRAPH) 21 (2002), 3.

    31. Subr, K., Soler, C., and Durand, F. Edge-preserving multiscale image decomposition based on local extrema. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 28 (2009), 5.

    32. Sunkavalli, K., Johnson, M.K., Matusik, W., and Pfister, H. Multiscale image harmonization. ACM Trans. Graph. (Proc. SIGGRAPH) 29 (2010), 3.

    33. Szeliski, R. Locally adapted hierarchical basis preconditioning. ACM Trans. Graph. (Proc. SIGGRAPH) 25 (2006), 3.

    34. Tomasi, C. and Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the IEEE International Conference on Computer Vision (Bombay, India, 1998).

    35. Tumblin, J. and Turk, G. Low curvature image simplifiers (LCIS): A boundary hierarchy for detail-preserving contrast reduction. In Proc. SIGGRAPH (1999).

    36. Vuylsteke, P. and Schoeters, E.P. Multiscale image contrast amplification (MUSICA). In Proc. SPIE, Volume 2167 (1994).

    37. Witkin, A. Scale-space filtering. In Proceedings of the International Joint Conference on Artificial Intelligence, Volume 2 (Karlsruhe, West Germany, 1983).

    38. Witkin, A., Terzopoulos, D., and Kass, M. Signal matching through scale space. Int. J. Comput. Vision 1 (1987), 2.

    39. Xu, L., Lu, C., Xu, Y., and Jia, J. Image smoothing via L0 gradient minimization. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 30 (2011), 5.

    40. Xu, L., Yan, Q., Xia, Y., and Jia, J. Structure extraction from texture via relative total variation. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 31 (2012), 5.

    The original version of this paper was published in ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2011) 30, 4 (Aug. 2011), 68:1–68:12.

    Image credits: Martin Čadík, Paul Debevec, Frédéric Drago, Frédo Durand, Mark Fairchild, Dani Lischinski, Byong Mok Oh, Erik Reinhard, and Gregory J. Ward.
