Clustering-based methods for fast epitome generation

Abstract

This paper deals with epitome generation, here mainly dedicated to image coding applications. Existing approaches are known to be memory and time consuming due to the exhaustive self-similarities search performed within the image for each non-overlapping block. We propose a novel approach for epitome construction that first groups similar patches together, and then performs the self-similarities search once per group. By limiting the number of exhaustive searches, we limit both the memory occupation and the processing time. Results show that a substantial complexity reduction can be achieved while keeping a good epitome quality (down to 18.08% of the original memory occupation and 41.39% of the original processing time).

Publication
European Signal Processing Conference

Context and Goal

The concept of epitome was first introduced by Jojic et al. as a condensed representation of an image (meaning its size is only a fraction of the original size) containing the essence of its textural properties. This original epitomic model is based on a patch-based probabilistic approach, and has found applications in segmentation, denoising, recognition, indexing and texture synthesis.
Several epitomic models have since been proposed, such as the factorized representation of Wang et al. dedicated to texture mapping, or its extension designed for image coding purposes by Chérigui et al. In this case the epitome is the union of epitome charts, which are pieces of repeatable textures found in the image. The search for self-similar or repeatable texture patterns, based on the KLT or a block matching (BM) algorithm, is known to be memory and time consuming.
In this work, we propose a clustering-based technique to reduce the self-similarities search complexity.

Approach

Epitome generation

We summarize here the epitome generation process as proposed by Chérigui et al., in which the input image $I$ is factored into an epitome $E$ and an assignation map $\phi$. The input image is divided into a regular grid of non-overlapping blocks $B_i$ (block-grid), and each block will be reconstructed from an epitome patch. The epitome itself is composed of disjoint pieces of texture called “epitome charts” (see Fig. 1). The assignation map links the patches from the epitome to the input image blocks. For more detailed explanations, see the papers listed at the end of this page.
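As an illustration, the block-grid division can be sketched as follows (a minimal NumPy sketch; the function and variable names are ours, not from the paper):

```python
import numpy as np

def block_grid(image, block_size=8):
    """Split an image into the regular grid of non-overlapping blocks B_i.

    Returns a list of (y, x, block) tuples, where (y, x) is the
    top-left corner of the block in the input image.
    """
    h, w = image.shape
    blocks = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            blocks.append((y, x, image[y:y + block_size, x:x + block_size]))
    return blocks

# A 32x32 image yields a 4x4 grid of 8x8 blocks.
img = np.arange(32 * 32, dtype=float).reshape(32, 32)
print(len(block_grid(img)))  # 16
```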

Epitome of a Foreman frame (left) and the corresponding reconstruction (right).

The method first finds the self-similarities within the image. The goal of this step is to find, for each non-overlapping block in the input image, a list of matching patches such that the mean square error (MSE) between the block and each of its matching patches is below a matching threshold $\varepsilon_M$. This parameter ultimately determines the size of the epitome. The matching patches are found through an exhaustive search in the pixel-grid (see Fig. 2), and the assignation map $\phi$ is composed of translation parameters. $$ \forall B_i \in I,\ ML(B_i) = \{M_{i,0}, M_{i,1}, \dots\} \text{ s.t. } \forall j,\ MSE(B_i, M_{i,j}) \leq \varepsilon_M $$
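A direct, unoptimized version of this exhaustive search could look as follows (a sketch with hypothetical names, scanning every pixel-grid position and keeping translations whose MSE is below $\varepsilon_M$):

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def match_list(image, block, eps_m, block_size=8):
    """Exhaustive self-similarities search: every pixel-grid patch whose
    MSE with `block` is below the matching threshold eps_M joins the
    match list. Matches are stored as (y, x) translations, which is the
    information recorded by the assignation map phi."""
    h, w = image.shape
    matches = []
    for y in range(h - block_size + 1):
        for x in range(w - block_size + 1):
            patch = image[y:y + block_size, x:x + block_size]
            if mse(block, patch) <= eps_m:
                matches.append((y, x))
    return matches
```

Repeating this scan for every non-overlapping block is exactly the memory- and time-consuming step that the grouping methods aim to reduce.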

Exhaustive self-similarities search.

Exhaustive self-similarities search.

Epitome chart generation

The epitome charts are generated through an iterative process. Candidate patches used to initialize or grow the epitome charts are found by reversing the lists of matching patches obtained at the previous step. The best candidate is then selected based on a rate-distortion optimization (RDO) criterion, which jointly minimizes the error between the input image $I$ and the reconstructed image $I'$ and the size of the epitome. This criterion is summarized in the equation below, where $R(E)$ represents the size of the epitome, estimated as its number of pixels. $$ (E, \phi) = \arg\min \left( MSE(I, I') + \lambda R(E) \right) $$
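The selection step can be sketched as follows (a simplified illustration; the candidate representation and all names are assumptions, not the authors' implementation):

```python
import numpy as np

def rdo_cost(original, reconstruction, epitome_pixels, lam):
    """Rate-distortion cost of one candidate configuration:
    distortion MSE(I, I') plus lambda times the rate R(E),
    counted here as the number of epitome pixels."""
    distortion = float(np.mean((original - reconstruction) ** 2))
    return distortion + lam * epitome_pixels

def best_candidate(original, candidates, lam):
    """Pick the (reconstruction, epitome_pixels) pair minimizing the cost."""
    return min(candidates, key=lambda c: rdo_cost(original, c[0], c[1], lam))
```

The Lagrangian multiplier $\lambda$ balances the two terms: a small $\lambda$ favors reconstruction quality, a large one favors a compact epitome.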

Iterative epitome chart extension.

We propose two methods to reduce the complexity of the self-similarities search compared to the exhaustive search. The main idea is to group similar non-overlapping blocks together; the lists of matching patches are then computed once per group of blocks instead of once per block. The similarity used for grouping non-overlapping blocks is characterized by an assignation threshold $\varepsilon_A$, which is smaller than $\varepsilon_M$. We thus define $\varepsilon_A$ via a coefficient $\alpha_A$ such that $\varepsilon_A = \alpha_A \cdot \varepsilon_M$, with $0 \leq \alpha_A < 1$.

List-based method

This method was designed to obtain a simple grouping of similar non-overlapping blocks: each block is compared to the representative blocks $B_i$ of the existing lists, and is added to the “actual list” $AL(B_i)$ of a representative whose distance to the block is below $\varepsilon_A$; if no such list exists, the block starts a new list with itself as representative.

Finally, for each actual list $AL(B_i)$ we determine a match list $ML(B_i) = \{M_{i,0}, M_{i,1}, \dots\}$ obtained through an exhaustive search in the pixel-grid. The list is computed with respect to $B_i$ but is then used by all blocks in $AL(B_i)$ for the epitome generation step. To satisfy the constraint that all blocks have a reconstruction error below $\varepsilon_M$, the blocks other than $B_i$ in the list $AL(B_i)$ only use the subset of $ML(B_i)$ defined as: $$ \{M \in ML(B_i) \mid D(B_i, M) \leq \varepsilon_M - \varepsilon_A\} $$
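The grouping and the reusable subset of matches can be sketched as follows (illustrative names; toy blocks are plain numbers and `dist` is an arbitrary distance passed in by the caller):

```python
def group_blocks(blocks, eps_a, dist):
    """Greedy list-based grouping: a block joins the first list whose
    representative is within eps_A, otherwise it starts a new list
    with itself as representative."""
    groups = []  # list of (representative, members) pairs
    for b in blocks:
        for rep, members in groups:
            if dist(rep, b) <= eps_a:
                members.append(b)
                break
        else:
            groups.append((b, [b]))
    return groups

def safe_subset(matches, rep, eps_m, eps_a, dist):
    """Matches reusable by non-representative blocks, following the
    subset rule {M in ML(B_i) | D(B_i, M) <= eps_M - eps_A}."""
    return [m for m in matches if dist(rep, m) <= eps_m - eps_a]

# Scalars as toy blocks, absolute difference as distance:
d = lambda a, b: abs(a - b)
print(len(group_blocks([0.0, 0.1, 5.0, 5.2], 0.5, d)))  # 2 groups
```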

Threshold-based clustering

This method clusters the non-overlapping blocks based on the threshold $\varepsilon_A$: a block joins the nearest existing cluster if its distance to that cluster's centroid is below $\varepsilon_A$, otherwise it creates a new cluster; the centroid is updated as blocks are added.

For each cluster $C_i$, a match list $ML(B_i)$ is then computed through an exhaustive search in the pixel-grid, with respect to the block $B_i$ closest to the centroid. Except for the block $B_i$, all blocks in $C_i$ use approximate matches. As for the previous grouping method, we want to ensure that all blocks have a reconstruction error below $\varepsilon_M$. The blocks other than $B_i$ thus only use a subset of $ML(B_i)$, as defined in the equation above.
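A minimal sketch of such a threshold-based clustering (assuming incremental centroid updates and MSE as distance; all names are ours, not the authors'):

```python
import numpy as np

def threshold_cluster(blocks, eps_a):
    """Threshold-based clustering: a block joins the nearest existing
    cluster if its MSE to that cluster's centroid is below eps_A,
    otherwise it seeds a new cluster. Centroids are updated as blocks
    are added; the representative B_i of each cluster is the member
    closest to the final centroid."""
    centroids, clusters = [], []
    for b in blocks:
        if centroids:
            d = [float(np.mean((b - c) ** 2)) for c in centroids]
            k = int(np.argmin(d))
            if d[k] <= eps_a:
                clusters[k].append(b)
                centroids[k] = np.mean(clusters[k], axis=0)
                continue
        centroids.append(np.asarray(b, dtype=float))
        clusters.append([b])
    reps = [min(c, key=lambda m: float(np.mean((m - ctr) ** 2)))
            for c, ctr in zip(clusters, centroids)]
    return clusters, reps
```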

Two-step clustering-based self-similarities search.

Experimental Results

Experiments were conducted on a set of 4 images: a frame extracted from the Foreman sequence (CIF), Lena ($512 \times 512$), City ($1280 \times 720$) and Calendar ($1280 \times 720$). The size of the blocks is set to $8 \times 8$.
First tests were carried out with $\varepsilon_M \in \{3.0, 5.0, 10.0, 15.0\}$, with $\alpha_A$ set to $0.5$. Results are displayed in Table 1; the best results between the list-based and cluster-based methods are displayed in bold font. The complexity reduction is assessed by the percentage of the optimized memory occupation over the original one, and of the optimized processing time over the original one. Two processing times are displayed: the self-similarities search time, which is the algorithm step actually optimized, and the complete epitome generation time.
With the original method, the memory occupation is prohibitive for the images City and Calendar when $\varepsilon_M = 15.0$. This highlights a major limitation of the full search method for high-resolution images. Comparative results can therefore not be displayed for these cases, but the epitome can still be generated with the optimized methods.

Table 1 - Memory occupation and computation time % with respect to original method, depending on $\varepsilon_M$, with $\alpha_A = 0.5$
| Image | $\varepsilon_M$ | List-based: Memory load (%) | List-based: Search time (%) | List-based: Total time (%) | Cluster-based: Memory load (%) | Cluster-based: Search time (%) | Cluster-based: Total time (%) |
|---|---|---|---|---|---|---|---|
| Foreman | 3.0 | 50.18 | 68.62 | 73.93 | 47.85 | 67.31 | 70.15 |
| Foreman | 5.0 | 49.54 | 60.72 | 60.12 | 45.22 | 55.80 | 61.88 |
| Foreman | 10.0 | 25.00 | 32.65 | 41.39 | 19.06 | 28.58 | 50.36 |
| Foreman | 15.0 | 18.08 | 21.61 | 42.37 | 14.59 | 18.00 | 55.25 |
| Lena | 3.0 | 98.18 | 75.54 | 78.92 | 96.81 | 69.05 | 70.13 |
| Lena | 5.0 | 63.61 | 61.00 | 65.24 | 48.28 | 56.12 | 66.19 |
| Lena | 10.0 | 29.98 | 37.49 | 51.92 | 23.66 | 31.60 | 55.74 |
| Lena | 15.0 | 19.96 | 23.09 | 59.69 | 21.64 | 23.56 | 64.48 |
| City | 3.0 | 60.54 | 66.75 | 65.27 | 55.85 | 68.62 | 69.25 |
| City | 5.0 | 60.71 | 64.165 | 64.160 | 56.47 | 62.48 | 62.80 |
| City | 10.0 | 51.47 | 48.07 | 60.07 | 48.18 | 47.06 | 59.75 |
| Calendar | 3.0 | 84.66 | 68.36 | 66.44 | 80.69 | 70.36 | 68.02 |
| Calendar | 5.0 | 48.43 | 54.76 | 58.83 | 42.08 | 53.25 | 59.94 |
| Calendar | 10.0 | 28.33 | 32.58 | 52.97 | 53.98 | 42.43 | 57.91 |

The quality of the epitome produced is assessed through the reconstructed image quality. The graphs representing the reconstructed image PSNR as a function of the epitome size are displayed in Fig. 5. For all images, and regardless of the epitome generation method, the curves are almost identical. The two proposed methods can thus reduce complexity while keeping the same epitome quality.

Reconstruction performances depending on $\varepsilon_M$.

Complexity results for different values of $\alpha_A$ are displayed in Table 2, with $\varepsilon_M = 10.0$. These results illustrate the behavior of the proposed methods, which can drastically reduce the complexity when coarse approximations are allowed. However, the average reconstruction performances (see Fig. 6) show that coarse approximations have a negative impact: they can increase the epitome size while decreasing the reconstruction PSNR.

Table 2 - Memory occupation and computation time % with respect to original method, depending on $\alpha_A$, with $\varepsilon_M = 10.0$
| Image | $\alpha_A$ | List-based: Memory load (%) | List-based: Search time (%) | List-based: Total time (%) | Cluster-based: Memory load (%) | Cluster-based: Search time (%) | Cluster-based: Total time (%) |
|---|---|---|---|---|---|---|---|
| Foreman | 0.25 | 70.66 | 53.17 | 61.51 | 67.86 | 51.42 | 58.41 |
| Foreman | 0.50 | 25.00 | 32.65 | 41.39 | 19.06 | 28.58 | 50.36 |
| Foreman | 0.75 | 9.58 | 21.94 | 24.74 | 7.32 | 19.96 | 46.74 |
| Lena | 0.25 | 79.99 | 60.93 | 72.73 | 70.13 | 56.41 | 65.69 |
| Lena | 0.50 | 29.98 | 37.49 | 51.92 | 23.66 | 31.60 | 55.74 |
| Lena | 0.75 | 9.03 | 23.27 | 26.57 | 15.91 | 25.48 | 53.73 |
| City | 0.25 | 86.30 | 64.06 | 67.54 | 84.09 | 63.30 | 66.99 |
| City | 0.50 | 51.47 | 48.07 | 60.07 | 48.18 | 47.06 | 59.75 |
| City | 0.75 | 19.46 | 33.84 | 42.41 | 38.17 | 40.90 | 58.83 |
| Calendar | 0.25 | 69.76 | 51.43 | 65.30 | 65.85 | 49.95 | 66.65 |
| Calendar | 0.50 | 28.33 | 32.58 | 52.97 | 53.98 | 42.43 | 57.91 |
| Calendar | 0.75 | 10.13 | 22.69 | 29.61 | 48.35 | 36.86 | 57.94 |

Reconstruction performances depending on $\alpha_A$.

Applications

The concept of epitome presented here was used in different applications such as still image coding and scalable image coding.

References