Image Compression Using Lifting Scheme

This paper introduces, first, a proposed method of computing the one- and two-dimensional wavelet transform. The proposed method greatly reduces the processing time for image decomposition while preserving or improving the quality of the reconstructed images. The inverse procedures of all the transformations for the multi-dimensional cases are also verified. Second, it computes quantization and a run length encoder; different types of quantization are presented in this paper together with the effects of these differences on the Compression Ratio (CR). Third, it computes PSNR, RMSE, CR, and size. The effect of different levels of FLWT on the same picture is noted, where PSNR, RMSE, SIZE, and CR differ from one level to another.


Introduction
Data compression is the process of converting data files into smaller files for efficiency of storage and transmission. It is the key to the rapid progress being made in information technology. Simply put, it would not be practical to put images, audio, and/or video on websites without compression.
Image coding consists of mapping images to strings of binary digits. A good image coder is one that produces binary strings whose lengths are on average much smaller than the original canonical representation of the image. In many imaging applications, exact reproduction of the image bits is not necessary. In this case, one can perturb the image slightly to obtain a shorter representation [1]. Image compression is one of the most important and successful applications of the wavelet transform. Mature wavelet-based image coders like the JPEG2000 standard are available, gaining in popularity, and easily outperform the traditional coders. Unlike image compression based on the Discrete Cosine Transform (DCT), like JPEG, the performance of a wavelet-based image coder depends to a large degree on the choice of the wavelet. This problem is usually handled by using standard wavelets that are not specially adapted to a given image, but that are known to perform well on photographic images. Application of digital images is often not viable due to high storage or transmission costs. Image compression technology offers a possible solution. The basic goal of image compression is to reduce the bit rate of an image and thus minimize the communication channel capacity or digital storage memory requirements while maintaining the necessary fidelity in the image, or, equivalently, to obtain the best possible fidelity for a given bit rate [2]. Two major features should be available for good and successful data compression: low complexity (e.g., ease of decoding) and efficient implementation (in terms of memory requirements). In addition to these features, the processing time delay and the quality after compression are among the major concerns for a good compression technique [3].

https://doi.org/10.30684/etj.28.17.5. 2412-0758/University of Technology-Iraq, Baghdad, Iraq. This is an open access article under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0).
In order to be useful, a compression algorithm has a corresponding decompression algorithm that, given the compressed file, reproduces the original file. Many types of compression algorithms have been developed. These algorithms fall into two broad types: lossless algorithms and lossy algorithms. A lossless algorithm reproduces the original data exactly. A lossy algorithm, as its name implies, loses some redundant data. Data loss may be unacceptable in many applications. For example, text compression must be lossless because a very small difference can result in statements with totally different meanings. There are also many situations where loss may be either unnoticeable or acceptable. In image compression, for example, the exact reconstructed value of each sample of the image is not necessary. Depending on the quality requirements of the reconstructed image, some loss of information can be accepted [4].

Wavelet-based Compression System
The basic structure of a wavelet-based compression system is summarized in Fig. (1) [5]. The basic functions of each block in the encoder are:
1. Forward Transform: Decorrelation of image pixels is performed here via a multiresolution wavelet transform. This block is equivalent to the forward DCT block in a classic DCT-based compression system.
2. Quantization: Quantization of wavelet coefficients can produce a great compression rate in the lossy mode. In the lossless mode an error residual has to be stored in order to achieve perfect reconstruction of the image. Special coding schemes are generally used here to obtain an embedded bit stream output.
3. Entropy coding: Arithmetic coding or Huffman coding can be applied to the quantized coefficients to get a better compression ratio. Nevertheless, this final step can be skipped in order to obtain a faster compression system.
Through the application of the wavelet transform, a component is split into numerous frequency bands (i.e., subbands). Due to the statistical properties of these subband signals, the transformed data can usually be coded more efficiently than the original untransformed data.

Wavelets
Wavelets based on dilations and translations of a mother wavelet are referred to as first generation wavelets or classical wavelets.
Second generation wavelets, i.e., wavelets which are not necessarily translations and dilations of one function, are much more flexible and can be used to define wavelet bases for bounded intervals and irregular sample grids, or even for solving equations or analyzing data on curves or surfaces. Second generation wavelets retain the powerful properties of first generation wavelets, like the fast transform, localization, and good approximation. The lifting scheme is a rather new method for constructing wavelets. The main difference from the classical constructions is that it does not rely on the Fourier transform. In this way, lifting can be used both to construct second generation wavelets and to efficiently implement classical wavelet transforms. Existing classical wavelets can be implemented with the lifting scheme by factoring them into lifting steps [6].

Forward Transform:
The forward transform of the fast lifting scheme is composed of three steps, depicted as follows [7]:
• Split.
• Predict.
• Update.

Split: This stage does not do much except split the signal into two disjoint sets of samples. One group consists of the even indexed samples s_{2l} and the other group consists of the odd indexed samples s_{2l+1}. Each group contains half as many samples as the original signal. The splitting into evens and odds is called the Lazy wavelet transform [7].

Predict: The even and odd subsets are interspersed. If the signal has a local correlation structure, the even and odd subsets will be highly correlated. In other words, given one of the two sets, it should be possible to predict the other one with reasonable accuracy. In the Haar case the prediction is particularly simple: an odd sample s_{j,2l+1} will use its left neighboring even sample s_{j,2l} as its prediction, and the detail coefficient is the difference d_{j-1,l} = s_{j,2l+1} - s_{j,2l}.

Update: The even locations can be overwritten with the averages s_{j-1,l} = s_{j,2l} + d_{j-1,l}/2, and the odd ones with the details. An abstract implementation is given by [7].

Computation of Fast Lifting Wavelet Transform (FLWT) for 1-D Signal
To compute a single-level FLWT for a 1-D signal, the following steps should be followed:
1. Check input dimensions: the input vector should be of length N, where N must be a power of two.
2. Split: divide the input data into odd and even elements.
3. Predict: predict the odd elements from the even elements [odd_new = odd - even].
4. Update: replace the even elements with an average [even_new = even + odd_new/2].
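The four steps above can be sketched in Python (a sketch only; the paper's own implementation used MATLAB, and the function name here is illustrative):

```python
def flwt_1d(signal):
    """Single-level 1-D fast lifting wavelet transform (Haar case).

    Follows the Split / Predict / Update steps described in the text.
    """
    n = len(signal)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    # Split: even- and odd-indexed samples (the Lazy wavelet transform)
    even = signal[0::2]
    odd = signal[1::2]
    # Predict: each odd sample is predicted by its left even neighbour;
    # the detail is the prediction error (odd - even)
    detail = [o - e for o, e in zip(odd, even)]
    # Update: overwrite the evens with averages (even + detail/2),
    # which preserves the average of the original signal
    coarse = [e + d / 2 for e, d in zip(even, detail)]
    return coarse, detail
```

For example, `flwt_1d([5, 7, 3, 1])` yields the coarse signal `[6.0, 2.0]` and details `[2, -2]`; the coarse average equals the original average, and a constant signal produces all-zero details, as the text notes.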

Computation of Fast Lifting Wavelet Transform (FLWT) for 2-D Signal
To compute a single-level FLWT for a 2-D signal, the following steps should be followed:
1. Check input dimensions: the input matrix should be of size N×N, where N must be a power of two.
2. Split row: divide the input data into odd and even elements of each column.

Inverse Transform:
It is easy to verify that perfect reconstruction in the case of lossless compression is guaranteed, since the same value of the modified predictor and update operator is added and subtracted. The inverse transform of the fast lifting scheme is composed of three steps, depicted as follows [7]:
1. Undo update.
2. Undo predict.
3. Merge.

Undo update: Given d_{j-1} and s_{j-1}, the even samples can be recovered by subtracting the update information. In the case of Haar: s_{j,2l} = s_{j-1,l} - d_{j-1,l}/2. Assuming that the even slots contain the averages and the odd ones contain the differences, the implementation of the inverse transform is:

Computation of Inverse Fast Lifting Wavelet Transform (IFLWT) for 1-D Signal
To compute the single-level IFLWT for a 1-D signal, the following steps should be followed:
1. Undo update: [even = even_new - odd_new/2].
2. Undo predict: [odd = odd_new + even].
3. Merge: interleave the odd and even elements back into the input data.

After a single level of wavelet decomposition using the Haar filter, image dimensions remain a matrix of 512x512 (NxN), as shown in Fig. (4b). The upper-left-most, LL, subband, of 256x256 dimensions, is zoomed in as in Fig. (4c). An example test is applied to the decomposed Lena image to reconstruct the original "Lena" image using a general computer program computing a single-level 2-D IFLWT, and the result is shown in Fig. (4d).
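The three undo steps can be sketched as follows (again an illustrative Python sketch; feeding it the coarse/detail pair produced by the forward transform recovers the original samples exactly, which is the perfect-reconstruction property noted in the text):

```python
def iflwt_1d(coarse, detail):
    """Single-level 1-D inverse FLWT (Haar): undo update, undo predict, merge."""
    # Undo update: even = coarse - detail/2
    even = [c - d / 2 for c, d in zip(coarse, detail)]
    # Undo predict: odd = detail + even
    odd = [d + e for d, e in zip(detail, even)]
    # Merge: interleave even and odd samples back into one signal
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

For instance, `iflwt_1d([6.0, 2.0], [2, -2])` returns `[5.0, 7.0, 3.0, 1.0]`, the signal whose forward transform produced that coarse/detail pair.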

Quantization
Quantization is the process of selecting the discarded visual information without a significant loss in the visual effect. Quantization reduces the number of bits needed to store an integer value by reducing the precision of the integer. A uniform quantizer does not depend on the data. It divides the range of the values into quantization intervals of equal length [8].
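A uniform quantizer of this kind can be sketched as follows (function names are illustrative; the step size is the only parameter and is chosen independently of the data):

```python
def uniform_quantize(values, step):
    """Uniform quantizer: map each value to the index of its
    equal-length quantization interval (precision is reduced here)."""
    return [round(v / step) for v in values]

def uniform_dequantize(indices, step):
    """Reconstruct each value at the centre of its interval."""
    return [i * step for i in indices]
```

The reconstruction error of any value is at most half the step size, which is the precision loss the text describes.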
Subjective experiments involving the human visual system have resulted in the JPEG standard quantization matrix.With a quality level of 50, this matrix renders both high compression and excellent decompressed image quality [9].
If another level of quality and compression is desired, scalar multiples of the JPEG standard quantization matrix may be used. For a quality level greater than 50 (less compression, higher image quality), the standard quantization matrix is multiplied by (100 - quality level)/50. For a quality level less than 50 (more compression, lower image quality), the standard quantization matrix is multiplied by 50/(quality level). The scaled quantization matrix is then rounded and clipped to positive integer values ranging from 1 to 255. For example, the following quantization matrices yield quality levels of 10 and 90 [9].
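The scaling rule can be sketched as follows (the 8x8 standard matrix itself is omitted here; `q50` stands for the JPEG quality-50 matrix from [9], and the function name is illustrative):

```python
def scale_quant_matrix(q50, quality):
    """Scale the JPEG standard (quality-50) quantization matrix:
    quality > 50 -> multiply by (100 - quality)/50 (higher image quality),
    quality < 50 -> multiply by 50/quality         (more compression),
    then round and clip every entry to the integer range 1..255."""
    s = (100 - quality) / 50 if quality > 50 else 50 / quality
    return [[min(255, max(1, round(v * s))) for v in row] for row in q50]
```

With a standard entry of 16, quality 90 scales it to 3 and quality 10 scales it to 80, matching the (100-quality)/50 and 50/quality factors above.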

Run Length Encoding (RLE)
RLE works by reducing the physical size of a repeating string of characters. This repeating string, called a run, is typically encoded into two bytes. The first byte represents the number of characters in the run and is called the run count. In practice, an encoded run may contain 1 to 128 or 256 characters; the run count usually contains the number of characters minus one (a value in the range of 0 to 127 or 255). The second byte is the value of the character in the run, which is in the range of 0 to 255, and is called the run value. Uncompressed, a character run of 15 A characters would normally require 15 bytes to store: AAAAAAAAAAAAAAA.
The same string after RLE encoding would require only two bytes: 15A. The 15A code generated to represent the character string is called an RLE packet. Here, the first byte, 15, is the run count and contains the number of repetitions. The second byte, A, is the run value and contains the actual repeated value in the run [10]. The flow chart shown in Fig. (5) explains the implementation of the run length encoder.
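A minimal sketch of this packet scheme (using the count-minus-one convention mentioned above, so a run of 15 characters stores a count byte of 14; function names are illustrative):

```python
def rle_encode(data):
    """Encode a string into (run_count - 1, run_value) packets.
    Runs are capped at 256 characters so the count fits in one byte (0..255)."""
    packets = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 256:
            run += 1
        packets.append((run - 1, data[i]))  # run count stored as count - 1
        i += run
    return packets

def rle_decode(packets):
    """Expand each packet back into its run of count + 1 characters."""
    return "".join(value * (count + 1) for count, value in packets)
```

A run of 15 A's compresses from 15 bytes to the single packet `(14, "A")`, and decoding reproduces the original string exactly, so RLE is lossless.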

Compression Ratio
It is defined as the ratio between the size of the original image data (file) and the size of the overall compressed data file [11].
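In code form (sizes in bytes; a trivial sketch of the definition above):

```python
def compression_ratio(original_size, compressed_size):
    """CR = size of original data file / size of compressed data file [11]."""
    return original_size / compressed_size
```

For example, compressing a 1024-byte file to 256 bytes gives CR = 4.0.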

Result
This section presents the results of the wavelet-based image compression design, implemented using the MATLAB R2008a program. Table (1) shows the effect of quantization on the compression ratio. Table (2) shows the computed PSNR, RMSE, and CR.
The detail coefficient is the difference between the odd sample and its prediction; this makes it possible to represent the detail more efficiently. Note that if the original signal is constant, then all details are exactly zero [7]. Update: One of the key properties of the coarse signals is that they have the same average value as the original signal, i.e., the quantity s_{j-1,l} = (s_{j,2l} + s_{j,2l+1})/2, so the average of the coarse signal equals the average of the original.

The remaining steps of the single-level 2-D FLWT are:
3. Predict row: predict the odd elements from the even elements [odd_new = odd - even].
4. Update row: replace the even elements with an average [even_new = even + odd_new/2].
5. Split column: divide the (odd_new and even_new) data into odd and even elements of each row.
6. Predict column: predict the odd elements from the even elements [odd1_new = odd1 - even1].
7. Update column: replace the even elements with an average [even1_new = even1 + odd1_new/2].
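The row/column procedure can be sketched by applying the 1-D Haar lifting step to every row and then to every column of the result (an illustrative Python sketch, not the paper's MATLAB implementation; `flwt_2d` is a hypothetical name):

```python
def flwt_2d(image):
    """Single-level 2-D FLWT sketch: split/predict/update along each row,
    then along each column. Output has the LL subband in the top-left
    quadrant, with detail subbands in the other three quadrants."""
    n = len(image)
    assert n > 0 and n & (n - 1) == 0 and all(len(r) == n for r in image)

    def lift(v):
        # 1-D Haar lifting step, returning [coarse | detail]
        even, odd = v[0::2], v[1::2]
        detail = [o - e for o, e in zip(odd, even)]
        coarse = [e + d / 2 for e, d in zip(even, detail)]
        return coarse + detail

    rows = [lift(r) for r in image]         # row pass
    cols = list(zip(*rows))                 # transpose
    out = [lift(list(c)) for c in cols]     # column pass
    return [list(r) for r in zip(*out)]     # transpose back
```

The top-left (LL) entry after one level is the average of each 2x2 block, consistent with the average-preserving update step.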
Computation of Inverse Fast Lifting Wavelet Transform (IFLWT) for 2-D Signal
To compute the single-level IFLWT for a 2-D signal, the following steps should be followed:
1. Undo update column: [even1 = even1_new - odd1_new/2].
2. Undo predict column: [odd1 = odd1_new + even1].
3. Merge column: interleave the (odd1 and even1) elements of each row back into odd and even elements.
4. Undo update row: [even = even_new - odd_new/2].
5. Undo predict row: [odd = odd_new + even].
6. Merge row: interleave the odd and even elements of each column back into the input data.
RMSE: The root mean square error between the original and reconstructed images is RMSE = sqrt((1/(M*N)) * sum over x, y of [f(x, y) - f'(x, y)]^2), where f(x, y) is the original image data and f'(x, y) is the reconstructed image value [13]. PSNR: Another quantitative measure is the peak signal-to-noise ratio (PSNR), based on the root mean square error of the reconstructed image. The formula for PSNR is given as follows: PSNR = 20 log10(255/RMSE) dB.
It is difficult to examine reconstructed images directly to determine their quality, so subjective evaluation by viewers is still a method commonly used in measuring image quality. The subjective test emphatically examines fidelity and at the same time considers image intelligibility. When taking a subjective test, viewers focus on the difference between the reconstructed image and the original image; they notice such details where information loss cannot be accepted [12].
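These measures can be sketched as follows (assuming 8-bit images with peak value 255; the PSNR formula is the standard definition based on RMSE, and the function names are illustrative):

```python
import math

def rmse(original, reconstructed):
    """Root mean square error between original f(x, y) and
    reconstructed f'(x, y), given as 2-D lists of pixel values."""
    n = sum(len(row) for row in original)
    sq = sum((a - b) ** 2
             for ra, rb in zip(original, reconstructed)
             for a, b in zip(ra, rb))
    return math.sqrt(sq / n)

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB: 20 * log10(peak / RMSE); infinite for a perfect copy."""
    e = rmse(original, reconstructed)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)
```

A perfectly reconstructed image gives infinite PSNR, while larger reconstruction errors lower the PSNR toward 0 dB.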

Figure (2): The Lifting Scheme, Forward Transform