Which wavelet is used for JPEG 2000 image compression? - image-processing

Which wavelet is used for JPEG 2000 image compression?
I was reading a book on the wavelet transform to find out how wavelets work for image compression, but there are so many types of wavelets mentioned in the book that I am confused about which one is used for JPEG 2000 compression.
These are the types I've found in the book:
biorthogonal wavelets, Shannon or "sinc" wavelets, Haar wavelets (the shortest), and Coiflet wavelets.
P.S.: I have no knowledge of image compression; I'm just starting out with this project.

According to the Wikipedia article on JPEG 2000, two different varieties of Cohen-Daubechies-Feauveau wavelet are used: the CDF 5/3 wavelet for lossless compression, and a CDF 9/7 wavelet for lossy compression. (Both sorts are biorthogonal.) See here: http://en.wikipedia.org/wiki/Cohen-Daubechies-Feauveau_wavelet .

Gareth is correct that CDF 9/7 wavelets are used for lossy compression, but LGT (not CDF) 5/3 wavelets are used for lossless compression. See the Wikipedia article on JPEG 2000 for more information, and this blog for more background on LGT 5/3 wavelets.
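To make the distinction concrete, here is a minimal 1-D sketch of the reversible LGT 5/3 lifting transform, assuming an even-length integer signal and simple mirror extension at the boundaries (an illustration only, not JPEG 2000's actual codepath, which also handles 2-D tiling and odd-length signals):

```python
import numpy as np

def lgt53_forward(x):
    """One level of the reversible LGT 5/3 lifting transform (1-D sketch)."""
    x = np.asarray(x, dtype=np.int64)
    n, half = len(x), len(x) // 2
    d = np.empty(half, dtype=np.int64)   # high-pass (detail) coefficients
    s = np.empty(half, dtype=np.int64)   # low-pass (approximation) coefficients
    for i in range(half):                # predict step on the odd samples
        xr = x[2*i + 2] if 2*i + 2 < n else x[n - 2]   # mirror x[n] -> x[n-2]
        d[i] = x[2*i + 1] - (x[2*i] + xr) // 2
    for i in range(half):                # update step on the even samples
        dl = d[i - 1] if i > 0 else d[0]               # mirror d[-1] -> d[0]
        s[i] = x[2*i] + (dl + d[i] + 2) // 4
    return s, d

def lgt53_inverse(s, d):
    """Exactly undoes lgt53_forward, step by step in reverse order."""
    half = len(s)
    n = 2 * half
    x = np.empty(n, dtype=np.int64)
    for i in range(half):                # undo the update step
        dl = d[i - 1] if i > 0 else d[0]
        x[2*i] = s[i] - (dl + d[i] + 2) // 4
    for i in range(half):                # undo the predict step
        xr = x[2*i + 2] if 2*i + 2 < n else x[n - 2]
        x[2*i + 1] = d[i] + (x[2*i] + xr) // 2
    return x
```

Because every lifting step uses integer arithmetic with an exact inverse, the original samples come back bit-for-bit, which is why the 5/3 filter is the one used on the lossless path.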

Related

Can anyone suggest a DFT or FFT tool for image analysis?

I would like to compare two video files to determine which one has better quality (is less blurry), using a C program. A friend told me to learn about the DFT (Discrete Fourier Transform) for image analysis, and to use an FFT or DFT tool to learn the difference between blurred and detailed (non-blurry) copies of the same image. So can anyone help me with this?
You could also do this with SSIM or PSNR. Both can be done in C, and there are C++/C#/CUDA versions of these algorithms on the net.
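To give a rough idea of the PSNR half of that suggestion, here is a minimal NumPy sketch (the function name and toy images are my own, not from any particular library); a sharp copy of a frame will typically score a higher PSNR against a common reference than a blurred copy:

```python
import numpy as np

def psnr(img1, img2, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two same-sized images.
    Higher PSNR means the images are closer; identical images give infinity."""
    diff = img1.astype(np.float64) - img2.astype(np.float64)
    mse = np.mean(diff ** 2)             # mean squared error per pixel
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)
```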
Doug

In what precise respects are discrete images different from continuous images?

Please suggest how I can answer this question:
In what precise respects are discrete images different from continuous images?
This is a very general question and I suggest reading about the details in any good textbook on digital image processing, e.g. "Digital Image Processing" by Gonzalez and Woods.
In the following I want to provide a rough overview. The relationship between a continuous image and its discrete counterpart is best described by sampling and quantization.

Let f(x, y) be a continuous image. Sampling means taking values of f at discrete positions (x_1, y_1), (x_2, y_2), ... There is a vast body of literature on how to choose these samples; the most important result is probably the Nyquist-Shannon sampling theorem, which is often seen as the bridge between continuous and discrete signals.

After sampling, the sampled values f(x_1, y_1), f(x_2, y_2), ... are still continuous. Therefore, the next step is quantization: in order to store the values digitally, they are quantized to a finite set of levels. The quantization depends on the bit depth used to store images. In general, 8 bits per color channel are used (e.g. RGB images have 24 bits per pixel), which means every value f(x_i, y_i) is mapped to one of the 256 levels provided by 8-bit quantization. Together, sampling and quantization transform a continuous image into a discrete, or digital, image.
Note that many image processing techniques originate from the continuous image model and can successfully be transferred to the discrete domain (these include simple principles concerning convolution, Fourier analysis, histograms etc.). However, often the discrete model introduces some difficulties one has to be aware of. Among these are quantization errors, sampling issues (e.g. aliasing etc.) and numerical stability.
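The sampling-then-quantization pipeline described above can be sketched as follows (the function f here is a hypothetical continuous image model, chosen only for illustration):

```python
import numpy as np

# Hypothetical continuous image model with values in [0, 1]
def f(x, y):
    return 0.5 * (np.sin(x) + 1.0)

# Sampling: evaluate f on a discrete grid of positions (x_i, y_j)
xs, ys = np.meshgrid(np.linspace(0, 2 * np.pi, 8),
                     np.linspace(0, 2 * np.pi, 8))
samples = f(xs, ys)                      # still continuous-valued

# Quantization: map each sample to one of 256 levels (8 bits per channel)
digital = np.round(samples * 255).astype(np.uint8)
```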

OpenCV Face Verification

Is there a way I can implement face recognition using OpenCV? I tried using LBPH and trained it with one image. It gives a confidence score, but I am not sure how accurate this is to use for verification.
My question is how can I create a face recognition system that tells me how similar the two faces are/if they are the same person or not using OpenCV. It doesn't seem like the confidence score is an accurate measure, if I'm doing this correctly.
Also, is a higher confidence score better?
Thanks
OpenCV 3 currently supports the following algorithms for face recognition:
- Eigenfaces (see createEigenFaceRecognizer())
- Fisherfaces (see createFisherFaceRecognizer())
- Local Binary Patterns Histograms (see createLBPHFaceRecognizer())
The confidence score returned by these algorithms is a similarity measure between faces, but these methods are quite old and perform poorly. I'd suggest you try this article: http://www.robots.ox.ac.uk/~vgg/publications/2015/Parkhi15/parkhi15.pdf
Basically you need to download trained caffe model from here: http://www.robots.ox.ac.uk/~vgg/software/vgg_face/src/vgg_face_caffe.tar.gz
Use OpenCV to run this classifier as shown in this example:
http://docs.opencv.org/trunk/d5/de7/tutorial_dnn_googlenet.html#gsc.tab=0
Then extract the fc8 feature layer (4096 floats) from the Caffe network, and compute your similarity as the L2 norm between the two fc8 vectors calculated for your faces.
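A minimal sketch of that last step, assuming the two fc8 activations are already available as NumPy arrays (the random vectors below are placeholders for real network outputs, and any accept/reject threshold would have to be tuned on your own data):

```python
import numpy as np

def face_distance(fc8_a, fc8_b):
    """L2 distance between two fc8 descriptors; smaller means more similar.
    L2-normalizing first makes the score independent of activation scale."""
    a = fc8_a / np.linalg.norm(fc8_a)
    b = fc8_b / np.linalg.norm(fc8_b)
    return float(np.linalg.norm(a - b))

# Placeholder 4096-float descriptors standing in for two face crops
rng = np.random.default_rng(0)
desc1, desc2 = rng.standard_normal(4096), rng.standard_normal(4096)
```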

JPEG2000 Encoder and Decoder

I am working on JPEG 2000 compression as part of my research. I am looking for an open-source library that can decode a JPEG 2000 image and give me its DWT coefficients, and then encode these coefficients to get the image back.
Thanks in advance

Is Dense SIFT better for Bag-Of-Words than SIFT?

I'm implementing a Bag-of-Words image classifier using OpenCV. Initially I've tested SURF descriptors extracted in SURF keypoints. I've heard that Dense SIFT (or PHOW) descriptors can work better for my purposes, so I tried them too.
To my surprise, they performed significantly worse, actually almost 10 times worse. What could I be doing wrong? I'm using DenseFeatureDetector from OpenCV to get keypoints. I'm extracting about 5000 descriptors per image from 9 layers and cluster them into 500 clusters.
Should I try the PHOW descriptors from the VLFeat library? Also, I can't use a chi-square kernel in OpenCV's SVM implementation, which is recommended in many papers. Is this crucial to classifier quality? Should I try another library?
Another question is the scale invariance, I suspect that it can be affected by dense feature extraction. Am I right?
It depends on the problem. You should try different techniques in order to find out which works best for your problem. Usually PHOW is very useful when you need to classify any kind of scene.
You should know that PHOW is a little different from plain dense SIFT. I used VLFeat's PHOW a few years ago, and looking at the code, it just calls dense SIFT with different bin sizes and some smoothing. That could be one clue to how it remains invariant to scale.
Also, in my experiments I used libsvm, and histogram intersection turned out to be the best kernel for me. By default, the chi-square and histogram intersection kernels are included in neither libsvm nor OpenCV's SVM (which is based on libsvm), so you have to decide whether to try them. I can tell you that the RBF kernel achieved nearly 90% accuracy, whereas histogram intersection reached 93% and chi-square 91%. But those results were from my particular experiments; you should start with RBF and auto-tuned parameters, and see if that's enough.
Summarizing: it all depends on your particular experiments. But if you use dense SIFT, you could try increasing the number of clusters and calling dense SIFT at different scales (I recommend the PHOW approach).
EDIT: I was looking at OpenCV DenseSift, and maybe you could start with
m_detector = new DenseFeatureDetector(4, 4, 1.5);
knowing that VLFeat's PHOW uses [4 6 8 10] as bin sizes.
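Since the histogram intersection kernel ships with neither libsvm nor OpenCV's SVM, here is a minimal NumPy sketch of it (the function name is mine); scikit-learn's SVC, for instance, accepts such a callable as a custom kernel:

```python
import numpy as np

def hist_intersection_kernel(X, Y):
    """Gram matrix K[i, j] = sum_k min(X[i, k], Y[j, k]) for row-wise histograms."""
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=2)
```

With scikit-learn this could be passed as `SVC(kernel=hist_intersection_kernel)`. Note the broadcasting builds an (n, m, k) intermediate array, so for large training sets a chunked computation would be needed.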
