Hi, I have a problem implementing the following pipeline:
DPCM ----> Entropy Coding
My DPCM predictor uses the neighborhood
A B
C X
where the current pixel X is predicted as 0.75*A - 0.5*B + 0.75*C.
public static int[][] predictor(int[][] copy, int wt, int ht)
{
    int[][] error = new int[ht][wt];
    //error[0][0] = copy[0][0];
    for (int i = 0; i < ht; i++)
    {
        for (int j = 0; j < wt; j++)
        {
            if (j == 0 && i == 0)
                error[i][j] = copy[0][0];                  // top-left pixel: stored as-is
            else if (j == 0 && i != 0)
                error[i][j] = copy[i][j] - copy[i-1][j];   // first column: predict from the pixel above
            else if (i == 0 && j != 0)
                error[i][j] = copy[i][j] - copy[i][j-1];   // first row: predict from the pixel to the left
            else
                error[i][j] = copy[i][j] - (int)(0.75*copy[i][j-1] - 0.5*copy[i-1][j] + 0.75*copy[i-1][j-1]);
        }
    }
    return error;
}
I have implemented this and I store the errors in a 2d array.
After this I need to do Modified Huffman Coding (code book size of 128)
So what I don't get is: is the modified Huffman coding to be done on the error values, which can also be negative?
And what does code book size actually mean?
Your help would be greatly appreciated!
Modified Huffman coding is used in fax machines to encode black marks on a white background. You will have two Huffman tables, one for black runs and one for white runs, which is not related to the mean-square error. Even if it were related, the purpose of using MSE would be to eliminate negative values, so a simple signed error metric such as plain subtraction will not work. As you may have noticed, you will spend fewer bits on black runs and more bits on white runs. Code word length means the size of each code vector; code book size, on the other hand, means the number of vectors you have in your code book.
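To make "code book size 128" concrete: if you decide to entropy-code the prediction errors directly, one option is to fold the signed errors into 128 symbol indices and assign each index its own Huffman code. The following C++ sketch is purely illustrative; the clamping range [-64, 63] is an assumption for this sketch, not part of the fax-style modified Huffman scheme.

#include <algorithm>

// Illustrative only: map a signed DPCM prediction error to one of 128
// symbol indices (0..127) that a 128-entry Huffman code book could cover.
// The clamping range [-64, 63] is an arbitrary choice for this sketch.
int errorToSymbol(int error)
{
    int clamped = std::max(-64, std::min(63, error));
    return clamped + 64;   // shift so the index is non-negative
}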
First time posting, thanks for the great community!
I am using AudioKit and trying to add frequency weighting filters to the microphone input and so I am trying to understand the values that are coming out of the AudioKit AKFFTTap.
Currently I am trying to just print the FFT buffer converted into dB values
for i in 0..<self.bufferSize {
    let db = 20 * log10((self.fft?.fftData[Int(i)])!)
    print(db)
}
I was expecting values in the range of about -128 to 0, but I am getting strange values of nearly -200 dB, and when I blow on the microphone to peg out the readings it only reaches about -60. Am I not approaching this correctly? I was assuming that the values being output from the EZAudioFFT engine would be plain amplitude values and that the normal dB conversion math would work. Anyone have any ideas?
Thanks in advance for any discussion about this issue!
You need to add all of the values from self.fft?.fftData (consider changing negative values to positive before adding) and then change that to decibels
The values in the array correspond to the values of the bins in the FFT. Having a single bin contain a magnitude value close to 1 would mean that a great amount of energy is in that narrow frequency band e.g. a very loud sinusoid (a signal with a single frequency).
Normal sounds, such as the one caused by you blowing on the mic, spread their energy across the entire spectrum, that is, in many bins instead of just one. For this reason, usually the magnitudes get lower as the FFT size increases.
A magnitude of -40 dB in a single bin is quite loud. If you try to play a pure tone, you should see a clear peak in one of the bins.
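To make the "energy spread across bins" point concrete, here is a rough sketch (in C++ for illustration) of collapsing a whole magnitude spectrum into a single level in dB by summing bin powers first. It assumes the bins already hold magnitudes normalized to 0..1; the silence guard value is an arbitrary choice.

#include <cmath>
#include <vector>

// Rough overall level from an array of FFT bin magnitudes.
double overallLevelDb(const std::vector<float>& bins)
{
    double power = 0.0;
    for (float m : bins)
        power += static_cast<double>(m) * m;   // sum of squared bin magnitudes
    const double guard = 1e-12;                // avoid log10(0) for silence
    return 10.0 * std::log10(power + guard);   // total energy expressed in dB
}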
Based on the arguments in this post: Performance of Built-in types, can I conclude that my custom implementation of an int-based point structure is faster or more efficient than the float-based CGPoint? I have reviewed many posts concerning type performance differences but have not found one that covers the scenario of the types being further wrapped in a structure.
Thanks.
// Coord
typedef struct {
    int x;
    int y;
} Coord;

CG_INLINE Coord CoordMake(int x, int y) {
    Coord coord;
    coord.x = x;
    coord.y = y;
    return coord;
}

CG_INLINE bool CoordEqualToCoord(Coord coord, Coord anotherCoord) {
    return coord.x == anotherCoord.x && coord.y == anotherCoord.y;
}

CG_INLINE CGPoint CGPointForCoord(Coord coord) {
    return CGPointMake(coord.x, coord.y);
}
EDIT: I have done purely arithmetical tests and the differences are really negligible until millions of iterations, which my application will not come close to doing. I will continue to use the Coord typedef but will remove the struct for a few of the reasons @meaning-matters suggests. For the record, the tests did show that the int-based structure was about 30% faster, but 30% of 0.0001 seconds is not really something anyone should care about. I am still interested in the points and counter-points on which implementation is better.
It depends on what you are doing with it. For ordinary arithmetic, throughput can be similar. Integer latency is usually a bit less. On some processors, the latency to L1 is better for GPRs than FPR. So, for many tests, the results will come out the same or give a small edge for integer computation. The balance will flip the other way for double vs int64_t computation on 32-bit machines. (If you are writing CPU vector code and can get away with 16-bit computation then it would be much faster to use integer.)
However, in the case of calculating coordinates/addresses for the purpose of loading or storing data into/from a register, integer is clearly better on a CPU. The reason is that the load or store instruction can take an integer operand as an index into an array, but not a floating-point one. To use floating-point coordinates, you at minimum have to convert to integer first, then load or store, so it should always be slower. Typically, there will also have to be some rounding mode applied as well (e.g. a floor() operation) and maybe some non-trivial operation to account for edging modes, such as a CL_ADDRESS_REPEAT addressing mode. Contrast that with a simple AND operation, which may be all that is necessary to achieve the same thing with integers, and it should be clear that integer is a much cheaper format.
On GPUs, which emphasize floating-point computation a bit more and may not invest much in integer computation (even though it is easier), the story is quite different. There you can expect texture unit hardware to use the floating point value directly to find the required data. The floating point arithmetic to find the right coordinate is built in to the hardware and therefore "free" (if we ignore energy consumption considerations) and graphics APIs like GL or CL are built around it.
Generally speaking, though ubiquitous in graphics APIs, floating point itself is a numerically inferior choice for a coordinate system over sampled data. It lumps too much precision in one corner of the image and may cause quantization errors / inconsistencies at the far corners of the image, leading to reduced precision for linear sampling and unexpected rounding effects. For large enough images, some pixels may become unaddressable by the coordinate system, because no floating-point number exists that references that position. It is probably the case that the default rounding mode, round-to-nearest-ties-to-even, is undesirable for coordinate systems, because linear filtering will often place the coordinate halfway between two integer values, resulting in a round up for even pixels and a round down for odd ones. This causes pixel duplication rather than the expected result in the worst case, where they are all halfway cases and the stride is 1. Floating point is nice in that it is somewhat easier to use.
A fixed-point coordinate system allows for consistent coordinate precision and rounding across the entire surface and will avoid these problems. Modulo overflow feeds nicely into some common edging modes. Precision is predictable.
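As a rough illustration of such a fixed-point coordinate system (the 16.16 split below is an assumption for the sketch, not a description of any particular GPU or API):

#include <cstdint>

// Minimal 16.16 fixed-point coordinate: 16 integer bits, 16 fractional bits.
// Precision is uniform across the whole image, unlike floating point.
struct Fixed16_16 {
    int32_t raw;   // coordinate value * 65536
};

// Split a (non-negative) coordinate into an integer pixel index and a
// fractional weight for linear filtering.
inline void splitCoord(Fixed16_16 c, int32_t& index, float& frac)
{
    index = c.raw >> 16;                     // floor(value)
    frac  = (c.raw & 0xFFFF) / 65536.0f;     // fractional part in [0, 1)
}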
Confirmed by a quick search: 32-bit int and float operations seem equally fast on ARM processors (taking 1 CPU cycle each). Please look for yourself and run a simple test, as Zev Eisenberg correctly suggests.
Then it's not a good idea to start writing your own CGPoint stuff using ints for the following reasons (to name a few):
Incorrect results: Rounding or truncating coordinates to integers will give all kinds of weird/horrible/side effects.
Incompatibility with the multitude of iOS libraries.
A big waste of time.
Not faster.
Creating a messy code base (Knuth is right, as Zaph brings up).
As always when trying to optimise: take a step back and investigate whether your current method/algorithm is the best choice (for possibly different scenarios in your application). This is commonly the route to massive improvements of hundreds of percent.
I'm trying to use the PCA class in OpenCV to perform the principal component analysis operation in my C++ application. I'm new to OpenCV and I'm having a problem, so I would appreciate some help.
I'm running a demo example in both Matlab and the PCA class to check the answers.
When I use a 2*10 data array with the parameter CV_PCA_DATA_AS_COL, I have two dimensions, so I expect 2 eigenvectors with 2 elements each; this worked fine as expected, with the same results as Matlab.
But when using a 10*2 data array (generally, when the number of samples is less than the number of dimensions), I get a 2*10 array of eigenvectors, i.e. 2 eigenvectors with 10 elements each. This is not expected, and it's not the result given by Matlab (Matlab gives a 10*10 matrix of eigenvectors).
I don't know why I'm getting those results, and because of this I can't project the data onto the principal components in my application. Any help?
P.S.: The code I used:
Mat Mean;
Mat H(10, 2, CV_32F); // then the matrix is filled with data
PCA pca(H, Mean, CV_PCA_DATA_AS_COL, 0);
pca.operator()(H, Mean, CV_PCA_DATA_AS_COL, 0);
cout << pca.eigenvectors.rows; // gives 2 instead of 10
cout << pca.eigenvectors.cols; // gives 10
I'd state it as follows:
If the number of samples is less than the data dimension then the number of retained components will be clamped at the number of samples.
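A minimal sketch that reproduces the situation above with the C++ API (cv::PCA::DATA_AS_COL is the modern spelling of CV_PCA_DATA_AS_COL; the random fill is just dummy data):

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // 10x2 data interpreted column-wise: 2 samples, each of dimension 10,
    // so at most 2 principal components can be retained.
    cv::Mat H(10, 2, CV_32F);
    cv::randu(H, cv::Scalar(0), cv::Scalar(1));
    cv::PCA pca(H, cv::Mat(), cv::PCA::DATA_AS_COL, 0);
    std::cout << pca.eigenvectors.rows << " x "      // 2 retained components...
              << pca.eigenvectors.cols << std::endl; // ...each of length 10
    return 0;
}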
We did 3x3 PCA for a mechanics subject at uni, and some non-linear control algorithms used similar approaches - my memory is foggy, but it may have something to do with assumptions regarding pseudo-inverses and non-square matrices...
Once you delve into the theory - web-search 'PCA with fewer samples than dimensions' - it gets messy fast!
I'm able to read a wav file and its values. I need to find the positions and values of peaks and pits. At first I tried to smooth the signal with the (i-1 + i + i+1) / 3 formula and then scan the array with conditions like array[i-1] > array[i] && direction == 'up' --> pit, but because of noise and other calculations needed later in the project, I'm trying to find a better approach. For the last couple of days I've been researching the FFT. As I understand it, the FFT translates the audio into a series of sines and cosines: after the FFT operation, the returned values are the coefficients a_k and b_k of a0 + ak * cos(k*x) + bk * sin(k*x) for increasing k and x, as in this picture
http://zone.ni.com/images/reference/en-XX/help/371361E-01/loc_eps_sigadd3freqcomp.gif
My question is: does the FFT help me find peaks and pits in audio? Does anybody have experience with this kind of problem?
It depends on exactly what you are trying to do, which you haven't really made clear. "Finding the peaks and pits" is one thing, but since there might be various reasons for doing this there might be various methods. It sounds like you already tried the straightforward thing of actually looking for the local maxima and minima. Here are some tips:
you do not need the FFT.
audio data usually swings above and below zero (there are exceptions, such as 8-bit wavs, which are unsigned), so you must be aware of positive and negative values. Generally, large positive and large negative values carry large amounts of energy, so you want to count those the same.
due to #2, if you want to average, you might want to take the average of the absolute value, or more commonly, the average of the square. Once you find the average of the squares, take the square root of that value: this gives the RMS, which is related to the power of the signal, so you might do something like this if you are trying to indicate signal loudness or intensity, or to approximate an analog meter (see the sketch after these tips). The average of absolute values may be more robust against extreme values, but is less commonly used.
another approach is to simply look for the peak of the absolute value over some number of samples; this is commonly done when drawing waveforms, and for digital "peak" meters. It makes less sense to look at the minimum absolute value.
Once you've done something like the above, yes, you may want to compute the log of the value you've found in order to display the signal in dB, but make sure you use the right formula. 10 * log_10( amplitude ) is not it. Rule of thumb: usually when computing logs from amplitude you will see a 20, not a 10. If you want to compute dBFS (the amount of "headroom" before clipping, which is the standard measurement for digital meters), the formula is -20 * log_10( |amplitude| ), where amplitude is normalized to +/- 1. Watch out for amplitude = 0, which gives infinite headroom in dB.
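A minimal sketch of tips #3 and #5 (RMS over a window, then conversion to dB), written in C++ for illustration. It assumes the samples are already normalized to +/- 1; the guard against log10(0) is an arbitrary choice.

#include <cmath>
#include <cstddef>

// Root-mean-square of a window of samples: average of the squares, then square root.
double windowRms(const float* samples, std::size_t n)
{
    double sumSquares = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        sumSquares += static_cast<double>(samples[i]) * samples[i];
    return std::sqrt(sumSquares / n);
}

// Convert an amplitude-like value to dB: note the 20, not 10.
double amplitudeToDb(double amplitude)
{
    const double guard = 1e-12;                  // avoid log10(0) for silence
    return 20.0 * std::log10(amplitude + guard);
}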
If I understand you correctly, you just want to estimate the relative loudness/quietness of a digital audio sample at a given point.
For this estimation, you don't need to use the FFT. However, your method of averaging the signal does not produce the appropriate picture either.
The digital signal is the value of the audio wave at a given moment. You need to find the overall amplitude of the signal at that moment. You can roughly see it as the local maximum value over a given interval around the moment you want to measure. You can keep a moving max over the signal and use it as your amplitude estimate.
With 16-bit samples, the signal value can go from 0 up to 32767. At a 44.1 kHz sample rate, you can find peaks and pits of around 0.01 s by finding the max value of the 441 samples around a given moment t.
max = 1;
for (i = 0; i < 441; i++)
    if (array[t*44100 + i] > max)
        max = array[t*44100 + i];
Then, to represent it on a 0-to-1 scale (not really 0, because we used a minimum of 1):
amplitude = max / 32767;
or you might represent it on a relative logarithmic dB scale (here you see why we used 1 as the minimum value):
dB = 20 * log10(amplitude);
All you need to do is take dy/dx, which you can approximate by scanning through the wave, subtracting the previous value from the current one, and looking at where the difference goes to zero or changes from positive to negative.
In this code I kept it very brief for the sake of brevity; of course you could handle the case of dy being zero better, find the 'centre' of a long flat peak, that kind of thing. But if all you need is basic peaks and troughs, this will find them.
lastY = 0;
bool goingup = true;
for (i = 0; i < wave.length; i++) {
    y = wave[i];
    dy = y - lastY;
    bool stillgoingup = (dy > 0);
    if (goingup != stillgoingup) {
        // changed direction - note the value of i (place) and y (height)
        goingup = stillgoingup;
    }
    lastY = y;
}
I need a simple and fast way to compare two images for similarity. I.e. I want to get a high value if they contain exactly the same thing but may have a slightly different background and may be moved / resized by a few pixels.
(More concrete, if that matters: The one picture is an icon and the other picture is a subarea of a screenshot and I want to know if that subarea is exactly the icon or not.)
I have OpenCV at hand but I am still not that used to it.
One possibility I thought about so far: Divide both pictures into 10x10 cells and for each of those 100 cells, compare the color histogram. Then I can set some made up threshold value and if the value I get is above that threshold, I assume that they are similar.
I haven't yet tried how well that works, but I guess it would be good enough. The images are already pretty similar (in my use case), so I can use a fairly high threshold value.
I guess there are dozens of other possible solutions for this which would work more or less (as the task itself is quite simple as I only want to detect similarity if they are really very similar). What would you suggest?
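For what it's worth, here is a minimal OpenCV sketch of that cell-by-cell histogram idea, assuming two same-sized 8-bit grayscale images; the 10x10 grid, the 32 bins and the use of histogram correlation are arbitrary choices for illustration.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Compare two same-sized grayscale images cell by cell using histogram correlation.
// Returns the mean correlation over all cells (1.0 = identical histograms).
double cellHistogramSimilarity(const cv::Mat& a, const cv::Mat& b, int grid = 10)
{
    CV_Assert(a.size() == b.size() && a.type() == CV_8UC1 && b.type() == CV_8UC1);
    const int histSize = 32;                  // coarse bins are usually enough here
    const float range[] = {0, 256};
    const float* ranges[] = {range};
    int channels[] = {0};
    const int cellW = a.cols / grid, cellH = a.rows / grid;
    double total = 0.0;
    for (int gy = 0; gy < grid; ++gy)
        for (int gx = 0; gx < grid; ++gx) {
            cv::Rect cell(gx * cellW, gy * cellH, cellW, cellH);
            cv::Mat roiA = a(cell), roiB = b(cell), histA, histB;
            cv::calcHist(&roiA, 1, channels, cv::Mat(), histA, 1, &histSize, ranges);
            cv::calcHist(&roiB, 1, channels, cv::Mat(), histB, 1, &histSize, ranges);
            total += cv::compareHist(histA, histB, cv::HISTCMP_CORREL);
        }
    return total / (grid * grid);
}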
There are a few very related / similar questions about obtaining a signature/fingerprint/hash from an image:
OpenCV / SURF How to generate a image hash / fingerprint / signature out of the descriptors?
Image fingerprint to compare similarity of many images
Near-Duplicate Image Detection
OpenCV: Fingerprint Image and Compare Against Database.
more, more, more, more, more, more, more
Also, I stumbled upon these implementations which have such functions to obtain a fingerprint:
pHash
imgSeek (GitHub repo) (GPL) based on the paper Fast Multiresolution Image Querying
image-match. Very similar to what I was searching for. Similar to pHash, based on An image signature for any kind of image, Goldberg et al. Uses Python and Elasticsearch.
iqdb
ImageHash. supports pHash.
Image Deduplicator (imagededup). Supports CNN, PHash, DHash, WHash, AHash.
Some discussions about perceptual image hashes: here
A bit off-topic: there exist many methods to create audio fingerprints. MusicBrainz, a web service which provides fingerprint-based lookup for songs, has a good overview in their wiki. They are using AcoustID now. This is for finding exact (or mostly exact) matches. For finding similar matches (or if you only have some snippets or a lot of noise), take a look at Echoprint. A related SO question is here. So it seems like this is solved for audio. All these solutions work quite well.
A somewhat more generic question about fuzzy search in general is here. E.g. there is locality-sensitive hashing and nearest neighbor search.
Can the screenshot or icon be transformed (scaled, rotated, skewed ...)? There are quite a few methods off the top of my head that could possibly help you:
Simple euclidean distance as mentioned by @carlosdc (doesn't work with transformed images and you need a threshold).
(Normalized) Cross Correlation - a simple metric which you can use for comparison of image areas. It's more robust than the simple euclidean distance but doesn't work on transformed images, and you will again need a threshold.
Histogram comparison - if you use normalized histograms, this method works well and is not affected by affine transforms. The problem is determining the correct threshold. It is also very sensitive to color changes (brightness, contrast etc.). You can combine it with the previous two.
Detectors of salient points/areas - such as MSER (Maximally Stable Extremal Regions), SURF or SIFT. These are very robust algorithms and they might be too complicated for your simple task. Good thing is that you do not have to have an exact area with only one icon, these detectors are powerful enough to find the right match. A nice evaluation of these methods is in this paper: Local invariant feature detectors: a survey.
Most of these are already implemented in OpenCV - see for example the cvMatchTemplate method (which performs template matching via squared differences or cross-correlation): http://dasl.mem.drexel.edu/~noahKuntz/openCVTut6.html. The salient point/area detectors are also available - see OpenCV Feature Detection.
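As a concrete starting point for the icon-in-screenshot case, here is a minimal template-matching sketch using normalized cross-correlation; the file names and the 0.9 threshold are placeholders, not recommendations.

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main()
{
    cv::Mat screenshot = cv::imread("screenshot.png", cv::IMREAD_GRAYSCALE);
    cv::Mat icon       = cv::imread("icon.png",       cv::IMREAD_GRAYSCALE);
    if (screenshot.empty() || icon.empty())
        return 2;                               // could not load the test images

    cv::Mat response;
    cv::matchTemplate(screenshot, icon, response, cv::TM_CCOEFF_NORMED);

    double maxVal = 0.0;
    cv::Point maxLoc;
    cv::minMaxLoc(response, nullptr, &maxVal, nullptr, &maxLoc);

    // maxVal is 1.0 for a perfect match; pick a threshold that suits your icons.
    std::cout << "best score " << maxVal
              << " at (" << maxLoc.x << ", " << maxLoc.y << ")\n";
    return maxVal > 0.9 ? 0 : 1;
}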
I faced the same issue recently. To solve this problem (a simple and fast algorithm to compare two images) once and for all, I contributed an img_hash module to opencv_contrib; you can find the details at this link.
The img_hash module provides six image hashing algorithms and is quite easy to use.
Code example:
(Example images: original lena, blurred lena, resized lena, shifted lena.)
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/img_hash.hpp>
#include <opencv2/imgproc.hpp>

#include <iostream>

void compute(cv::Ptr<cv::img_hash::ImgHashBase> algo)
{
    auto input = cv::imread("lena.png");
    cv::Mat similar_img;

    //detect similar image after blur attack
    cv::GaussianBlur(input, similar_img, {7, 7}, 2, 2);
    cv::imwrite("lena_blur.png", similar_img);
    cv::Mat hash_input, hash_similar;
    algo->compute(input, hash_input);
    algo->compute(similar_img, hash_similar);
    std::cout << "gaussian blur attack : "
              << algo->compare(hash_input, hash_similar) << std::endl;

    //detect similar image after shift attack
    similar_img.setTo(0);
    input(cv::Rect(0, 10, input.cols, input.rows - 10)).
        copyTo(similar_img(cv::Rect(0, 0, input.cols, input.rows - 10)));
    cv::imwrite("lena_shift.png", similar_img);
    algo->compute(similar_img, hash_similar);
    std::cout << "shift attack : "
              << algo->compare(hash_input, hash_similar) << std::endl;

    //detect similar image after resize attack
    cv::resize(input, similar_img, {120, 40});
    cv::imwrite("lena_resize.png", similar_img);
    algo->compute(similar_img, hash_similar);
    std::cout << "resize attack : "
              << algo->compare(hash_input, hash_similar) << std::endl;
}

int main()
{
    using namespace cv::img_hash;

    //disabling OpenCL acceleration may (or may not) speed up img_hash
    cv::ocl::setUseOpenCL(false);

    //if the value returned by compare is <= 8, the images are
    //very similar to each other
    compute(ColorMomentHash::create());

    //there are other algorithms you can try out;
    //every algorithm has its pros and cons
    compute(AverageHash::create());
    compute(PHash::create());
    compute(MarrHildrethHash::create());
    compute(RadialVarianceHash::create());

    //BlockMeanHash supports mode 0 and mode 1, which correspond to
    //mode 1 and mode 2 of the PHash library
    compute(BlockMeanHash::create(0));
    compute(BlockMeanHash::create(1));
}
In this case, ColorMomentHash gives us the best result:
gaussian blur attack : 0.567521
shift attack : 0.229728
resize attack : 0.229358
Pros and cons of each algorithm
The performance of img_hash is good too
Speed comparison with the PHash library (100 images from ukbench):
If you want to know the recommended thresholds for these algorithms, please check this post (http://qtandopencv.blogspot.my/2016/06/introduction-to-image-hash-module-of.html).
If you are interested in how I measure the performance of the img_hash module (including speed and different attacks), please check this link (http://qtandopencv.blogspot.my/2016/06/speed-up-image-hashing-of-opencvimghash.html).
Does the screenshot contain only the icon? If so, the L2 distance of the two images might suffice. If the L2 distance doesn't work, the next step is to try something simple and well established, like Lucas-Kanade, which I'm sure is available in OpenCV.
If you want an index of the similarity of the two pictures, I suggest the SSIM index among the available metrics. It is more consistent with the human eye. Here is an article about it: Structural Similarity Index
It is implemented in OpenCV too, and it can be accelerated with GPU: OpenCV SSIM with GPU
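If you prefer to stay with plain OpenCV calls, SSIM can also be computed directly; the sketch below is a condensed version of the usual Gaussian-window formulation (11x11 window, C1 = (0.01*255)^2, C2 = (0.03*255)^2) and is intended for single-channel images.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Mean SSIM between two single-channel images of the same size.
double getMSSIM(const cv::Mat& i1, const cv::Mat& i2)
{
    const double C1 = 6.5025, C2 = 58.5225;
    cv::Mat I1, I2;
    i1.convertTo(I1, CV_32F);
    i2.convertTo(I2, CV_32F);

    cv::Mat I1_2 = I1.mul(I1), I2_2 = I2.mul(I2), I1_I2 = I1.mul(I2);

    cv::Mat mu1, mu2;
    cv::GaussianBlur(I1, mu1, cv::Size(11, 11), 1.5);
    cv::GaussianBlur(I2, mu2, cv::Size(11, 11), 1.5);
    cv::Mat mu1_2 = mu1.mul(mu1), mu2_2 = mu2.mul(mu2), mu1_mu2 = mu1.mul(mu2);

    cv::Mat sigma1_2, sigma2_2, sigma12;
    cv::GaussianBlur(I1_2, sigma1_2, cv::Size(11, 11), 1.5);  sigma1_2 -= mu1_2;
    cv::GaussianBlur(I2_2, sigma2_2, cv::Size(11, 11), 1.5);  sigma2_2 -= mu2_2;
    cv::GaussianBlur(I1_I2, sigma12, cv::Size(11, 11), 1.5);  sigma12  -= mu1_mu2;

    cv::Mat t1 = 2 * mu1_mu2 + C1, t2 = 2 * sigma12 + C2;
    cv::Mat numerator = t1.mul(t2);
    t1 = mu1_2 + mu2_2 + C1;
    t2 = sigma1_2 + sigma2_2 + C2;
    cv::Mat denominator = t1.mul(t2);

    cv::Mat ssimMap;
    cv::divide(numerator, denominator, ssimMap);   // per-pixel SSIM
    return cv::mean(ssimMap)[0];                   // average over the image
}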
If you can be sure to have precise alignment of your template (the icon) to the testing region, then any old sum of pixel differences will work.
If the alignment is only going to be a tiny bit off, then you can low-pass both images with cv::GaussianBlur before finding the sum of pixel differences.
If the quality of the alignment is potentially poor then I would recommend either a Histogram of Oriented Gradients or one of OpenCV's convenient keypoint detection/descriptor algorithms (such as SIFT or SURF).
For matching identical images - code for the L2 distance:
// Compare two images by getting the L2 error (square-root of sum of squared error).
double getSimilarity( const Mat A, const Mat B ) {
    if ( A.rows > 0 && A.rows == B.rows && A.cols > 0 && A.cols == B.cols ) {
        // Calculate the L2 relative error between the images.
        double errorL2 = norm( A, B, CV_L2 );
        // Convert to a reasonable scale, since L2 error is summed across all pixels of the image.
        double similarity = errorL2 / (double)( A.rows * A.cols );
        return similarity;
    }
    else {
        // Images have a different size
        return 100000000.0;  // Return a bad value
    }
}
Fast. But not robust to changes in lighting/viewpoint etc.
Source
If you want to compare images for similarity, I suggest you use OpenCV. In OpenCV there are both feature matching and template matching. For feature matching, there are detectors such as SURF, SIFT, FAST and so on. You can use these to detect, describe and then match images. After that, you can use a specific index to find the number of matches between the two images.
Hu invariant moments are a very powerful tool for comparing two images.
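For instance, here is a short sketch comparing two grayscale images through their seven Hu moments (the signed-log transform and the simple L1 distance are illustrative choices, not a standard metric):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cmath>

// Distance between two grayscale images based on their Hu invariant moments.
// Smaller means more similar.
double huMomentDistance(const cv::Mat& a, const cv::Mat& b)
{
    double huA[7], huB[7];
    cv::HuMoments(cv::moments(a), huA);
    cv::HuMoments(cv::moments(b), huB);

    double dist = 0.0;
    for (int i = 0; i < 7; ++i) {
        // Hu moments span many orders of magnitude, so compare their signed logs.
        double la = std::copysign(std::log10(std::abs(huA[i]) + 1e-30), huA[i]);
        double lb = std::copysign(std::log10(std::abs(huB[i]) + 1e-30), huB[i]);
        dist += std::abs(la - lb);
    }
    return dist;
}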
Hash functions are used in the undouble library to detect (near-)identical images (disclaimer: I am also the author). This is a simple and fast way to compare two or more images for similarity. It works using a multi-step process of pre-processing the images (grayscaling, normalizing, and scaling), computing the image hash, and grouping images based on a threshold value.