I'm reading through the following paper on how to implement CRC32 efficiently using the PCLMULQDQ instruction introduced in Intel Westmere and AMD Bulldozer:
V. Gopal et al. "Fast CRC Computation for Generic Polynomials Using PCLMULQDQ Instruction." 2009. http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
I understand the algorithm, but one thing I'm not sure about is how to calculate the constants $k_i$. For example, they provide the constant values for the IEEE 802.3 polynomial:
k1 = x^(4*128+64) mod P(x) = 0x8833794C
k4 = x^128 mod P(x) = 0xE8A45605
mu = x^64 div P(x) = 0x104D101DF
and so on. I can just use these constants as I only need to support the one polynomial, but I'm interested: how did they calculate those numbers? I can't just use a typical bignum implementation (e.g. the one provided by Python) because the arithmetic must happen in GF(2).
It's just like regular division, except you exclusive-or instead of subtract. So start with the most significant 1 in the dividend. Exclusive-or the dividend by the polynomial, lining up the most significant 1 of the polynomial with that 1 in the dividend to turn it into a zero. Repeat until you have eliminated all of the 1's above the low n bits, where n is the order of the polynomial. The result is the remainder.
Make sure that your polynomial representation includes the high term, i.e. has a 1 in bit n: use 0x104C11DB7, not 0x4C11DB7.
If you want the quotient (which you wrote as "div"), then keep track of the positions of the 1's you eliminated. That set, shifted down by n, is the quotient.
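For example, take the toy polynomial p(x) = x^3 + x + 1 (binary 1011, so n = 3) and compute x^5 mod p(x):

  100000     x^5
^ 101100     p(x) shifted left by 2, aligned to the 1 in bit 5
  ------
  001100
^ 001011     p(x) shifted left by 0, aligned to the 1 in bit 3
  ------
  000111

The low n = 3 bits, 111 = x^2 + x + 1, are the remainder. The eliminated 1's were at bits 5 and 3; shifted down by n, that gives 101 = x^2 + 1, the quotient. (Check: (x^2 + 1)(x^3 + x + 1) + (x^2 + x + 1) = x^5 over GF(2).)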
Here is how:
/* Placed in the public domain by Mark Adler, Jan 18, 2014. */

#include <stdio.h>
#include <inttypes.h>

/* Polynomial type -- must be an unsigned integer type. */
typedef uintmax_t poly_t;
#define PPOLY PRIxMAX

/* Return x^n mod p(x) over GF(2). x^deg is the highest power of x in p(x).
   The positions of the bits set in poly represent the remaining powers of x in
   p(x). In addition, returned in *div are as many of the least significant
   quotient bits as will fit in a poly_t. */
static poly_t xnmodp(unsigned n, poly_t poly, unsigned deg, poly_t *div)
{
    poly_t mod, mask, high;

    if (n < deg) {
        *div = 0;
        return poly;
    }
    mask = ((poly_t)1 << deg) - 1;
    poly &= mask;
    mod = poly;
    *div = 1;
    deg--;
    while (--n > deg) {
        high = (mod >> deg) & 1;
        *div = (*div << 1) | high;  /* quotient bits may be lost off the top */
        mod <<= 1;
        if (high)
            mod ^= poly;
    }
    return mod & mask;
}

/* Compute and show x^n modulo the IEEE 802.3 CRC-32 polynomial. If showdiv is
   true, also show the low bits of the quotient. */
static void show(unsigned n, int showdiv)
{
    poly_t div;

    printf("x^%u mod p(x) = %#" PPOLY "\n", n, xnmodp(n, 0x4C11DB7, 32, &div));
    if (showdiv)
        printf("x^%u div p(x) = %#" PPOLY "\n", n, div);
}

/* Compute the constants required to use PCLMULQDQ to compute the IEEE 802.3
   32-bit CRC. These results appear on page 16 of the Intel paper "Fast CRC
   Computation Using PCLMULQDQ Instruction". */
int main(void)
{
    show(4*128+64, 0);
    show(4*128, 0);
    show(128+64, 0);
    show(128, 0);
    show(96, 0);
    show(64, 1);
    return 0;
}
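For reference, the program's output should reproduce the constants quoted in the question (the remaining lines, elided here, should match the rest of the table on page 16 of the Intel paper):

x^576 mod p(x) = 0x8833794c
...
x^128 mod p(x) = 0xe8a45605
...
x^64 div p(x) = 0x104d101df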
There are many decay-like physical processes (for example, body friction or charge leakage) that are usually modelled with an iteration like x' = x * 0.99, which is very easy to write in floating-point arithmetic.
However, I need to do this with 16-bit "8.8" signed fixed-point values, in SSE. For efficient implementation on a typical ALU, the formula above can be rewritten as x = x - x/128, or x = x - (x>>7), where >> is an "arithmetic", sign-extending right shift.
And here I'm stuck, because _mm_sra_epi16() produces totally counterintuitive behaviour, which is easily verified by the following example:
#include <cstdint>
#include <iostream>
#include <emmintrin.h>

using namespace std;

int main(int argc, char** argv) {
    cout << "required: ";
    for (int i = -1; i < 7; ++i) {
        cout << hex << (0x7fff >> i) << ", ";
    }
    cout << endl;
    cout << "produced: ";
    __m128i a = _mm_set1_epi16(0x7fff);
    __m128i b = _mm_set_epi16(-1, 0, 1, 2, 3, 4, 5, 6);
    auto c = _mm_sra_epi16(a, b);
    for (auto i = 0; i < 8; ++i) {
        cout << hex << c.m128i_i16[i] << ", ";
    }
    cout << endl;
    return 0;
}
Output would be as follows:
required: 0, 7fff, 3fff, 1fff, fff, 7ff, 3ff, 1ff,
produced: 0, 0, 0, 0, 0, 0, 0, 0,
It only applies the first shift count to all elements, as if it were actually an _mm_sra1_epi16 function, accidentally named sra and given a __m128i second argument for no reason. So this cannot be used as-is.
On the other hand, I've heard that the division algorithm is enormously complex, which is why _mm_div_epi16 is absent from SSE, so that cannot be used either.
What should I do, and how do I implement/vectorize this popular "decay" technique?
x -= x>>7 is trivial to implement with SSE2, using a constant shift count for efficiency. This compiles to 2 instructions if AVX is available, otherwise a movdqa is needed to copy v before a destructive right-shift.
__m128i downscale(__m128i v){
    __m128i dec = _mm_srai_epi16(v, 7);
    return _mm_sub_epi16(v, dec);
}
GCC even auto-vectorizes it (Godbolt).
void foo(short *__restrict a) {
    for (int i=0 ; i<10240 ; i++) {
        a[i] -= a[i]>>7;    // inner loop uses the same psraw / psubw
    }
}
Unlike float, fixed-point has constant absolute precision over the full range, not constant relative precision. So for small positive numbers, v>>7 will be zero and your decrement will stall. (Negative inputs underflow to -1, because arithmetic right shift rounds towards -infinity.)
For small inputs, where the shift can truncate to 0, you might want to OR with _mm_set1_epi16(1) to make sure the decrement is non-zero. This has a negligible effect on large-ish inputs. However, it will eventually take a downscale chain from 0 to -1 (and then back up to 0, because -1 | 1 == -1 in 2's complement).
__m128i downscale_nonzero(__m128i v){
    __m128i dec = _mm_srai_epi16(v, 7);
    dec = _mm_or_si128(dec, _mm_set1_epi16(1));
    return _mm_sub_epi16(v, dec);
}
If starting negative, the sequence would be -large, logarithmic until -128, linear until -4, -3, -2, -1, 0, -1, 0, -1, ...
Your code got all-zeros because _mm_sra_epi16 uses the low 64 bits of the 2nd source vector as a 64-bit shift count that applies to all elements. Read the manual. So you shifted all the bits out of each 16-bit element.
It's not idiotic, but per-element shift counts require AVX2 (for 32/64-bit elements) or AVX512BW (for _mm_srav_epi16, or 64-bit arithmetic right shifts), which would make sense for the way you're trying to use it. (But the shift count is unsigned, so -1 is also going to shift out all the bits.)
Indeed, that instruction should be named _mm_sra1_epi16()
Yup, that would make sense. But remember that when these were named, AVX2 _mm_srav_* didn't exist yet. Also, that specific name would not be ideal because 1 and i are not the most visually distinct. (i for immediate, for the psraw xmm1, imm8 form instead of the psraw xmm1, xmm2/m128 form of the asm instruction: http://felixcloutier.com/x86/PSRAW:PSRAD:PSRAQ.html).
The other way it makes sense is that the MMX/SSE2 asm instruction has two forms: immediate (with the same count for all elements, of course), and vector. Instead of forcing you to broadcast the count to all elements, the vector version takes the scalar count in the bottom of a vector register. I think the intended use-case is after a movd xmm0, eax or something.
If you need per-element-variable shift counts without AVX512, see various Q&As about emulating it, e.g. Shifting 4 integers right by different values SIMD.
Some of the workarounds use multiplies by powers of 2 for variable left-shift, and then a right shift to put the data where needed. (But you need to somehow get the 1<<n SIMD vector prepared, so this works if the same set of counts is reused for many vectors, or especially if it's a compile-time constant).
With 16-bit elements, you can use just one _mm_mulhi_epi16 to do runtime-variable right shift counts with no precision loss or range limits. mulhi(x, y) is exactly like (x*(int)y) >> 16, so you can use y = 1<<14 to right shift by 16-14 = 2 in that element.
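As a sketch of that trick (my example, not from the original answer): for a per-element shift count n in 2..16, put y = 1 << (16-n) in that element and take the signed high half of the product. (n = 1 would need y = 1<<15, which is -32768 as a signed 16-bit value, so it doesn't work with this approach.)

#include <emmintrin.h>
#include <cstdint>
#include <cstdio>

int main() {
    __m128i x = _mm_set1_epi16(-1000);
    // Per-element shift counts 9,8,...,2 (highest element first),
    // encoded as y = 1 << (16 - n):
    __m128i y = _mm_set_epi16(1 << 7, 1 << 8, 1 << 9, 1 << 10,
                              1 << 11, 1 << 12, 1 << 13, 1 << 14);
    __m128i r = _mm_mulhi_epi16(x, y);   // mulhi(x, 1<<(16-n)) == x >> n

    int16_t out[8];
    _mm_storeu_si128((__m128i *)out, r);
    for (int i = 0; i < 8; i++)
        printf("%d ", out[i]);           // prints -1000>>2, -1000>>3, ..., -1000>>9
    printf("\n");
    return 0;
}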
There is a lot of information on the internet about the differences between the YUV 4:4:4 and YUV 4:2:2 formats; however, I can't find anything that explains how to convert YUV 4:4:4 to YUV 4:2:2. Since such conversion is performed in software, I was hoping there would be some developers who have done it and could direct me to sources describing the conversion algorithm. Of course, having the software code would be nice, but access to the theory would be sufficient to write my own software. Specifically, I would like to know the pixel structure and how the bytes are managed during the conversion.
I found several similar questions like this and this, but they didn't answer my question. Also, I posted this question on the Photography forum, where it was considered a software question.
The reason you can't find a specific description is that there are many ways to do it.
Let's start with Wikipedia: https://en.wikipedia.org/wiki/Chroma_subsampling#4:2:2
4:4:4:
Each of the three Y'CbCr components have the same sample rate, thus there is no chroma subsampling. This scheme is sometimes used in high-end film scanners and cinematic post production.
and
4:2:2:
The two chroma components are sampled at half the sample rate of luma: the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third with little to no visual difference.
Note: Terms YCbCr and YUV are used interchangeably.
https://en.wikipedia.org/wiki/YCbCr
Y′CbCr is often confused with the YUV color space, and typically the terms YCbCr and YUV are used interchangeably, leading to some confusion; when referring to signals in video or digital form, the term "YUV" mostly means "Y′CbCr".
Data memory ordering:
Again there is more than one format.
Intel IPP documentation defines two main categories: "Pixel-Order Image Formats" and "Planar Image Formats".
There is a nice documentation here: https://software.intel.com/en-us/node/503876
Refer here: http://www.fourcc.org/yuv.php#NV12 for YUV pixel arrangement formats.
Refer here: http://scc.ustc.edu.cn/zlsc/sugon/intel/ipp/ipp_manual/IPPI/ippi_ch6/ch6_image_downsampling.htm#ch6_image_downsampling for downsampling description.
Let's assume "Pixel-Order" format:
YUV 4:4:4 data order: Y0 U0 V0 Y1 U1 V1 Y2 U2 V2 Y3 U3 V3
YUV 4:2:2 data order: Y0 U0 Y1 V0 Y2 U1 Y3 V1
Each element is a single byte, and Y0 is the lower byte in memory.
The 4:2:2 data order described above is named the YUY2 (also known as YUYV) pixel format.
Conversion algorithms:
"Naive sub-sampling":
"Throw" every second U/V component:
Take U0 and throw away U1, take V0 and throw away V1...
Source: Y0 U0 V0 Y1 U1 V1 Y2 U2 V2 Y3 U3 V3
Destination: Y0 U0 Y1 V0 Y2 U2 Y3 V2
I can't recommend it, since it causes aliasing artifacts.
Average each U/V pair:
Destination U0 equals source (U0+U1)/2; the same for V0...
Source: Y0 U0 V0 Y1 U1 V1 Y2 U2 V2 Y3 U3 V3
Destination: Y0 (U0+U1)/2 Y1 (V0+V1)/2 Y2 (U2+U3)/2 Y3 (V2+V3)/2
Use another interpolation method for down-sampling U and V (cubic interpolation, for example).
Usually you will not be able to see any difference compared to simple averaging.
C implementation:
The question is not tagged as C, but I think the following C implementation may be helpful.
The following code converts pixel-ordered YUV 4:4:4 to pixel-ordered YUV 4:2:2 by averaging each U/V pair:
//Convert single row I0 from pixel-ordered YUV 4:4:4 to pixel-ordered YUV 4:2:2.
//Save the result in J0.
//I0 size in bytes is image_width*3
//J0 size in bytes is image_width*2
static void ConvertRowYUV444ToYUV422(const unsigned char I0[],
                                     const int image_width,
                                     unsigned char J0[])
{
    int x;

    //Process two Y,U,V triples per iteration:
    for (x = 0; x < image_width; x += 2)
    {
        //Load source elements
        unsigned char y0 = I0[x*3];                 //Load source Y element
        unsigned int u0 = (unsigned int)I0[x*3+1];  //Load source U element (and convert from uint8 to uint32).
        unsigned int v0 = (unsigned int)I0[x*3+2];  //Load source V element (and convert from uint8 to uint32).

        //Load next source elements
        unsigned char y1 = I0[x*3+3];               //Load source Y element
        unsigned int u1 = (unsigned int)I0[x*3+4];  //Load source U element (and convert from uint8 to uint32).
        unsigned int v1 = (unsigned int)I0[x*3+5];  //Load source V element (and convert from uint8 to uint32).

        //Calculate destination U and V elements.
        //Use shift right by 1 for dividing by 2.
        //Use plus 1 before shifting - round operation instead of floor operation.
        unsigned int u01 = (u0 + u1 + 1) >> 1;      //Destination U element equals average of two source U elements.
        unsigned int v01 = (v0 + v1 + 1) >> 1;      //Destination V element equals average of two source V elements.

        J0[x*2]   = y0;                    //Store Y element (unmodified).
        J0[x*2+1] = (unsigned char)u01;    //Store destination U element (and cast uint32 to uint8).
        J0[x*2+2] = y1;                    //Store Y element (unmodified).
        J0[x*2+3] = (unsigned char)v01;    //Store destination V element (and cast uint32 to uint8).
    }
}
//Convert image I from pixel-ordered YUV 4:4:4 to pixel-ordered YUV 4:2:2.
//I - Input image in pixel-order data YUV 4:4:4 format.
//image_width - Number of columns of image I.
//image_height - Number of rows of image I.
//J - Destination "image" in pixel-order data YUV 4:2:2 format.
//Note: The term "YUV" refers to "Y'CbCr".
//
//I is pixel ordered YUV 4:4:4 format (size in bytes is image_width*image_height*3):
//YUVYUVYUVYUV
//YUVYUVYUVYUV
//YUVYUVYUVYUV
//YUVYUVYUVYUV
//
//J is pixel ordered YUV 4:2:2 format (size in bytes is image_width*image_height*2):
//YUYVYUYV
//YUYVYUYV
//YUYVYUYV
//YUYVYUYV
//
//Conversion algorithm:
//Each element of destination U is the average of 2 original U horizontal elements.
//Each element of destination V is the average of 2 original V horizontal elements.
//
//Limitations:
//1. image_width must be a multiple of 2.
//2. I and J must be two separate arrays (in-place computation is not supported).
static void ConvertYUV444ToYUV422(const unsigned char I[],
                                  const int image_width,
                                  const int image_height,
                                  unsigned char J[])
{
    //I0 points to the source row.
    const unsigned char *I0;    //I0 -> YUVYUVYUV...

    //J0 points to the destination row.
    unsigned char *J0;          //J0 -> YUYVYUYV...

    int y;  //Row index

    //In each iteration process a single row.
    for (y = 0; y < image_height; y++)
    {
        I0 = &I[y*image_width*3];   //Input row width is image_width*3 bytes (each pixel is Y,U,V).
        J0 = &J[y*image_width*2];   //Output row width is image_width*2 bytes (each two pixels are Y,U,Y,V).

        //Process a single source row into a single destination row.
        ConvertRowYUV444ToYUV422(I0, image_width, J0);
    }
}
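For completeness, a minimal usage sketch (my example, not part of the original answer; the buffer sizes follow the comments above):

#include <stdlib.h>

int main(void)
{
    const int image_width = 4, image_height = 2;
    unsigned char *I = (unsigned char *)malloc(image_width * image_height * 3);  //YUV 4:4:4 input
    unsigned char *J = (unsigned char *)malloc(image_width * image_height * 2);  //YUV 4:2:2 output
    int i;

    //Fill the input with arbitrary Y,U,V triples.
    for (i = 0; i < image_width * image_height * 3; i++)
        I[i] = (unsigned char)(i * 7);

    ConvertYUV444ToYUV422(I, image_width, image_height, J);

    free(I);
    free(J);
    return 0;
}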
Planar representation of YUV 4:2:2
Planar representation may be more intuitive than "Pixel-Order" format.
In planar representation each color channel is represented as a separate matrix, which can be displayed as an image.
Example:
Original image in RGB format (before converting to YUV):
Image channels in YUV 4:4:4 format:
(Left YUV triple is represented in gray levels, and right YUV triple is represented using false colors).
Image channels in YUV 4:2:2 format (after horizontal Chroma subsampling):
(Left YUV triple is represented in gray levels, and right YUV triple is represented using "false colors").
As you can see, in 4:2:2 format, the U and V channels are down-sampled (shrunk) in the horizontal axis.
Remark:
The "false colors" representation of U and V channels is used for emphasizing that Y is the Luma channel and U and V are the Chrominance channels.
Higher order interpolation and Anti-Aliasing filter:
The following MATLAB code sample shows how to perform down-sampling with higher-order interpolation and an anti-aliasing filter.
The sample also shows the down-sampling method used by FFMPEG.
Note: you don't need to know MATLAB programming in order to understand the samples.
You do need some knowledge of image filtering by convolution between a Kernel and an image.
%Prepare the input:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
load('mandrill.mat', 'X', 'map'); %Load input image
RGB = im2uint8(ind2rgb(X, map)); %Convert to RGB (the mandrill sample image is an indexed image)
YUV = rgb2ycbcr(RGB); %Convert from RGB to YUV (MATLAB function rgb2ycbcr uses BT.601 conversion formula)
%Separate YUV to 3 planes (Y plane, U plane and V plane)
Y = YUV(:, :, 1);
U = YUV(:, :, 2);
V = YUV(:, :, 3);
U = double(U); %Work in double precision instead of uint8.
[M, N] = size(Y); %Image size is N columns by M rows.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Linear interpolation without Anti-Aliasing filter:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Horizontal down-sampling U plane using Linear interpolation (without Anti-Aliasing filter).
%Simple averaging is equivalent to linear interpolation.
U2 = (U(:, 1:2:end) + U(:, 2:2:end))/2;
refU2 = imresize(U, [M, N/2], 'bilinear', 'Antialiasing', false); %Use MATLAB imresize function as reference
disp(['Linear interpolation max diff = ' num2str(max(abs(double(U2(:)) - double(refU2(:)))))]); %Print maximum difference.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Cubic interpolation without Anti-Aliasing filter:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Horizontal down-sampling U plane using Cubic interpolation (without Anti-Aliasing filter).
%Following operations are equivalent to cubic interpolation:
%1. Convolution with filter kernel [-0.125, 1.25, -0.125]
%2. Averaging pair elements
fU = imfilter(U, [-0.125, 1.25, -0.125], 'symmetric');
U2 = (fU(:, 1:2:end) + fU(:, 2:2:end))/2;
U2 = max(min(U2, 240), 16); %Limit to valid range of U elements (valid range of U elements in uint8 format is [16, 240])
refU2 = imresize(U, [M, N/2], 'cubic', 'Antialiasing', false); %Use MATLAB imresize function as reference
refU2 = max(min(refU2, 240), 16); %Limit to valid range of U elements
disp(['Cubic interpolation max diff = ' num2str(max(abs(double(U2(:)) - double(refU2(:)))))]); %Print maximum difference.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Linear interpolation with Anti-Aliasing filter:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Horizontal down-sampling U plane using Linear interpolation with Anti-Aliasing filter.
%Remark: The Anti-Aliasing filter is the filter used by MATLAB specific implementation of 'bilinear' imresize.
%Following operations are equivalent to Linear interpolation with Anti-Aliasing filter:
%1. Convolution with filter kernel [0.25, 0.5, 0.25]
%2. Averaging pair elements
fU = imfilter(U, [0.25, 0.5, 0.25], 'symmetric');
U2 = (fU(:, 1:2:end) + fU(:, 2:2:end))/2;
refU2 = imresize(U, [M, N/2], 'bilinear', 'Antialiasing', true); %Use MATLAB imresize function as reference
disp(['Linear interpolation with Anti-Aliasing max diff = ' num2str(max(abs(double(U2(:)) - double(refU2(:)))))]); %Print maximum difference.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Cubic interpolation with Anti-Aliasing filter:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Horizontal down-sampling U plane using Cubic interpolation with Anti-Aliasing filter.
%Remark: The Anti-Aliasing filter is the filter used by MATLAB specific implementation of 'cubic' imresize.
%Following operations are equivalent to Cubic interpolation with Anti-Aliasing filter:
%1. Convolution with filter kernel [-0.0234375, -0.046875, 0.2734375, 0.59375, 0.2734375, -0.046875, -0.0234375]
%2. Averaging pair elements
h = [-0.0234375, -0.046875, 0.2734375, 0.59375, 0.2734375, -0.046875, -0.0234375];
fU = imfilter(U, h, 'symmetric');
U2 = (fU(:, 1:2:end) + fU(:, 2:2:end))/2;
U2 = max(min(U2, 240), 16); %Limit to valid range of U elements
refU2 = imresize(U, [M, N/2], 'cubic', 'Antialiasing', true); %Use MATLAB imresize function as reference
refU2 = max(min(refU2, 240), 16); %Limit to valid range of U elements
disp(['Cubic interpolation with Anti-Aliasing max diff = ' num2str(max(abs(double(U2(:)) - double(refU2(:)))))]); %Print maximum difference.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%FFMPEG implementation of horizontal down-sampling U plane.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%FFMPEG uses cubic interpolation with Anti-Aliasing filter (different filter kernel):
%Remark: I didn't check the source code of FFMPEG to verify the values of the filter kernel.
%I can't tell how FFMPEG actually implements the conversion.
%Following operations are equivalent to FFMPEG implementation (with minor differences):
%1. Convolution with filter kernel [-115, -231, 1217, 2354, 1217, -231, -115]/4096
%2. Averaging pair elements
h = [-115, -231, 1217, 2354, 1217, -231, -115]/4096;
fU = imfilter(U, h, 'symmetric');
U2 = (fU(:, 1:2:end) + fU(:, 2:2:end))/2;
U2 = max(min(U2, 240), 16); %Limit to valid range of U elements (FFMPEG actually doesn't limit the result)
%Save Y,U,V planes to file in format supported by FFMPEG
f = fopen('yuv444.yuv', 'w');
fwrite(f, Y', 'uint8');
fwrite(f, U', 'uint8');
fwrite(f, V', 'uint8');
fclose(f);
%For executing FFMPEG within MATLAB, download FFMPEG and place the executable in working directory (ffmpeg.exe for Windows)
%FFMPEG converts source file in YUV444 format to destination file in YUV422 format.
if isunix
[status, cmdout] = system(['./ffmpeg -y -s ', num2str(N), 'x', num2str(M), ' -pix_fmt yuv444p -i yuv444.yuv -pix_fmt yuv422p yuv422.yuv']);
else
[status, cmdout] = system(['ffmpeg.exe -y -s ', num2str(N), 'x', num2str(M), ' -pix_fmt yuv444p -i yuv444.yuv -pix_fmt yuv422p yuv422.yuv']);
end
f = fopen('yuv422.yuv', 'r');
refY = (fread(f, [N, M], '*uint8'))';
refU2 = (fread(f, [N/2, M], '*uint8'))'; %Read down-sampled U plane (FFMPEG result from file).
refV2 = (fread(f, [N/2, M], '*uint8'))';
fclose(f);
%Limit to valid range of U elements.
%In FFMPEG down-sampled U and V may exceed valid range (there is probably a way to tell FFMPEG to limit the result).
refU2 = max(min(refU2, 240), 16);
%Difference excludes first column and last column (FFMPEG treats the margins differently than MATLAB)
%Remark: There are minor differences due to rounding (I guess).
disp(['FFMPEG Cubic interpolation with Anti-Aliasing max diff = ' num2str(max(max(abs(double(U2(:, 2:end-1)) - double(refU2(:, 2:end-1))))))]);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Examples of different kinds of down-sampling methods.
Linear interpolation versus Cubic interpolation with Anti-Aliasing filter:
In the first example (mandrill) there are no visible differences.
In the second example (circle and rectangle) there are minor visible differences.
The third example (lines) demonstrates aliasing artifacts.
Remark: the displayed images were up-sampled from YUV422 to YUV444 using Cubic interpolation, and converted from YUV444 to RGB.
Linear interpolation versus Cubic with Anti-Aliasing (mandrill):
Linear interpolation versus Cubic with Anti-Aliasing (circle and rectangle):
Linear interpolation versus Cubic with Anti-Aliasing (demonstrates Aliasing artifacts):
I've noticed that using convertTo to convert a matrix from 32-bit to 16-bit saturates values to the upper bound: values bigger than 0x0000FFFF in the source matrix will be set to 0xFFFF in the destination matrix.
What I want for my application is instead to mask the values, keeping just the 2 least-significant bytes of each value in the destination.
Here is an example:
Mat mat32;
Mat mat16;
mat32 = Mat(2, 2, CV_32SC1);
for (int y = 0; y < 2; y++)
    for (int x = 0; x < 2; x++)
        mat32.at<unsigned int>(cv::Point(x, y)) = 0x0000FFFE + (y*2 + x);
mat32.convertTo(mat16, CV_16UC1);
The matrices have these values:
32 bits matrix:
0000FFFE 0000FFFF
00010000 00010001
16 bits matrix:
0000FFFE 0000FFFF
0000FFFF 0000FFFF
In the second row of the 16-bit matrix I want to have
00000000 00000001
I can do this by scanning the source matrix value-by-value and masking the values, but the performance is poor.
Is there an OpenCV function that does this?
This can be done, but it requires a somewhat dirty trick, so it is up to you whether to use this approach. Here is how it can be done:
For this example, let's create a 1000x1000 32-bit matrix and set all its values to 65541 (= 256*256 + 5). After the conversion we expect to have a matrix filled with fives.
Mat M1(1000, 1000, CV_32S, Scalar(65541));
And here is the trick:
Mat M2(1000, 1000, CV_16SC2, M1.data);
We created matrix M2 over the same memory buffer as M1, but M2 'thinks' that this buffer holds a 2-channel 16-bit image. Now the last thing to do is to copy the channel you need to the place you need. This can be done with the split() or mixChannels() functions. For example:
Mat M3(1000, 1000, CV_16S);
int fromto[] = {0, 0};
Mat inpu[] = {M2}, outpu[] = {M3};
mixChannels(inpu, 1, outpu, 1, fromto, 1);
cout << M3.at<short>(10, 10) << endl;
Yes, I know that the format of mixChannels looks weird and makes the code even less readable, but it works... If you prefer the split() function:
vector<Mat> v;
split(M2,v);
cout << v[0].at<short>(10,10) << " " << v[1].at<short>(10,10) << endl;
There is no OpenCV function (that I know of) which does the conversion the way you want, so either you code it yourself or, as you said, you go through a masking step first to remove the 16 high bits.
The mask can be applied using the bitwise_and in C++ or cvAndS in C. See here.
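For instance, something along these lines (a sketch using the matrices from the question; cv::bitwise_and accepts a Scalar operand, and the mask keeps only the low 16 bits):

Mat masked, mat16;
bitwise_and(mat32, Scalar(0xFFFF), masked);   // clear the 16 high bits
masked.convertTo(mat16, CV_16UC1);            // values now fit, so no saturation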
You could also have made your hand-written code more efficient. In general, you should avoid OpenCV pixel accessors in loops because they have bad performance. I don't have an OpenCV install at hand so this could be slightly off -- the idea is to use the data field directly, together with step, which is the number of bytes per row:
for (int y = 0; y < mat32.rows; ++y) {
    int* row = (int*)(mat32.data + y * mat32.step);
    for (int x = 0; x < mat32.cols; ++x)
        row[x] &= 0xffff;
}
Then, once the mask is applied, all values fit in 16 bits, and convertTo will copy them without saturating.
The other solution is to code the conversion by hand:
mat16.create(mat32.size(), CV_16UC1);
for (int y = 0; y < mat32.rows; ++y) {
    const int* row32 = (const int*)(mat32.data + y * mat32.step);
    unsigned short* row16 = (unsigned short*)(mat16.data + y * mat16.step);
    for (int x = 0; x < mat32.cols; ++x)
        row16[x] = (unsigned short)row32[x];
}
I am trying to port an existing FFT based low-pass filter to iOS using the Accelerate vDSP framework.
It seems like the FFT works as expected for about the first 1/4 of the sample, but after that the results seem wrong and, odder still, are mirrored (the last half of the signal mirrors most of the first half).
You can see the results from a test application below. First is plotted the original sampled data, then an example of the expected filtered results (filtering out signal higher than 15Hz), then finally the results of my current FFT code (note that the desired results and example FFT result are at a different scale than the original data):
The actual code for my low-pass filter is as follows:
double *lowpassFilterVector(double *accell, uint32_t sampleCount, double lowPassFreq, double sampleRate)
{
    double stride = 1;
    int ln = log2f(sampleCount);
    int n = 1 << ln;
    // So that we get an FFT of the whole data set, we pad out the array to the next highest power of 2.
    int fullPadN = n * 2;
    double *padAccell = malloc(sizeof(double) * fullPadN);
    memset(padAccell, 0, sizeof(double) * fullPadN);
    memcpy(padAccell, accell, sizeof(double) * sampleCount);

    ln = log2f(fullPadN);
    n = 1 << ln;
    int nOver2 = n / 2;

    DSPDoubleSplitComplex A;
    A.realp = (double *)malloc(sizeof(double) * nOver2);
    A.imagp = (double *)malloc(sizeof(double) * nOver2);

    // This can be reused, just including it here for simplicity.
    FFTSetupD setupReal = vDSP_create_fftsetupD(ln, FFT_RADIX2);

    vDSP_ctozD((DSPDoubleComplex *)padAccell, 2, &A, 1, nOver2);

    // Use the FFT to get frequency counts
    vDSP_fft_zripD(setupReal, &A, stride, ln, FFT_FORWARD);

    const double factor = 0.5f;
    vDSP_vsmulD(A.realp, 1, &factor, A.realp, 1, nOver2);
    vDSP_vsmulD(A.imagp, 1, &factor, A.imagp, 1, nOver2);

    A.realp[nOver2] = A.imagp[0];
    A.imagp[0] = 0.0f;
    A.imagp[nOver2] = 0.0f;

    // Set frequencies above target to 0.
    // This tells us which bin the frequencies over the minimum desired correspond to
    NSInteger binLocation = (lowPassFreq * n) / sampleRate;

    // We add 2 because bin 0 holds special FFT meta data, so bins really start
    // at "1" - and we want to filter out anything OVER the target frequency
    for (NSInteger i = binLocation + 2; i < nOver2; i++)
    {
        A.realp[i] = 0;
    }

    // Clear out all imaginary parts
    bzero(A.imagp, nOver2 * sizeof(double));
    //A.imagp[0] = A.realp[nOver2];

    // Now shift back all of the values
    vDSP_fft_zripD(setupReal, &A, stride, ln, FFT_INVERSE);

    double *filteredAccell = (double *)malloc(sizeof(double) * fullPadN);

    // Converts complex vector back into 2D array
    vDSP_ztocD(&A, stride, (DSPDoubleComplex *)filteredAccell, 2, nOver2);

    // Have to scale results to account for Apple's FFT library algorithm, see:
    // http://developer.apple.com/library/ios/#documentation/Performance/Conceptual/vDSP_Programming_Guide/UsingFourierTransforms/UsingFourierTransforms.html#//apple_ref/doc/uid/TP40005147-CH202-15952
    double scale = (float)1.0f / fullPadN; //(2.0f * (float)n);
    vDSP_vsmulD(filteredAccell, 1, &scale, filteredAccell, 1, fullPadN);

    // Tracks results of conversion
    printf("\nInput & output:\n");
    for (int k = 0; k < sampleCount; k++)
    {
        printf("%3d\t%6.2f\t%6.2f\t%6.2f\n", k, accell[k], padAccell[k], filteredAccell[k]);
    }

    // Acceleration data will be replaced in-place.
    return filteredAccell;
}
In the original code the library was handling non-power-of-two sizes of input data; in my Accelerate code I am padding out the input to the nearest power of two. In the case of the sample test, the original data is 1000 samples, so it's padded to 1024. I don't think that would affect results, but I include it for the sake of possible differences.
If you want to experiment with a solution, you can download the sample project that generates the graphs here (in the FFTTest folder):
FFT Example Project code
Thanks for any insight, I've not worked with FFTs before so I feel like I am missing something critical.
If you want a strictly real (not complex) result, then the data before the IFFT must be conjugate symmetric. If you don't want the result to be mirror symmetric, then don't zero the imaginary component before the IFFT. Merely zeroing bins before the IFFT creates a filter with a huge amount of ripple in the passband.
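Concretely, a sketch of that change to the question's code (my interpretation; it assumes the packed layout of vDSP_fft_zripD, where A.imagp[0] holds the Nyquist bin): zero both the real and imaginary parts of the bins above the cutoff, leave the bins below the cutoff untouched, and drop the bzero over all of A.imagp:

for (NSInteger i = binLocation + 1; i < nOver2; i++) {
    A.realp[i] = 0.0;
    A.imagp[i] = 0.0;
}
A.imagp[0] = 0.0;  // the packed Nyquist bin is above the cutoff as well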
The Accelerate framework also supports more FFT lengths than just powers of 2.
I'm trying to get frequencies from the iPhone/iPod music library for a spectrum app on the iPod library, helping myself with reading-audio-samples-via-avassetreader to get audio samples, and then with using-the-apple-fft-and-accelerate-framework and the Apple vDSP samples, but somehow I'm going wrong somewhere and am unable to calculate the frequency.
So step by step:
read audio sample
Hanning window
calculate fft
Is this the correct way to get frequencies from an iPod mp3 library?
Here is my code:
static COMPLEX_SPLIT A;
static FFTSetup setupReal;
static uint32_t log2n, n, nOver2;
static int32_t stride;
static float *obtainedReal;
static float scale;

+ (void)initialize
{
    log2n = 10;
    n = 1 << log2n;
    stride = 1;
    nOver2 = n / 2;
    A.realp = (float *)malloc(nOver2 * sizeof(float));
    A.imagp = (float *)malloc(nOver2 * sizeof(float));
    obtainedReal = (float *)malloc(n * sizeof(float));
    setupReal = vDSP_create_fftsetup(log2n, FFT_RADIX2);
}

- (float)performAcceleratedFastFourierTransForAudioBuffer:(AudioBufferList)ioData
{
    NSUInteger *sampleIn = (NSUInteger *)ioData.mBuffers[0].mData;

    for (int i = 0; i < nOver2; i++) {
        double multiplier = 0.5 * (1 - cos(2*M_PI*i/nOver2-1));
        A.realp[i] = multiplier * sampleIn[i];
        A.imagp[i] = 0;
    }

    memset(ioData.mBuffers[0].mData, 0, ioData.mBuffers[0].mDataByteSize);

    vDSP_fft_zrip(setupReal, &A, stride, log2n, FFT_FORWARD);
    vDSP_zvmags(&A, 1, A.realp, 1, nOver2);

    scale = (float)1.0 / (2 * n);
    vDSP_vsmul(A.realp, 1, &scale, A.realp, 1, nOver2);
    vDSP_vsmul(A.imagp, 1, &scale, A.imagp, 1, nOver2);
    vDSP_ztoc(&A, 1, (COMPLEX *)obtainedReal, 2, nOver2);

    int peakIndex = 0;
    for (size_t i = 1; i < nOver2 - 1; ++i) {
        if ((obtainedReal[i] > obtainedReal[i-1]) && (obtainedReal[i] > obtainedReal[i+1]))
        {
            peakIndex = i;
            break;
        }
    }

    //here I don't know how to calculate frequency with my data
    float frequency = obtainedReal[peakIndex-1] / 44100 / n;

    vDSP_destroy_fftsetup(setupReal);
    free(obtainedReal);
    free(A.realp);
    free(A.imagp);

    return frequency;
}
I got 1.485757 and 1.332233 as my first frequencies
It looks to me like there is a problem in the conversion to complex input for the FFT. vDSP_ctoz() splits a buffer where real and imaginary components are interleaved into two buffers, one real and one imaginary. Your input to that function appears to be just real data that has been cast to COMPLEX. This means that your input buffer to vDSP_ctoz() is only half as long as it needs to be, and some garbage data beyond the buffer size is getting converted.
You either need to create sampleOut to be 2*n in length and set every other value (the real parts) or better yet, you can bypass the vDSP_ctoz() and directly copy your input data into A.realp and set A.imagp to zeros. vDSP_ctoz() should only be needed when interfacing to a source that produces interleaved complex data.
Edit
OK, I think I was wrong in my first suggestion, since the vDSP documentation says that the real input of the real-to-complex in-place FFT should be formatted into the split complex format such that realp contains the even samples and imagp contains the odd samples. I have not actually used the vDSP library, but I am familiar with a lot of other FFT libraries and I missed that detail.
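In code, that packing would look something like this (a sketch; input stands for a hypothetical buffer of n windowed float samples, and this split is equivalent to what vDSP_ctoz does when handed real data cast to COMPLEX):

for (int i = 0; i < nOver2; i++) {
    A.realp[i] = input[2*i];        // even-indexed samples
    A.imagp[i] = input[2*i + 1];    // odd-indexed samples
}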
You should be able to find the peaks using A.realp after the call to vDSP_zvmags(&A, 1, A.realp, 1, nOver2); At that point, A.realp should contain the magnitude squared of the FFT output, which is scalar. If you are going to do the scaling, it should be done before the mag2 operation, but it may not be needed if you are just looking for the peaks.
To get the real frequencies represented by the FFT output, use this formula:
F = (i * Fs) / N, i=0,1,...,N/2
where
i is the index of the FFT output buffer
Fs is the audio sampling rate
N is the FFT length
so your calculation might look like this:
float frequency = (peakIndex * 44100) / n;
Keep in mind that vDSP only returns the first half of the input spectrum for real input since the second half is redundant. So the FFT output represents frequencies from 0 to Fs/2.
One other note is that I don't know if your peak finding algorithm will work very well since FFT output will not be smooth and there will often be a lot of oscillation. You are simply taking the first sample where the two adjacent samples are lower. If you just want to find a single peak, it would be better just to find the max magnitude across the entire output. If you want to find multiple peaks, you will have to do something more sophisticated.
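A sketch of that simpler approach (my example, reusing the buffers from the question; it scans the whole magnitude-squared output of vDSP_zvmags for the global maximum, skipping bin 0, which holds the DC term):

int peakIndex = 1;
for (int i = 2; i < nOver2; i++) {
    if (A.realp[i] > A.realp[peakIndex])
        peakIndex = i;
}
float frequency = (peakIndex * 44100.0f) / n;   // F = i * Fs / N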