I have a 20 Hz sine wave with an amplitude of 1 that I created in Audacity. It is only 500 ms long.
I am using the following algorithm to detect the frequency.
All I want is to detect whether the tone's amplitude passes a threshold and gives me a positive result at the 20 Hz frequency.
static float goertzel_mag(int numSamples,int TARGET_FREQUENCY,int SAMPLING_RATE, float* data)
{
int k,i;
float floatnumSamples;
float omega,sine,cosine,coeff,q0,q1,q2,magnitude,real,imag;
float scalingFactor = numSamples / 2.0;
floatnumSamples = (float) numSamples;
k = (int) (0.5 + ((floatnumSamples * TARGET_FREQUENCY) / SAMPLING_RATE));
omega = (2.0 * M_PI * k) / floatnumSamples;
sine = sin(omega);
cosine = cos(omega);
coeff = 2.0 * cosine;
q0=0;
q1=0;
q2=0;
for(i=0; i<numSamples; i++)
{
q0 = coeff * q1 - q2 + data[i];
q2 = q1;
q1 = q0;
}
// calculate the real and imaginary results
// scaling appropriately
real = (q1 - q2 * cosine) / scalingFactor;
imag = (q2 * sine) / scalingFactor;
magnitude = sqrtf(real*real + imag*imag);
return magnitude;
}
Calling the function:
// If there's more packets, read them
inCompleteAQBuffer->mAudioDataByteSize = numBytes;
CheckError(AudioQueueEnqueueBuffer(inAQ,
inCompleteAQBuffer,
(sound->packetDescs?nPackets:0),
sound->packetDescs),
"couldn't enqueue buffer");
sound->packetPosition += nPackets;
NSLog(#"number of packets %i",nPackets);
float *data=(float*)inCompleteAQBuffer->mAudioData;
int nn = sizeof(data)/sizeof(float);
float gort = goertzel_mag(nn, 20, 44100, data);
NSLog(#"gort:%f", gort);
if (gort == INFINITY)
NSLog(#"positive infinity");
Breakpoint inside the function:
Output:
number of packets 8192
gort:36029896530591744.000000
number of packets 8192
gort:inf
positive infinity
number of packets 5666
gort:inf
positive infinity
Why am I getting an inf result? I don't know how to interpret the return value. I understand the magnitude always has to be a positive value, but since I created the file with an amplitude of 1, shouldn't I be getting results between 0 and 1?
EDIT:
Audio info:
afinfo 500ms.aiff
File: 500ms.aiff
File type ID: AIFF
Num Tracks: 1
----
Data format: 1 ch, 44100 Hz, 'lpcm' (0x0000000E) 16-bit big-endian signed integer
no channel layout.
estimated duration: 0.500000 sec
audio bytes: 44100
audio packets: 22050
bit rate: 705600 bits per second
packet size upper bound: 2
maximum packet size: 2
audio data file offset: 54
optimized
source bit depth: I16
I think one problem is with these lines:
float *data=(float*)inCompleteAQBuffer->mAudioData;
int nn = sizeof(data)/sizeof(float);
which I believe is intended to tell you the number of samples. I don't have the information or resources to reproduce your code, but I can reproduce the bug with this:
#include <stdio.h>
#include <stdlib.h>
int main(void) {
float *data=malloc(sizeof(float) * 10);
printf ("Sizeof 'data' = %d\n", sizeof(data));
return 0;
}
Program output
Sizeof 'data' = 4
which on my 32-bit compilation is the size of the array pointer, not the array. And using sizeof(*data) won't get you anywhere since that just tells you the size of the data type float, not the array.
There is no way you can ascertain the size of the array, or the number of elements, from its pointer, so my answer is, sadly, that you need more information: perhaps numBytes, or numBytes / sizeof(float)?
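As a minimal sketch of that idea (reusing the names from your snippet, and assuming mAudioDataByteSize / numBytes really holds the valid byte count of the buffer):
float *data = (float *)inCompleteAQBuffer->mAudioData;
// derive the sample count from the buffer's byte size,
// not from sizeof on the pointer
int nn = inCompleteAQBuffer->mAudioDataByteSize / sizeof(float);
float gort = goertzel_mag(nn, 20, 44100, data);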
I am trying to make a colored waveform using the output of the following code. But when I run it, I only get certain numbers as output frequencies (see the freq variable; it uses the bin size, frame rate and index to compute these frequencies). I'm no math expert, even though I cobbled this together from existing code and answers.
//
// colored_waveform.c
// MixDJ
//
// Created by Jonathan Silverman on 3/14/19.
// Copyright © 2019 Jonathan Silverman. All rights reserved.
//
#include "colored_waveform.h"
#include "fftw3.h"
#include <math.h>
#include "sndfile.h"
//int N = 1024;
// helper function to apply a windowing function to a frame of samples
void calcWindow(double* in, double* out, int size) {
for (int i = 0; i < size; i++) {
double multiplier = 0.5 * (1 - cos(2*M_PI*i/(size - 1)));
out[i] = multiplier * in[i];
}
}
// helper function to compute FFT
void fft(double* samples, fftw_complex* out, int size) {
fftw_plan p;
p = fftw_plan_dft_r2c_1d(size, samples, out, FFTW_ESTIMATE);
fftw_execute(p);
fftw_destroy_plan(p);
}
// find the index of array element with the highest absolute value
// probably want to take some kind of moving average of buf[i]^2
// and return the maximum found
double maxFreqIndex(fftw_complex* buf, int size, float fS) {
double max_freq = 0;
double last_magnitude = 0;
for(int i = 0; i < (size / 2) - 1; i++) {
double freq = i * fS / size;
// printf("freq: %f\n", freq);
double magnitude = sqrt(buf[i][0]*buf[i][0] + buf[i][1]*buf[i][1]);
if(magnitude > last_magnitude)
max_freq = freq;
last_magnitude = magnitude;
}
return max_freq;
}
//
//// map a frequency to a color, red = lower freq -> violet = high freq
//int freqToColor(int i) {
//
//}
void generateWaveformColors(const char path[]) {
printf("Generating waveform colors\n");
SNDFILE *infile = NULL;
SF_INFO sfinfo;
infile = sf_open(path, SFM_READ, &sfinfo);
sf_count_t numSamples = sfinfo.frames;
// sample rate
float fS = 44100;
// float songLengLengthSeconds = numSamples / fS;
// printf("seconds: %f", songLengLengthSeconds);
// size of frame for analysis, you may want to play with this
float frameMsec = 5;
// samples in a frame
int frameSamples = (int)(fS / (frameMsec * 1000));
// how much overlap each frame, you may want to play with this one too
int frameOverlap = (frameSamples / 2);
// color to use for each frame
// int outColors[(numSamples / frameOverlap) + 1];
// scratch buffers
double* tmpWindow;
fftw_complex* tmpFFT;
tmpWindow = (double*) fftw_malloc(sizeof(double) * frameSamples);
tmpFFT = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * frameSamples);
printf("Processing waveform for colors\n");
for (int i = 0, outptr = 0; i < numSamples; i += frameOverlap, outptr++)
{
double inSamples[frameSamples];
sf_read_double(infile, inSamples, frameSamples);
// window another frame for FFT
calcWindow(inSamples, tmpWindow, frameSamples);
// compute the FFT on the next frame
fft(tmpWindow, tmpFFT, frameSamples);
// which frequency is the highest?
double freqIndex = maxFreqIndex(tmpFFT, frameSamples, fS);
printf("%i: ", i);
printf("Max freq: %f\n", freqIndex);
// map to color
// outColors[outptr] = freqToColor(freqIndex);
}
printf("Done.");
sf_close (infile);
}
Here is some of the output:
2094216: Max freq: 5512.500000
2094220: Max freq: 0.000000
2094224: Max freq: 0.000000
2094228: Max freq: 0.000000
2094232: Max freq: 5512.500000
2094236: Max freq: 5512.500000
It only shows certain numbers, not the wide variety of frequencies I think it should. Or am I wrong? Is there anything wrong with my code that you can see? The color stuff is commented out because I haven't done it yet.
The frequency resolution of an FFT is limited by the length of the data sample you have. The more samples you have, the higher the frequency resolution.
In your specific case you chose frames of 5 milliseconds, which is then transformed to a number of samples on the following line:
// samples in a frame
int frameSamples = (int)(fS / (frameMsec * 1000));
This corresponds to only 8 samples at the specified 44100 Hz sampling rate. The frequency resolution with such a small frame size can be computed to be
44100 / 8
or 5512.5 Hz, a rather poor resolution. Correspondingly, the observed frequencies will always be one of 0, 5512.5, 11025, 16537.5 or 22050 Hz.
To get a higher resolution you should increase the number of samples used for analysis by increasing frameMsec (as suggested by the comment "size of frame for analysis, you may want to play with this").
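For illustration, a minimal sketch of that relationship (bin spacing = sampling rate / number of samples in the frame), using the 44100 Hz rate from the question:
#include <stdio.h>

int main(void) {
    float fS = 44100.0f;   /* sampling rate in Hz */
    /* frequency resolution (bin spacing) is fS divided by the frame length */
    for (int N = 8; N <= 512; N *= 4)
        printf("N = %3d samples -> %.1f Hz per bin\n", N, fS / N);
    /* 8 -> 5512.5 Hz, 32 -> 1378.1 Hz, 128 -> 344.5 Hz, 512 -> 86.1 Hz */
    return 0;
}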
What is the most efficient way of generating a sine wave for a device running iOS? For the purposes of the exercise, assume a frequency of 440 Hz, a sampling rate of 44100 Hz and 1024 samples.
A vanilla C implementation looks something like this:
#include <math.h>

#define SAMPLES 1024
#define TWO_PI (3.14159 * 2)
#define FREQUENCY 440
#define SAMPLING_RATE 44100
int main(int argc, const char * argv[]) {
float samples[SAMPLES];
float phaseIncrement = TWO_PI * FREQUENCY / SAMPLING_RATE;
float currentPhase = 0.0;
for (int i = 0; i < SAMPLES; i ++){
samples[i] = sin(currentPhase);
currentPhase += phaseIncrement;
}
return 0;
}
To take advantage of the Accelerate framework and the vecLib vvsinf function, the loop can be changed to do only the addition.
#include <Accelerate/Accelerate.h>

#define SAMPLES 1024
#define TWO_PI (3.14159 * 2)
#define FREQUENCY 440
#define SAMPLING_RATE 44100
int main(int argc, const char * argv[]) {
float samples[SAMPLES] __attribute__ ((aligned));
float results[SAMPLES] __attribute__ ((aligned));
float phaseIncrement = TWO_PI * FREQUENCY / SAMPLING_RATE;
float currentPhase = 0.0;
for (int i = 0; i < SAMPLES; i ++){
samples[i] = currentPhase;
currentPhase += phaseIncrement;
}
const int sampleCount = SAMPLES;
vvsinf(results, samples, &sampleCount); // vvsinf takes the element count by pointer
return 0;
}
But is just applying the vvsinf function as far as I should go in terms of efficiency?
I don't really understand the Accelerate framework well enough to know if I can also replace the loop. Is there a vecLib or vDSP function I can use?
For that matter, is it possible to use an entirely different algorithm to fill a buffer with a sine wave?
Given that you are computing the sine of a phase argument which increases in fixed increments, it is generally much faster to implement the signal generation with a recurrence equation as described in this "How to Create Oscillators in Software" post and some more in this "DSP Trick: Sinusoidal Tone Generator" post, both on dspguru:
y[n] = 2*cos(w)*y[n-1] - y[n-2]
Note that this recurrence equation can be subject to accumulation of numerical roundoff error, so you should avoid computing too many samples at a time (your selection of SAMPLES == 1024 should be fine). The recurrence equation can be used after you have obtained the first two values y[0] and y[1] (the initial conditions). Since you are generating with an initial phase of 0, those are simply:
samples[0] = 0;
samples[1] = sin(phaseIncrement);
or more generally with an arbitrary initial phase (particularly useful to reinitialize the recurrence equation every so often to avoid the numerical roundoff error accumulation I mentioned earlier):
samples[0] = sin(initialPhase);
samples[1] = sin(initialPhase+phaseIncrement);
The recurrence equation can then be implemented directly with:
float scale = 2*cos(phaseIncrement);
// initialize first 2 samples for the 0 initial phase case
samples[0] = 0;
samples[1] = sin(phaseIncrement);
for (int i = 2; i < SAMPLES; i ++){
samples[i] = scale * samples[i-1] - samples[i-2];
}
Note that this implementation could be vectorized by computing multiple tones (each with the same frequency, but with larger phase increments between samples) with appropriate relative phase shifts, then interleaving the results to obtain the original tone (e.g. computing sin(4*w*n), sin(4*w*n+w), sin(4*w*n+2*w) and sin(4*w*n+3*w)). This would however make the implementation a lot more obscure, for a relatively small gain.
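For what it's worth, a rough, untested sketch of that interleaving idea (assuming SAMPLES is a multiple of 4 and at least 8): run four copies of the recurrence, each stepping by 4*w, with initial phases 0, w, 2*w and 3*w, and write the lanes to every fourth output slot.
float w = phaseIncrement;
float scale4 = 2 * cosf(4 * w);           // recurrence coefficient for a 4*w step
float y1[4], y2[4];                       // per-lane y[n-1] and y[n-2]
for (int lane = 0; lane < 4; lane++) {
    y2[lane] = sinf(lane * w);            // lane's sample 0 -> output index lane
    y1[lane] = sinf((lane + 4) * w);      // lane's sample 1 -> output index lane + 4
    samples[lane]     = y2[lane];
    samples[lane + 4] = y1[lane];
}
for (int i = 8; i < SAMPLES; i += 4) {    // fill four interleaved outputs per pass
    for (int lane = 0; lane < 4; lane++) {
        float y0 = scale4 * y1[lane] - y2[lane];
        samples[i + lane] = y0;
        y2[lane] = y1[lane];
        y1[lane] = y0;
    }
}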
Alternatively, the equation can be implemented by making use of vDSP_deq22:
// setup dummy array which will hold zeros as input
float nullInput[SAMPLES];
memset(nullInput, 0, SAMPLES * sizeof(float));
// setup filter coefficients
float coefficients[5];
coefficients[0] = 0;
coefficients[1] = 0;
coefficients[2] = 0;
coefficients[3] = -2*cos(phaseIncrement);
coefficients[4] = 1.0;
// initialize first 2 samples for the 0 initial phase case
samples[0] = 0;
samples[1] = sin(phaseIncrement);
vDSP_deq22(nullInput, 1, coefficients, samples, 1, SAMPLES-2);
If efficiency is required, you could pre-load a 440 Hz (44100 / 440 samples) sine waveform look-up table and loop around it without further mapping, or pre-load a 1 Hz (44100 samples) sine waveform look-up table and skip through it to reach 440 Hz, just as you did by incrementing a phase counter. Using look-up tables should be faster than computing sin(). (A sketch of the LoadSinWaveForm helper used below follows the notes at the end of this answer.)
Method A (using a 440 Hz sine waveform):
#define SAMPLES 1024
#define FREQUENCY 440
#define SAMPLING_RATE 44100
#define WAVEFORM_LENGTH (SAMPLING_RATE / FREQUENCY)
int main(int argc, const char * argv[]) {
float waveform[WAVEFORM_LENGTH];
LoadSinWaveForm(waveform);
float samples[SAMPLES] __attribute__ ((aligned));
for (int i = 0; i < SAMPLES; i ++){
// the table already holds sine values, so no further sin() call is needed
samples[i] = waveform[i % WAVEFORM_LENGTH];
}
return 0;
}
Method B (using a 1 Hz sine waveform):
#define SAMPLES 1024
#define FREQUENCY 440
#define SAMPLING_RATE 44100
#define WAVEFORM_LENGTH SAMPLING_RATE // since it's 1 Hz
int main(int argc, const char * argv[]) {
float waveform[WAVEFORM_LENGTH];
LoadSinWaveForm(waveform);
float samples[SAMPLES] __attribute__ ((aligned));
// step through the 1 Hz table by FREQUENCY entries per output sample
float phaseIncrement = (float)FREQUENCY * WAVEFORM_LENGTH / SAMPLING_RATE;
float currentPhase = 0.0;
for (int i = 0; i < SAMPLES; i ++){
samples[i] = waveform[(int)currentPhase % WAVEFORM_LENGTH];
currentPhase += phaseIncrement;
}
return 0;
}
Please note that:
Method A is susceptible to frequency inaccuracy because it assumes that your frequency always divides the sampling rate evenly, which is not true. That means you may get 441 Hz, or 440 Hz with a glitch.
Method B is susceptible to aliasing as the frequency goes up and gets closer to the Nyquist frequency, but it's a good trade-off between performance, quality and memory consumption when synthesizing reasonably low frequencies such as the one in your example.
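LoadSinWaveForm is not defined above; my assumption is that it fills the table with one full cycle of a unit-amplitude sine (which covers both methods, since Method B's table is one cycle of a 1 Hz sine over 44100 samples). A minimal sketch:
#include <math.h>

static void LoadSinWaveForm(float *waveform)
{
    for (int i = 0; i < WAVEFORM_LENGTH; i++)
        waveform[i] = sinf(2.0f * (float)M_PI * i / (float)WAVEFORM_LENGTH);
}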
Background:
I have a section of memory, 1024 bytes. The last 1020 bytes will always be the same. The first 4 bytes will change (serial number of a product). I need to calculate the CRC-16 CCITT (0xFFFF starting, 0x1021 mask) for the entire section of memory, CRC_WHOLE.
Question:
Is it possible to calculate the CRC for only the first 4 bytes, CRC_A, then apply a function such as the one below to calculate the full CRC? We can assume that the checksum for the last 1020 bytes, CRC_B, is already known.
CRC_WHOLE = XOR(CRC_A, CRC_B)
I know that this formula does not work (tried it), but I am hoping that something similar exists.
Yes. You can see how in zlib's crc32_combine(). If you have two sequences A and B, then the pure CRC of AB is the exclusive-or of the CRC of A0 and the CRC of 0B, where the 0's represent a series of zero bytes with the length of the corresponding sequence, i.e. B and A respectively.
For your application, you can pre-compute a single operator that applies 1020 zeros to the CRC of your first four bytes very rapidly. Then you can exclusive-or that with the pre-computed CRC of the 1020 bytes.
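For reference, zlib exposes this directly for the 32-bit CRC; a minimal usage sketch follows (your checksum is the 16-bit CCITT variant, so crc32_combine() itself is not a drop-in, but the calling pattern would be the same for a 16-bit equivalent):
#include <zlib.h>

/* crc_a: CRC of the 4 changing bytes
   crc_b: pre-computed CRC of the constant 1020 bytes */
unsigned long combine_example(unsigned long crc_a, unsigned long crc_b)
{
    return crc32_combine(crc_a, crc_b, 1020);  /* 1020 = length of B in bytes */
}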
Update:
Here is a post of mine from 2008 with a detailed explanation that @ArtemB discovered (that I had forgotten about):
crc32_combine() in zlib is based on two key tricks. For what follows,
we set aside the fact that the standard 32-bit CRC is pre and post-
conditioned. We can deal with that later. Assume for now a CRC that
has no such conditioning, and so starts with the register filled with
zeros.
Trick #1: CRCs are linear. So if you have stream X and stream Y of
the same length and exclusive-or the two streams bit-by-bit to get Z,
i.e. Z = X ^ Y (using the C notation for exclusive-or), then CRC(Z) =
CRC(X) ^ CRC(Y). For the problem at hand we have two streams A and B
of differing length that we want to concatenate into stream Z. What
we have available are CRC(A) and CRC(B). What we want is a quick way
to compute CRC(Z). The trick is to construct X = A concatenated with
length(B) zero bits, and Y = length(A) zero bits concatenated with B.
So if we represent concatenation simply by juxtaposition of the
symbols, X = A0, Y = 0B, then X^Y = Z = AB. Then we have CRC(Z) =
CRC(A0) ^ CRC(0B).
Now we need to know CRC(A0) and CRC(0B). CRC(0B) is easy. If we feed
a bunch of zeros to the CRC machine starting with zero, the register
is still filled with zeros. So it's as if we did nothing at all.
Therefore CRC(0B) = CRC(B).
CRC(A0) requires more work however. Taking a non-zero CRC and feeding
zeros to the CRC machine doesn't leave it alone. Every zero changes
the register contents. So to get CRC(A0), we need to set the register
to CRC(A), and then run length(B) zeros through it. Then we can
exclusive-or the result of that with CRC(B) = CRC(0B), and we get what
we want, which is CRC(Z) = CRC(AB). Voila!
Well, actually the voila is premature. I wasn't at all satisfied with
that answer. I didn't want a calculation that took a time
proportional to the length of B. That wouldn't save any time compared
to simply setting the register to CRC(A) and running the B stream
through. I figured there must be a faster way to compute the effect
of feeding n zeros into the CRC machine (where n = length(B)). So
that leads us to:
Trick #2: The CRC machine is a linear state machine. If we know the
linear transformation that occurs when we feed a zero to the machine,
then we can do operations on that transformation to more efficiently
find the transformation that results from feeding n zeros into the
machine.
The transformation of feeding a single zero bit into the CRC machine
is completely represented by a 32x32 binary matrix. To apply the
transformation we multiply the matrix by the register, taking the
register as a 32 bit column vector. For the matrix multiplication in
binary (i.e. over the Galois Field of 2), the role of multiplication
is played by and'ing, and the role of addition is played by exclusive-
or'ing.
There are a few different ways to construct the magic matrix that
represents the transformation caused by feeding the CRC machine a
single zero bit. One way is to observe that each column of the matrix
is what you get when your register starts off with a single one in
it. So the first column is what you get when the register is 100...
and then feed a zero, the second column comes from starting with
0100..., etc. (Those are referred to as basis vectors.) You can see
this simply by doing the matrix multiplication with those vectors.
The matrix multiplication selects the column of the matrix
corresponding to the location of the single one.
Now for the trick. Once we have the magic matrix, we can set aside
the initial register contents for a while, and instead use the
transformation for one zero to compute the transformation for n
zeros. We could just multiply n copies of the matrix together to get
the matrix for n zeros. But that's even worse than just running the n
zeros through the machine. However there's an easy way to avoid most
of those matrix multiplications to get the same answer. Suppose we
want to know the transformation for running eight zero bits, or one
byte through. Let's call the magic matrix that represents running one
zero through: M. We could do seven matrix multiplications to get R =
MxMxMxMxMxMxMxM. Instead, let's start with MxM and call that P. Then
PxP is MxMxMxM. Let's call that Q. Then QxQ is R. So now we've
reduced the seven multiplications to three. P = MxM, Q = PxP, and R =
QxQ.
Now I'm sure you get the idea for an arbitrary n number of zeros. We
can very rapidly generate transformation matrices Mk, where Mk is the
transformation for running 2^k zeros through. (In the
paragraph above M3 is R.) We can make M1 through Mk with only k
matrix multiplications, starting with M0 = M. k only has to be as
large as the number of bits in the binary representation of n. We can
then pick those matrices where there are ones in the binary
representation of n and multiply them together to get the
transformation of running n zeros through the CRC machine. So if n =
13, compute M0 x M2 x M3.
If j is the number of one's in the binary representation of n, then we
just have j - 1 more matrix multiplications. So we have a total of k
+ j - 1 matrix multiplications, where j <= k = floor(logbase2(n)).
Now we take our rapidly constructed matrix for n zeros, and multiply
that by CRC(A) to get CRC(A0). We can compute CRC(A0) in O(log(n))
time, instead of O(n) time. We exclusive or that with CRC(B) and
Voila! (really this time), we have CRC(Z).
That's what zlib's crc32_combine() does.
I will leave it as an exercise for the reader as to how to deal with
the pre and post conditioning of the CRC register. You just need to
apply the linearity observations above. Hint: You don't need to know
length(A). In fact crc32_combine() only takes three arguments:
CRC(A), CRC(B), and length(B) (in bytes).
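As a rough illustration of the two tricks above (not the full routine), the GF(2) building blocks look like this; zlib's crc32_combine() uses helpers of this shape internally, with each matrix column stored as a 32-bit word, AND standing in for multiplication and XOR for addition:
#include <stdint.h>

/* Apply a 32x32 GF(2) matrix (one column per uint32_t) to a 32-bit vector:
   XOR together the columns selected by the set bits of vec. */
static uint32_t gf2_matrix_times(const uint32_t mat[32], uint32_t vec)
{
    uint32_t sum = 0;
    for (int i = 0; vec; i++, vec >>= 1)
        if (vec & 1)
            sum ^= mat[i];
    return sum;
}

/* Square a matrix: column i of mat*mat is mat applied to column i of mat. */
static void gf2_matrix_square(uint32_t square[32], const uint32_t mat[32])
{
    for (int i = 0; i < 32; i++)
        square[i] = gf2_matrix_times(mat, mat[i]);
}

/* Repeated squaring of the "one zero bit" matrix then gives the matrices for
   2, 4, 8, ... zero bits; applying the ones picked out by the binary
   representation of n to CRC(A) runs n zero bits through in O(log n) steps. */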
Below is example C code for an alternative approach for CRC(A0). Rather than working with a matrix, a CRC can be cycled forward by n zero bits by carryless-multiplying it by (2^n)%POLY and reducing the product modulo POLY, i.e. (CRC · ((2^n)%POLY)) % POLY. So the repeated squaring is performed on an integer rather than a matrix. If n is constant, then (2^n)%POLY can be pre-computed.
/* crcpad.c - crc - data has a large number of trailing zeroes */
#include <stdio.h>
#include <stdlib.h>
typedef unsigned char uint8_t;
typedef unsigned int uint32_t;
#define POLY (0x04c11db7u)
static uint32_t crctbl[256];
void GenTbl(void) /* generate crc table */
{
uint32_t crc;
uint32_t c;
uint32_t i;
for(c = 0; c < 0x100; c++){
crc = c<<24;
for(i = 0; i < 8; i++)
/* assumes twos complement */
crc = (crc<<1)^((0-(crc>>31))&POLY);
crctbl[c] = crc;
}
}
uint32_t GenCrc(uint8_t * bfr, size_t size) /* generate crc */
{
uint32_t crc = 0u;
while(size--)
crc = (crc<<8)^crctbl[(crc>>24)^*bfr++];
return(crc);
}
/* carryless multiply modulo crc */
uint32_t MpyModCrc(uint32_t a, uint32_t b) /* (a*b)%crc */
{
uint32_t pd = 0;
uint32_t i;
for(i = 0; i < 32; i++){
/* assumes twos complement */
pd = (pd<<1)^((0-(pd>>31))&POLY);
pd ^= (0-(b>>31))&a;
b <<= 1;
}
return pd;
}
/* exponentiate by repeated squaring modulo crc */
uint32_t PowModCrc(uint32_t p) /* pow(2,p)%crc */
{
uint32_t prd = 0x1u; /* current product */
uint32_t sqr = 0x2u; /* current square */
while(p){
if(p&1)
prd = MpyModCrc(prd, sqr);
sqr = MpyModCrc(sqr, sqr);
p >>= 1;
}
return prd;
}
/* # data bytes */
#define DAT ( 32)
/* # zero bytes */
#define PAD (992)
/* DATA+PAD */
#define CNT (1024)
int main()
{
uint32_t pmc;
uint32_t crc;
uint32_t crf;
uint32_t i;
uint8_t *msg = malloc(CNT);
for(i = 0; i < DAT; i++) /* generate msg */
msg[i] = (uint8_t)rand();
for( ; i < CNT; i++)
msg[i] = 0;
GenTbl(); /* generate crc table */
crc = GenCrc(msg, CNT); /* generate crc normally */
crf = GenCrc(msg, DAT); /* generate crc for data */
pmc = PowModCrc(PAD*8); /* pmc = pow(2,PAD*8)%crc */
crf = MpyModCrc(crf, pmc); /* crf = (crf*pmc)%crc */
printf("%08x %08x\n", crc, crf);
free(msg);
return 0;
}
Example C code using intrinsic for carryless multiply, pclmulqdq == _mm_clmulepi64_si128:
/* crcpadm.c - crc - data has a large number of trailing zeroes */
/* pclmulqdq intrinsic version */
#include <stdio.h>
#include <stdlib.h>
#include <intrin.h>
typedef unsigned char uint8_t;
typedef unsigned int uint32_t;
typedef unsigned long long uint64_t;
#define POLY (0x104c11db7ull)
#define POLYM ( 0x04c11db7u)
static uint32_t crctbl[256];
static __m128i poly; /* poly */
static __m128i invpoly; /* 2^64 / POLY */
void GenMPoly(void) /* generate __m12i8 poly info */
{
uint64_t N = 0x100000000ull;
uint64_t Q = 0;
for(size_t i = 0; i < 33; i++){
Q <<= 1;
if(N&0x100000000ull){
Q |= 1;
N ^= POLY;
}
N <<= 1;
}
poly.m128i_u64[0] = POLY;
invpoly.m128i_u64[0] = Q;
}
void GenTbl(void) /* generate crc table */
{
uint32_t crc;
uint32_t c;
uint32_t i;
for(c = 0; c < 0x100; c++){
crc = c<<24;
for(i = 0; i < 8; i++)
/* assumes twos complement */
crc = (crc<<1)^((0-(crc>>31))&POLYM);
crctbl[c] = crc;
}
}
uint32_t GenCrc(uint8_t * bfr, size_t size) /* generate crc */
{
uint32_t crc = 0u;
while(size--)
crc = (crc<<8)^crctbl[(crc>>24)^*bfr++];
return(crc);
}
/* carryless multiply modulo crc */
uint32_t MpyModCrc(uint32_t a, uint32_t b) /* (a*b)%crc */
{
__m128i ma, mb, mp, mt;
ma.m128i_u64[0] = a;
mb.m128i_u64[0] = b;
mp = _mm_clmulepi64_si128(ma, mb, 0x00); /* p[0] = a*b */
mt = _mm_clmulepi64_si128(mp, invpoly, 0x00); /* t[1] = (p[0]*((2^64)/POLY))>>64 */
mt = _mm_clmulepi64_si128(mt, poly, 0x01); /* t[0] = t[1]*POLY */
return mp.m128i_u32[0] ^ mt.m128i_u32[0]; /* ret = p[0] ^ t[0] */
}
/* exponentiate by repeated squaring modulo crc */
uint32_t PowModCrc(uint32_t p) /* pow(2,p)%crc */
{
uint32_t prd = 0x1u; /* current product */
uint32_t sqr = 0x2u; /* current square */
while(p){
if(p&1)
prd = MpyModCrc(prd, sqr);
sqr = MpyModCrc(sqr, sqr);
p >>= 1;
}
return prd;
}
/* # data bytes */
#define DAT ( 32)
/* # zero bytes */
#define PAD (992)
/* DATA+PAD */
#define CNT (1024)
int main()
{
uint32_t pmc;
uint32_t crc;
uint32_t crf;
uint32_t i;
uint8_t *msg = malloc(CNT);
GenMPoly(); /* generate __m128 polys */
GenTbl(); /* generate crc table */
for(i = 0; i < DAT; i++) /* generate msg */
msg[i] = (uint8_t)rand();
for( ; i < CNT; i++)
msg[i] = 0;
crc = GenCrc(msg, CNT); /* generate crc normally */
crf = GenCrc(msg, DAT); /* generate crc for data */
pmc = PowModCrc(PAD*8); /* pmc = pow(2,PAD*8)%crc */
crf = MpyModCrc(crf, pmc); /* crf = (crf*pmc)%crc */
printf("%08x %08x\n", crc, crf);
free(msg);
return 0;
}
I want to filter a PIX with a convolution kernel, but with a bias, and I don't see how to "emulate" the bias using the Leptonica API.
So far i have:
PIX* pixs = pixRead("file.png");
L_KERNEL* kel = kernelCreateFromString( 7, 7, 3, 3, "..." );
PIX* pixd = pixConvolve( pixs, kel, 8, 1 );
Any ideas how to emulate the classical "bias"? I tried adding its value to each pixel of the image before or after pixConvolve, but the result is not the one observed in most image processing software.
By "bias", I am assuming that you want to shift the result so that all pixel values are non-negative.
In the notes for pixConvolve(), it says that the absolute value is taken to avoid negative output. It also says that if you wish to keep the negative values, use fpixConvolve() instead, which operates on an FPix and generates an FPix.
If you want a biased result without clipping, it is in general necessary to do the following:
(1) pixConvertToFpix() -- convert to an FPix
(2) fpixConvolve() -- do the convolution on the FPix, producing an FPix
(3) fpixGetMin() -- determine the bias required to make all values non-negative
(4) fpixAddMultConstant() -- add the bias to the FPix
(5) fpixGetMax() -- find the max value; if > 255, you need a 16 bpp Pix to represent it
(6) fpixConvertToPix -- convert back to a pix
Perhaps the leptonica maintainer (me) should bundle this up into a simple interface ;-)
OK, here's a function, following the outline that I wrote above, that should give enough flexibility to do these convolutions.
/*!
* pixConvolveWithBias()
* Input: pixs (8 bpp; no colormap)
* kel1
* kel2 (can be null; use if separable)
* force8 (if 1, force output to 8 bpp; otherwise, determine
* output depth by the dynamic range of pixel values)
* &bias (<return> applied bias)
* Return: pixd (8 or 16 bpp)
*
* Notes:
* (1) This does a convolution with either a single kernel or
* a pair of separable kernels, and automatically applies whatever
* bias (shift) is required so that the resulting pixel values
* are non-negative.
* (2) If there are no negative values in the kernel, a normalized
* convolution is performed, with 8 bpp output.
* (3) If there are negative values in the kernel, the pix is
* converted to an fpix, the convolution is done on the fpix, and
* a bias (shift) may need to be applied.
* (4) If force8 == TRUE and the range of values after the convolution
* is > 255, the output values will be scaled to fit in
* [0 ... 255].
* If force8 == FALSE, the output will be either 8 or 16 bpp,
* to accommodate the dynamic range of output values without
* scaling.
*/
PIX *
pixConvolveWithBias(PIX *pixs,
L_KERNEL *kel1,
L_KERNEL *kel2,
l_int32 force8,
l_int32 *pbias)
{
l_int32 outdepth;
l_float32 min1, min2, min, minval, maxval, range;
FPIX *fpix1, *fpix2;
PIX *pixd;
PROCNAME("pixConvolveWithBias");
if (!pixs || pixGetDepth(pixs) != 8)
return (PIX *)ERROR_PTR("pixs undefined or not 8 bpp", procName, NULL);
if (pixGetColormap(pixs))
return (PIX *)ERROR_PTR("pixs has colormap", procName, NULL);
if (!kel1)
return (PIX *)ERROR_PTR("kel1 not defined", procName, NULL);
/* Determine if negative values can be produced in convolution */
kernelGetMinMax(kel1, &min1, NULL);
min2 = 0.0;
if (kel2)
kernelGetMinMax(kel2, &min2, NULL);
min = L_MIN(min1, min2);
if (min >= 0.0) {
if (!kel2)
return pixConvolve(pixs, kel1, 8, 1);
else
return pixConvolveSep(pixs, kel1, kel2, 8, 1);
}
/* Bias may need to be applied; convert to fpix and convolve */
fpix1 = pixConvertToFPix(pixs, 1);
if (!kel2)
fpix2 = fpixConvolve(fpix1, kel1, 1);
else
fpix2 = fpixConvolveSep(fpix1, kel1, kel2, 1);
fpixDestroy(&fpix1);
/* Determine the bias and the dynamic range.
* If the dynamic range is <= 255, just shift the values by the
* bias, if any.
* If the dynamic range is > 255, there are two cases:
* (1) the output depth is not forced to 8 bpp ==> outdepth = 16
* (2) the output depth is forced to 8 ==> linearly map the
* pixel values to [0 ... 255]. */
fpixGetMin(fpix2, &minval, NULL, NULL);
fpixGetMax(fpix2, &maxval, NULL, NULL);
range = maxval - minval;
*pbias = (minval < 0.0) ? -minval : 0.0;
fpixAddMultConstant(fpix2, *pbias, 1.0); /* shift: min val ==> 0 */
if (range <= 255 || !force8) { /* no scaling of output values */
outdepth = (range > 255) ? 16 : 8;
} else { /* scale output values to fit in 8 bpp */
fpixAddMultConstant(fpix2, 0.0, (255.0 / range));
outdepth = 8;
}
/* Convert back to pix; it won't do any clipping */
pixd = fpixConvertToPix(fpix2, outdepth, L_CLIP_TO_ZERO, 0);
fpixDestroy(&fpix2);
return pixd;
}
Here is the solution as I needed it, based on Dan's input.
/*!
* pixConvolveWithBias()
* Input: pixs (8 bpp; no colormap)
* kel1
* kel2 (can be null; use if separable)
* outdepth (of pixd: 8, 16 or 32)
* normflag (1 to normalize kernel to unit sum; 0 otherwise)
* bias
* Return: pixd
*
* Notes:
* (1) This does a convolution with either a single kernel or
* a pair of separable kernels, and applies the given
* bias (shift) so that the resulting pixel values
* are non-negative.
* (2) If there are no negative values in the kernel, a convolution
* is performed and bias added.
* (3) If there are negative values in the kernel, the pix is
* converted to an fpix, the convolution is done on the fpix, and
* a bias (shift) is applied.
*/
PIX *
pixConvolveWithBias(PIX *pixs,
L_KERNEL *kel1,
L_KERNEL *kel2,
l_int32 outdepth,
l_int32 normflag,
l_int32 bias)
{
l_float32 min1, min2, min, minval, maxval, range;
FPIX *fpix1, *fpix2;
PIX *pixd;
PROCNAME("pixConvolveWithBias");
if (!pixs || pixGetDepth(pixs) != 8)
return (PIX *)ERROR_PTR("pixs undefined or not 8 bpp", procName, NULL);
if (pixGetColormap(pixs))
return (PIX *)ERROR_PTR("pixs has colormap", procName, NULL);
if (!kel1)
return (PIX *)ERROR_PTR("kel1 not defined", procName, NULL);
/* Determine if negative values can be produced in convolution */
kernelGetMinMax(kel1, &min1, NULL);
min2 = 0.0;
if (kel2)
kernelGetMinMax(kel2, &min2, NULL);
min = L_MIN(min1, min2);
if (min >= 0.0) {
if (!kel2)
pixd = pixConvolve(pixs, kel1, outdepth, normflag);
else
pixd = pixConvolveSep(pixs, kel1, kel2, outdepth, normflag);
pixAddConstantGray(pixd, bias);
} else {
/* Bias may need to be applied; convert to fpix and convolve */
fpix1 = pixConvertToFPix(pixs, 1);
if (!kel2)
fpix2 = fpixConvolve(fpix1, kel1, normflag);
else
fpix2 = fpixConvolveSep(fpix1, kel1, kel2, normflag);
fpixDestroy(&fpix1);
fpixAddMultConstant(fpix2, bias, 1.0);
pixd = fpixConvertToPix(fpix2, outdepth, L_CLIP_TO_ZERO, 0);
fpixDestroy(&fpix2);
}
return pixd;
}
UPDATE 2016-03-15
Please take a look at this project: https://github.com/ooper-shlab/aurioTouch2.0-Swift. It has been ported to Swift and contains every answer you're looking for, if you came here searching for them.
I did a lot of research and learned a lot about FFT and the Accelerate Framework. But after days of experiments I'm kind of frustrated.
I want to display the frequency spectrum of an audio file during playback in a diagram. For every time interval it should show the magnitude in dB on the Y-axis (displayed by a red bar) for every frequency (in my case 512 values) calculated by an FFT on the X-axis.
The output should look like this:
I fill a buffer with 1024 samples, extracting only the left channel to begin with. Then I do all this FFT stuff.
Here is my code so far:
Setting up some variables
- (void)setupVars
{
maxSamples = 1024;
log2n = log2f(maxSamples);
n = 1 << log2n;
stride = 1;
nOver2 = maxSamples/2;
A.realp = (float *) malloc(nOver2 * sizeof(float));
A.imagp = (float *) malloc(nOver2 * sizeof(float));
memset(A.imagp, 0, nOver2 * sizeof(float));
obtainedReal = (float *) malloc(n * sizeof(float));
originalReal = (float *) malloc(n * sizeof(float));
setupReal = vDSP_create_fftsetup(log2n, FFT_RADIX2);
}
Doing the FFT. FrequencyArray is just a data structure that holds 512 float values.
- (FrequencyArray)performFastFourierTransformForSampleData:(SInt16*)sampleData andSampleRate:(UInt16)sampleRate
{
NSLog(#"log2n %i n %i, nOver2 %i", log2n, n, nOver2);
// n = 1024
// log2n 10
// nOver2 = 512
for (int i = 0; i < n; i++) {
originalReal[i] = (float) sampleData[i];
}
vDSP_ctoz((COMPLEX *) originalReal, 2, &A, 1, nOver2);
vDSP_fft_zrip(setupReal, &A, stride, log2n, FFT_FORWARD);
float scale = (float) 1.0 / (2 * n);
vDSP_vsmul(A.realp, 1, &scale, A.realp, 1, nOver2);
vDSP_vsmul(A.imagp, 1, &scale, A.imagp, 1, nOver2);
vDSP_ztoc(&A, 1, (COMPLEX *) obtainedReal, 2, nOver2);
FrequencyArray frequencyArray;
for (int i = 0; i < nOver2; i++) {
frequencyArray.frequency[i] = log10f(obtainedReal[i]); // Magnitude in db???
}
return frequencyArray;
}
The output always looks kind of weird, although it somehow seems to move with the music.
I'm happy that I got this far thanks to some very good posts here, like this one:
Using the apple FFT and accelerate Framework
But now I don't know what to do. What am I missing?
Firstly, you're not applying a window function prior to the FFT; this will result in smearing of the spectrum due to spectral leakage.
Secondly, you're just using the real component of the FFT output bins to calculate dB magnitude; you need to use the complex magnitude:
magnitude_dB = 10 * log10(re * re + im * im);
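Putting both points together, a rough sketch that reuses the buffers from the question (assumed, not tested against your project); the small constant added before the log is just to avoid log10(0):
// 1) window the time-domain samples before vDSP_ctoz / vDSP_fft_zrip
float window[maxSamples];
vDSP_hann_window(window, maxSamples, vDSP_HANN_NORM);
vDSP_vmul(originalReal, 1, window, 1, originalReal, 1, maxSamples);

// ... vDSP_ctoz, vDSP_fft_zrip and the scaling exactly as in the question ...

// 2) dB magnitude from the complex bins (re*re + im*im), not just the real part
// (note: with vDSP_fft_zrip, imagp[0] actually holds the Nyquist bin)
float power[nOver2];
vDSP_zvmags(&A, 1, power, 1, nOver2);   // squared magnitude per bin
for (int i = 0; i < nOver2; i++)
    frequencyArray.frequency[i] = 10.0f * log10f(power[i] + 1e-12f);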