I'm currently using FFT code from here:
https://github.com/syedhali/EZAudio/tree/master/EZAudioExamples/iOS/EZAudioFFTExample
Here's the code from the 2 relevant methods:
-(void)createFFTWithBufferSize:(float)bufferSize withAudioData:(float*)data {
    // Setup the length
    _log2n = log2f(bufferSize);
    // Calculate the weights array. This is a one-off operation.
    _FFTSetup = vDSP_create_fftsetup(_log2n, FFT_RADIX2);
    // For an FFT, numSamples must be a power of 2, i.e. is always even
    int nOver2 = bufferSize / 2;
    // Populate *window with the values for a Hamming window function
    float *window = (float *)malloc(sizeof(float) * bufferSize);
    vDSP_hamm_window(window, bufferSize, 0);
    // Window the samples
    vDSP_vmul(data, 1, window, 1, data, 1, bufferSize);
    free(window);
    // Define complex buffer
    _A.realp = (float *)malloc(nOver2 * sizeof(float));
    _A.imagp = (float *)malloc(nOver2 * sizeof(float));
}
-(void)updateFFTWithBufferSize:(float)bufferSize withAudioData:(float*)data {
    // For an FFT, numSamples must be a power of 2, i.e. is always even
    int nOver2 = bufferSize / 2;
    // Pack samples:
    // C(re) -> A[n], C(im) -> A[n+1]
    vDSP_ctoz((COMPLEX*)data, 2, &_A, 1, nOver2);
    // Perform a forward FFT using fftSetup and A
    // Results are returned in A
    vDSP_fft_zrip(_FFTSetup, &_A, 1, _log2n, FFT_FORWARD);
    // Convert COMPLEX_SPLIT A result to magnitudes
    float amp[nOver2];
    float maxMag = 0;
    for (int i = 0; i < nOver2; i++) {
        // Calculate the (squared) magnitude
        float mag = _A.realp[i] * _A.realp[i] + _A.imagp[i] * _A.imagp[i];
        maxMag = mag > maxMag ? mag : maxMag;
    }
    for (int i = 0; i < nOver2; i++) {
        // Calculate the (squared) magnitude
        float mag = _A.realp[i] * _A.realp[i] + _A.imagp[i] * _A.imagp[i];
        // Bind the value to be less than 1.0 to fit in the graph
        amp[i] = [EZAudio MAP:mag leftMin:0.0 leftMax:maxMag rightMin:0.0 rightMax:1.0];
    }
}
I've modified the updateFFTWithBufferSize method above so that I could get the frequency in Hz like this:
for (int i = 0; i < nOver2; i++) {
    // Calculate the (squared) magnitude
    float mag = _A.realp[i] * _A.realp[i] + _A.imagp[i] * _A.imagp[i];
    if (maxMag < mag) {
        _i_max = i;
    }
    maxMag = mag > maxMag ? mag : maxMag;
}
float frequency = _i_max / bufferSize * 44100;
NSLog(@"FREQUENCY: %f", frequency);
I've generated a few pure sine tones with Audacity at different frequencies to test with. The issue I'm seeing is that the code is returning the same frequency for two different sine tones that are relatively close in value.
For example:
A sine tone generated at 19255Hz will show up from FFT as 19293.750000Hz. So will a sine tone generated at 19330Hz. Something must be off in the calculations.
Any assistance in how I can modify the above code to get a more precise FFT frequency reading for pure sine tones is greatly appreciated.
Thank you!
You can get a rough frequency estimate by fitting a parabolic curve to the 3 FFT bin magnitudes around the peak magnitude bin, and then finding the extrema of that parabola.
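For example, a minimal sketch of that parabolic interpolation in C (untested; the function name and parameters are mine, and it assumes the peak bin is neither the first nor the last bin):
#include <math.h>

// Refine the peak bin k of an FFT magnitude array by fitting a parabola
// through bins k-1, k, k+1. Log magnitudes make the fit closer to ideal for
// a windowed sinusoid; the question's squared magnitudes work too, since
// taking the log only scales all three values. Returns the frequency in Hz.
static float interpolatedPeakFrequency(const float *mag, int k,
                                       int fftLength, float sampleRate)
{
    float alpha = logf(mag[k - 1]);
    float beta  = logf(mag[k]);
    float gamma = logf(mag[k + 1]);
    // Offset of the parabola's vertex from bin k, in bins (about -0.5 .. 0.5).
    float delta = 0.5f * (alpha - gamma) / (alpha - 2.0f * beta + gamma);
    return ((float)k + delta) * sampleRate / (float)fftLength;
}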
A better estimate can be created by using the transform of your FFT window as an interpolation kernel, and doing successive approximation to refine an estimate of the maxima of the interpolated points. (Zero padding and using a much longer FFT will give you a similar type of interpolated estimate.)
The easy way for a stationary signal is, if possible, to just use a longer FFT with more samples that span a longer time interval.
You've got a number of problems going on here:
1) Your frequency bin spacing (fmax divided by the number of bins, which is the same as the sample rate divided by the FFT length) is about 80 Hz, so you're not going to get a resolution much better than that; a small sketch of this follows the list.
2) Your signal is very close to the Nyquist frequency (i.e. 20 kHz / 44.1 kHz is almost 0.5), and when you're this close to the Nyquist limit you need to be very careful if you want accurate results. (That is, at 20 kHz you're only recording about two data points for each full oscillation cycle.)
3) Since 20 kHz is at the edge of human hearing (and above it for most people), many microphones don't really worry about it; measurements of the iPhone microphone's frequency response are available online.
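To make point 1 concrete, here is the arithmetic as a tiny C helper (the function name is mine, and the 512-sample buffer is an assumption, but it reproduces the 19293.750000 Hz value reported in the question):
// Spacing between FFT bins is the sample rate divided by the FFT length,
// e.g. 44100.0 / 512 = 86.13 Hz. Both 19255 Hz and 19330 Hz then fall into
// bin 224, and 224 * 86.13 Hz = 19293.75 Hz, the value reported above.
static float binSpacingHz(float sampleRate, int fftLength)
{
    return sampleRate / (float)fftLength;
}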
Perhaps your sampling frequency isn't high enough?
The FFT is a very good method to get a spectrum if you don't know anything about the input. If you know that the input is a pure sine wave, you can do much better. Start by calculating the FFT to get a rough idea of where the sine is. Get the minimum and maximum to estimate the amplitude (or get that from the FFT: square all inputs, add them, take the square root), then get the phase at the beginning and end of the buffer given the estimated frequency and amplitude.
In general, you'll find that the phase does not match. That's because the frequency estimate is off by some Δf, and that error accumulates over the window into a phase offset at the end proportional to Δf times the window length (roughly 2π·Δf·N/fs radians); f - Δf is then a better estimate of the frequency. Keep in mind that such a method is extremely noise sensitive: it works because the input is a pure sine wave, and noise is everything but that. Applying it iteratively blows up quickly, and you eventually hit rounding errors (which are not sinusoidal either).
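A rough, untested sketch of that phase-comparison idea in C (the function and all names are mine; it assumes the buffer is close to a pure tone near the coarse estimate f0 and that N is even):
#include <math.h>
#include <complex.h>

// Correlate the first and second half of the buffer against a reference tone
// at the coarse estimate f0; the extra phase accumulated between the two
// halves is proportional to the frequency error.
static float refineFrequency(const float *x, int N, float f0, float fs)
{
    float complex c1 = 0.0f, c2 = 0.0f;
    for (int n = 0; n < N / 2; n++) {
        c1 += x[n]         * cexpf(-I * 2.0f * (float)M_PI * f0 * n / fs);
        c2 += x[n + N / 2] * cexpf(-I * 2.0f * (float)M_PI * f0 * (n + N / 2) / fs);
    }
    // Phase error accumulated over N/2 samples; solve for the frequency error.
    float dphi = cargf(c2 * conjf(c1));
    return f0 + dphi * fs / ((float)M_PI * (float)N);
}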
Another similar trick is subtracting the estimated wave. The difference between two sines works out to a product of two sinusoids, one involving the sum of the frequencies (in your case around 38.5 kHz) and one involving the difference (Δf, less than 100 Hz). See also heterodyne detection.
Related
I have decomposed some time series data using a custom FFT implementation. By design my FFT implementation gives me a set of cos and sine waves that I can then sum together to regenerate the original signal. This works well without issue, so I know that the extracted sine and cos waves are correct in terms of amplitude, period and phase.
The data I am using has 1024 samples which gives me the properties of 512 cos waves and 512 sine waves (eg the amplitude, phase and period data for each wave).
To save on data storage I am trying to find/understand the mathematical relationship between the amplitudes of the waves. Instead of having to save every amplitude for every sine and cos wave I would like to simply save some coefficients that I can later use to rebuild the amplitudes in code.
FFT Sine Waves with Amplitudes
From the above image you can see that there is a set of power-curve coefficients that roughly fit the amplitude data; however, for my use case this is not accurate enough.
As I have all the source data along with the generated properties of each wave, is there a simple formula that I can use, or a transform I can perform, to generate the amplitudes in code after I have performed the FFT? I know that the amplitudes are related to the real and imaginary values, but I cannot store all the real and imaginary values either, due to space requirements.
As an example of how I am solving this issue for the period data, I have found that the period of each wave is simply Math.Pow(waveIndex, -1). So for the wave periods I do not have to store the data; I can simply regenerate it in code.
I cannot currently find a relationship between the amplitudes of the sine waves, or even a relationship between the cos and sine amplitudes. The theory and math behind the FFT are beyond me, so I am hoping that there is a simple formula or concept I can implement.
Following the replies I have added the below code that I use to get the sine and cos wave values, this code snippet may help those replying.
internal void GetSineAndCosWavesBasic(double[] outReal, double[] outImag, int numWaves, out double[,] sineValues, out double[,] cosValues)
{
    // the real and imag values from Cooley-Tukey decimation-in-time radix-2 FFT are passed in
    // and we want to generate the cos and sine values for each sample for each wave
    var length = outReal.Length;
    var lengthDouble = (double)length;
    var halfLength = lengthDouble / 2.0;
    sineValues = new double[numWaves, length];
    cosValues = new double[numWaves, length];
    var Pi2 = 2 * Math.PI;
    for (var waveIdx = 0; waveIdx < numWaves; waveIdx++)
    {
        for (var sampleIdx = 0; sampleIdx < length; sampleIdx++)
        {
            // first value case and middle value case
            var reX = outReal[waveIdx] / halfLength;
            if (sampleIdx == 0)
            {
                reX = outReal[waveIdx] / lengthDouble;
            }
            else if (sampleIdx == halfLength)
            {
                reX = outReal[waveIdx] / lengthDouble;
            }
            // precompute the value that gets sine/cos applied
            var tmp = (Pi2 * waveIdx * sampleIdx) / lengthDouble;
            // get the instant cos and sine values
            var valueCos = Math.Cos(tmp) * reX;
            var valueSin = Math.Sin(tmp) * (-outImag[waveIdx] / halfLength);
            // update the sine and cos values for this wave for this sample
            cosValues[waveIdx, sampleIdx] = valueCos;
            sineValues[waveIdx, sampleIdx] = valueSin;
        }
    }
}
And the below is how I get the magnitude and phase values, although I do not currently use those anywhere.
internal void CalculateMagAndPhaseBasic(double[] outReal, double[] outImag, out double[] mag, out double[] phase)
{
    // the real and imag values from Cooley-Tukey decimation-in-time radix-2 FFT are passed in
    // and we want to generate the magnitude and phase values
    var length = outReal.Length;
    mag = new double[(length / 2) + 1];
    phase = new double[(length / 2) + 1];
    for (var i = 0; i <= length / 2; i++)
    {
        mag[i] = Math.Pow((outReal[i] * outReal[i]) + (outImag[i] * outImag[i]), 0.5);
        phase[i] = Math.Atan2(outImag[i], outReal[i]);
    }
}
Actually, the FFT just returns complex coefficients S(w) = a + jb.
For an N-point FFT, abs(S(w)) * 2/N will be (close to) the amplitude of the sinusoidal component at frequency w.
This assumes that the sinusoidal component has a frequency close to the center of the FFT bin; otherwise its power gets "split" between two adjacent bins.
It also assumes that the frequency you're interested in is present throughout the whole FFT window.
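In C that scaling looks roughly like this (a sketch; the function name is mine, and the DC and Nyquist bins would normally be scaled by 1/N instead of 2/N):
#include <math.h>

// Amplitude of the sinusoid represented by bin k (0 < k < N/2) of an
// N-point FFT of a real signal, given that bin's real/imaginary parts.
static double binAmplitude(double re, double im, int N)
{
    return 2.0 * sqrt(re * re + im * im) / (double)N;
}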
The output of an FFT has the same number of degrees of freedom as the input. There is no simple formula (other than the FFT itself) that relates the FFT results to just each other, as all of the FFT outputs can change if any of the FFT inputs changes.
The relationship between the sine and cosine parts of each FFT bin result reflects the phase of the sinusoidal input component at that frequency (the bin center), measured circularly relative to the start and end of the window. If the phase changes, both the sine and cosine components can change. See: atan2()
I have a task: to multiply a big row vector (10,000 elements) by a big column-major matrix (10,000 rows, 400 columns). I decided to go with ARM NEON since I'm curious about this technology and would like to learn more about it.
Here's a working example of vector matrix multiplication I wrote:
//float* vec_ptr - a pointer to the vector
//float* mat_ptr - a pointer to the matrix
//float* out_ptr - a pointer to the output vector
//int matCols - matrix columns
//int vecRows - vector rows, the same as the matrix rows
for (int i = 0, max_i = matCols; i < max_i; i++) {
    for (int j = 0, max_j = vecRows - 3; j < max_j; j += 4, mat_ptr += 4, vec_ptr += 4) {
        float32x4_t mat_val = vld1q_f32(mat_ptr);          // get 4 elements from the matrix
        float32x4_t vec_val = vld1q_f32(vec_ptr);          // get 4 elements from the vector
        float32x4_t out_val = vmulq_f32(mat_val, vec_val); // multiply the vectors
        float32_t total_sum = vaddvq_f32(out_val);         // sum the elements of the vector together
        out_ptr[i] += total_sum;
    }
    vec_ptr = &myVec[0]; // move the vector pointer back to the first element
}
The problem is that it takes a very long time to compute: 30 ms on an iPhone 7+, when my goal is 1 ms or even less if possible. The current execution time is understandable, since the inner multiplication runs 400 * (10000 / 4) = 1,000,000 times.
I also tried processing 8 elements at a time instead of 4. It seems to help, but the numbers are still very far from my goal.
I understand that I might be making some horrible mistakes, since I'm a newbie with ARM NEON. I would be happy if someone could give me some tips on how to optimize my code.
Also: is it worth doing big vector-matrix multiplication via ARM NEON at all? Does this technology fit such a purpose well?
Your code is completely flawed: it iterates 16 times assuming both matCols and vecRows are 4. What's the point of SIMD then?
And the major performance problem lies in float32_t total_sum = vaddvq_f32(out_val);
You should never convert a vector to a scalar inside a loop, since it causes a pipeline hazard that costs around 15 cycles every time.
The solution:
float32x4x4_t myMat;
float32x2_t myVecLow, myVecHigh;
myVecLow = vld1_f32(&pVec[0]);
myVecHigh = vld1_f32(&pVec[2]);
myMat = vld4q_f32(pMat);
myMat.val[0] = vmulq_lane_f32(myMat.val[0], myVecLow, 0);
myMat.val[0] = vmlaq_lane_f32(myMat.val[0], myMat.val[1], myVecLow, 1);
myMat.val[0] = vmlaq_lane_f32(myMat.val[0], myMat.val[2], myVecHigh, 0);
myMat.val[0] = vmlaq_lane_f32(myMat.val[0], myMat.val[3], myVecHigh, 1);
vst1q_f32(pDst, myMat.val[0]);
This version:
1) computes all four rows in a single pass;
2) does a matrix transpose (rotation) on the fly via vld4;
3) does vector-scalar multiply-accumulate instead of the vector-vector multiply and horizontal add that cause the pipeline hazards.
You were asking if SIMD is suitable for matrix operations? A simple "yes" would be a monumental understatement. You don't even need a loop for this.
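The 4x4 code above doesn't directly cover the 10000x400 case in the question, but the same principles apply: keep several vector accumulators, use multiply-accumulate, and do the horizontal reduction only once per output element, outside the hot loop. A rough, untested sketch (the function and its signature are mine; it assumes the matrix is column-major as described and that vecRows is a multiple of 16):
#include <arm_neon.h>
#include <stddef.h>

// out_ptr[c] = dot(vec, column c of mat), with mat stored column-major.
// vaddvq_f32 is AArch64-only, as in the question's own code.
void vecMatMulNeon(const float *vec_ptr, const float *mat_ptr,
                   float *out_ptr, int vecRows, int matCols)
{
    for (int c = 0; c < matCols; c++) {
        const float *col = mat_ptr + (size_t)c * (size_t)vecRows;
        // Four independent accumulators hide the multiply-accumulate latency.
        float32x4_t acc0 = vdupq_n_f32(0.0f);
        float32x4_t acc1 = vdupq_n_f32(0.0f);
        float32x4_t acc2 = vdupq_n_f32(0.0f);
        float32x4_t acc3 = vdupq_n_f32(0.0f);
        for (int r = 0; r < vecRows; r += 16) {
            acc0 = vmlaq_f32(acc0, vld1q_f32(vec_ptr + r),      vld1q_f32(col + r));
            acc1 = vmlaq_f32(acc1, vld1q_f32(vec_ptr + r + 4),  vld1q_f32(col + r + 4));
            acc2 = vmlaq_f32(acc2, vld1q_f32(vec_ptr + r + 8),  vld1q_f32(col + r + 8));
            acc3 = vmlaq_f32(acc3, vld1q_f32(vec_ptr + r + 12), vld1q_f32(col + r + 12));
        }
        // One horizontal reduction per output element, outside the hot loop.
        float32x4_t sum = vaddq_f32(vaddq_f32(acc0, acc1), vaddq_f32(acc2, acc3));
        out_ptr[c] = vaddvq_f32(sum);
    }
}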
I am using this GitHub project called "The Amazing Audio Engine", to capture audio from the microphone. So I am using this method:
id<AEAudioReceiver> receiver = [AEBlockAudioReceiver audioReceiverWithBlock: ^(void *source, const AudioTimeStamp *time, UInt32 frames, AudioBufferList *audio) {
// Do something with 'audio'
}];
This method fires every 23 ms delivering an audio array containing all amplitudes of the sound wave over that 23 ms interval.
Here is the catch: the audio I am dealing with is an FM signal composed of two frequencies, one at 1000 Hz and one at twice that frequency, which represent the zeros and ones of a digital stream.
This is my problem: at that point I have an array of audio amplitudes covering 23 ms.
So I thought I could do a FFT to convert the signal into frequency levels. I used this code:
// Setup the length
vDSP_Length log2n = log2f(numFrames);
// Calculate the weights array. This is a one-off operation.
FFTSetup fftSetup = vDSP_create_fftsetup(log2n, FFT_RADIX2);
// For an FFT, numSamples must be a power of 2, i.e. is always even
int nOver2 = numFrames/2;
// Populate *window with the values for a hamming window function
float *window = (float *)malloc(sizeof(float) * numFrames);
vDSP_hamm_window(window, numFrames, 0);
// Window the samples
vDSP_vmul(data, 1, window, 1, data, 1, numFrames);
// Define complex buffer
COMPLEX_SPLIT A;
A.realp = (float *) malloc(nOver2*sizeof(float));
A.imagp = (float *) malloc(nOver2*sizeof(float));
// Pack samples:
// C(re) -> A[n], C(im) -> A[n+1]
vDSP_ctoz((COMPLEX*)data, 2, &A, 1, numFrames/2);
// RUN THE FFT
//Perform a forward FFT using fftSetup and A
//Results are returned in A
vDSP_fft_zrip(fftSetup, &A, 1, log2n, FFT_FORWARD);
Because each bin is about 172 Hz wide and I want to isolate 1000 Hz, I think the 6th "bucket" of the FFT result would be the one, so I have this code:
// Convert COMPLEX_SPLIT A result to magnitudes
float amp[nOver2];
amp[0] = A.realp[0] / (numFrames * 2);
for (int i = 1; i < nOver2; i++) {
    amp[i] = A.realp[i] * A.realp[i] + A.imagp[i] * A.imagp[i];
}
// I need the 6th and the 12th bucket, so I need amp[5] and amp[11]
but then I am starting to think that the FFT is not what I want, because amp[5] and amp[11] will give me the amplitudes of ~1000 Hz and ~2000 Hz over the whole 23 ms, while what I actually need is how the 1000 Hz and 2000 Hz components vary within those 23 ms. In other words, I need to obtain arrays, not single values.
In broad terms, what should I do to obtain the amplitudes of the two frequencies, 1000 Hz and 2000 Hz, over time?
If you know what time resolution you want, two Goertzel filters slid by that length would allow you to measure the amplitudes of your two frequencies with much less overhead than using FFTs. The length of the filter or FFT need not (usually should not) be the same length as the number of frames from each audio callback. You can use a circular buffer or fifo to decouple the lengths. (In iOS, the numFrames can be different on different device models, and may suddenly change depending on other factors outside the apps control).
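For reference, a single Goertzel measurement might look roughly like this in C (a sketch, untested; the function name is mine and it is not part of The Amazing Audio Engine). You would run it once per block per frequency (1000 Hz and 2000 Hz), on consecutive or overlapping blocks pulled from the FIFO, and collect the results into the arrays you're after:
#include <math.h>

// Magnitude of a single frequency over one block of samples (Goertzel).
float goertzel_magnitude(const float *samples, int numSamples,
                         float targetFreq, float sampleRate)
{
    float omega = 2.0f * (float)M_PI * targetFreq / sampleRate;
    float coeff = 2.0f * cosf(omega);
    float s_prev = 0.0f, s_prev2 = 0.0f;
    for (int n = 0; n < numSamples; n++) {
        float s = samples[n] + coeff * s_prev - s_prev2;
        s_prev2 = s_prev;
        s_prev = s;
    }
    // Squared magnitude of the target bin, then its square root.
    float power = s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2;
    return sqrtf(power > 0.0f ? power : 0.0f);
}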
I am trying to port an existing FFT-based low-pass filter to iOS using the Accelerate vDSP framework.
It seems like the FFT works as expected for about the first 1/4 of the sample. But after that the results seem wrong and, even more oddly, mirrored (the last half of the signal mirrors most of the first half).
You can see the results from a test application below. First is plotted the original sampled data, then an example of the expected filtered results (filtering out signal higher than 15Hz), then finally the results of my current FFT code (note that the desired results and example FFT result are at a different scale than the original data):
The actual code for my low-pass filter is as follows:
double *lowpassFilterVector(double *accell, uint32_t sampleCount, double lowPassFreq, double sampleRate)
{
    double stride = 1;
    int ln = log2f(sampleCount);
    int n = 1 << ln;
    // So that we get an FFT of the whole data set, we pad out the array to the next highest power of 2.
    int fullPadN = n * 2;
    double *padAccell = malloc(sizeof(double) * fullPadN);
    memset(padAccell, 0, sizeof(double) * fullPadN);
    memcpy(padAccell, accell, sizeof(double) * sampleCount);
    ln = log2f(fullPadN);
    n = 1 << ln;
    int nOver2 = n / 2;
    DSPDoubleSplitComplex A;
    A.realp = (double *)malloc(sizeof(double) * nOver2);
    A.imagp = (double *)malloc(sizeof(double) * nOver2);
    // This can be reused, just including it here for simplicity.
    FFTSetupD setupReal = vDSP_create_fftsetupD(ln, FFT_RADIX2);
    vDSP_ctozD((DSPDoubleComplex *)padAccell, 2, &A, 1, nOver2);
    // Use the FFT to get frequency counts
    vDSP_fft_zripD(setupReal, &A, stride, ln, FFT_FORWARD);
    const double factor = 0.5f;
    vDSP_vsmulD(A.realp, 1, &factor, A.realp, 1, nOver2);
    vDSP_vsmulD(A.imagp, 1, &factor, A.imagp, 1, nOver2);
    A.realp[nOver2] = A.imagp[0];
    A.imagp[0] = 0.0f;
    A.imagp[nOver2] = 0.0f;
    // Set frequencies above target to 0.
    // This tells us which bin the frequencies over the minimum desired correspond to
    NSInteger binLocation = (lowPassFreq * n) / sampleRate;
    // We add 2 because bin 0 holds special FFT meta data, so bins really start at "1" - and we want to filter out anything OVER the target frequency
    for (NSInteger i = binLocation + 2; i < nOver2; i++)
    {
        A.realp[i] = 0;
    }
    // Clear out all imaginary parts
    bzero(A.imagp, (nOver2) * sizeof(double));
    //A.imagp[0] = A.realp[nOver2];
    // Now shift back all of the values
    vDSP_fft_zripD(setupReal, &A, stride, ln, FFT_INVERSE);
    double *filteredAccell = (double *)malloc(sizeof(double) * fullPadN);
    // Converts complex vector back into 2D array
    vDSP_ztocD(&A, stride, (DSPDoubleComplex *)filteredAccell, 2, nOver2);
    // Have to scale results to account for Apple's FFT library algorithm, see:
    // http://developer.apple.com/library/ios/#documentation/Performance/Conceptual/vDSP_Programming_Guide/UsingFourierTransforms/UsingFourierTransforms.html#//apple_ref/doc/uid/TP40005147-CH202-15952
    double scale = (float)1.0f / fullPadN; //(2.0f * (float)n);
    vDSP_vsmulD(filteredAccell, 1, &scale, filteredAccell, 1, fullPadN);
    // Tracks results of conversion
    printf("\nInput & output:\n");
    for (int k = 0; k < sampleCount; k++)
    {
        printf("%3d\t%6.2f\t%6.2f\t%6.2f\n", k, accell[k], padAccell[k], filteredAccell[k]);
    }
    // Acceleration data will be replaced in-place.
    return filteredAccell;
}
In the original code the library was handling non power-of-two sizes of input data; in my Accelerate code I am padding out the input to the nearest power of two. In the case of the sample test below the original sample data is 1000 samples so it's padded to 1024. I don't think that would affect results but I include that for the sake of possible differences.
If you want to experiment with a solution, you can download the sample project that generates the graphs here (in the FFTTest folder):
FFT Example Project code
Thanks for any insight; I've not worked with FFTs before, so I feel like I am missing something critical.
If you want a strictly real (not complex) result, then the data before the IFFT must be conjugate symmetric. If you don't want the result to be mirror symmetric, then don't zero the imaginary component before the IFFT. Merely zeroing bins before the IFFT creates a filter with a huge amount of ripple in the passband.
The Accelerate framework also supports more FFT lengths than just powers of 2.
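In terms of the question's code, the zeroing step might look more like the sketch below (untested; it keeps the question's variable names, treats whether to keep the exact cutoff bin as a judgment call, and the ripple caveat above still applies):
// Zero BOTH parts of every bin above the cutoff, and leave the imaginary
// parts of the kept bins alone: zeroing all of A.imagp makes the spectrum
// purely real, which is what forces the mirror-symmetric time-domain result.
for (NSInteger i = binLocation + 1; i < nOver2; i++) {
    A.realp[i] = 0.0;
    A.imagp[i] = 0.0;
}
// In vDSP's packed format the (real) Nyquist term lives in A.imagp[0];
// it is above the cutoff here, so zero it as well.
A.imagp[0] = 0.0;
// ...then call vDSP_fft_zripD(..., FFT_INVERSE) as before, without the bzero.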
This is the formula for LoG (Laplacian of Gaussian) filtering (source: ed.ac.uk):
LoG(x, y) = -(1 / (π * σ^4)) * (1 - (x^2 + y^2) / (2 * σ^2)) * exp(-(x^2 + y^2) / (2 * σ^2))
Also, in applications that use LoG filtering, I see that the function is called with only one parameter: sigma (σ).
I want to try LoG filtering using that formula (my previous attempt was a Gaussian filter followed by a Laplacian filter with some filter-window size).
But looking at that formula I can't understand how the size of the filter is connected to it; does that mean the filter size is fixed?
Can you explain how to use it?
As you've probably figured out by now from the other answers and links, the LoG filter detects edges and lines in the image. What is still missing is an explanation of what σ is.
σ is the scale of the filter. Is a one-pixel-wide line a line or noise? Is a line 6 pixels wide a line or an object with two distinct parallel edges? Is a gradient that changes from black to white across 6 or 8 pixels an edge or just a gradient? These are things you have to decide, and the value of σ reflects your decision: the larger σ is, the wider the lines, the smoother the edges, and the more noise is ignored.
Do not get confused between the scale of the filter (σ) and the size of the discrete approximation (usually called stencil). In Paul's link σ=1.4 and the stencil size is 9. While it is usually reasonable to use stencil size of 4σ to 6σ, these two quantities are quite independent. A larger stencil provides better approximation of the filter, but in most cases you don't need a very good approximation.
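As a rule of thumb in code (an assumption on my part, not a fixed rule), you might derive the stencil size from σ like this; for σ = 1.4 it gives the 9 used in Paul's link:
#include <math.h>

// Pick an odd stencil size spanning roughly 6 sigma.
static int logStencilSize(float sigma)
{
    int size = (int)ceilf(6.0f * sigma);   // e.g. sigma = 1.4 -> 9
    return (size % 2 == 0) ? size + 1 : size;
}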
This was something that confused me too, and it wasn't until I had to do the same as you for a uni project that I understood what you were supposed to do with the formula!
You can use this formula to generate a discrete LoG filter. If you write a bit of code to implement it, you can then use it to generate a filter for image convolution. To generate, say, a 5x5 template, simply call the code with x and y ranging from -2 to +2.
This will generate the values to use in a LoG template. If you graph these values you should see the "Mexican hat" shape typical of this filter (image source: ed.ac.uk).
You can fine-tune the template by changing how wide it is (the size) and the sigma value (how broad the peak is). The wider and broader the template, the less the result is affected by noise, because it operates over a wider area.
Once you have the filter, you can apply it to the image by convolving the template with the image. If you've not done this before, check out a few convolution tutorials; there are Java applet demos as well as more mathematical treatments available online.
Essentially, at each pixel location, you "place" your convolution template, centred at that pixel. You then multiply the surrounding pixel values by the corresponding "pixel" in the template and add up the result. This is then the new pixel value at that location (typically you also have to normalise (scale) the output to bring it back into the correct value range).
The code below gives a rough idea of how you might implement this. Please forgive any mistakes / typos etc. as it hasn't been tested.
I hope this helps.
private float LoG(float x, float y, float sigma)
{
    // LoG(x, y) = -(1 / (pi * sigma^4)) * (1 - (x^2 + y^2) / (2 * sigma^2)) * e^(-(x^2 + y^2) / (2 * sigma^2))
    float rSquared = x * x + y * y;
    float sigmaSquared = sigma * sigma;
    return (float)(-(1.0 / (Math.PI * sigmaSquared * sigmaSquared))
            * (1.0 - rSquared / (2.0 * sigmaSquared))
            * Math.exp(-rSquared / (2.0 * sigmaSquared)));
}
private void GenerateTemplate(int templateSize, float sigma)
{
    // Make sure it's an odd number for convenience
    if (templateSize % 2 == 1)
    {
        // Create the data array
        float[][] template = new float[templateSize][templateSize];
        // Work out the "min and max" values. LoG is centered around 0, 0
        // so, for a size 5 template (say) we want to get the values from
        // -2 to +2, ie: -2, -1, 0, +1, +2 and feed those into the formula.
        int min = -(templateSize - 1) / 2;
        int max = (templateSize - 1) / 2;
        // We also need counts to index into the data array...
        int xCount = 0;
        for (int x = min; x <= max; ++x)
        {
            int yCount = 0;
            for (int y = min; y <= max; ++y)
            {
                // Get the LoG value for this (x,y) pair
                template[xCount][yCount] = LoG(x, y, sigma);
                ++yCount;
            }
            ++xCount;
        }
    }
}
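Once you have the template, applying it is just the convolution described above. A rough sketch in C (untested; the function and its signature are mine, it assumes a greyscale image stored row-major as floats, and it simply skips the border pixels rather than handling them specially):
// Convolve a width x height greyscale image with a square LoG kernel.
void convolveLoG(const float *image, int width, int height,
                 const float *kernel, int kernelSize, float *output)
{
    int half = kernelSize / 2;
    for (int y = half; y < height - half; y++) {
        for (int x = half; x < width - half; x++) {
            float sum = 0.0f;
            for (int ky = -half; ky <= half; ky++) {
                for (int kx = -half; kx <= half; kx++) {
                    float w = kernel[(ky + half) * kernelSize + (kx + half)];
                    sum += w * image[(y + ky) * width + (x + kx)];
                }
            }
            // May still need normalising/scaling for display, as noted above.
            output[y * width + x] = sum;
        }
    }
}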
Just for visualization purposes, here is a simple Matlab 3D colored plot of the Laplacian of Gaussian (Mexican Hat) wavelet. You can change the sigma(σ) parameter and see its effect on the shape of the graph:
sigmaSq = 0.5 % Square of σ parameter
[x y] = meshgrid(linspace(-3,3), linspace(-3,3));
z = (-1/(pi*(sigmaSq^2))) .* (1-((x.^2+y.^2)/(2*sigmaSq))) .*exp(-(x.^2+y.^2)/(2*sigmaSq));
surf(x,y,z)
You could also compare the effects of the sigma parameter on the Mexican Hat doing the following:
t = -5:0.01:5;
sigma = 0.5;
mexhat05 = exp(-t.*t/(2*sigma*sigma)) * 2 .*(t.*t/(sigma*sigma) - 1) / (pi^(1/4)*sqrt(3*sigma));
sigma = 1;
mexhat1 = exp(-t.*t/(2*sigma*sigma)) * 2 .*(t.*t/(sigma*sigma) - 1) / (pi^(1/4)*sqrt(3*sigma));
sigma = 2;
mexhat2 = exp(-t.*t/(2*sigma*sigma)) * 2 .*(t.*t/(sigma*sigma) - 1) / (pi^(1/4)*sqrt(3*sigma));
plot(t, mexhat05, 'r', ...
t, mexhat1, 'b', ...
t, mexhat2, 'g');
Or simply use the Wavelet toolbox provided by Matlab as follows:
lb = -5; ub = 5; n = 1000;
[psi,x] = mexihat(lb,ub,n);
plot(x,psi), title('Mexican hat wavelet')
I found this useful when implementing this for edge detection in computer vision. Although not the exact answer, hope this helps.
It appears to be a continuous circular filter whose radius is sqrt(2) * sigma. If you want to implement this for image processing you'll need to approximate it.
There's an example for sigma = 1.4 here: http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm