I am trying to draw triangular waves for an audio recorder using metering. I am using AVAudioRecorder, which means that a Fast Fourier Transform will not work in this case (and secondly, I don't have enough knowledge to implement one). I found this project on GitHub, in which the author uses the following equation to draw a smooth sine wave:
CGFloat y = scaling * self.maxAmplitude * normedAmplitude * sinf(2 * M_PI *(x / self.waveWidth) * self.frequency + self.phase) + (self.waveHeight * 0.5);
If you look at the sinf(2 * M_PI *(x / self.waveWidth) * self.frequency + self.phase) part of the equation, you will find that it is the equation of a sine wave (Wikipedia). If I replace this part with the equation of a triangle wave (Wikipedia), it still draws a sine wave, with only a small difference. I want to transform this equation so that it draws a triangle wave instead of a sine wave.
My triangle wave equation looks like this:
CGFloat t = x / self.waveWidth;
CGFloat numerator = sinf( (2.0 * M_PI * (2.0 * self.amplitude + 1.0) * self.frequency * t) );
CGFloat denominator = (2.0 * self.amplitude + 1.0) * (2.0 * self.amplitude + 1.0);
CGFloat multiplyer = (8.0 / pow(M_PI, 2.0));
CGFloat result = multiplyer * (numerator / denominator);
The y position is then finally calculated by:
y = (result * scaling * self.maxAmplitude * normedAmplitude) + (self.waveHeight * 0.5);
The animation also looks unnatural, and the output of this equation still looks like a sine wave rather than a triangle wave.
Thanks
Well, looking at the equation you're using (which is the Fourier series of a triangle wave), you're implementing it a bit wrong: the harmonic index k should increase term by term, but you've left it constant at 2.0 * self.amplitude + 1.0. You're also leaving out the (-1)^k factor, which alternates the sign of the odd harmonics (the phase flip the quote below mentions).
Wikipedia wrote this:
It is possible to approximate a triangle wave with additive synthesis by adding odd harmonics of the fundamental, multiplying every (4n−1)th harmonic by −1 (or changing its phase by π), and rolling off the harmonics by the inverse square of their relative frequency to the fundamental.
I'm guessing (as I'm not a DSP expert) that because you're leaving the k value as a constant it is just giving you a sine wave output.
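To illustrate, here is a minimal sketch of that additive-synthesis sum with an increasing index k (plain C, hypothetical triangle_fourier name; not the project's code, just the formula from the Wikipedia quote written out):

#include <math.h>

// Triangle-wave Fourier sum with an increasing index k.
// t is the normalized position (e.g. x / waveWidth) and nPartials is however
// many odd harmonics you choose to sum.
float triangle_fourier(float t, float frequency, int nPartials)
{
    float sum = 0.0f;
    for (int k = 0; k < nPartials; k++) {
        float n = 2.0f * k + 1.0f;                 // odd harmonic number: 1, 3, 5, ...
        float sign = (k % 2 == 0) ? 1.0f : -1.0f;  // the (-1)^k alternation
        sum += sign * sinf(2.0f * M_PI * n * frequency * t) / (n * n);
    }
    return (8.0f / (M_PI * M_PI)) * sum;           // scales the peak to roughly +/-1
}

The returned value could then be dropped into the same y calculation you already have, in place of result.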
Look at this algorithm block for the triangle wave (try it, then change it for your code):
// phase should run from -PI to PI; phaseIncr advances it by one sample per iteration
phaseIncr = (2.0 * M_PI / sample_rate) * self.frequency;
for (int i = 0; i < numSamples; i++) {
    triVal = phase * (2.0 / M_PI);            // map phase to [-2, 2)
    if (phase < 0) triVal = 1.0 + triVal;     // rising half: -1 up to 1
    else           triVal = 1.0 - triVal;     // falling half: 1 down to -1
    sample = amplitude * triVal;
    if ((phase += phaseIncr) >= M_PI) phase -= (2.0 * M_PI);   // wrap back into [-PI, PI)
}
I also see that the original project wraps the phase in the setLevel method, so check that out. Hope this helps; let me know if it doesn't work and I'll try to help as much as I can.
I am using an STM32F4 Discovery board, and I have generated a 10 Hz sine wave using DAC Channel 1.
As per STM's application note, the sine wave is generated by streaming a pre-computed table of samples to the DAC, and the resulting output frequency is given by:
fSinewave = fTimerTRGO / nSamples
This is my simple function, which populates 100 samples. Since I used fTimerTRGO = 1 kHz, fSinewave correctly comes out to 1 kHz / 100 = 10 Hz:
Appl_getSineVal();
HAL_DAC_Start_DMA(&hdac, DAC_CHANNEL_1, (uint32_t*)Appl_u16SineValue, 100, DAC_ALIGN_12B_R);
...
void Appl_getSineVal(void)
{
    for (uint8_t i = 0; i < 100; i++) {
        Appl_u16SineValue[i] = (sin(i * 2 * PI / 100) + 1) * (4096 / 2);
    }
}
Now I want to superimpose another sine wave, with a frequency of 5 Hz, on the same channel to get a mixed-frequency signal. I need help figuring out how to do this.
I tried populating the Appl_u16SineValue[] array with different sine values, but those attempts are not worth mentioning here.
In order to combine two sine waves, just add them:
sin(...) + sin(...)
Since the sum is in the range [-2...2] (instead of [-1...1]), it needs to be scaled. Otherwise it would exceed the DAC range:
0.5 * sin(...) + 0.5 * sin(...)
Now it can be adapted to the DAC integer range as before:
(0.5 * sin(...) + 0.5 * sin(...) + 1) * (4096 / 2)
Instead of gains of 0.5 and 0.5, it's also possible to choose other gains, e.g. 0.3 and 0.7. They just need to add up to 1.0.
Update
For your specific case with a 10Hz and a 5Hz sine wave, the code would look like so:
for (uint8_t i = 0; i < 200; i++) {
    mixed[i] = (0.5 * sin(i * 2 * PI / 100) + 0.5 * sin(i * 2 * PI / 200) + 1) * (4096 / 2);
}
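Note that the table is now 200 samples long (one full cycle of the 5 Hz component, two cycles of the 10 Hz one), so the buffer and the DMA transfer length from your code have to match it. Assuming Appl_u16SineValue (or a separate mixed[] array) is resized to 200 entries, the start call would become:

Appl_getSineVal();   // now fills 200 mixed samples as shown above
HAL_DAC_Start_DMA(&hdac, DAC_CHANNEL_1, (uint32_t*)Appl_u16SineValue, 200, DAC_ALIGN_12B_R);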
I have a 3D scene in which I position a few objects on an imaginary sphere, and now I want to rotate them with device motion.
I use a spherical coordinate system and calculate each position on the sphere like this:
x = ρ * sinϕ * cosθ
y = ρ * sinϕ * sinθ
z = ρ * cosϕ.
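(For reference, a minimal sketch of that conversion as plain C, with a hypothetical sphericalToCartesian name; rho is the sphere radius, phi the polar angle, theta the azimuth:)

#include <math.h>

// Standard spherical-to-Cartesian conversion from the equations above
void sphericalToCartesian(float rho, float phi, float theta,
                          float *x, float *y, float *z)
{
    *x = rho * sinf(phi) * cosf(theta);
    *y = rho * sinf(phi) * sinf(theta);
    *z = rho * cosf(phi);
}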
I also use angles (from 0 to 2 * M_PI) to perform the rotation horizontally (in the z-x plane).
Everything works perfectly until I want to use the quaternion from the motion matrix.
I can extract values like pitch, yaw, and roll:
GLKQuaternion quat = GLKQuaternionMakeWithMatrix4(motionMatrix);
CGFloat adjRoll = atan2(2 * (quat.y * quat.w - quat.x * quat.z), 1 - 2 * quat.y * quat.y - 2 * quat.z * quat.z);
CGFloat adjPitch = atan2(2 * (quat.x * quat.w + quat.y * quat.z), 1 - 2 * quat.x * quat.x - 2 * quat.z * quat.z);
CGFloat adjYaw = asin(2 * quat.x * quat.y + 2 * quat.w * quat.z);
or alternatively:
CMAttitude *currentAttitude = [MotionDataProvider sharedProvider].attitude; //from CoreMotion
CGFloat roll = currentAttitude.roll;
CGFloat pitch = currentAttitude.pitch;
CGFloat yaw = currentAttitude.yaw;
(Note: the values I get from these two methods are different.)
The problem is that pitch, yaw, and roll in this form are not applicable to my scheme.
How can I convert pitch, yaw, roll (or the quaternion, or the motion matrix) to the angles I need in x-z for my rotation model? Am I on the right track, or have I missed some milestone point?
How do I get the rotation around the y axis from the rotation matrix/quaternion received from CoreMotion, treating the current z and x rotations as 0, so the displayed object is rotated only around the y axis?
I am using iOS, by the way, but I guess that is not important here.
I want to generate a square wave sound on the iPhone. I found some sine wave code on the web (sorry, I have forgotten the link), but I want to generate a square wave instead.
Could you help me, please? Here is the sine generation code:
const double amplitude = 0.25;
ViewController *viewController = (__bridge ViewController *)inRefCon;

// Carry the oscillator phase over from the previous render callback
double theta = viewController->theta;
double theta_increment = 2.0 * M_PI * viewController->frequency / viewController->sampleRate;

const int channel = 0;
Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;

// Fill the buffer with one sine sample per frame
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
    buffer[frame] = sin(theta) * amplitude;
    theta += theta_increment;
    if (theta > 2.0 * M_PI)
    {
        theta -= 2.0 * M_PI;   // wrap the phase back into [0, 2*PI)
    }
}

// Store the phase for the next render callback
viewController->theta = theta;
Sum of the odd harmonics
A perfect square wave is the sum of all the odd harmonics, each divided by its harmonic number, up to infinity. In the real world you have to stop somewhere, of course - specifically at the Nyquist frequency in digital audio. Below is a picture of the fundamental plus the first 3 odd harmonics; you can see how the square begins to take shape.
In your code sample, this would mean wrapping the sine generation in another loop. Something like this:
double harmNum = 1.0;
while (true)
{
    double freq = viewController->frequency * harmNum;
    if (freq > viewController->sampleRate / 2.0)
        break;                                      // stop once past the Nyquist frequency
    double theta_increment = 2.0 * M_PI * freq / viewController->sampleRate;
    double ampl = amplitude / harmNum;              // roll each harmonic off by 1/harmNum
    // and then the rest of your code:
    // for (UInt32 frame = ...
    harmNum += 2.0;                                 // step to the next odd harmonic
}
The main problem you'll have is that you need to track theta for each of the harmonics.
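To sketch that bookkeeping, here is a plain-C illustration with hypothetical names (fillSquare, phases, MAX_HARMONICS); it's not drop-in code for your render callback, just one way to keep a separate phase accumulator per harmonic and sum them per frame:

#include <math.h>

#define MAX_HARMONICS 64

// One theta per harmonic, persisted between calls
static double phases[MAX_HARMONICS];

// Fills `buffer` with a band-limited square wave by summing odd harmonics,
// each with its own phase accumulator and a 1/harmNum amplitude roll-off.
void fillSquare(float *buffer, unsigned numFrames,
                double frequency, double sampleRate, double amplitude)
{
    for (unsigned frame = 0; frame < numFrames; frame++) {
        double sample = 0.0;
        int h = 0;
        for (double harmNum = 1.0; h < MAX_HARMONICS; harmNum += 2.0, h++) {
            double freq = frequency * harmNum;
            if (freq > sampleRate / 2.0)            // stop at the Nyquist frequency
                break;
            sample += sin(phases[h]) * (amplitude / harmNum);
            phases[h] += 2.0 * M_PI * freq / sampleRate;
            if (phases[h] > 2.0 * M_PI)
                phases[h] -= 2.0 * M_PI;            // wrap each harmonic's phase
        }
        buffer[frame] = (float)sample;
    }
}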
A cheater solution
A cheat would be to draw a square the way you would on paper: divide the sample rate by the frequency and then by 2, and produce that number of -1 samples followed by that number of +1 samples.
For example, for a 1 kHz square wave at 48 kHz: 48000 / 1000 / 2 = 24, so you output [-1,-1,-1,....,1,1,1,.....] where there are 24 of each.
A major disadvantage is that you'll have poor frequency resolution. If your sample rate were 44100 you couldn't produce exactly 1 kHz, because that would require 22.05 samples at -1 and 22.05 samples at +1, so you have to round down.
Depending on your requirements this might be an easier way to go, since you can implement it with just a counter and the last count carried between invocations (the same way you're tracking theta now).
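A minimal sketch of that counter approach (plain C, hypothetical fillNaiveSquare name; the counter plays the role theta plays now and is carried across invocations):

// Naive (non-band-limited) square wave: output -amplitude for half a period,
// +amplitude for the other half, using a sample counter carried between calls.
void fillNaiveSquare(float *buffer, unsigned numFrames,
                     double frequency, double sampleRate, float amplitude,
                     unsigned *counter)
{
    unsigned halfPeriod = (unsigned)(sampleRate / frequency / 2.0);  // e.g. 48000/1000/2 = 24
    for (unsigned frame = 0; frame < numFrames; frame++) {
        buffer[frame] = (*counter < halfPeriod) ? -amplitude : amplitude;
        if (++(*counter) >= 2 * halfPeriod)
            *counter = 0;    // start the next period
    }
}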
Getting data from the CMMotionManager is fairly straightforward; processing it, not so much.
Does anybody have any pointers to code for relatively accurate step detection (ignoring smaller movements), or guidelines on how to go about such a thing?
What you basically need is a kind of low-pass filter that allows you to ignore small movements. Effectively, this "smooths" out the data by filtering out the jitter.
- (void)updateViewsWithFilteredAcceleration:(CMAcceleration)acceleration
{
    // Previous filtered values, kept between calls
    static CGFloat x0 = 0;
    static CGFloat y0 = 0;

    const NSTimeInterval dt = (1.0 / 20);   // time between samples (20 Hz update rate)
    const double RC = 0.3;                  // filter time constant
    const double alpha = dt / (RC + dt);    // smoothing factor

    // Blend the new raw reading with the previous filtered value
    CMAcceleration smoothed;
    smoothed.x = (alpha * acceleration.x) + (1.0 - alpha) * x0;
    smoothed.y = (alpha * acceleration.y) + (1.0 - alpha) * y0;

    [self updateViewsWithAcceleration:smoothed];

    // Remember the filtered values for the next call
    x0 = smoothed.x;
    y0 = smoothed.y;
}
The alpha value determines how much weight to give the incoming raw data versus the previous filtered data.
dt is the time elapsed between samples.
The RC value controls the aggressiveness of the filter: bigger values mean smoother output.
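With the constants above, alpha = 0.05 / (0.3 + 0.05) ≈ 0.14, so each new raw sample contributes roughly 14% to the smoothed value and the previous smoothed value contributes roughly 86%.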
I am learning how to make a Julia Set fractal. I am using this as a reference.
I know the math theory behind it very well. I can compute it manually, too. However, what I do not understand is how it is being done in the program mentioned in the reference.
The author has certain variables that determine the zoom and displacement, and he performs some calculations on them.
Can someone please explain what they are?
Let's take a look at this line (the one below it works the same way):
newRe = (x - w / 2) / (0.5 * zoom * w) + moveX;
(Ignore the missing 1.5 factor; in the original code it's just there to make sure the image doesn't look "squished.")
It's in a for loop that assigns values between 0 and w to x.[1] So the leftmost and rightmost newRe values are going to be:
Leftmost:
newRe = (0 - w / 2) / (0.5 * zoom * w) + moveX;
= -(w / 2) / w / 0.5 / zoom + moveX;
= -(1 / 2) / 0.5 / zoom + moveX;
= -1 / zoom + moveX;
Rightmost:
newRe = (w - w / 2) / (0.5 * zoom * w) + moveX;
= (w / 2) / w / 0.5 / zoom + moveX;
= (1 / 2) / 0.5 / zoom + moveX;
= 1 / zoom + moveX;
Their difference -- that is, the width of the actual rectangle of the Julia fractal being displayed -- is equal to:
(1 / zoom + moveX) - (-1 / zoom + moveX)
= (1 / zoom) - (-1 / zoom)
= 2 / zoom
(This whole calculation also works for newIm, h, and moveY.)
This is why increasing zoom causes the rectangle we're examining to shrink -- which is exactly what "zooming in" is.
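Putting both axes together, here is a minimal sketch of the pixel-to-complex mapping (hypothetical pixelToComplex name, again ignoring the 1.5 aspect-ratio factor from the original code):

// Maps pixel (x, y) in a w-by-h image to a point in the complex plane.
// The visible rectangle is centered on (moveX, moveY) and spans 2/zoom
// in each direction.
void pixelToComplex(int x, int y, int w, int h,
                    double zoom, double moveX, double moveY,
                    double *re, double *im)
{
    *re = (x - w / 2.0) / (0.5 * zoom * w) + moveX;
    *im = (y - h / 2.0) / (0.5 * zoom * h) + moveY;
}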
[1] It actually only goes up to w-1, but accounting for that one-pixel difference would make this calculation a whole lot more difficult.