Terminate execution of a block in Objective-C (iOS)

- (void)beat {
    __block UInt64 total_frames = 0;
    // The next frame that the beat will play on
    static UInt64 next_beat_frame = 0;
    static UInt64 next_tick_frame = 0;
    static BOOL making_beat = NO;
    static BOOL tick = NO;
    _bpm = self.tempo_slider.value;
    // Oscillator specifics - instead you can easily load the samples from cowbell.aif or somesuch
    float oscillatorRate = 440./44100.0;
    __block float oscillatorPosition = 0; // this is outside the block since beats can span calls to the block
    self.blockChannel.volume = 0.5;
    // The block that is our metronome
    self.blockChannel = [AEBlockChannel channelWithBlock:^(const AudioTimeStamp *time, UInt32 frames, AudioBufferList *audio) {
        UInt64 frames_between_beats = 44100/(_bpm/60.);
        UInt64 end_frame = frames_between_beats*_midi_length;
        // For each frame, count and if we reach the frame that should start a beat, start the beat
        // frames=_length;
        for (int i=0; i<frames; i++) { // frame...by frame...
            if (end_frame == total_frames) {
                if (_recordStatus != 0) {
                    total_frames = 0;
                    next_beat_frame = 0;
                    next_tick_frame = 0;
                    oscillatorPosition = 0;
                    _recordStatus = 0;
                }
                NSLog(@"forever here");
                return;
            }
            if (next_beat_frame == total_frames) {
                making_beat = YES;
                oscillatorPosition = 0; // reset the osc position to make them all sound the same
                next_tick_frame = next_beat_frame + frames_between_beats;
                next_beat_frame += _upper_timesig*frames_between_beats;
            }
            total_frames++;
        }
    }];
    // Add the block channel to the audio controller
    [_audioController addChannels:[NSArray arrayWithObject:_blockChannel]];
}
The logic of the code above is: the block runs and total_frames increments. When it reaches end_frame, the block should terminate and everything should be reset. When it reaches next_beat_frame, it plays a beat.
Currently I am trying to use return to terminate the block. In fact, it works. But when I re-execute the block, it won't reach the return statement; instead it executes
total_frames = 0;
next_beat_frame = 0;
next_tick_frame = 0;
oscillatorPosition = 0;
_recordStatus = 0;
and then continues running the block. When it reaches that code again, nothing happens; it just executes the line before the return statement, NSLog(@"forever here");, forever. Why does return not work the second time? If I go to the previous view and come back to this view, the block runs fine again.
Second question: if I am changing the view and I want to terminate the block, what should I do in the block? Or, generally in Objective-C, how can I detect view changes?
Third question: what exactly is the difference between a static and a __block variable in the block?

I fixed this problem by resetting the variables instead of terminating the block. I let the block keep running in the background, and whenever I want to reuse it I reset
total_frames = 0;
next_beat_frame = 0;
next_tick_frame = 0;
oscillatorPosition = 0;
To do that, I changed the variables to __block.
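To make the difference concrete (a minimal sketch, not the exact project code; _needsReset is a hypothetical ivar playing the role _recordStatus plays above): a static local is a single program-wide value that outlives the method and is shared by every block the method ever creates, whereas a __block variable is created per call to beat and is shared between that call and the block it produced, which makes it the right storage class for per-channel state you want to reset.
    __block UInt64 total_frames = 0;
    __block float oscillatorPosition = 0;

    self.blockChannel = [AEBlockChannel channelWithBlock:^(const AudioTimeStamp *time, UInt32 frames, AudioBufferList *audio) {
        if (_needsReset) {              // flag flipped from outside the block
            total_frames = 0;           // __block state is shared with the block,
            oscillatorPosition = 0;     // so the reset takes effect on the next render
            _needsReset = NO;
        }
        // ... generate the metronome audio and advance total_frames ...
    }];

    // Elsewhere (e.g. when the view reappears), instead of terminating the block:
    _needsReset = YES;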

Related

My first PIC32MX ISR not firing, code is hanging

I'm just getting started with a PIC32MX340F12, and MPLABX. My first attempt was to write a timer interrupt, so I worked with the datasheet, compiler manual, and examples and came up with the below. But it doesn't work... the interrupt never fires, and in fact if I leave both the timer interrupt enable (T1IE=1) and the general interrupt enable active ("ei"), it runs for a few seconds then hangs (says "target halted" in debug mode). If I remove either of those, it just runs indefinitely but still no timer interrupt. So I appear to have a pretty bad problem somewhere in my ISR syntax. Does it jump out at anyone?
Like I said I'm just getting started so I'm sure it's a pretty dumb oversight. And as you may notice I like to work as directly as possible with registers and compiler directives (rather than manufacturer supplied functions), I feel like I learn the most that way.
Thanks!
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include "p32mx340f512h.h"
#include <stdint.h>

int x = 0;

int main(int argc, char** argv)
{
    INTCONbits.MVEC = 1;    // turn on multi-vector interrupts
    T1CON = 0;              // set timer control to 0
    T1CONbits.TCKPS = 1;    // set T1 prescaler to 8
    PR1 = 62499;            // set T1 period
    TMR1 = 0;               // initialize the timer
    T1CONbits.ON = 1;       // activate the timer
    IPC1bits.T1IP = 5;      // T1 priority to 5
    IPC1bits.T1IS = 0;      // T1 sub-priority to 0
    IFS0bits.T1IF = 0;      // clear the T1 flag
    IEC0bits.T1IE = 1;      // enable the T1 interrupts
    asm volatile("ei");     // enable interrupts

    while (1)
    {
        x++;
        if (x > 10000)
        {
            x = 0;
        }
    }
    return (EXIT_SUCCESS);
}

bool zzz = false;

void __attribute__((interrupt(IPL5AUTO))) T1Handler(void)
{
    IFS0bits.T1IF = 0;
    zzz = true;
}
Embedded systems are somewhat specialized, and this is a specific one I'm not familiar with.
However, from working with other systems, you may have to associate the interrupt handler function address (T1Handler) with the interrupt it is handling. (Unless the framework you are using does that for you under the covers when building?)
Are all those names you are using automatically mapped for you by the build system?
If not, you may need to call some kind of HW init or framework init at the top of main, before using them.
Some HW init/reset may be needed as well, before the HW can be programmed.
Hope some of this helps.
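For what it's worth, with the XC32 toolchain the usual way to tie a handler to a specific vector is the __ISR macro from <sys/attribs.h> (this assumes XC32 is the compiler in use, which the question doesn't state explicitly):
    #include <sys/attribs.h>   /* provides the __ISR(vector, ipl) macro */

    /* Bind the handler to Timer 1's interrupt vector at priority level 5.
       Without a vector attribute the compiler cannot know this function is
       the Timer 1 ISR, so that vector keeps its default handler. */
    void __ISR(_TIMER_1_VECTOR, IPL5AUTO) T1Handler(void)
    {
        IFS0bits.T1IF = 0;   /* clear the Timer 1 interrupt flag */
        zzz = true;
    }
If the vector is never bound, the default handler (often a trap or an infinite loop) runs when the interrupt fires, which would be consistent with the "runs for a few seconds then hangs" symptom.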

How to update a UILabel in realtime when receiving Serial.print statements

I am using a Bluno microcontroller to send / receive data from an iPhone, and everything is working as it should, but I would like to update the text of a UILabel with the real-time data that is being printed from the Serial.print(numTicks); statement. If I stop the flowmeter the UILabel gets updated with the most current value, but I would like to update this label in realtime. I am not sure if this is a C / Arduino question or more of an iOS / Objective-C question. The sketch I'm loading on my Bluno looks like the following, https://github.com/ipatch/KegCop/blob/master/KegCop-Bluno-sketch.c
And the method in question inside that sketch looks like the following,
// flowmeter stuff
bool getFlow4() {
  // call the countdown function for pouring beer
  // Serial.println(flowmeterPin);
  flowmeterPinState = digitalRead(flowmeterPin);
  // Serial.println(flowmeterPinStatePinState);
  volatile unsigned long currentMillis = millis();
  // if the predefined interval has passed
  if (millis() - lastmillis >= 250) { // Update every 1/4 second
    // disconnect flow meter from interrupt
    detachInterrupt(0); // Disable interrupt when calculating
    // Serial.print("Ticks:");
    Serial.print(numTicks);
    // numTicks = 0; // Restart the counter.
    lastmillis = millis(); // Update lastmillis
    attachInterrupt(0, count, FALLING); // enable interrupt
  }
  if (numTicks >= 475 || valveClosed == 1) {
    close_valve();
    numTicks = 0; // Restart the counter.
    valveClosed = 0;
    return 0;
  }
}
On the iOS / Objective-C side of things I'm doing the following,
- (void)didReceiveData:(NSData *)data Device:(DFBlunoDevice *)dev {
    // setup label to update
    _ticks = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
    [_tickAmount setText:[NSString stringWithFormat:@"Ticks:%@", _ticks]];
    [_tickAmount setNeedsDisplay];
    NSLog(@"ticks = %@", _ticks);
}
Basically I would like to update the value of the UILabel while the flowmeter is working.
UPDATE
I just tested the functionality again with the serial monitor within the Arduino IDE, and I got the same, or at least similar, results to what I saw via Xcode and the NSLog statements. So this leads me to believe something in the sketch is preventing the label from updating in real time. :/ Sorry for the confusion.
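For reference, if the Bluno delegate ever delivers didReceiveData on a background queue (an assumption; the SDK may already call it on the main thread), the label update should be hopped onto the main queue, since UIKit may only be touched there:
    - (void)didReceiveData:(NSData *)data Device:(DFBlunoDevice *)dev {
        NSString *ticks = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
        dispatch_async(dispatch_get_main_queue(), ^{
            // UILabel updates must happen on the main thread
            self.tickAmount.text = [NSString stringWithFormat:@"Ticks:%@", ticks];
        });
    }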

how to get the AudioToolbox MusicTrack to loop in iOS 9.0?

This code works fine for looping a MusicTrack in iOS 8.4, but it will halt the app under iOS 9.0 when setting the sequence with MusicPlayerSetSequence:
var loopInfo = MusicTrackLoopInfo(loopDuration: 1.0,numberOfLoops: 0)
MusicTrackSetProperty(track, UInt32(kSequenceTrackProperty_LoopInfo), &loopInfo, UInt32(sizeofValue(loopInfo)))
Is there another way to get the track to loop in iOS 9?
Others are having a similar problem: https://forums.developer.apple.com/thread/9940. The general idea for a current workaround is to set the playback marker back to 0 once it reaches the end of the track.
For example use this in the method you're using to start your music player:
MusicTrack track = NULL;
MusicTimeStamp trackLen = 0;
UInt32 trackLenLen = sizeof(trackLen);
//Get main track
MusicSequenceGetIndTrack(musicSequence, 0, &track);
//Get length of track
MusicTrackGetProperty(track, kSequenceTrackProperty_TrackLength, &trackLen, &trackLenLen);
//Create UserData for User Event with any data
static MusicEventUserData userData = {1, 0x09};
//Put new user event at the end of the track
MusicTrackNewUserEvent(track, trackLen, &userData);
//Set a callback for when User Events occur
MusicSequenceSetUserCallback(musicSequence, sequenceUserCallback, musicPlayer);
And then you can have a callback function:
static void sequenceUserCallback(void *inClientData,
                                 MusicSequence inSequence,
                                 MusicTrack inTrack,
                                 MusicTimeStamp inEventTime,
                                 const MusicEventUserData *inEventData,
                                 MusicTimeStamp inStartSliceBeat,
                                 MusicTimeStamp inEndSliceBeat)
{
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        MusicPlayerSetTime((MusicPlayer)inClientData, 0.0);
    }];
}
Which will set the player back to zero.

Precise time of audio queue playback finish

I am using Audio Queues to playback audio files. I need precise timing on the finish of last buffer.
I need to notify a function no later than 150ms-200 ms after the last buffer is played...
Through the callback method I know how many buffers are enqueued.
I know the buffer size, and I know how many bytes the last buffer is filled with.
First I initialize a number of buffers and fill the buffers with audio data, then enqueue them. When the Audio Queue needs a buffer to be filled it calls the callback and I fill the buffer with data.
When there is no more audio data available Audio Queue sends me the last empty buffer, so I fill it with whatever data I have:
if (sharedCache.numberOfToTalPackets > 0)
{
    if (currentlyReadingBufferIndex == [sharedCache.baseAudioCache count] - 1) {
        inBuffer->mAudioDataByteSize = (UInt32)bytesFilled;
        lastEnqueudBufferSize = bytesFilled;
        err = AudioQueueEnqueueBuffer(inAQ, inBuffer, (UInt32)packetsFilled, packetDescs);
        if (err) {
            [self failWithErrorCode:err customError:AP_AUDIO_QUEUE_ENQUEUE_FAILED];
        }
        printf("if that was the last free packet description, then enqueue the buffer\n");
        // go to the next item on keepbuffer array
        isBufferFilled = YES;
        [self incrementBufferUsedCount];
        return;
    }
}
When the Audio Queue asks for more data via the callback and I have no more data, I start to count down the buffers. When the buffer count reaches zero, meaning only one buffer is left in flight to be played, I try to stop the audio queue the moment playback is done.
-(void)decrementBufferUsedCount
{
    if (buffersUsed > 0) {
        buffersUsed--;
        printf("buffer on the queue %i\n", buffersUsed);
        if (buffersUsed == 0) {
            NSLog(@"playback is finished\n");
            // end playback
            isPlayBackDone = YES;
            double sampleRate = dataFormat.mSampleRate;
            double bufferDuration = lastEnqueudBufferSize / sampleRate;
            double estimatedTimeNeded = bufferDuration * 1;
            [self performSelector:@selector(stopPlayer) withObject:nil afterDelay:estimatedTimeNeded];
        }
    }
}

-(void)stopPlayer
{
    @synchronized(self)
    {
        state = AP_STOPPING;
    }
    err = AudioQueueStop(queue, TRUE);
    if (err) {
        [self failWithErrorCode:err customError:AP_AUDIO_QUEUE_STOP_FAILED];
    }
    else
    {
        @synchronized(self)
        {
            state = AP_STOPPED;
            NSLog(@"Stopped\n");
        }
    }
}
However, it seems I can't get precise timing here. The above code stops the player early.
If I do the following, the audio cuts out early too:
double bufferDuration = XMAQDefaultBufSize/ sampleRate;
double estimatedTimeNeded=bufferDuration*1;
If I increase the 1 to 2, since the buffer size is big, I get some delay; 1.5 seems to be the optimum value for now, but I don't understand why lastEnqueudBufferSize / sampleRate is not working.
Details of the audio file and buffers:
The audio file has a 22050 Hz sample rate.
#define kNumberPlaybackBuffers 4
#define kAQDefaultBufSize 16384
It is a VBR file format with no bitrate information available.
EDIT:
I found an easier way that gets the same results (+/- 10 ms). After you set up your output queue with AudioQueueNewOutput(), you initialize an AudioQueueTimelineRef to be used in your output callback. (The ticksToSeconds function is included below in my first method.) Don't forget to import <mach/mach_time.h>.
// After AudioQueueNewOutput()
AudioQueueTimelineRef timeLine; // ivar
AudioQueueCreateTimeline(queue, &timeLine);
Then in your output callback you call AudioQueueGetCurrentTime(). Caveat: queue must be playing for valid timestamps. So for very short files you might need to use the AudioQueueProcessingTap method below.
AudioTimeStamp timestamp;
AudioQueueGetCurrentTime(queue, self->timeLine, &timestamp, NULL);
The timestamp ties together the current sample playing with the current machine time. With that info we can get an exact machine time in the future when our last sample will be played.
Float64 samplesLeft = self->frameCount - timestamp.mSampleTime;//samples in file - current sample
Float64 secondsLeft = samplesLeft / self->sampleRate; //seconds of audio to play
UInt64 ticksLeft = secondsLeft / ticksToSeconds(); //seconds converted to machine ticks
UInt64 machTimeFinish = timestamp.mHostTime + ticksLeft; //machine time of first sample + ticks left
Now that we have this future machine time we can use it to time whatever it is that you want to do with some accuracy.
UInt64 currentMachTime = mach_absolute_time();
UInt64 ticksFromNow = machTimeFinish - currentMachTime;
float secondsFromNow = ticksFromNow * ticksToSeconds();
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
//do the thing!!!
printf("Giggety");
});
If GCD's dispatch_after isn't accurate enough, there are ways to set up a precision timer.
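One option (an assumption on my part; the answer above doesn't say which mechanism it has in mind) is to park a dedicated high-QoS block on mach_wait_until(), which takes a deadline in the same mach host-time ticks computed above:
    #include <mach/mach_time.h>

    // Block a background queue until the exact host time of the last sample,
    // then hop to the main queue to act. machTimeFinish comes from the snippet above.
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^{
        mach_wait_until(machTimeFinish);            // wakes very close to the deadline
        dispatch_async(dispatch_get_main_queue(), ^{
            // do the thing!!!
        });
    });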
Using AudioQueueProcessingTap
You can get fairly low response time from an AudioQueueProcessingTap. First you make your callback, which essentially sits in the middle of the audio stream. The MyObject type is just whatever self is in your code (this is ARC bridging to get self inside the function). Inspecting ioFlags tells you when the stream starts and finishes. The ioTimeStamp of an output callback describes the time at which the first sample in the callback will hit the speaker. So if you want to be exact, here's how you do it. I added some convenience functions for converting machine time to seconds.
#import <mach/mach_time.h>

double getTimeConversion() {
    double timecon;
    mach_timebase_info_data_t tinfo;
    kern_return_t kerror;
    kerror = mach_timebase_info(&tinfo);
    timecon = (double)tinfo.numer / (double)tinfo.denom;
    return timecon;
}

double ticksToSeconds() {
    static double ticksToSeconds = 0;
    if (!ticksToSeconds) {
        ticksToSeconds = getTimeConversion() * 0.000000001;
    }
    return ticksToSeconds;
}
void processingTapCallback(void *inClientData,
                           AudioQueueProcessingTapRef inAQTap,
                           UInt32 inNumberFrames,
                           AudioTimeStamp *ioTimeStamp,
                           UInt32 *ioFlags,
                           UInt32 *outNumberFrames,
                           AudioBufferList *ioData) {
    MyObject *self = (__bridge MyObject *)inClientData;
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
    if (*ioFlags == kAudioQueueProcessingTap_EndOfStream) {
        Float64 sampTime;
        UInt32 frameCount;
        AudioQueueProcessingTapGetQueueTime(inAQTap, &sampTime, &frameCount);
        Float64 samplesInThisCallback = self->frameCount - sampTime; // file sampleCount - queue current sample
        // double secondsInCallback = outNumberFrames / (double)self->sampleRate; // outNumberFrames was inaccurate
        double secondsInCallback = samplesInThisCallback / (double)self->sampleRate;
        uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (secondsInCallback / ticksToSeconds());
        [self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
    }
}
-(void)lastSampleDoneAt:(uint64_t)lastSampTime {
    uint64_t currentTime = mach_absolute_time();
    if (lastSampTime > currentTime) {
        double secondsFromNow = (lastSampTime - currentTime) * ticksToSeconds();
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            //do the thing!!!
        });
    }
    else {
        //do the thing!!!
    }
}
You set it up like this after AudioQueueNewOutput and before AudioQueueStart. Notice the passing of bridged self to the inClientData argument. The queue actually holds self as void* to be used in callback where we bridge it back to an objective-C object within the callback.
AudioStreamBasicDescription format;
AudioQueueProcessingTapRef tapRef;
UInt32 maxFrames = 0;
AudioQueueProcessingTapNew(queue, processingTapCallback, (__bridge void *)self, kAudioQueueProcessingTap_PostEffects, &maxFrames, &format, &tapRef);
You could also get the end machine time as soon as the file starts, which is a little cleaner.
void processingTapCallback(
void * inClientData,
AudioQueueProcessingTapRef inAQTap,
UInt32 inNumberFrames,
AudioTimeStamp * ioTimeStamp,
UInt32 * ioFlags,
UInt32 * outNumberFrames,
AudioBufferList * ioData){
MyObject *self = (__bridge MyObject *)inClientData;
AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
if (*ioFlags == kAudioQueueProcessingTap_StartOfStream) {
uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (self->audioDurSeconds / ticksToSeconds());
[self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
}
}
If you use AudioQueueStop in asynchronous mode, then stopping happens after all queued buffers have been played or recorded. See doc.
You're using it in a synchronous mode, where stopping happens ASAP, and playback cuts out immediately, without regard for previously buffered audio data. You want precise timing, but only because audio is cutting off. Right? So rather than go synchronous + add additional timing/callback code, I recommend going asynchronous:
err=AudioQueueStop(queue, FALSE);
From docs:
If you pass false, the function returns immediately, but the audio
queue does not stop until its queued buffers are played or recorded
(that is, the stop occurs asynchronously). Audio queue callbacks are
invoked as necessary until the queue actually stops.
For me this worked really well for what I needed:
stopping the queue in the callback when the data is over, using AudioQueueStop(queue, FALSE), while
listening for the actual stop via the kAudioQueueProperty_IsRunning property (which fires later than the AudioQueueStop() call, when the last buffer actually gets rendered).
After stopping the queue you can prepare the action you need to execute when the audio ends, and when the listener fires, actually execute that action.
I am not sure about the time precision of that event, but for my task it behaved definitely better than using a notification straight from the callback. There is buffering inside the AudioQueue and the output device itself, so the IsRunning listener definitely gives better results as to when the AudioQueue stops playing.
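A minimal sketch of that listener pattern (queue, MyPlayer, and playbackDidFinish are illustrative names, not taken from the answer above):
    // IsRunning flips to 0 only after the last enqueued buffer has actually rendered.
    static void queueIsRunningChanged(void *inUserData,
                                      AudioQueueRef inAQ,
                                      AudioQueuePropertyID inID)
    {
        UInt32 isRunning = 0;
        UInt32 size = sizeof(isRunning);
        AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &isRunning, &size);
        if (!isRunning) {
            MyPlayer *player = (__bridge MyPlayer *)inUserData; // hypothetical class
            [player playbackDidFinish];   // the action prepared after AudioQueueStop(queue, FALSE)
        }
    }

    // Register the listener right after creating the queue:
    AudioQueueAddPropertyListener(queue, kAudioQueueProperty_IsRunning,
                                  queueIsRunningChanged, (__bridge void *)self);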

iOs Play Audio for VOIP App

This is my first post to Stack Overflow. At present I am developing a VoIP app for iOS. I want to do something like this:
// in a thread
while (callIsOnGoing) {
    data = getDataFromNetwork()
    playData()
    sleep(10ms)
}
But the problem is that audio in iOS works on a "pull" model (it uses a callback to get data), while I need to push data to play it. I have tried AudioQueue, but the data I push into a buffer outside of the callback doesn't get played, even though the callback is called.
Again, I have seen the AVCaptureToAudioUnit example by Apple (http://developer.apple.com/library/ios/#samplecode/AVCaptureToAudioUnit/Introduction/Intro.html), where they call AudioUnitRender synchronously in the case of a delay audio unit. I tried something similar for the Remote I/O audio unit, but every time it returns OSStatus -50.
The code is given below
// in a separate thread
do {                                                    // 5
    int data_length = [NativeLibraryHelper GetData:(playBuff)];
    if (data_length == 0) {
    } else {
        double numberOfFrameCount = data_length / player->audioStreamDesc->mBytesPerFrame;
        currentSampleTime += numberOfFrameCount;
        //AudioUnitRenderActionFlags flags = 0;
        AudioTimeStamp timeStamp;
        memset(&timeStamp, 0, sizeof(AudioTimeStamp));
        timeStamp.mSampleTime = currentSampleTime;
        timeStamp.mFlags |= kAudioTimeStampSampleTimeValid;
        AudioUnitRenderActionFlags flags = 0;
        AudioBuffer buffer;
        buffer.mNumberChannels = player->audioStreamDesc->mChannelsPerFrame;
        buffer.mDataByteSize = data_length;
        buffer.mData = malloc(data_length);
        memcpy(buffer.mData, playBuff, data_length);
        AudioBufferList audBuffList;
        audBuffList.mBuffers[0] = buffer;
        audBuffList.mNumberBuffers = 1;
        printf("Audio REnder call back funciotn called with data size %d\n", data_length);
        status = AudioUnitRender(audioUnitInstance, &flags, &timeStamp, 0, numberOfFrameCount, &audBuffList);
        printf("osstatus %d\n", status);
    } // end if else
    CFRunLoopRunInMode(                                 // 6
        kCFRunLoopDefaultMode,                          // 7
        0.25,                                           // 8
        false                                           // 9
    );
    //} while (aqData.mIsRunning);
    [NSThread sleepForTimeInterval:.05];
} while (player->isRunning == YES);
I have been struggling with the audio playback part for more than a month. Please help. Thanks in advance.
One general solution is to have an async network getdata/read function push data to an intermediate buffer or queue, then have the audio callback read from that intermediate buffer (or read silence if the intermediate buffer/queue is empty).
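A rough sketch of that pattern (the sample format, buffer size, and names below are assumptions for illustration, not a drop-in implementation; it presumes mono 16-bit PCM and a Remote I/O unit whose render callback was installed with kAudioUnitProperty_SetRenderCallback):
    #include <AudioToolbox/AudioToolbox.h>

    // Single-producer / single-consumer ring buffer: the network thread writes,
    // the audio render callback reads. No locks, so the callback never blocks.
    #define RING_CAPACITY (44100 * 2)              // roughly 2 seconds of mono 16-bit audio
    static int16_t  ringBuffer[RING_CAPACITY];
    static volatile uint64_t writeIndex = 0;       // advanced only by the network thread
    static volatile uint64_t readIndex  = 0;       // advanced only by the audio callback

    // Network thread: push whatever arrived from the socket.
    void pushSamples(const int16_t *samples, uint32_t count) {
        for (uint32_t i = 0; i < count; i++) {
            ringBuffer[writeIndex % RING_CAPACITY] = samples[i];
            writeIndex++;
        }
    }

    // Remote I/O render callback: pull what is available, pad with silence on underrun.
    static OSStatus renderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData) {
        int16_t *out = (int16_t *)ioData->mBuffers[0].mData;
        for (UInt32 i = 0; i < inNumberFrames; i++) {
            if (readIndex < writeIndex) {
                out[i] = ringBuffer[readIndex % RING_CAPACITY];
                readIndex++;
            } else {
                out[i] = 0;                        // underrun: play silence
            }
        }
        return noErr;
    }
The key point is that nothing is "pushed" into the audio unit: the callback pulls at the hardware's pace and simply consumes whatever the network thread has already deposited.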
