I have a camera app where I am trying to limit the capture length to exactly 15 seconds.
I have tried two different approaches, and neither of them is working to my satisfaction.
The first approach is to fire a repeating timer every second:
self.timer = [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(countTime:) userInfo:[NSDate date] repeats:YES];
- (void)countTime:(NSTimer *)sender {
    NSDate *start = sender.userInfo;
    NSTimeInterval duration = [[NSDate date] timeIntervalSinceDate:start];
    NSInteger time = round(duration);
    if (time > 15) {
        [self capture:nil]; // this stops capture
    }
}
This gives me a 15-second video about 8 times out of 10, with an occasional 16-second one. I have tried using both the NSTimeInterval double and the rounded integer in the comparison, with no apparent difference.
The second approach is to fire a selector once after the desired duration, like so:
self.timer = [NSTimer scheduledTimerWithTimeInterval:15.0f target:self selector:@selector(capture:) userInfo:nil repeats:NO];
This just calls the capture method (which stops camera capture) directly, and it gives me the same results.
Is there something that I am overlooking here?
Because I have tested with a number of tweaked floating-point values as the cap (14.5, 15.0, 15.1, 15.5, 16.0, etc.) and I almost always see a 16-second video after a few tries, I am starting to wonder whether AVFoundation is simply taking a second to flush its buffer.
NSTimer is not guaranteed to fire exactly when you want it to; it is only guaranteed to fire at some point after the scheduled time.
From Apple's docs:
A timer is not a real-time mechanism; it fires only when one of the run loop modes to which the timer has been added is running and able to check if the timer’s firing time has passed. Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds. If a timer’s firing time occurs during a long callout or while the run loop is in a mode that is not monitoring the timer, the timer does not fire until the next time the run loop checks the timer. Therefore, the actual time at which the timer fires potentially can be a significant period of time after the scheduled firing time. See also Timer Tolerance.
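If you stay with a timer-based cut-off, you can at least tell the run loop how much slack you are willing to accept. A minimal sketch, assuming iOS 7 or later where NSTimer exposes a tolerance property:

// One-shot timer with an explicit tolerance. The system may fire it
// anywhere in [15.0, 15.0 + tolerance] seconds (never early), so the
// recording can still run slightly long.
self.timer = [NSTimer scheduledTimerWithTimeInterval:15.0
                                              target:self
                                            selector:@selector(capture:)
                                            userInfo:nil
                                             repeats:NO];
self.timer.tolerance = 0.1;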
But to answer your question: I used to work for a company that capped videos at 15 seconds. I didn't write the video code, but I believe we used AVComposition after the fact to ensure the video was no more than 15 seconds, and even then it could sometimes end up a frame short. See How do I use AVFoundation to crop a video
Thanks to Paul and Linuxious for their comments and answers... and Rory for thinking outside the box (intriguing option).
And yes, it is clear that NSTimer isn't sufficient by itself for this.
In the end, I listen for the captureOutput delegate method to fire, check the length of the asset, and trim it appropriately.
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections error:(NSError *)error
{
    _isRecording = NO;

    AVURLAsset *videoAsset = [AVURLAsset assetWithURL:outputFileURL];
    CMTime length = [videoAsset duration];
    CMTimeShow(length);

    if (CMTimeGetSeconds(length) > 15)
    {
        NSLog(@"Capture Longer Than 15 Seconds - Attempting to Trim");

        Float64 preferredDuration = 15;
        int32_t preferredTimeScale = 30;
        CMTimeRange timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(preferredDuration, preferredTimeScale));

        AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:videoAsset presetName:AVAssetExportPresetHighestQuality];
        exportSession.outputURL = outputFileURL;
        exportSession.outputFileType = AVFileTypeQuickTimeMovie;
        exportSession.timeRange = timeRange;

        NSError *err = nil;
        [[NSFileManager defaultManager] removeItemAtURL:outputFileURL error:&err];
        if (err) {
            NSLog(@"Error deleting File: %@", [err localizedDescription]);
        }
        else {
            [exportSession exportAsynchronouslyWithCompletionHandler:^{
                if (exportSession.status == AVAssetExportSessionStatusCompleted) {
                    NSLog(@"Export Completed - Passing URL to Delegate");
                    if ([self.delegate respondsToSelector:@selector(didFinishRecordingToOutputFileAtURL:error:)]) {
                        [self.delegate didFinishRecordingToOutputFileAtURL:outputFileURL error:error];
                    }
                }
                else if (exportSession.status == AVAssetExportSessionStatusFailed) {
                    NSLog(@"Export Error: %@", [exportSession.error localizedDescription]);
                    if ([self.delegate respondsToSelector:@selector(didFinishRecordingToOutputFileAtURL:error:)]) {
                        [self.delegate didFinishRecordingToOutputFileAtURL:outputFileURL error:exportSession.error];
                    }
                }
            }];
        }
    }
}
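As an aside, if the recording is done with AVCaptureMovieFileOutput, the capture side can also be told to stop on its own via the output's maxRecordedDuration property. A minimal sketch (the 30 timescale is just an assumption; the trimming above is still useful as a safety net):

// Ask the capture output itself to stop at 15 seconds. When the limit is
// reached, captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:
// is called with an AVErrorMaximumDurationReached error and a usable file.
AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
movieOutput.maxRecordedDuration = CMTimeMakeWithSeconds(15, 30);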
Related
I am using AVCapture to record video and store it in the documents directory.
Right now we store the start time of the video recording when the record button is tapped.
However, we observe a delay of some milliseconds between the button tap and the start of the actual recording file.
dispatch_async( sessionQueue, ^{
    if ( ! self.movieFileOutput.isRecording ) {
        // Update the orientation on the movie file output video connection before starting recording.
        AVCaptureConnection *movieFileOutputConnection = [self.movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
        movieFileOutputConnection.videoOrientation = videoPreviewLayerVideoOrientation;

        // Start recording to a temporary file.
        NSString *outputFileName = [NSUUID UUID].UUIDString;
        NSString *outputFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:[outputFileName stringByAppendingPathExtension:@"mov"]];

        dispatch_async(dispatch_get_main_queue(), ^{
            VideoInfo *info = [dbManager getVideoWithId:self.currentVideoID];
            if (info == nil) {
                VideoInfo *tempVid = [[VideoInfo alloc] init];
                tempVid.videoId = (int)[tempVid generateVideoID];
                [dbManager insertVideoDetails:tempVid];
                info = tempVid;
            }
            [appDelegate.realm beginWriteTransaction];
            info.videoDate = [NSString stringWithFormat:@"%lld", [@(floor([[NSDate date] timeIntervalSince1970] * 1000000)) longLongValue]];
            [appDelegate.realm commitWriteTransaction];
        });

        [self.movieFileOutput startRecordingToOutputFileURL:[NSURL fileURLWithPath:outputFilePath] recordingDelegate:self];

        // Setup socket connection
        // [[SocketManager sharedManager] socketThreadStart];
        [[SocketManager sharedManager] setupSocket];
    }
});
Can anyone explain how we can store the exact recording start time of the video (date and time with hours, minutes, seconds, and milliseconds) with millisecond accuracy?
The following is the code where we store the start date just before starting to capture the output.
info.videoDate = [NSString stringWithFormat:@"%lld", [@(floor([[NSDate date] timeIntervalSince1970] * 1000000)) longLongValue]];
[appDelegate.realm commitWriteTransaction];
});
[self.movieFileOutput startRecordingToOutputFileURL:[NSURL fileURLWithPath:outputFilePath] recordingDelegate:self];
Why don't you use the AVCaptureFileOutputRecordingDelegate methods to determine the time of these operations? Check the delegate methods iOS provides:
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didStartRecordingToOutputFileAtURL:(NSURL *)fileURL fromConnections:(NSArray *)connections {
    // WHEN RECORDING STARTED
}

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error {
    // WHEN RECORDING ENDED
}
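For example, a millisecond start timestamp could be taken inside the first callback, the moment AVFoundation reports that recording has actually begun, rather than before startRecordingToOutputFileURL: is called. A minimal sketch (how you persist the value, e.g. into your Realm object, is up to you; only the delegate method and NSDate calls are standard API):

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didStartRecordingToOutputFileAtURL:(NSURL *)fileURL
      fromConnections:(NSArray *)connections
{
    // Wall-clock start time, in milliseconds since 1970, captured when
    // AVFoundation actually starts writing the file.
    long long startMillis = (long long)floor([[NSDate date] timeIntervalSince1970] * 1000.0);
    NSLog(@"Recording actually started at %lld ms", startMillis);
    // Persist startMillis here instead of timestamping before the
    // startRecordingToOutputFileURL:recordingDelegate: call.
}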
Check others here.
Hope it helps.
Cheers.
I am trying to use SFSpeechRecognizer for speech-to-text after speaking a welcome message to the user via AVSpeechUtterance. But randomly, the speech recognition does not start (after the welcome message is spoken) and it throws the error message below.
[avas] ERROR: AVAudioSession.mm:1049: -[AVAudioSession setActive:withOptions:error:]: Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
It works a few times, but I am not clear on why it is not working consistently.
I tried the solutions mentioned in other SO posts, which suggest checking whether any audio players are running. I added that check in the speech-to-text part of the code; it returns false (i.e. no other audio player is running), but the speech-to-text still does not start listening for the user's speech. Can you please guide me on what is going wrong?
I am testing on an iPhone 6 running iOS 10.3.
Below are code snippets used:
TextToSpeech:
- (void)speak:(NSString *)textToSpeak {
    [[AVAudioSession sharedInstance] setActive:NO withOptions:0 error:nil];
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                     withOptions:AVAudioSessionCategoryOptionDuckOthers error:nil];

    [synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];

    AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:textToSpeak];
    utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:locale];
    utterance.rate = (AVSpeechUtteranceMinimumSpeechRate * 1.5 + AVSpeechUtteranceDefaultSpeechRate) / 2.5 * rate * rate;
    utterance.pitchMultiplier = 1.2;

    [synthesizer speakUtterance:utterance];
}

- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer didFinishSpeechUtterance:(AVSpeechUtterance *)utterance {
    // Return success message back to caller
    [[AVAudioSession sharedInstance] setActive:NO withOptions:0 error:nil];
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient
                                     withOptions:0 error:nil];
    [[AVAudioSession sharedInstance] setActive:YES withOptions:0 error:nil];
}
Speech To Text:
- (void)recordUserSpeech:(NSString *)lang {
    NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:lang];
    self.sfSpeechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:locale];
    [self.sfSpeechRecognizer setDelegate:self];
    NSLog(@"Step1: ");

    // Cancel the previous task if it's running.
    if (self.recognitionTask) {
        NSLog(@"Step2: ");
        [self.recognitionTask cancel];
        self.recognitionTask = nil;
    }
    NSLog(@"Step3: ");

    [self initAudioSession];

    self.recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    NSLog(@"Step4: ");

    if (!self.audioEngine.inputNode) {
        NSLog(@"Audio engine has no input node");
    }
    if (!self.recognitionRequest) {
        NSLog(@"Unable to create a SFSpeechAudioBufferRecognitionRequest object");
    }

    self.recognitionTask = [self.sfSpeechRecognizer recognitionTaskWithRequest:self.recognitionRequest resultHandler:^(SFSpeechRecognitionResult *result, NSError *error) {
        BOOL isFinal = NO;
        if (error) {
            [self stopAndRelease];
            NSLog(@"In recognitionTaskWithRequest.. Error code ::: %ld, %@", (long)error.code, error.description);
            [self sendErrorWithMessage:error.localizedFailureReason andCode:error.code];
        }
        if (result) {
            [self sendResults:result.bestTranscription.formattedString];
            isFinal = result.isFinal;
        }
        if (isFinal) {
            NSLog(@"result.isFinal: ");
            [self stopAndRelease];
            // return control to caller
        }
    }];
    NSLog(@"Step5: ");

    AVAudioFormat *recordingFormat = [self.audioEngine.inputNode outputFormatForBus:0];
    [self.audioEngine.inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        //NSLog(@"Installing Audio engine: ");
        [self.recognitionRequest appendAudioPCMBuffer:buffer];
    }];
    NSLog(@"Step6: ");

    [self.audioEngine prepare];
    NSLog(@"Step7: ");

    NSError *err;
    [self.audioEngine startAndReturnError:&err];
}

- (void)initAudioSession
{
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:nil];
    [audioSession setMode:AVAudioSessionModeMeasurement error:nil];
    [audioSession setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:nil];
}

- (void)stopAndRelease
{
    NSLog(@"Invoking SFSpeechRecognizer stopAndRelease: ");
    [self.audioEngine stop];
    [self.recognitionRequest endAudio];
    [self.audioEngine.inputNode removeTapOnBus:0];
    self.recognitionRequest = nil;
    [self.recognitionTask cancel];
    self.recognitionTask = nil;
}
Regarding the logs added, I am able to see all logs up to "Step7" printed.
When debugging on the device, it consistently breaks at the lines below (I have exception breakpoints set), although continuing resumes execution. It also happens the same way during some successful runs.
AVAudioFormat *recordingFormat = [self.audioEngine.inputNode outputFormatForBus:0];
[self.audioEngine prepare];
The reason is that the audio has not completely finished when -speechSynthesizer:didFinishSpeechUtterance: is called, which is why you get this kind of error when calling setActive:NO. You cannot deactivate the audio session, or change any of its settings, while I/O is running. Workaround: wait for several milliseconds (how long is discussed below) and then deactivate the audio session.
A few words about audio playback completion.
It might seem odd at first glance, but I've spent a lot of time researching this issue. When you push the last sound chunk to the device output, you only have an approximate idea of when it will actually finish playing. Look at the AVAudioSession property IOBufferDuration:
The audio I/O buffer duration is the number of seconds for a single audio input/output cycle. For example, with an I/O buffer duration of 0.005 s, on each audio I/O cycle:
You receive 0.005 s of audio if obtaining input.
You must provide 0.005 s of audio if providing output.
The typical maximum I/O buffer duration is 0.093 s (corresponding to 4096 sample frames at a sample rate of 44.1 kHz). The minimum I/O buffer duration is at least 0.005 s (256 frames) but might be lower depending on the hardware in use.
So we can interpret this value as the playback time of one chunk. But there is still a small, uncalculated gap between that point and the actual completion of audio playback (hardware delay). I would say you need to wait about ioBufferDuration * 1000 + delay milliseconds to be sure the audio has finished playing (ioBufferDuration * 1000 because the property is in seconds), where delay is some fairly small value.
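A minimal sketch of that workaround, moving the deactivation out of -speechSynthesizer:didFinishSpeechUtterance: and delaying it by the buffer duration plus a small amount of extra slack (the 0.1 s figure is an assumption, not a documented value):

// Wait roughly one I/O buffer plus some slack before touching the session,
// so the last chunk has (very likely) left the hardware.
NSTimeInterval ioBuffer = [AVAudioSession sharedInstance].IOBufferDuration;
NSTimeInterval delayInSeconds = ioBuffer + 0.1; // 0.1 s of slack is a guess
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    NSError *deactivationError = nil;
    [[AVAudioSession sharedInstance] setActive:NO
                                   withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                         error:&deactivationError];
});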
Moreover, it seems that even Apple's developers are not entirely sure about the audio completion time. A quick look at the newer audio class AVAudioPlayerNode and func scheduleBuffer(_ buffer: AVAudioPCMBuffer, completionHandler: AVFoundation.AVAudioNodeCompletionHandler? = nil):
@param completionHandler called after the buffer has been consumed by the player or the player is stopped. may be nil.
@discussion Schedules the buffer to be played following any previously scheduled commands. It is possible for the completionHandler to be called before rendering begins or before the buffer is played completely.
You can read more about audio processing in Understanding the Audio Unit Render Callback Function (Audio Unit is the low-level API that provides fast access to I/O data).
I use zxing to scan barcodes, but the camera scans so quickly that my method gets flooded with results. How can I slow it down, or add a delay between barcode scans?
Here is my result method:
- (void)captureResult:(ZXCapture *)capture result:(ZXResult *)result {
    if (!result) return;

    // We got a result. Display information about the result onscreen.
    NSString *formatString = [self barcodeFormatToString:result.barcodeFormat];
    NSString *display = [NSString stringWithFormat:@"Scanned!\n\nFormat: %@\n\nContents:\n%@", formatString, result.text];
    [self.decodedLabel performSelectorOnMainThread:@selector(setText:) withObject:display waitUntilDone:YES];

    // Vibrate
    AudioServicesPlaySystemSound(kSystemSoundID_Vibrate);
}
You could record an NSTimeInterval and reject all results for the next 'x' seconds. Example of detecting at most once every half second:
- (void)captureResult:(ZXCapture *)capture result:(ZXResult *)result {
    if ([[NSDate date] timeIntervalSince1970] < _nextUpdateTime) {
        return;
    }
    _nextUpdateTime = [[NSDate date] timeIntervalSince1970] + 0.5;
    // remainder of function.
}
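This assumes _nextUpdateTime is an NSTimeInterval ivar on the view controller, e.g. (the class name here is just a placeholder):

@interface ScannerViewController () {
    // Earliest point (seconds since 1970) at which the next scan result is accepted.
    NSTimeInterval _nextUpdateTime;
}
@end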
I would suggest using the sleep function. Try sleep(timeInSeconds); it will delay the scanner by the number of seconds you pass in.
My log window goes crazy, constantly reloading after a minute. Did I use NSTimer correctly at the end? Is performSelectorOnMainThread the correct approach? Thanks.
- (void)URL
{
    dispatch_async(myQueue, ^{
        NSString *myURL = @"https://url.json";
        NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:myURL]];
        NSLog(@"url: %@", myURL);
        if ((!data) || (data == nil)) // v1.2
        {
            NSLog(@"url: loading ERROR");
            [NSTimer scheduledTimerWithTimeInterval:5 target:self selector:@selector(URL) userInfo:nil repeats:YES];
        } else
        {
            [self performSelectorOnMainThread:@selector(fetchedData:) withObject:data waitUntilDone:YES];
        }
    });
}

- (void)fetchedData:(NSData *)responseData {
    NSLog(@"fetch");
    NSError *error;
    NSDictionary *json = [NSJSONSerialization JSONObjectWithData:responseData // 1
                                                         options:kNilOptions
                                                           error:&error];
    NSNumber *spot = [json objectForKey:@"amount"];
    float spotfloat = [spot floatValue];

    timer = [NSTimer scheduledTimerWithTimeInterval:15 target:self selector:@selector(URL) userInfo:nil repeats:YES];
}
Set repeats to NO, or set an actual maximum for the timer; it's an infinite loop if you don't set a maximum time.
[NSTimer scheduledTimerWithTimeInterval:15 target:self selector:@selector(URL) userInfo:nil repeats:NO];

- (void)fetchedData:(NSData *)responseData {
    NSLog(@"fetch");
    NSError *error;
    NSDictionary *json = [NSJSONSerialization JSONObjectWithData:responseData // 1
                                                         options:kNilOptions
                                                           error:&error];
    NSNumber *spot = [json objectForKey:@"amount"];
    float spotfloat = [spot floatValue];

    if (timer < 60)
    {
        timer = [NSTimer scheduledTimerWithTimeInterval:15 target:self selector:@selector(URL) userInfo:nil repeats:YES];
    }
    else
    {
        [timer invalidate];
    }
}
A couple of observations:
You are repeatedly creating new repeating timers, but never calling invalidate on the old ones. As a result, you will end up with a cascade of timer events, because the old timers keep firing. (A repeating timer continues firing until you explicitly call invalidate; simply nil-ing out or replacing the reference in your property/ivar is not enough.) If you want to replace a repeating timer, make sure to invalidate the old one first.
Generally, though, you'd either create a repeating timer once and let it keep firing, or you'd create non-repeating timer, and at the end of the method, schedule another non-repeating timer. Given that you're dealing with network requests that may take an indeterminate amount of time (i.e. it could still be trying the previous request by the time the next repeating timer fires), I'd suggest using non-repeating timers that schedule the next one at the end of the method. Or, given the hassles in creating timers in background queues, just use dispatch_after.
BTW, timers need a run loop to function properly, so your attempt to create a timer in myQueue if the request failed is unlikely to succeed. If you wanted to schedule a timer from the background queue, the easiest way to do this would be to create the timer with timerWithTimeInterval and then manually add it to the main run loop. (The alternative, of creating a new background NSThread and scheduling your timer on that, is overkill for such a simple problem.)
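A minimal sketch of that pattern, creating the retry timer on the background queue but attaching it to the main run loop (the 5-second retry interval mirrors the question's code):

// Create the timer without scheduling it, then add it to the main run loop
// so it actually fires even though this code runs on a background queue.
NSTimer *retryTimer = [NSTimer timerWithTimeInterval:5.0
                                              target:self
                                            selector:@selector(URL)
                                            userInfo:nil
                                             repeats:NO];
[[NSRunLoop mainRunLoop] addTimer:retryTimer forMode:NSRunLoopCommonModes];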
As an aside, I'd be inclined to run fetchedData on the background queue, too. Just dispatch the UI and/or model update back to the main queue. That way you minimize how much you're doing on the main queue.
I'm unclear as to how you're determining when to stop this process. Maybe you've concluded you don't need that yet, but I'd suggest including some cancellation logic even if you don't avail yourself of it right away.
You're using a shorter delay if you encounter a network problem. That can be a dangerous approach: if your server failed because it was overwhelmed with requests coming in every 15 seconds from too many users, having clients start sending requests every 5 seconds will only make the situation worse.
Frankly, in an ideal scenario, you'd look at the exact cause of the error, and decide the correct behavior at that point. For example, if a request failed because the device doesn't have Internet connectivity, use Reachability to determine when the network is restored, rather than blindly trying every five seconds.
You're sending requests every 5-15 seconds. If you really need that sort of interactivity, you might consider a different architecture (e.g. sockets). That's beyond the scope of this question, but it's something to research and consider.
Anyway, you might consider something like:
- (void)viewDidLoad
{
[super viewDidLoad];
self.myQueue = ...
[self schedulePollWithDelay:0];
}
- (void)viewDidDisappear:(BOOL)animated
{
[super viewDidDisappear:animated];
self.stopPolling = YES;
}
- (void)schedulePollWithDelay:(CGFloat)delay
{
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delay * NSEC_PER_SEC)), self.myQueue, ^{
[self retrieveData];
});
}
-(void)retrieveData
{
if (self.stopPolling)
return;
NSString* myURL= #"https://url.json";
NSData* data = [NSData dataWithContentsOfURL: [NSURL URLWithString:myURL]];
if (!data) {
NSLog(#"url: loading ERROR");
} else {
[self processFetchedData:data];
}
[self schedulePollWithDelay:15];
}
- (void)processFetchedData:(NSData *)responseData {
NSError* error;
NSDictionary* json = [NSJSONSerialization JSONObjectWithData:responseData //1
options:kNilOptions
error:&error];
NSNumber* spot = [json objectForKey:#"amount"];
float spotfloat = [spot floatValue];
dispatch_sync(dispatch_get_main_queue(), ^{
// update your UI or model here
});
}
I have a game that uses a countdown timer, and when the timer is up, you are brought to a Game Over view. I want to add a feature where, if the player taps a button, it will add 1, 2, or 3 more seconds to the timer. I already have the code for the timer (below), but I need to know how to add more time to the counter. The view switch would also need to take the added time into account.
Code:
- (IBAction)Ready:(id)sender {
    [self performSelector:@selector(TimerDelay) withObject:nil afterDelay:1.0];
    [self performSelector:@selector(delay) withObject:nil afterDelay:36.5];
}

- (void)TimerDelay {
    MainInt = 36;
    timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                             target:self
                                           selector:@selector(countDownDuration)
                                           userInfo:nil
                                            repeats:YES];
    if (timer == 0)
    {
        [timer invalidate];
    }
}

- (void)countDownDuration {
    MainInt -= 1;
    seconds.text = [NSString stringWithFormat:@"%i", MainInt];
}

- (void)delay {
    GameOverView1_4inch *second = [self.storyboard instantiateViewControllerWithIdentifier:@"GameOverView1"];
    second.finalScore = self.currentScore;
    [self presentViewController:second animated:YES completion:nil];
}
If you're using a timer to manage the game, using a perform selector at the same time (to end the game or anything else) kind of defeats the point and makes the management very complex. Choose one route and stick with it.
When the timer is running, you can change its fire date using setFireDate:. So you could get the current fire date, add your time to it, and then set the new fire date:
- (void)extendByTime:(NSInteger)seconds
{
    NSDate *newFireDate = [[self.timer fireDate] dateByAddingTimeInterval:seconds];
    [self.timer setFireDate:newFireDate];
}
Then, your button callbacks are something like:
- (void)buttonOnePressed:(id)sender
{
    [self extendByTime:1];
}

- (void)buttonFivePressed:(id)sender
{
    [self extendByTime:5];
}
Once you've removed the performSelector call that invokes delay, your game's end will be defined by MainInt reaching zero.
As an aside, don't do this:
if (timer == 0)
The correct approach is:
if (timer == nil)
And if the timer is nil, there's no point in trying to invalidate it...
It's also a good idea to take a look at the Objective-C naming guidelines.
Based on your recent comment, it seems that you actually want the timer to continue counting at a second interval, but to add time only to the number of seconds remaining. That's even easier and doesn't require any change to the timer fire date.
- (void)extendByTime:(NSInteger)extraSeconds {
    MainInt += extraSeconds;
    seconds.text = [NSString stringWithFormat:@"%i", MainInt];
}
And, to determine when you're done, you need to add a check in countDownDuration:
if (MainInt <= 0) {
    [timer invalidate];
    [self delay];
}
You can keep a reference to the time you started the timer. Then, when you want to add extra time, calculate how much time has passed since the timer started, invalidate the timer, and create a new one whose time interval is the time left on the previous timer plus the extra seconds.
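A minimal sketch of that idea, assuming a startDate property recorded when the timer was first scheduled, a timer property holding the current timer, and the 36-second total from the question:

- (void)extendTimerBy:(NSTimeInterval)extraSeconds {
    // How long the current run has been going.
    NSTimeInterval elapsed = [[NSDate date] timeIntervalSinceDate:self.startDate];
    // What was left on the old timer, plus the bonus time.
    NSTimeInterval remaining = MAX(0, 36.0 - elapsed) + extraSeconds;

    [self.timer invalidate];
    self.timer = [NSTimer scheduledTimerWithTimeInterval:remaining
                                                  target:self
                                                selector:@selector(delay) // the question's end-of-game method
                                                userInfo:nil
                                                 repeats:NO];
}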
Hi, try this. Put the following in your -(void)viewDidLoad method:
NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:1.0f target:self
                                                selector:@selector(countDownTimer) userInfo:nil repeats:YES];
Then create the -(void)countDownTimer method:
- (void)countDownTimer {
    // my method which returns the difference between two dates in my case
    double diff = [self getDateDifference];

    double days = trunc(diff / (60 * 60 * 24));
    double seconds = fmod(diff, 60.0);
    double minutes = fmod(trunc(diff / 60.0), 60.0);
    double hours = fmodf(trunc(diff / 3600.0), 24);

    if (diff > 0) {
        NSString *countDownString = [NSString stringWithFormat:@"%02.0f day(s)\n%02.0f:%02.0f:%02.0f",
                                     days, hours, minutes, seconds];
        // IBOutlet label, added in .h
        self.labelCountDown.text = countDownString;
    } else {
        // stopping the timer
        [timer invalidate];
        timer = nil;
        // do something after countdown ...
    }
}
And you can add a -(double)getDateDifference method, which in my case returns the difference between two dates:
- (double)getDateDifference {
    NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
    NSDate *dateFromString = [[NSDate alloc] init];
    NSDate *now = [NSDate date];
    NSString *myDateString = @"2013-10-10 10:10:10"; // my initial date with time

    // if you want to use only time, then delete the
    // date in myDateString and setDateFormat:@"HH:mm:ss"

    // this line is not required, I used it because I need GMT+2
    [dateFormatter setTimeZone:[NSTimeZone timeZoneForSecondsFromGMT:2 * 60 * 60]];
    [dateFormatter setDateFormat:@"yyyy-MM-dd HH:mm:ss"];

    // get the date
    dateFromString = [dateFormatter dateFromString:myDateString];

    // this line is also not required, I used it because I need GMT+2,
    // so I added two hours in seconds to 'now'
    now = [now dateByAddingTimeInterval:60 * 60 * 2];

    // getting the difference
    double diff = [dateFromString timeIntervalSinceDate:now];

    NSLog(@"myDateString: %@", myDateString);
    NSLog(@"now: %@", now);
    NSLog(@"targetDate: %@", dateFromString);
    NSLog(@"diff: %f", diff);

    return diff;
}
The output is similar to this:
dateString: 2013-10-10 00:20:00
now: 2013-10-10 00:20:00 +0000
target: 2013-10-10 00:20:28 +0000
diff: 28.382786
I hope this was helpful.
Here's a library that you can use.
https://github.com/akmarinov/AMClock