I have an AVComposition containing multiple video and audio tracks. So far it's purely programmatic, with no GUI. I want the user to be able to push one track a couple of frames back or forth at will by clicking a button, e.g. a button titled "-10 frames" pushes a track 10 frames back, while "+10 frames" pushes it 10 frames forward. I can't find any way to actually move a track after it has been added using insertTimeRange:ofTrack:atTime:.
I tried removing it and re-adding it like this:
[secondAudioTrack removeTimeRange:CMTimeRangeMake(kCMTimeZero, kCMTimeIndefinite)];
[secondAudioTrack insertTimeRange:range ofTrack:[[secondAsset tracksWithMediaType:AVMediaTypeAudio] firstObject] atTime:newTime error:nil];
But it's not actually moving (I might have done something wrong, though), and either way this feels like a hacky way of doing it. Does anyone know of a proper way to give a new time to an already-added track in an AVComposition?
I don't think there is one. The complete timeline should be planned out before you build the composition: let the user provide the information, store it in your own model, and only run the composition step once you have everything you need, starting clean each time. You should never have to change the time range inside an AVMutableCompositionTrack after the fact.
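As a hedged sketch of what "start clean each time" might look like: keep a model object per track (here a hypothetical `TrackInfo` holding the source track, its time range, and a per-track frame offset that the +/-10 buttons adjust), and rebuild the whole composition from that model whenever a button is pressed. The 30 fps timescale is an illustrative assumption.

```
// Hypothetical sketch — TrackInfo, trackInfos, and offsetFrames are
// illustrative model names, not part of the original question.
- (AVMutableComposition *)rebuiltComposition {
    AVMutableComposition *composition = [AVMutableComposition composition];
    for (TrackInfo *info in self.trackInfos) {
        AVMutableCompositionTrack *compTrack =
            [composition addMutableTrackWithMediaType:info.mediaType
                                     preferredTrackID:kCMPersistentTrackID_Invalid];
        // Convert the user's frame offset into a CMTime (assuming 30 fps).
        CMTime offset = CMTimeMake(info.offsetFrames, 30);
        CMTime startTime = CMTimeAdd(info.originalStartTime, offset);
        NSError *error = nil;
        [compTrack insertTimeRange:info.timeRange
                           ofTrack:info.assetTrack
                            atTime:startTime
                             error:&error];
        if (error) {
            NSLog(@"Failed to insert track: %@", error);
        }
    }
    return composition;
}
```

Each "+10 frames" / "-10 frames" button then only mutates `offsetFrames` on the model and calls this method again, so the composition itself is never edited in place.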
Related
I am making an app that allows the user to add captions to their photos. I'd like to be able to allow the user to save / share 3 of their creations, and after they share for the third time, shut down the app until they purchase more credits.
I'm trying to think of ways the user could get around this to use my app for free all the time. The only way I can think of for the user to save a photo manually is to take a screenshot, which I know how to detect, so I'm not worried about that. Is there anything else I'm not thinking of?
You can do it in an indirect way, like the Snapchat app: taking a screenshot interrupts any screen touches. See this for the solution.
You can trap the screenshot event, and if they do take a screenshot you can delete it.
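For the detection part, a minimal sketch: on iOS 7 and later the system posts a notification when the user takes a screenshot. Note that actually deleting the saved image afterwards would require Photos library access and the user's permission, so that half of the suggestion is not guaranteed to work.

```
// Sketch: observe the system screenshot notification (iOS 7+).
[[NSNotificationCenter defaultCenter]
    addObserverForName:UIApplicationUserDidTakeScreenshotNotification
                object:nil
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
                NSLog(@"User took a screenshot");
                // e.g. decrement a credit, or warn the user here
            }];
```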
You can display a watermark and remove it only after they buy credits. Here is a stackoverflow link to add watermarks.
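A hedged sketch of the watermark idea: draw the text onto the image with Core Graphics before sharing, and skip this step for paying users. The text, font, and position here are illustrative choices.

```
// Sketch: stamp a semi-transparent text watermark onto a UIImage.
- (UIImage *)watermarkedImage:(UIImage *)image withText:(NSString *)watermarkText {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    NSDictionary *attrs = @{
        NSFontAttributeName: [UIFont boldSystemFontOfSize:32],
        NSForegroundColorAttributeName:
            [[UIColor whiteColor] colorWithAlphaComponent:0.5]
    };
    // Bottom-left corner; adjust to taste.
    [watermarkText drawAtPoint:CGPointMake(20, image.size.height - 60)
                withAttributes:attrs];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```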
I have an app that, on start, checks the user's position and then gets the weather for that spot, mainly wind direction and speed.
It does the normal check to see if it has an Internet connection, but I found that if there is a connection but it's very slow, the app freezes on the launch screen (since it does the check on startup).
I have a class that does this, which is called at startup, after which a home screen is loaded. Looking around, GCD seems the right way to go, but can I get the answer to be displayed in a label on the home screen when it has finished getting the data? The main thread would have left, or rather bypassed, that class and arrived at the main screen by then.
Would I need to use something like Notification Center to get the label updated and reloaded?
Thanks for any thoughts or code snippets.
Jorgen
PS. I am trying to keep the app iOS 5.1 to keep old iPads happy.
GCD seems the right way to go, but can I get the answer to be displayed in a label on the home screen when it has finished getting the data? The main thread would have left, or rather bypassed, that class and arrived at the main screen. Would I need to use something like Notification Center to get the label updated and reloaded?
Yes, I think you're on a very good track here. Let's keep the two issues separate, though:
After doing your background work, still in GCD, you're going to come back onto the main thread because you now want to update the interface. That's easy and straightforward.
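The standard GCD shape for this (works on iOS 5.1) looks roughly like the following. The `weatherFetcher`, `WeatherData`, and `windLabel` names are assumptions standing in for whatever your class actually exposes.

```
// Do the slow network work off the main thread, then hop back to update UI.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Slow, possibly blocking work: fetch the weather data.
    WeatherData *weather = [self.weatherFetcher fetchWeatherForCurrentLocation];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main thread: safe to touch UIKit here.
        self.windLabel.text = [NSString stringWithFormat:@"%@ at %.0f kn",
                               weather.windDirection, weather.windSpeed];
    });
});
```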
If you have a communication problem, a notification can be an excellent solution. You can broadcast the need to update that label, and if the correct view controller exists and is listening, it will get that information.
Having said that, though, you should think about your architecture, since there may be a better way than a notification. Once you are back on the main thread, why are you not in a place where you have a way to know whether the correct view controller exists, and to talk to it directly? I'm not saying the notification is bad/wrong! I've used this solution myself, and a notification is quite a standard way to come back from, say, an NSOperation. I'm just saying, give it a little thought.
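If you do go the notification route, the two sides look roughly like this. The notification name and `weatherDidUpdate:` selector are illustrative, not anything standard.

```
// Posting side, once the data is ready (post from the main thread):
[[NSNotificationCenter defaultCenter]
    postNotificationName:@"WeatherDidUpdateNotification"
                  object:nil
                userInfo:@{@"weather": weather}];

// Listening side, in the home screen view controller:
- (void)viewDidLoad {
    [super viewDidLoad];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(weatherDidUpdate:)
                                                 name:@"WeatherDidUpdateNotification"
                                               object:nil];
}

- (void)weatherDidUpdate:(NSNotification *)note {
    WeatherData *weather = note.userInfo[@"weather"];
    self.windLabel.text = weather.summary; // update the label
}

- (void)dealloc {
    // Stop observing when the controller goes away.
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}
```

The nice property here is that if the home screen doesn't exist yet, the notification is simply dropped, which matches the "if the correct view controller exists and is listening" behavior described above.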
Nuance's DragonMobile component apparently turns off VoiceOver announcements between the initial call to SKRecognizer's initWithType:detection:language:delegate and the component's call to recognizerDidFinishRecording:. It makes some sense that they do this, since they don't want the VoiceOver announcements to be picked up by the mic and transcribed.
The problem is that there's usually a 1-2 second gap between the initialization of the recognizer and the initial call to recognizerDidBeginRecording:. In order to prevent the user's first few words from getting cut out of the transcription, it's necessary to use recognizerDidBeginRecording: to indicate to the user that they should start speaking (i.e. you can't just have them hit the mic button and start speaking immediately).
My problem is that since DragonMobile turns off VoiceOver as soon as initWithType: is called, I have no way of indicating to a VoiceOver user that they should begin talking at the appropriate time.
Found something of a workaround: DragonMobile allows you to specify SKEarcons, which are audio files that play whenever recording is started, stopped or canceled. I'm going to record VoiceOver making the announcements that I need and then use these recordings as the earcons, so that it will sound like the rest of VoiceOver.
According to a Nuance technical rep I just spoke to, DragonMobile does indeed take over the audio layer and suppress any output during recording, and they don't expose any way around this other than the earcons.
I am working on a research project that involves user interaction with an iPod. The application runs for approximately 10 hours, and I need to record the following information:
1) How many times was the screen touched?
2) For each touch, where was it touched? (Inside the button vs. outside)
3) What time was it touched? This would preferably be the time of day, but I can do back-calculations if this is not possible.
This information needs to be saved, and exported to a file that can be manipulated in Matlab, Igor, Excel, etc.
The application itself is very simple. A button in the middle of the screen flashes while playing a sound. When a touch event occurs inside the button, the flashing and sound stop for 30 seconds.
My question is similar to this one from 2010: How to record all of the user's touches in iPhone app. The asker, however, did not provide the details for how he accomplished this, and I'm afraid I need a little more guidance than is given.
I'd be grateful for some advice, or if you could point me to the appropriate resources. I would especially appreciate it if you could also let me know the general area where the code should go. I'm still getting the hang of objective-C.
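One common technique for logging every touch in an app, offered as a hedged sketch, is to subclass UIWindow and override -sendEvent:, since every touch passes through the window before reaching any view. You would use this subclass as your app's main window (in the app delegate, or in the storyboard); the CSV-style logging shown here is illustrative.

```
// Sketch: a UIWindow subclass that sees every touch in the app.
@interface TouchLoggingWindow : UIWindow
@end

@implementation TouchLoggingWindow
- (void)sendEvent:(UIEvent *)event {
    for (UITouch *touch in [event allTouches]) {
        if (touch.phase == UITouchPhaseBegan) {
            CGPoint point = [touch locationInView:self];
            // Record wall-clock time and location; in a real app you would
            // append these lines to a file for export to Matlab/Igor/Excel.
            NSLog(@"%@,%f,%f", [NSDate date], point.x, point.y);
        }
    }
    [super sendEvent:event]; // don't swallow the event — pass it on
}
@end
```

Whether a touch landed inside the button can be determined by hit-testing the point against the button's frame, converted to the same coordinate space.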
I am developing a tic tac toe game for iOS, and I am using a combination of UIButtons and UIImageViews to allow for user interaction and to display the moves made. My problem is that the buttons continue to accept user input before the CPU makes its move, which breaks my game logic. I have made several attempts to toggle the userInteractionEnabled property, but I have only been able to turn it off. The engine that gets everything started in the game is my buttonPressed method. I also toggle the userInteractionEnabled property within this method, and therein lies my problem: How do I re-enable the property after disabling user interaction? Is there a method that is called in between events that I can override?
I have searched the web and I have searched through the developer documentation provided by Apple and I found information on the touchesBegan and touchesEnded methods. However, from what I understand, those methods need to be explicitly called which brings me back to my original problem of not being able to call those functions without the user's interaction.
If anyone can help, I would greatly appreciate it! I have been racking my brain over this for the past couple of weeks and I am just not seeing a solution.
I'd think that for a game like tic-tac-toe, calculating the countermove should be so fast that it can be done immediately in response to the first button press. If you're doing something complicated to calculate the next move, like kicking off a thread, you might want to reconsider that.
Let's say, though, that your game is something like chess or go, where coming up with a countermove might take a bit longer. Your view controller should have a method to make a move for the current player, let's call it -makeMove:. Your -buttonPressed action should call that method to make a move for the user. In response, -makeMove: should update the state of the game, switch the current player to the next player. If the new current player is the computer, it should then disable the controls and start the process of calculating the next move. Let's imagine that's done by invoking some NSOperation, so that coming up with the next move is an asynchronous task. Once the operation has come up with a move, it should again invoke -makeMove: (by calling -performSelectorOnMainThread:), which will again update the game state and the current player. This time, though, it should see that the new current player is not the computer, and so it should re-enable the controls.
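The flow described above might be sketched like this. All names (`Move`, `game`, `engine`, `boardView`, `moveQueue`) are illustrative; the answer suggests -performSelectorOnMainThread:, and hopping back via the main operation queue shown here is an equivalent alternative.

```
// Sketch of the -makeMove: pattern: one method drives both players.
- (IBAction)buttonPressed:(UIButton *)sender {
    [self makeMove:[self moveForButton:sender]]; // the user's move
}

- (void)makeMove:(Move *)move {
    [self.game applyMove:move];
    [self refreshBoard];
    if (self.game.currentPlayer == PlayerComputer) {
        self.boardView.userInteractionEnabled = NO; // block taps while thinking
        [self.moveQueue addOperationWithBlock:^{
            Move *reply = [self.engine bestMoveForGame:self.game];
            // Hop back to the main thread before touching UIKit.
            [[NSOperationQueue mainQueue] addOperationWithBlock:^{
                [self makeMove:reply]; // re-enters with the human as next player
            }];
        }];
    } else {
        self.boardView.userInteractionEnabled = YES; // human's turn again
    }
}
```

Because -makeMove: is the single choke point, enabling and disabling the controls always happens in exactly one place, which answers the original "how do I re-enable it?" problem.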