I'm new to Core Audio and I've been banging my head against a brick wall for a while on how to do this, so I was hoping someone might be able to point me in the right direction.
I'm creating an app for an assignment and I want the user to select a file from the iPod library (MPMediaPickerController?) and then perform an FFT on said file to detect the pitch.
I have code working that selects the file and saves its location as an NSURL, and I have code working on OS X that will play a file from a URL, but I can't get that part to work on iOS for reasons that are beyond me.
I've also seen lots of sample code that implements an FFT using the Remote I/O audio unit to fill the buffers, but I can't work out how to do this from the iPod library.
Can anyone help? Ideally, point me to some sample code that shows how best to do some of these tasks? I've looked at previous threads and can't see anything that's quite what I need.
Many thanks in advance!
Since you already have an NSURL for the song, why not use AVFoundation for the FFT part? It's a good fit because AVFoundation can load assets directly from a URL.
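For example, here's a minimal Objective-C sketch of pulling linear PCM out of that NSURL with AVAssetReader so the samples can be handed to an FFT. The output settings and the function name are my own illustrative choices, and the actual FFT call (e.g. vDSP from Accelerate) is only hinted at in a comment:

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// songURL is the NSURL you got from the picker (MPMediaItemPropertyAssetURL).
static void ReadPCMForFFT(NSURL *songURL)
{
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:songURL options:nil];
    NSError *error = nil;
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];

    // Ask for mono 32-bit float PCM so the samples can go straight into an FFT.
    NSDictionary *settings = @{
        AVFormatIDKey               : @(kAudioFormatLinearPCM),
        AVNumberOfChannelsKey       : @1,
        AVLinearPCMBitDepthKey      : @32,
        AVLinearPCMIsFloatKey       : @YES,
        AVLinearPCMIsBigEndianKey   : @NO,
        AVLinearPCMIsNonInterleaved : @NO
    };
    AVAssetReaderAudioMixOutput *output =
        [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:
            [asset tracksWithMediaType:AVMediaTypeAudio] audioSettings:settings];
    [reader addOutput:output];
    [reader startReading];

    CMSampleBufferRef sample = NULL;
    while ((sample = [output copyNextSampleBuffer]) != NULL) {
        CMBlockBufferRef block = CMSampleBufferGetDataBuffer(sample);
        size_t length = CMBlockBufferGetDataLength(block);
        float *samples = malloc(length);
        CMBlockBufferCopyDataBytes(block, 0, length, samples);

        // length / sizeof(float) frames are now in `samples`; feed them to your
        // FFT here (e.g. vDSP_fft_zrip from the Accelerate framework).

        free(samples);
        CFRelease(sample);
    }
}
```

One caveat: the URL you get from MPMediaItemPropertyAssetURL is nil for DRM-protected tracks, so those can't be read this way.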
I'm building a small game prototype, and I'd like to be able to play simple sounds whose length/tone/pitch will vary based on what the user is doing.
This is surprisingly hard to do. The closest resource I found was:
http://www.tmroyal.com/playing-sounds-in-swift-audioengine.html
But this does not actually generate any sound on my device or on the iOS simulator.
Does anyone know of any working code to play ANY procedurally generated audio? A simple sine wave would do.
https://gist.github.com/rgcottrell/5b876d9c5eea4c9e411c
This code, on the other hand, works, and it's beautifully written...
Success!
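For anyone who finds this later, the core idea boils down to filling an AVAudioPCMBuffer with computed samples and looping it on an AVAudioPlayerNode. A minimal Objective-C sketch (the function name, sample rate, and amplitude are arbitrary choices of mine, not taken from the gist):

```objc
#import <AVFoundation/AVFoundation.h>
#import <math.h>

// Kept as globals only so the engine isn't deallocated while it plays.
static AVAudioEngine *engine;
static AVAudioPlayerNode *player;

static void PlaySine(double frequency, double seconds)
{
    // Without an active playback session the engine can run silently on device.
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    engine = [[AVAudioEngine alloc] init];
    player = [[AVAudioPlayerNode alloc] init];
    [engine attachNode:player];

    AVAudioFormat *format =
        [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100.0 channels:1];
    [engine connect:player to:engine.mainMixerNode format:format];

    AVAudioFrameCount frames = (AVAudioFrameCount)(seconds * format.sampleRate);
    AVAudioPCMBuffer *buffer =
        [[AVAudioPCMBuffer alloc] initWithPCMFormat:format frameCapacity:frames];
    buffer.frameLength = frames;

    // Fill the buffer with one sine tone.
    float *channel = buffer.floatChannelData[0];
    for (AVAudioFrameCount i = 0; i < frames; i++) {
        channel[i] = 0.25f * sinf(2.0 * M_PI * frequency * i / format.sampleRate);
    }

    NSError *error = nil;
    [engine startAndReturnError:&error];
    [player scheduleBuffer:buffer atTime:nil options:AVAudioPlayerNodeBufferLoops completionHandler:nil];
    [player play];
}
```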
You can try AudioKit.
It's an audio framework built on top of Core Audio.
In their Continuous Control example they use a simple FM oscillator with controlled parameters.
I'm trying to put together an open source library that allows iOS devices to play files with unsupported containers, as long as the track formats/codecs are supported. e.g.: a Matroska video (MKV) file with an H264 video track and an AAC audio track. I'm making an app that surely could use that functionality and I bet there are many more out there that would benefit from it. Any help you can give (by commenting here or, even better, collaborating with me) is much appreciated. This is where I'm at so far:
I did a bit of research trying to find out how players like AVPlayerHD or Infuse can play non-standard containers and still have hardware acceleration. It seems like they transcode small chunks of the whole video file and play those in sequence instead.
It's a good solution, but if you want to throw that video to an Apple TV, things don't work as planned, since the video is actually a bunch of smaller chunks being played as a playlist. This site has way more info, but at its core, streaming to Apple TV is essentially a progressive download of the MP4/M4V file being played.
I'm thinking a sort of streaming proxy is the way to go. For the playing side of things, I've been investigating AVSampleBufferDisplayLayer (more info here) as a way of playing the video track. I haven't gotten to audio yet. Things get interesting when you think about the AirPlay side of things: by having a "container proxy", we can make any file look like it has the right container without the file size implications of transcoding.
It seems like GStreamer might be a good starting point for the proxy. I need to read up on it; I've never used it before. Does this approach sound like a good one for a library that could be used for App Store apps?
Thanks!
I finally got some extra time to go over GStreamer, especially this article about how it has already been updated to use the hardware decoding provided by iOS 8. So there's no need to develop this from scratch; GStreamer seems to be the answer.
Thanks!
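In case it saves someone else the digging: once the GStreamer iOS SDK is linked in, a playback pipeline can be as small as the sketch below. I'm assuming the gst_ios_init() helper that the SDK's Xcode template generates, and the file URI is just a placeholder:

```objc
#include <gst/gst.h>
#include "gst_ios_init.h"   // generated by the GStreamer iOS SDK's Xcode template

static void PlayWithGStreamer(void)
{
    gst_ios_init();   // registers the statically linked plugins on iOS

    // playbin picks the demuxer/decoders automatically, so an MKV with H.264 +
    // AAC works as long as the matroska plugin is part of the build.
    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "playbin uri=file:///path/to/movie.mkv", &error);
    if (error != NULL) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_error_free(error);
        return;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    // Later: gst_element_set_state(pipeline, GST_STATE_NULL);
    //        gst_object_unref(pipeline);
}
```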
The 'chunked' solution is no longer necessary in iOS 8. You should simply set up a video decode session and pass in NALUs.
https://developer.apple.com/videos/wwdc/2014/#513
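For reference, a rough sketch of the "decode session" part with VideoToolbox: build a CMVideoFormatDescription from the SPS/PPS pulled out of the container, then create a VTDecompressionSession. Wrapping each length-prefixed NALU in a CMSampleBuffer and feeding it to VTDecompressionSessionDecodeFrame (or enqueueing it on an AVSampleBufferDisplayLayer) is left out of this sketch:

```objc
#import <VideoToolbox/VideoToolbox.h>

// Decode callback: receives decompressed frames as CVImageBuffers.
static void DidDecompress(void *outputRefCon, void *sourceFrameRefCon,
                          OSStatus status, VTDecodeInfoFlags infoFlags,
                          CVImageBufferRef imageBuffer,
                          CMTime pts, CMTime duration)
{
    if (status == noErr && imageBuffer != NULL) {
        // Hand the decoded frame off for display or further processing here.
    }
}

// sps/pps are the raw H.264 parameter sets pulled from the container headers.
static VTDecompressionSessionRef CreateDecodeSession(const uint8_t *sps, size_t spsSize,
                                                     const uint8_t *pps, size_t ppsSize)
{
    CMVideoFormatDescriptionRef formatDesc = NULL;
    const uint8_t *const paramSets[2] = { sps, pps };
    const size_t paramSizes[2] = { spsSize, ppsSize };

    OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
        kCFAllocatorDefault, 2, paramSets, paramSizes,
        4 /* length of the NAL unit length prefix */, &formatDesc);
    if (status != noErr) return NULL;

    VTDecompressionOutputCallbackRecord callback = { DidDecompress, NULL };
    VTDecompressionSessionRef session = NULL;
    status = VTDecompressionSessionCreate(kCFAllocatorDefault, formatDesc,
                                          NULL, NULL, &callback, &session);
    CFRelease(formatDesc);
    return (status == noErr) ? session : NULL;
}
```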
I am trying to crop videos both taken in my app and uploaded from the user's photo library. I am looking to crop every video to be the size of the iPhone 5s screen (I know that sounds dumb, but that's what I need to do).
Can I do this using the AV Foundation framework or do I need to use Core Video? I've made multiple attempts with AV Foundation and gotten nowhere.
Also if you could link to any helpful tutorials or code samples that would be greatly appreciated.
I'm using Objective-C and working on an app designated for iOS 7+.
Thanks!
1) Use AVAssetReader to open your video and extract CMSampleBuffers
Link
2) Modify CMSampleBuffer:
Link
3) Create AVAssetWriter and add modified CMSampleBuffers to its input
Link
In that article a CVPixelBuffer is used as the input to the AVAssetWriter via a pixel buffer adaptor. You actually don't need an adaptor: since you already have CMSampleBuffers ready, you can append them straight to the input using the appendSampleBuffer: method, as in the skeleton below.
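To make the shape of that pipeline concrete, here is a bare-bones Objective-C skeleton of steps 1 and 3 (reader to writer via appendSampleBuffer:). The actual per-buffer cropping from step 2 is omitted, and the 640×1136 output size is simply the iPhone 5s screen size from the question:

```objc
#import <AVFoundation/AVFoundation.h>

static void ExportCroppedVideo(NSURL *srcURL, NSURL *dstURL)
{
    AVAsset *asset = [AVAsset assetWithURL:srcURL];
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    NSError *error = nil;

    // 1) Reader hands back decompressed CMSampleBuffers.
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
    AVAssetReaderTrackOutput *output = [AVAssetReaderTrackOutput
        assetReaderTrackOutputWithTrack:videoTrack
                         outputSettings:@{ (id)kCVPixelBufferPixelFormatTypeKey :
                                               @(kCVPixelFormatType_32BGRA) }];
    [reader addOutput:output];

    // 3) Writer re-encodes to H.264 at the iPhone 5s screen size.
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:dstURL
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:&error];
    AVAssetWriterInput *input = [AVAssetWriterInput
        assetWriterInputWithMediaType:AVMediaTypeVideo
                       outputSettings:@{ AVVideoCodecKey  : AVVideoCodecH264,
                                         AVVideoWidthKey  : @640,
                                         AVVideoHeightKey : @1136 }];
    [writer addInput:input];

    [reader startReading];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    dispatch_queue_t queue = dispatch_queue_create("video.crop", NULL);
    [input requestMediaDataWhenReadyOnQueue:queue usingBlock:^{
        while (input.isReadyForMoreMediaData) {
            CMSampleBufferRef sample = [output copyNextSampleBuffer];
            if (sample == NULL) {
                [input markAsFinished];
                [writer finishWritingWithCompletionHandler:^{ /* done */ }];
                break;
            }
            // 2) Crop/transform the buffer here before appending it.
            [input appendSampleBuffer:sample];
            CFRelease(sample);
        }
    }];
}
```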
I'm trying to get some help. I would like to pause/resume a video recording on iOS, but it's very complicated. My goal is to get something like the Instagram app (it records while you press and hold).
I found the "CapturePause" example app, but it's very complicated for me: a lot of byte-stream handling, too complex for me.
I tried to develop a solution using multiple AVCaptureMovieFileOutput instances, but it's very complicated and I am losing my mind in it.
Does anyone know of a framework or a piece of code to do this nicely? I searched on Google but didn't find much.
Thank you,
Arnaud
I was also looking for Instagram-style recording and found a library that supports pausing the recording:
https://github.com/piemonte/PBJVision
I think it may help.
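A rough usage sketch, assuming the method names shown in PBJVision's README (worth double-checking against the current version); the button handler names are just placeholders:

```objc
#import "PBJVision.h"

// In a view controller that adopts <PBJVisionDelegate>:
- (void)setUpCapture
{
    PBJVision *vision = [PBJVision sharedInstance];
    vision.delegate = self;
    vision.cameraMode = PBJCameraModeVideo;
    [vision startPreview];   // attach vision.previewLayer to a view first
}

// Instagram-style press-and-hold recording:
- (void)recordPressed      { [[PBJVision sharedInstance] startVideoCapture];  }
- (void)recordReleased     { [[PBJVision sharedInstance] pauseVideoCapture];  }
- (void)recordPressedAgain { [[PBJVision sharedInstance] resumeVideoCapture]; }
- (void)doneTapped         { [[PBJVision sharedInstance] endVideoCapture];    }

// Delegate callback with the finished file:
- (void)vision:(PBJVision *)vision capturedVideo:(NSDictionary *)videoDict error:(NSError *)error
{
    NSString *videoPath = videoDict[PBJVisionVideoPathKey];
    // Hand the file at videoPath off for saving or uploading.
}
```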
I am trying to create a video player for iOS, but with some additional audio track reading. I have been checking out MPMoviePlayerController, and also AVPlayer in the AV Foundation framework, but it's all kinda vague.
What I am trying to do is play a video (from a local .mp4), and while the movie is playing get the current audio buffer/frames, so I can do some calculations and other (not video/audio related) actions that depend on the currently playing audio. This means that the video should keep on playing, with its audio track, but I also want the live raw audio data for calculations (e.g. getting the amplitude at certain frequencies).
Does anyone have an example or hints on how to do this? Of course I checked out Apple's AV Foundation documentation, but it was not clear enough for me.
After a really (really) long time Googling, I found a blog post that describes MTAudioProcessingTap. Introduced in iOS 6.0, it solves my problem perfectly.
The how-to/blog post can be found here: http://chritto.wordpress.com/2013/01/07/processing-avplayers-audio-with-mtaudioprocessingtap/
I hope it helps someone else now... The only thing popping up for me when Googling (with a lot of different terms) was my own post here, and as long as you don't know MTAudioProcessingTap exists, you don't know what to Google for :-)
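For anyone who lands on this thread instead: the gist of the approach is to create an MTAudioProcessingTap and attach it to the player item's audio mix. A condensed Objective-C sketch (callbacks trimmed to the essentials, error handling omitted; the AttachTap function and its arguments are my own framing, not code from the blog post):

```objc
#import <AVFoundation/AVFoundation.h>
#import <MediaToolbox/MediaToolbox.h>

// Required init/finalize callbacks (can stay empty for a simple tap).
static void TapInit(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut)
{
    *tapStorageOut = clientInfo;
}
static void TapFinalize(MTAudioProcessingTapRef tap) {}

// Called for every audio buffer the player is about to render.
static void TapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
                       MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
                       CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
{
    // Pull the source audio; bufferListInOut then holds the raw samples.
    MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                       flagsOut, NULL, numberFramesOut);
    // Inspect the samples here (e.g. run an FFT). Leaving them untouched means
    // the video keeps playing with its original audio.
}

static void AttachTap(AVPlayerItem *item, AVAssetTrack *audioTrack)
{
    MTAudioProcessingTapCallbacks callbacks = {
        .version = kMTAudioProcessingTapCallbacksVersion_0,
        .clientInfo = NULL,
        .init = TapInit,
        .finalize = TapFinalize,
        .prepare = NULL,
        .unprepare = NULL,
        .process = TapProcess,
    };

    MTAudioProcessingTapRef tap = NULL;
    if (MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                   kMTAudioProcessingTapCreationFlag_PostEffects, &tap) != noErr) {
        return;
    }

    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
    params.audioTapProcessor = tap;

    AVMutableAudioMix *mix = [AVMutableAudioMix audioMix];
    mix.inputParameters = @[params];
    item.audioMix = mix;
    CFRelease(tap);
}
```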