I've made an app that plays music using AVAudioPlayer. It either uploads or downloads songs, writes them to Core Data, then recalls them to play when selected. All fifteen of the songs I've been testing with play normally in both the iPhone's Music app and on my own computer.
However, three of them don't play back in the app. Specifically, I can upload these fifteen songs in any order, clear my Model.sqlite, download them into the app again, and find that the same three just don't play. They do, however, show the right title and artist.
Looking into this, I noticed that the difference is that the non-working files are .m4a. How do I play files of that format with AVAudioPlayer?
EDIT ("Whats "recalling?", what URL do you initialise AVAudioPlayer with?"):
There is a server with songs that the user can access through the app. After choosing which subset S to retrieve, the app then downloads S and writes it to a CoreModel using NSManagedObjectContext. Each song is stored as a separate entity with a unique ID and a relationship to a subset entity (in this case, S).
When I "recall" using the AppDelegate to get the right song using the context, the data is returned as well. I then initialize the AVAudioPlayer like so:
[[AVAudioPlayer alloc] initWithData:(NSData *)[currentSong valueForKey:@"data"] error:nil];
... So I wrote that and then realized that I hadn't actually checked what the error is (silly me). I found that it's OSStatus error 1954115647, which comes back as Unsupported File Type. Looking into this a bit more, I found iPhone: AVAudioPlayer unsupported file type. A solution is presented there as either trimming off bad data at the beginning or initializing from the contents of a URL. Is it possible to find where the data is written in the Core Data store, to feed that as the URL?
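For reference, a minimal sketch of surfacing that error instead of passing nil (currentSong and the "data" attribute are from the setup above):

NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc]
    initWithData:[currentSong valueForKey:@"data"] error:&error];
if (player == nil) {
    // For the bad files this logs OSStatus 1954115647, i.e. 'typ?'
    // (kAudioFileUnsupportedFileTypeError).
    NSLog(@"AVAudioPlayer init failed: %@", error);
}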
EDIT: (Compare files. Are they different?)
Yes, they are. I'm grabbing a sample .m4a file from my server, which was uploaded by the app, and comparing it to the one in iTunes. What I found is that the file is cut off before offset 229404 (out of 2906191 bytes), which starts 20680001 A0000E21. In the iTunes version, 0028D83B 6D646174 lies before those bytes (6D646174 is ASCII for 'mdat', the MPEG-4 media-data atom). Before that is a big block of zeroes, preceded by a big block of data, preceded by iTunes encoding information. At the very top is more encoding information listing the file as M4A.
Are you sure your codec is supported on iOS? AVAudioPlayer ought to play any format that iOS supports; you can read the list of supported formats here: http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/MultimediaPG/UsingAudio/UsingAudio.html#//apple_ref/doc/uid/TP40009767-CH2-SW6
I suggest you try manually adding those files to your device through iTunes and playing them in the iPod app. If they won't play, then the problem is not your code or the SDK, but the format.
How are you recalling them to play? Are you writing them to a temporary file with an .m4a extension? That .m4a extension is probably required.
This is not a direct solution, but you probably shouldn't be saving the blobs in Core Data directly. Write the files to a cached location and save the file paths in Core Data. This will both use the database more efficiently and give you a local file path to hand to AVAudioPlayer, which bypasses the problem. A sketch follows.
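A minimal sketch of that approach (it also gives the file the .m4a extension the previous answer mentions); currentSong and the "data" attribute are from the question, the file name is illustrative:

#import <AVFoundation/AVFoundation.h>

NSData *audioData = [currentSong valueForKey:@"data"];
NSString *cachesDir = NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                                                          NSUserDomainMask, YES)[0];
// Give the file an extension matching its real container format.
NSString *path = [cachesDir stringByAppendingPathComponent:@"currentSong.m4a"];
[audioData writeToFile:path atomically:YES];

NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc]
    initWithContentsOfURL:[NSURL fileURLWithPath:path] error:&error];
if (player == nil) {
    NSLog(@"Playback failed: %@", error);
} else {
    [player play];
}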
Related
I'm working on a mobile application that can perform basic analysis on audio input from the microphone in real time. However, the usual way to do it, using `AVAudioRecorder` as shown in this guide and the API, requires you to save the audio to a file first.
Since the app is meant to stay on for a long time and be used multiple times a day, I want to avoid cluttering the phone with many audio files, or with audio files that grow too big. However, I can't seem to find a way around it. Searching for solutions online always leads to instructions on how to save audio to a file, rather than how to avoid the file and work with some kind of buffer.
Any pointers would be super helpful!
Both the iOS Audio Unit and the Audio Queue APIs allow one to process short buffers of audio input in real-time without saving to a file.
You can also use a tap on the AVAudioEngine. See Apple's documentation: https://developer.apple.com/library/ios/samplecode/AVAEMixerSample/Introduction/Intro.html
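A minimal sketch of the tap approach (iOS 8+), assuming you only need the raw input buffers for analysis:

#import <AVFoundation/AVFoundation.h>

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioInputNode *input = engine.inputNode;
AVAudioFormat *format = [input outputFormatForBus:0];

// Deliver ~0.1 s buffers of microphone input to the block; nothing touches disk.
[input installTapOnBus:0
            bufferSize:4096
                format:format
                 block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    float *samples = buffer.floatChannelData[0];
    // ... analyze samples[0 .. buffer.frameLength - 1] here ...
}];

NSError *error = nil;
[engine startAndReturnError:&error];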
You can use /dev/null as the path in the AVAudioRecorder instance. This way it will not save to a file, but just discard the data.
var url = NSUrl.FromString("/dev/null");
var recorder = new AVAudioRecorder(url, settings, out error);
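The snippet above is Xamarin C#; a rough Objective-C equivalent, with an illustrative settings dictionary:

#import <AVFoundation/AVFoundation.h>

NSURL *url = [NSURL fileURLWithPath:@"/dev/null"];
// Illustrative settings; use whatever format your analysis needs.
NSDictionary *settings = @{ AVFormatIDKey: @(kAudioFormatLinearPCM),
                            AVSampleRateKey: @44100.0,
                            AVNumberOfChannelsKey: @1 };
NSError *error = nil;
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:url
                                                        settings:settings
                                                           error:&error];
recorder.meteringEnabled = YES;   // lets you poll input levels for analysis
[recorder prepareToRecord];
[recorder record];                // /dev/null discards the data; nothing accumulates on disk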
I've got experience with building iOS apps but don't have experience with video. I want to build an iPhone app that streams real time video to a server. Once on the server I will deliver that video to consumers in real time.
I've read quite a bit of material. Can someone let me know if the following is correct and fill in the blanks for me.
To record video on the iPhone I should use the AVFoundation classes. When using AVCaptureSession, through the delegate method captureOutput:didOutputSampleBuffer:fromConnection: I can get access to each frame of video. Now that I have the video frame, I need to encode it.
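For reference, that delegate callback (part of AVCaptureVideoDataOutputSampleBufferDelegate) looks like this; the encoding step is the open question:

// Called once per captured frame by the AVCaptureVideoDataOutput.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // sampleBuffer holds one uncompressed frame; hand it to the encoder here.
}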
I know that the AVFoundation classes only offer H.264 encoding via AVAssetWriter, and not via a class that easily supports streaming to a web server. Therefore, I am left with writing the video to a file.
I've read other posts saying that you can use two AVAssetWriters to write 10-second blocks and then NSStream those blocks to the server. Can someone explain how to code two AVAssetWriters working together to achieve this? If anyone has code, could they please share it?
You are correct that the only way to use the hardware encoders on the iPhone is by using the AVAssetWriter class to write the encoded video to a file. Unfortunately the AVAssetWriter does not write the moov atom to the file (which is required to decode the encoded video) until the file is closed.
Thus one way to stream the encoded video to a server would be to write 10 second blocks of video to a file, close it, and send that file to the server. I have read that this method can be used with no gaps in playback caused by the closing and opening of files, though I have not attempted this myself.
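A rough sketch of that segment-rotation idea, assuming frames arrive from a capture delegate; nextSegmentURL and uploadSegment: are hypothetical helpers you would implement, and the output settings are illustrative:

#import <AVFoundation/AVFoundation.h>

// Rotate AVAssetWriters every ~10 s so each closed segment file
// (with its moov atom written) can be shipped to the server.
@interface SegmentedWriter : NSObject
@property (nonatomic, strong) AVAssetWriter *writer;
@property (nonatomic, strong) AVAssetWriterInput *videoInput;
@property (nonatomic, assign) CMTime segmentStart;
@end

@implementation SegmentedWriter

- (NSURL *)nextSegmentURL {   // hypothetical: a unique temp file per segment
    NSString *name = [NSString stringWithFormat:@"seg-%@.mp4", [[NSUUID UUID] UUIDString]];
    return [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:name]];
}

- (void)uploadSegment:(NSURL *)url {
    // hypothetical: POST the finished segment to the server, then delete it
}

- (void)startSegmentAtTime:(CMTime)time {
    NSError *error = nil;
    self.writer = [AVAssetWriter assetWriterWithURL:[self nextSegmentURL]
                                           fileType:AVFileTypeMPEG4
                                              error:&error];
    self.videoInput = [AVAssetWriterInput
        assetWriterInputWithMediaType:AVMediaTypeVideo
                       outputSettings:@{ AVVideoCodecKey: AVVideoCodecH264,
                                         AVVideoWidthKey: @640,
                                         AVVideoHeightKey: @480 }];
    self.videoInput.expectsMediaDataInRealTime = YES;
    [self.writer addInput:self.videoInput];
    [self.writer startWriting];
    [self.writer startSessionAtSourceTime:time];
    self.segmentStart = time;
}

// Call this from captureOutput:didOutputSampleBuffer:fromConnection:
- (void)appendSampleBuffer:(CMSampleBufferRef)buffer {
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(buffer);
    if (self.writer == nil) {
        [self startSegmentAtTime:pts];
    } else if (CMTimeGetSeconds(CMTimeSubtract(pts, self.segmentStart)) >= 10.0) {
        AVAssetWriter *finished = self.writer;
        [self.videoInput markAsFinished];
        [finished finishWritingWithCompletionHandler:^{  // moov atom is written on close
            [self uploadSegment:finished.outputURL];
        }];
        [self startSegmentAtTime:pts];
    }
    if (self.videoInput.isReadyForMoreMediaData) {
        [self.videoInput appendSampleBuffer:buffer];
    }
}

@end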
I found another way to stream video here.
This example opens two AVAssetWriters. On the first frame it writes to both files, but immediately closes one of them so the moov atom gets written. Then, with that moov atom data, the second file can be used as a pipe to get a stream of encoded video data. The example only covers sending video data, but it is very clean, easy-to-understand code that helped me figure out how to deal with many issues with video on the iPhone.
I have a little app that works with iCloud. It stores audio files in the cloud. I noticed that loading the audio files from a second device doesn't work immediately. So I implemented
- (BOOL)downloadFileIfNotAvailable:(NSURL *)file
That helped, but it still works sluggishly.
I wanted to speed up the download on other iCloud devices by wrapping the audio file in a UIDocument. Is this even possible? I could store the file contents in an NSData, but is there a point (seeing as AVAudioPlayer wants a URL)? Is there another way for me to speed up the synchronization?
Thanks
You can use NSFileCoordinator's coordinateReadingItemAtURL:options:error:byAccessor: method for this. Take a look at the answer to this SO question.
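A minimal sketch of such a coordinated read, assuming cloudURL points at the ubiquitous item and self.player is a strong property (otherwise the player is deallocated and playback stops); run it off the main thread, since the call blocks until the item is readable:

NSFileCoordinator *coordinator = [[NSFileCoordinator alloc] initWithFilePresenter:nil];
NSError *coordinationError = nil;
[coordinator coordinateReadingItemAtURL:cloudURL   // assumed iCloud file URL
                                options:0
                                  error:&coordinationError
                             byAccessor:^(NSURL *newURL) {
    NSError *playerError = nil;
    // self.player is assumed to be a strong property on this class.
    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:newURL
                                                         error:&playerError];
    [self.player play];
}];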
I'm working on a small iPhone app which streams movie content over a network connection using regular sockets. The video is in H.264 format. However, I'm having difficulties playing/decoding the data. I've considered using FFmpeg, but the license makes it unsuitable for the project. I've been looking into Apple's AVFoundation framework (AVPlayer in particular), which seems to be able to handle H.264 content; however, I'm only able to find methods that initiate the movie from a URL, not from a memory buffer streamed from the network.
I’ve been doing some tests to make this happen anyway, using the following approaches:
Play the movie using a regular AVPlayer. Every time data is received on the network, it's written to a file using fopen in append mode. The AVPlayer's asset is then reloaded/recreated with the updated data. There seem to be two issues with this approach: firstly, the screen goes black for a short moment while the first asset is unloaded and the new one loaded. Secondly, I do not know exactly where playback stopped, so I'm unsure how to find the right place to start playing the new asset from.
The second approach is to write the data to a file as in the first approach, but with the difference that the data is loaded into a second asset. An AVQueuePlayer is then used, where the second asset is inserted/queued in the player and called up once buffering is done. The first asset can then be unloaded without a black screen. However, with this approach it's even more troublesome (than in the first approach) to find out where to start playing the new asset from.
Has anyone done something like this and made it work? Is there a proper way of doing this using AVFoundation?
The official method of doing this is the HTTP Live Streaming format, which supports multiple quality levels (among other things) and automatically switches between them (e.g., if the user moves from WiFi to cellular).
You can find the docs here: Apple Http Streaming Docs
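On the playback side, AVPlayer consumes an HLS playlist URL directly; a minimal sketch (the .m3u8 URL is a placeholder):

#import <AVFoundation/AVFoundation.h>

NSURL *streamURL = [NSURL URLWithString:@"https://example.com/live/stream.m3u8"];
AVPlayer *player = [AVPlayer playerWithURL:streamURL];
[player play];   // AVPlayer switches between variant streams on its own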
I was looking for information on http://developer.apple.com/library/ios as well as on https://stackoverflow.com/, but could not find a simple and elegant solution.
I will describe the key problem: I need to get an MP3 file from the iPod media library and increase its volume. The main problems arise when retrieving the file and playing it.
But these questions are not resolved:
How do I change the volume and re-encode the MP3 file so that the volume change is permanent? The solutions given in
iOS: Create an MP3 on device
Xcode, building and dylibs
Trouble playing mp3s after id3 image edit
do not strike me as simple or good.
How do I replace the files in the user's iTunes library with the ones my program creates? Having to force the user to sync the device and manually drag and drop files into the library disappoints me.
Are there any comments or suggestions? I would appreciate it.
Re-encoding would cause a decrease of the audio quality. The good news is that you don't need to do this: There is a feature called "Sound Check" built into iTunes that ensures that all your songs are played with the same volume. iTunes scans the songs and stores the volume information inside the ID3 tags. For more information on this, read here: http://en.wikipedia.org/wiki/ReplayGain
This also tells you how to implement it on iOS if you still want to do it.
However, there is no way to sync your changes back to your iTunes library.
I found that in iTunes Music Library.xml there is a Volume Adjustment key, and I can boost the volume in iTunes via this =). This suits me much better, since it avoids the encoding machinery of LAME/ffmpeg etc. Here is a sample track entry:
<key>2009</key>
<dict>
	<key>Track ID</key><integer>2009</integer>
	<key>Name</key><string>Standing On The Shore</string>
	<key>Artist</key><string>Empire Of The Sun</string>
	<key>Album Artist</key><string>Empire Of The Sun</string>
	<key>Album</key><string>Walking On A Dream</string>
	<key>Genre</key><string></string>
	<key>Kind</key><string>MPEG audio file</string>
	<key>Size</key><integer>10564118</integer>
	<key>Total Time</key><integer>263836</integer>
	<key>Track Number</key><integer>1</integer>
	<key>Date Modified</key><date>2011-11-26T19:52:01Z</date>
	<key>Date Added</key><date>2011-09-23T20:59:24Z</date>
	<key>Bit Rate</key><integer>320</integer>
	<key>Sample Rate</key><integer>44100</integer>
	<key>Volume Adjustment</key><integer>192</integer>
	<key>Play Count</key><integer>12</integer>
	<key>Play Date</key><integer>3402675506</integer>
	<key>Play Date UTC</key><date>2011-10-28T17:38:26Z</date>
	<key>Skip Count</key><integer>2</integer>
	<key>Skip Date</key><date>2011-11-06T10:15:51Z</date>
	<key>Rating</key><integer>100</integer>
	<key>Album Rating</key><integer>60</integer>
	<key>Album Rating Computed</key><true/>
	<key>Persistent ID</key><string>2CF1305AEEF0FCDB</string>
	<key>Track Type</key><string>File</string>
	<key>Location</key><string>file://localhost/Users/wins/Music/iTunes/iTunes%20Media/Music/Empire%20Of%20The%20Sun/Walking%20On%20A%20Dream/01%20Standing%20On%20The%20Shore.mp3</string>
	<key>File Folder Count</key><integer>5</integer>
	<key>Library Folder Count</key><integer>1</integer>
</dict>