I'm working on an app that will play several audio files. I've gotten that working without any trouble, but I'm not sure what file format to use. Right now I am using .wav files and one .mp3. Is there a recommended file type? I don't know how the app is packaged for the App Store; should the audio be compressed or uncompressed?
Thank you!
It depends on what you're trying to accomplish.
Personally, I favor compression unless quality is an issue. MP3, while lossy, is my preferred default: it's a standard format, easy to work with, can be high quality, and iOS decodes it efficiently.
However, if you need higher quality, AAC or uncompressed audio can be better. Also note that an MP3 can take a fraction of a second to start playing because it has to be decoded first; that may or may not be an issue if your audio is tied to a UI event.
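If that decode delay matters, one common mitigation is to create the player ahead of time and prime it. A minimal Swift sketch, assuming a bundled file named "tap.mp3" (the name is just a placeholder):

    import AVFoundation

    // Create and prime the player up front so a later UI event can start
    // playback without waiting on the MP3 decode.
    let url = Bundle.main.url(forResource: "tap", withExtension: "mp3")!
    let player = try! AVAudioPlayer(contentsOf: url)
    player.prepareToPlay()   // pre-rolls the decoder and fills buffers

    // Later, on the UI event:
    player.play()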
An app bundle is the most common way of packaging executable code (though not the only way).
I recommend reading the following to understand how the bundle is structured:
https://developer.apple.com/library/ios/documentation/CoreFoundation/Conceptual/CFBundles/BundleTypes/BundleTypes.html#//apple_ref/doc/uid/10000123i-CH101-SW1
As for the audio file format, Apple has no particular favourite. You can find the list of preferred audio formats here:
https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/MultimediaPG/UsingAudio/UsingAudio.html#//apple_ref/doc/uid/TP40009767-CH2-SW9
Hope this solves your problems.
Related
I have an app (game) in the App Store that is very large (over 250 MB), and I realized it's due to the sound files rather than the images. Is there any good way to compress MP3 files?
Thanks a lot!
MP3 audio files are already compressed. However, if you still have the original uncompressed audio and you used "default" settings to convert it to MP3 the first time, it's quite possible that the result wasn't as small as it could be.
Try saving at a lower bitrate.
Try using mono samples instead of stereo.
Try using shorter samples repeated instead of a really long sample.
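If you'd rather re-encode on-device than in a desktop tool, keep in mind that iOS has no MP3 encoder; here's a rough sketch using AVAssetExportSession to produce a smaller AAC (.m4a) file instead (the function name and error handling are illustrative):

    import AVFoundation

    // Re-export an audio file as AAC (.m4a), which is typically smaller
    // than MP3 at comparable quality.
    func reencodeToAAC(_ source: URL, destination: URL,
                       completion: @escaping (Error?) -> Void) {
        let asset = AVURLAsset(url: source)
        guard let export = AVAssetExportSession(asset: asset,
                                                presetName: AVAssetExportPresetAppleM4A) else {
            completion(NSError(domain: "reencode", code: -1))
            return
        }
        export.outputURL = destination
        export.outputFileType = .m4a
        export.exportAsynchronously {
            completion(export.error)   // nil on success
        }
    }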
We are trying to capture the PCM data from an HLS stream for processing, ideally just before it is played, though just after is acceptable. We want to do all this while still using AVPlayer.
Has anyone done this? For non-HLS streams, as well as local files, this seems to be possible with MTAudioProcessingTap, but not with HLS. This question discusses doing it with non-HLS:
AVFoundation audio processing using AVPlayer's MTAudioProcessingTap with remote URLs
Thanks!
Unfortunately, this has been confirmed to be unsupported, at least for the time being.
From an Apple engineer:
The MTAudioProcessingTap is not available with HTTP live streaming. I suggest filing an enhancement if this feature is important to you - and it's usually helpful to describe the type of app you're trying to design and how this feature would be used.
Source: https://forums.developer.apple.com/thread/45966
Our best bet is to file enhancement radars to try to get them to devote some development time towards it. I am in the same unfortunate boat as you.
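For contrast, here is roughly what the working non-HLS setup looks like; a minimal Swift sketch that pulls each slice of PCM just before it is played:

    import AVFoundation
    import MediaToolbox

    // Attach an MTAudioProcessingTap to a player item's first audio track.
    // This works for local files and non-HLS remote URLs; HLS items don't
    // expose audio tracks to tap, which is why the approach fails there.
    func attachTap(to item: AVPlayerItem) {
        guard let track = item.asset.tracks(withMediaType: .audio).first else { return }

        var callbacks = MTAudioProcessingTapCallbacks(
            version: kMTAudioProcessingTapCallbacksVersion_0,
            clientInfo: nil,
            init: nil,
            finalize: nil,
            prepare: nil,
            unprepare: nil,
            process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
                // After this call, bufferListInOut holds the PCM that is
                // about to be played; process or copy it here.
                MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                   flagsOut, nil, numberFramesOut)
            })

        var tap: Unmanaged<MTAudioProcessingTap>?
        guard MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                         kMTAudioProcessingTapCreationFlag_PostEffects,
                                         &tap) == noErr,
              let processingTap = tap?.takeRetainedValue() else { return }

        let params = AVMutableAudioMixInputParameters(track: track)
        params.audioTapProcessor = processingTap

        let mix = AVMutableAudioMix()
        mix.inputParameters = [params]
        item.audioMix = mix
    }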
I'm trying to put together an open source library that allows iOS devices to play files with unsupported containers, as long as the track formats/codecs are supported, e.g. a Matroska video (MKV) file with an H.264 video track and an AAC audio track. I'm making an app that could surely use that functionality, and I bet there are many more out there that would benefit from it. Any help you can give (by commenting here or, even better, collaborating with me) is much appreciated. This is where I'm at so far:
I did a bit of research trying to find out how players like AVPlayerHD or Infuse can play non-standard containers and still have hardware acceleration. It seems like they transcode small chunks of the whole video file and play those in sequence instead.
It's a good solution, but if you want to throw that video to an Apple TV, things don't work as planned, since the video is actually a bunch of smaller chunks being played as a playlist. This site has way more info, but at its core, streaming to Apple TV is essentially a progressive download of the MP4/M4V file being played.
I'm thinking a sort of streaming proxy is the way to go. For the playing side of things, I've been investigating AVSampleBufferDisplayLayer (more info here) as a way of playing the video track. I haven't gotten to audio yet. Things get interesting when you think about the AirPlay side of things: by having a "container proxy", we can make any file look like it has the right container without the file size implications of transcoding.
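To make that concrete, here's a minimal sketch of the playback side, where nextSampleBuffer() stands in for a hypothetical demuxer that repackages the Matroska tracks into CMSampleBuffers:

    import AVFoundation
    import CoreMedia

    // Hypothetical demuxer hook: returns the next H.264 sample repackaged
    // as a CMSampleBuffer, or nil when the stream is exhausted.
    func nextSampleBuffer() -> CMSampleBuffer? { return nil }

    let displayLayer = AVSampleBufferDisplayLayer()
    displayLayer.videoGravity = .resizeAspect

    // Let the layer pull samples as fast as it can display them.
    displayLayer.requestMediaDataWhenReady(on: DispatchQueue(label: "demux")) {
        while displayLayer.isReadyForMoreMediaData {
            guard let sample = nextSampleBuffer() else { return }
            displayLayer.enqueue(sample)
        }
    }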
It seems like GStreamer might be a good starting point for the proxy. I need to read up on it; I've never used it before. Does this approach sound like a good one for a library that could be used for App Store apps?
Thanks!
Finally got some extra time to go over GStreamer, especially this article about how it has already been updated to use the hardware decoding provided by iOS 8. So no need to develop this from scratch; GStreamer seems to be the answer.
Thanks!
The 'chunked' solution is no longer necessary in iOS 8. You can simply set up a video decode session and pass in NALUs.
https://developer.apple.com/videos/wwdc/2014/#513
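For reference, a minimal Swift sketch of that setup, assuming you've already built a CMVideoFormatDescription from the stream's SPS/PPS parameter sets (e.g. with CMVideoFormatDescriptionCreateFromH264ParameterSets):

    import VideoToolbox

    // Create a hardware H.264 decode session; NAL units are then wrapped in
    // CMSampleBuffers and fed to VTDecompressionSessionDecodeFrame.
    func makeDecodeSession(formatDescription: CMVideoFormatDescription) -> VTDecompressionSession? {
        var session: VTDecompressionSession?
        let status = VTDecompressionSessionCreate(
            allocator: kCFAllocatorDefault,
            formatDescription: formatDescription,
            decoderSpecification: nil,
            imageBufferAttributes: nil,
            outputCallback: nil,   // nil: pass an outputHandler per decode call instead
            decompressionSessionOut: &session)
        return status == noErr ? session : nil
    }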
I'm really stuck on this problem, because I haven't found much information on the internet about video encoding on iOS, yet there are plenty of apps that handle video streaming successfully (Skype, Qik, Justin.tv, etc.).
I'm going to develop an application that should send video frames obtained from the camera and encoded in H.263 (H.264 or MPEG-4 is still under consideration) to a web server. For this, I need a video encoding library. Obviously, ffmpeg can handle the task, but it is under the LGPL license, which could lead to problems when submitting the app to the App Store. On the other hand, there are some applications that seem to use the ffmpeg library, but only Timelapser clearly states this fact in its app description. Does this mean the other apps are not using ffmpeg, or are they just hiding this information?
Please share your thoughts and experience on this topic. I'm open to discussion.
After googling and doing some research in this area, I found this library: http://www.foxitsolutions.com/iphone_h264_sdk.html. They really do use hardware encoding. I examined their demo with Instruments, which showed ~12% CPU use while encoding and the read() syscall being called constantly. From that I conclude that their library uses AVFoundation's standard AVAssetWriter to write to a temporary file, and (most probably) a concurrent thread reads that temp file back to retrieve the encoded frames.
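A minimal Swift sketch of that temp-file approach (the path and dimensions are illustrative):

    import AVFoundation

    // Hardware-encode camera frames to a temporary movie file; a second
    // thread can then read the file back to retrieve the encoded bytes.
    func makeWriter() throws -> (AVAssetWriter, AVAssetWriterInput) {
        let url = URL(fileURLWithPath: NSTemporaryDirectory())
            .appendingPathComponent("encode.mp4")
        let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: 640,
            AVVideoHeightKey: 480,
        ])
        input.expectsMediaDataInRealTime = true
        writer.add(input)
        return (writer, input)
    }
    // After writer.startWriting() and writer.startSession(atSourceTime:),
    // append CMSampleBuffers from the capture callback with input.append(_:),
    // then finish with writer.finishWriting(completionHandler:).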
Also, take a look at http://www.videolan.org/developers/x264.html. It's under the GPL, but it can still be useful.
I've got an app that lets end users upload their audio files, mostly songs/music. Currently I'm using Zencoder as my encoding service, which supports .mp3, .m4a, .mp4, and .ogg.
When a user uploads an audio file, it will be available for other users to listen to via the app as well. Would the MP3 format be suitable for this?
Licensing should be a major concern here. MP3 has some interesting licensing conditions depending on whether your service is free to the end user. It's too complicated to go into at length here; you can look it up on the web or contact Fraunhofer for details.
The second obvious concern is bandwidth and audio quality. The bitrate has to be high enough that the end user can't tell the audio has been limited or compressed, but the file still needs to be small enough to download or stream quickly. Any broadband connection these days can handle a 320 kbps MP3 fairly easily; a four-minute track at 320 kbps works out to about 320,000 bits/s × 240 s ÷ 8 ≈ 9.6 MB.
Hopefully this will give you some good starting points for research:
wikipedia:Comparison_of_audio_formats
MP3 would suffice; MP4 (AAC) would be better, as it offers improved sound quality and compression over MP3. Ogg is a good format too, but has less broad support among players.
To state the obvious, the quality of the sound is very much dependent on the original file uploaded by the user. You will never improve on that quality, and each time you transcode between formats, you will degrade the quality.
If you ask people to compare MP3, AAC (m4a, mp4), and Ogg, they will give you different answers. Different codecs at different bitrates produce noticeably different sound, and some claim that for certain types of music you should prefer one format over another.
You can easily google different bitrates and comparisons; the technical part is easy.
I would go for ogg. Here's why:
1) It's easily good enough for the job
2) It's open source
3) You don't get into legal trouble using it to encode user uploads