Stream music like Spotify - iOS

I'm working on an app that streams MP3s from a server. I'm using Matt Gallagher's AudioStreamer to accomplish this, but noticed that (especially off WiFi) it takes a few seconds to buffer before the audio starts playing. I'm looking to minimize this, à la Spotify, which does it almost instantly.
What's the best way of doing this?

Spotify does a lot of clever stuff to ensure music streams instantly. I don't know exactly how you're implementing your streaming, but it's worth reading this article to get a handle on what's actually going on when you 'stream' from Spotify:
http://pansentient.com/2011/04/spotify-technology-some-stats-and-how-spotify-works/
There's a lot of 'nifty' logic going on, involving Spotify predicting which tracks you're likely to play next and pre-fetching them. Their mobile apps originally didn't take advantage of this to the same extent as the desktop client, but I suspect as the apps have matured they've drip-fed some of those platform improvements down to mobile.
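To make the pre-fetching idea concrete: while the current track plays, you can pull down just the head of whichever tracks the user is most likely to play next, so playback can start from local data almost instantly. Below is a minimal sketch, assuming your server supports HTTP Range requests; TrackPrefetcher and its naive cache are hypothetical, not anything from AudioStreamer or Spotify.

    import Foundation

    /// Hypothetical prefetcher: grab the first few hundred KB of likely
    /// next tracks so playback can begin from the cache while the rest
    /// of the file streams in. Assumes the server honours Range requests.
    final class TrackPrefetcher {
        private var cache = [URL: Data]()      // head-of-file cache per track
        private let session = URLSession(configuration: .default)

        func prefetch(_ trackURL: URL, bytes: Int = 256 * 1024) {
            guard cache[trackURL] == nil else { return }
            var request = URLRequest(url: trackURL)
            request.setValue("bytes=0-\(bytes - 1)", forHTTPHeaderField: "Range")
            session.dataTask(with: request) { [weak self] data, _, _ in
                guard let data = data else { return }
                DispatchQueue.main.async { self?.cache[trackURL] = data }
            }.resume()
        }

        /// The cached head of the file, if prefetched already.
        func cachedHead(for trackURL: URL) -> Data? { cache[trackURL] }
    }

When the user actually taps a prefetched track, you feed the cached head straight into your streamer's buffer and resume the download from the offset where the prefetch stopped.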

Related

How to manipulate (slow down/change pitch) audio from Spotify?

From what I've found online so far, it seems the Spotify SDK does not allow developers to manipulate audio (by slowing down songs or changing their pitch); all it allows you to do is play the audio at its original pitch and speed.
What I'm wondering is how apps like the Amazing Slow Downer (https://apps.apple.com/us/app/amazing-slow-downer/id308998718) are able to manipulate audio pitch/tempo from Spotify/Apple Music?
I am trying to accomplish this for an iOS app I am building, but I have no idea where to start -- I would appreciate any help in pointing me in the right direction to learn how I can do this!
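This one went unanswered here, but for audio your app can decode itself (local or DRM-free files; as noted above, the Spotify SDK exposes no raw samples to process), the usual starting point is AVAudioEngine with AVAudioUnitTimePitch, which gives independent tempo and pitch control. A minimal sketch; the file path is a placeholder:

    import AVFoundation

    // Sketch: play a local file at half speed, two semitones down.
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()
    timePitch.rate = 0.5        // 0.5 = half speed, pitch preserved
    timePitch.pitch = -200      // in cents: -200 = two semitones down

    engine.attach(player)
    engine.attach(timePitch)
    engine.connect(player, to: timePitch, format: nil)
    engine.connect(timePitch, to: engine.mainMixerNode, format: nil)

    do {
        // Placeholder path: any file AVAudioFile can read.
        let file = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/song.m4a"))
        player.scheduleFile(file, at: nil)
        try engine.start()
        player.play()
    } catch {
        print("playback setup failed: \(error)")
    }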

Best way to get precise timing on iOS for sequencing musical notes

So I'm trying to write a basic music sequencer sort of thing. Something that needs very precise timing. This is for iOS 9.
I'm using libpd (Pure Data) right now, just sending in events with various delays to achieve the effect I'm after. And it sounds alright, but not great.
Is there a "best practice" for this kind of precise musical timing on iOS? Could I take the note scheduling out of libpd and maybe get a better effect?
Thanks!
If you want the most precise timing, rendering the audio before playback lets you play back a single audio file, with everything already at the correct time. You'd also be able to have the user play notes in real time with little to no delay, since you're only playing one other audio track. You can do most of this rendering on background threads as the user makes changes, so the main thread isn't blocked by all of this processing.
The downsides to this pre-rendered audio include dealing with rendering time (which could be just a fraction of a second or a full minute with complex audio on an older device), memory management, and complexity of code. This will generate the best results though.
If you're going for manipulating notes on the fly, I would recommend taking events as they come. As the user makes a change, play the new audio file. This should be relatively trivial to implement.
If you're trying to have some sort of MIDI sequencer, then I'd highly recommend pre-rendering audio. It does require a fair amount of processing power and the programming can be difficult, but the results are much, much better for the user.
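To make the pre-rendering step concrete, here's a minimal sketch of the mixing pass: every scheduled note is added into one big buffer at its exact sample offset, and the result is written out as a single file. The Note type is hypothetical, standing in for whatever your sequencer model holds; decoding the note audio and limiting/clipping are left out.

    import AVFoundation

    // Hypothetical sequencer event: pre-decoded mono samples plus a start time.
    struct Note {
        let startTime: Double    // seconds from sequence start
        let samples: [Float]
    }

    // Mix all notes into one buffer at sample-accurate offsets and write
    // it to disk (use a .caf URL so the file type can be inferred).
    func renderSequence(_ notes: [Note], sampleRate: Double, to url: URL) throws {
        let duration = notes.map { $0.startTime + Double($0.samples.count) / sampleRate }.max() ?? 0
        let frameCount = AVAudioFrameCount(duration * sampleRate)
        let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!
        let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!
        buffer.frameLength = frameCount
        let out = buffer.floatChannelData![0]

        for note in notes {
            let offset = Int(note.startTime * sampleRate)   // sample-accurate placement
            for (i, sample) in note.samples.enumerated() where offset + i < Int(frameCount) {
                out[offset + i] += sample                   // naive additive mix
            }
        }
        let file = try AVAudioFile(forWriting: url, settings: format.settings)
        try file.write(from: buffer)
    }

As described above, you'd run this on a background queue whenever the sequence changes, then simply play the rendered file.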

Designing a library for Hardware-accelerated unsupported containers on iOS (and Airplay)

I'm trying to put together an open source library that allows iOS devices to play files with unsupported containers, as long as the track formats/codecs are supported. e.g.: a Matroska video (MKV) file with an H264 video track and an AAC audio track. I'm making an app that surely could use that functionality and I bet there are many more out there that would benefit from it. Any help you can give (by commenting here or, even better, collaborating with me) is much appreciated. This is where I'm at so far:
I did a bit of research trying to find out how players like AVPlayerHD or Infuse can play non-standard containers and still have hardware acceleration. It seems like they transcode small chunks of the whole video file and play those in sequence instead.
It's a good solution. But if you want to throw that video to an Apple TV, things don't work as planned, since the video is actually a bunch of smaller chunks being played as a playlist. This site has way more info, but at its core, streaming to Apple TV is essentially a progressive download of the MP4/M4V file being played.
I'm thinking a sort of streaming proxy is the way to go. For the playing side of things, I've been investigating AVSampleBufferDisplayLayer (more info here) as a way of playing the video track. I haven't gotten to audio yet. Things get interesting on the AirPlay side: by having a "container proxy", we can make any file look like it has the right container, without the file-size implications of transcoding.
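For the curious, the playback half of that idea looks roughly like this: the demuxer pulls H.264 samples out of the container, wraps them as CMSampleBuffers, and the layer decodes and displays them in hardware. This is only a sketch; nextSampleBuffer stands in for whatever your demuxer actually produces.

    import AVFoundation

    // Drive an AVSampleBufferDisplayLayer from a demuxer callback,
    // enqueueing samples only while the layer can accept them.
    func attach(_ layer: AVSampleBufferDisplayLayer,
                to nextSampleBuffer: @escaping () -> CMSampleBuffer?) {
        layer.videoGravity = .resizeAspect
        layer.requestMediaDataWhenReady(on: DispatchQueue(label: "demux")) {
            while layer.isReadyForMoreMediaData {
                guard let sample = nextSampleBuffer() else {
                    layer.stopRequestingMediaData()    // end of stream
                    return
                }
                layer.enqueue(sample)
            }
        }
    }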
It seems like GStreamer might be a good starting point for the proxy. I need to read up on it; I've never used it before. Does this approach sound like a good one for a library that could be used for App Store apps?
Thanks!
Finally got some extra time to go over GStreamer, and especially this article about how it has already been updated to use the hardware decoding provided by iOS 8. So there's no need to develop this from scratch; GStreamer seems to be the answer.
Thanks!
The 'chunked' solution is no longer necessary in iOS 8. You should simply set up a video decode session and pass in NALUs.
https://developer.apple.com/videos/wwdc/2014/#513
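A rough sketch of what that looks like with VideoToolbox: build a format description from the stream's SPS/PPS (via CMVideoFormatDescriptionCreateFromH264ParameterSets), create a decompression session, and then feed it NALUs repackaged as CMSampleBuffers via VTDecompressionSessionDecodeFrame. Only the session setup is shown; the NALU repackaging is elided.

    import VideoToolbox

    // Create a hardware decode session for an H.264 stream.
    func makeDecodeSession(formatDescription: CMVideoFormatDescription) -> VTDecompressionSession? {
        var session: VTDecompressionSession?
        var callback = VTDecompressionOutputCallbackRecord(
            decompressionOutputCallback: { _, _, status, _, imageBuffer, pts, _ in
                // Each decoded frame arrives here as a CVImageBuffer.
                guard status == noErr, let frame = imageBuffer else { return }
                print("decoded frame at \(pts.seconds)s: \(frame)")
            },
            decompressionOutputRefCon: nil)

        let status = VTDecompressionSessionCreate(
            allocator: kCFAllocatorDefault,
            formatDescription: formatDescription,
            decoderSpecification: nil,
            imageBufferAttributes: nil,
            outputCallback: &callback,
            decompressionSessionOut: &session)
        return status == noErr ? session : nil
    }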

Best low latency audio API for an iOS Music Game? OpenAL, Cocos2d Denshion, PhoneGap

I have been doing some research on the best way to program a music game for iOS similar to Tap Tap Revenge, Guitar Hero, Rock Band etc. Portability is a plus.
This video explains that OpenAL has some great ways of handling sounds, playing multiple sounds at once, and recycling memory. I have also come across Cocos2d's CocosDenshion for handling audio at low latency.
This article states that HTML5 is terrible for audio playback, especially polyphony. The author goes on to state that PhoneGap's Media class works nicely, and that by using the native plugin model you can create a low-latency solution with PhoneGap.
If you were to choose an API for a low-latency audio game, which would you pick and why? If you have a suggestion other than the ones mentioned, please describe it and explain why. Thank you.
The RemoteIO Audio Unit, when configured with an Audio Session requesting very short buffers, will allow the lowest latencies on current iOS devices. OpenAL appears to be built on top of it.
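For reference, requesting short buffers looks something like the following; the 5 ms figure is only an example, and the hardware may grant a longer duration, so read back the actual value after activating the session.

    import AVFoundation

    // Ask for ~5 ms I/O buffers before starting a RemoteIO-based engine.
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback)
        try session.setPreferredIOBufferDuration(0.005)    // a request, not a guarantee
        try session.setActive(true)
        print("actual I/O buffer duration:", session.ioBufferDuration)
    } catch {
        print("audio session setup failed: \(error)")
    }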
There are ways to address HTML5 latency, as described here and here. I suggest you try those out on your phone and see if they feel responsive enough. If not, then Novocaine is probably your best bet.
If you should decide to go the PhoneGap route then Andy Trice's Low Latency Audio Plugin should address your concerns.
Wedge.js is something I saw on Hacker News today; maybe it'll help you out:
http://www.boxuk.com/labs/wedge-js

Virtual Instrument App Recording Functionality With RemoteIO

I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and playback the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades etc) which is based on OpenAL. From my research on the net it seems I have two options:
1. Keep a record of the user's inputs (i.e. which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
2. Hack together my own low-level sound engine using Audio Units, specifically RemoteIO, so that I manually mix all the sounds and populate the final output buffer by hand, and hence can save said buffer to a file that can be shared by email etc.
I have implemented a RemoteIO callback for rendering the output buffer, in the hope that it would give me the previously played data, but alas the buffer is always all zeros.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after I wrote tons of code and read lots of documentation for third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. More than that, they're not that hard to set up and work with. If you do want a library to do most of the work for you, look for one written on top of Core Audio, not OpenAL.
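If you do end up on top of Core Audio but want something higher level than a raw RemoteIO callback, a tap on the output mixer sees the fully mixed audio your app sends to the hardware and can append it to a shareable file. Here's a sketch using AVAudioEngine, which wraps the same machinery; it assumes your playback actually runs through the engine's graph rather than OpenAL.

    import AVFoundation

    // Record everything the app plays by tapping the main mixer and
    // appending each rendered buffer to a file (use a .caf URL).
    func startRecordingOutput(of engine: AVAudioEngine, to url: URL) throws {
        let mixer = engine.mainMixerNode
        let format = mixer.outputFormat(forBus: 0)
        let file = try AVAudioFile(forWriting: url, settings: format.settings)

        mixer.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            do { try file.write(from: buffer) }    // called on the render tap's queue
            catch { print("write failed: \(error)") }
        }
    }

    // Stop later with: engine.mainMixerNode.removeTap(onBus: 0)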
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.
