iPhone Cocos2d: multiple audio files, best approach?

I have already found some similar discussions on this, but would like to investigate further and find out the best library and approach for a game made with Cocos2d v1, with the following requirements:
swap one soundtrack with another, cross-fading them (the current one lowers its volume whilst the other one's increases)
have multiple audio effects (like bullet-shot sounds that differ for each enemy)
CocosDenshion seems to be the best approach for a Cocos2d game (rather than using AVPlayer). Would you agree?
Thanks!

Following up on this question: I used the "CDXPropertyModifierAction.h" library, which allowed me to reference the application's shared SimpleAudioEngine as well as the CDAudioManager.
However, with 4 tracks and several .caf effects I have a considerable memory footprint. I read @LearnCocos2D's comment and will now try integrating ObjectAL and run some benchmarks on performance and memory footprint.
I will add comments to this answer as I go; please feel free to do the same and contribute.
Thanks a lot for your comments.
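For reference, a cross-fade along the lines of the original requirement can be done with CocosDenshion's two long-audio channels and a scheduled volume ramp. This is a minimal sketch, not production code; the class name, selector, and 2-second fade time are all illustrative:

    #import "cocos2d.h"
    #import "CDAudioManager.h"

    // Minimal cross-fade sketch using CocosDenshion's two long-audio channels.
    @interface AudioCrossFader : CCLayer {
        CDLongAudioSource *_current;   // track being faded out
        CDLongAudioSource *_next;      // track being faded in
    }
    - (void)crossfadeTo:(NSString *)filePath;
    @end

    @implementation AudioCrossFader

    - (void)crossfadeTo:(NSString *)filePath {
        CDAudioManager *am = [CDAudioManager sharedManager];
        _current = [am audioSourceForChannel:kASC_Left];
        _next    = [am audioSourceForChannel:kASC_Right];

        [_next load:filePath];        // load the incoming track
        _next.volume = 0.0f;          // start silent
        [_next play];

        [self schedule:@selector(stepFade:)];   // ramp volumes every frame
    }

    - (void)stepFade:(ccTime)dt {
        float step = dt / 2.0f;       // full swap over roughly 2 seconds
        _current.volume = MAX(0.0f, _current.volume - step);
        _next.volume    = MIN(1.0f, _next.volume + step);

        if (_next.volume >= 1.0f) {   // fade complete: stop the old track
            [_current stop];
            [self unschedule:@selector(stepFade:)];
        }
    }

    @end

CDXPropertyModifierAction (mentioned above) can drive the same kind of ramp as a cocos2d action instead of a manual scheduler, which is convenient if the fade needs to run inside an action sequence.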

Related

WebGL viewer/game engine for low-poly turntables?

I am looking for the best tool to achieve something like this (this is Blender's game engine; no real reflections, etc.) in a WebGL viewer.
http://youtu.be/9-n12ZH5O6k
The idea is to prepare several basic scenes like this, then let the user upload his design and have it previewed on a car (or other, far more basic objects).
While p3d is nice, I don't think it does the job; there's no API for these cases yet. What are some options to pull this off? The requirement is a library without too large a footprint, since the feature/product is planned for the Asian market, where internet speed has to be considered.
You should look into three.js or babylon.js. You certainly won't achieve that app with a finger snap, so read the tutorials as well, but these libraries will ease your task considerably.

Audio Framework Confusion

I've read quite a bit, both here (Audio Framework in iPhone) and elsewhere, but am still confused as to which audio framework to use.
I'm able to get some easier things done, like recording and playing back, but I'm looking to the future of the app, where I'll be doing more complex things like managing past recordings (although maybe that's an NSURL bookmark thing) and editing audio.
Right now I'm using AVFoundation, but I have started reading the docs for Core Audio (and there's also AudioToolbox). I wish there were a developer doc called "Understanding the Different Audio Frameworks and How and When to Use Them", because the docs are dense and I'm having trouble figuring out which path to go down.
Links to good docs would also be much appreciated!
I recommend you take a look at the recent Learning Core Audio book. Its purpose is to clear up the confusion around the audio frameworks on Mac OS and iOS. If you want "good docs", it's well worth getting.
Depending on your requirements, you might also want to consider some of the non-Apple audio frameworks, particularly the MoMu release of STK, which in many respects will be simpler and easier to use than Apple's frameworks.
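For the simpler tasks mentioned in the question (recording and playback), AVFoundation stays very high-level. A minimal sketch; the file name, format settings, and elided error handling are all illustrative:

    #import <AVFoundation/AVFoundation.h>
    #import <AudioToolbox/AudioToolbox.h>   // for kAudioFormatAppleIMA4

    // Record a short clip, then play it back.
    // (On a device you also need an AVAudioSession category that permits recording.)
    NSURL *url = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"take.caf"]];

    NSDictionary *settings = @{ AVFormatIDKey:         @(kAudioFormatAppleIMA4),
                                AVSampleRateKey:       @44100.0,
                                AVNumberOfChannelsKey: @1 };

    NSError *error = nil;
    AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:url
                                                            settings:settings
                                                               error:&error];
    [recorder record];
    // ... later, when the user taps stop:
    [recorder stop];

    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url
                                                                   error:&error];
    [player play];

Once you need sample-level access for editing, this is the point where you drop down to Core Audio / AudioToolbox.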

Animations in Flash to iOS

I am developing an application that presents different stories for kids.
These stories are already made in Flash.
My problem is how to add those stories to the app.
Initially, one approach was to create the app in Cocos2D and rebuild the animations there (converting the animations to image sequences). However, because there are so many animations and sounds, this process is very complicated.
So I am asking for opinions on what approach I should take to solve this problem.
Any opinion will help.
Thanks...
If you want to change the way you solve the problem, you can try Adobe AIR, which lets you use Flash in iOS apps: An example on how to do this
If you want to continue with Cocos2D, you can find good tutorials for your project here:
Good Tutorials to make a project with Cocos2D
If your stories don't have any interactive elements, you can try converting them from Flash to video files with one of the various conversion tools (search Google).
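If you take the video route, playback on iOS is straightforward. A minimal sketch using MPMoviePlayerViewController, called from within a UIViewController; the file name is illustrative:

    #import <MediaPlayer/MediaPlayer.h>

    // Play a pre-rendered story video bundled with the app.
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"story1"
                                         withExtension:@"m4v"];
    MPMoviePlayerViewController *storyPlayer =
        [[MPMoviePlayerViewController alloc] initWithContentURL:url];
    [self presentMoviePlayerViewControllerAnimated:storyPlayer];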

Wavetables implemented on iOS

I just saw an iPhone app that uses wavetables to generate sounds. I would like to know how this can be implemented.
I am fairly sure that Core Audio has to be used, but any pointers to other sources of information will be appreciated.
You'll want CoreAudio or AudioUnits for a responsive program (e.g. AudioQueue's latency is a bit high).
You'll want AudioFile APIs (in AudioToolbox) for reading the tables if you save them as a common audio file format (just wave files with a new shape every cycle, which is every N samples).
Beyond that, you'll probably have to write the wavetable engine yourself. I have done that; it's not tough if you know how wavetable synthesis works and are familiar with audio signals. It's one of the most basic synthesis types.
musicdsp.org may have something you can use as a starting point for this.
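To make that last point concrete, here is a minimal sketch of a single-cycle wavetable oscillator with linear interpolation, written in plain C so it can run inside a Core Audio render callback. The table size and struct layout are illustrative, not from any particular engine:

    // Assumes `table` already holds one cycle of the waveform.
    #define kTableSize 2048

    typedef struct {
        const float *table;   // one cycle of the waveform, kTableSize samples
        double phase;         // current read position, in [0, kTableSize)
        double increment;     // table samples to advance per output frame
    } Wavetable;

    static void WavetableSetFrequency(Wavetable *wt, double hz, double sampleRate) {
        wt->increment = hz * kTableSize / sampleRate;
    }

    static void WavetableRender(Wavetable *wt, float *out, int frameCount) {
        for (int i = 0; i < frameCount; i++) {
            int idx     = (int)wt->phase;
            double frac = wt->phase - idx;                 // fractional position
            float a     = wt->table[idx];
            float b     = wt->table[(idx + 1) % kTableSize];
            out[i]      = (float)(a + frac * (b - a));     // linear interpolation
            wt->phase  += wt->increment;
            if (wt->phase >= kTableSize) wt->phase -= kTableSize;   // wrap
        }
    }

The render loop is allocation-free, so it is safe to call from the real-time audio thread; stepping to a new table every cycle (every N samples, as described above) is what gives wavetable playback its evolving character.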
After much investigation, I found an open-source project regarding this: http://gitorious.org/pdlib/
Audio file I/O: I found a great resource here. This guy created an excellent API for using ExtAudioFileServices.
A must-read is Learning Core Audio; Chris Adamson and company have really put together a great resource. Chris's blog can also be found here.
Also, sign up for the Core Audio mailing list.
Michael Tyson's blog and resources at A Tasty Pixel are great too.
Hope this helps!
Take a look at this tutorial on how to use the STK: http://arielelkin.github.io/articles/mandolin/
It is an open-source C++ library with cool synths, some with wavetables.

Virtual Instrument App Recording Functionality With RemoteIO

I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and play back the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades, etc.), which is based on OpenAL. From my research on the net it seems I have two options:
Keep a record of the user's inputs (i.e. which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
Write my own low-level sound engine using Audio Units, specifically RemoteIO, so that I mix all the sounds manually and populate the final output buffer by hand, and hence can save said buffer to a file. This could then be shared by email, etc.
I have implemented a RemoteIO callback for rendering the output buffer, in the hope that it would give me the previously played data in the buffer, but alas the buffer is always all zeros.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after writing tons of code and reading lots of documentation for third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. More than that, they are not that hard to set up and work with. If you are looking for a library to do most of the work for you, look for one written on top of Core Audio, not OpenAL.
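If you do move your mixing onto RemoteIO, one way to "sniff" what is sent to the speakers is to attach a render-notify callback to the output unit and append the post-render buffers to a file. A minimal sketch, assuming you already have a configured RemoteIO unit (ioUnit) and an ExtAudioFileRef (recFile) opened with ExtAudioFileCreateWithURL; both names are illustrative:

    #import <AudioToolbox/AudioToolbox.h>
    #import <AudioUnit/AudioUnit.h>

    // Render-notify tap: Core Audio calls this before and after the RemoteIO
    // unit renders each buffer. In the post-render phase, ioData holds exactly
    // what is about to reach the speaker, so we append it to the file.
    // inRefCon is assumed to be an ExtAudioFileRef opened earlier (and primed
    // once with ExtAudioFileWriteAsync(file, 0, NULL) at setup time).
    static OSStatus RecordingTap(void                       *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp       *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames,
                                 AudioBufferList            *ioData)
    {
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            ExtAudioFileRef recFile = (ExtAudioFileRef)inRefCon;
            // Async write keeps file I/O off the real-time render thread.
            ExtAudioFileWriteAsync(recFile, inNumberFrames, ioData);
        }
        return noErr;
    }

    // After the RemoteIO unit (ioUnit) is initialized:
    //     AudioUnitAddRenderNotify(ioUnit, RecordingTap, recFile);

This explains the all-zeros buffers you saw: a plain render callback only sees what you yourself write into the buffer, whereas a post-render notify observes the unit's actual output.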
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.
