I want to generate different types of traffic for analyzing OFDMA transmission in NS3. How can I simulate video and audio calls?
Three options come to mind:
If you want to be as close to reality as possible, try out the Direct Code Execution (DCE) Module. I've never used it, so I'm not sure how well it's supported.
Use the OnOffApplication. The OnOffApplication allows you to set OnTime, OffTime, and a DataRate (among other attributes). You can determine the rate at which data is sent by your audio or video program, and then provide those rates to the OnOffApplication. You may find the OnOffHelper convenient for setting the various parameters of an OnOffApplication (see the sketch after this list).
Create your own Application. This option may be of particular interest, since you could simulate variable-bitrate audio/video calls. If you choose this option, I highly suggest you check out the ns-3 tutorial's walkthrough of fifth.cc to learn more about how to create your own Application.
The second option is probably the easiest to use, but it may not be as accurate as the first or as flexible as the third.
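For a concrete starting point, here is a minimal sketch of option 2 in C++. Everything around the helper calls is an assumption: the client node, the sink address, and the rates/packet sizes (ballpark figures for a G.711 voice stream and a rough video stream) would need to be adapted to your OFDMA scenario.

    // Sketch: two OnOff flows approximating an audio call (~64 kb/s,
    // constant) and a video call (~1 Mb/s, bursty). Node creation, the
    // internet stack, and the matching PacketSink are assumed to exist.
    #include "ns3/applications-module.h"
    #include "ns3/core-module.h"
    #include "ns3/network-module.h"

    using namespace ns3;

    void InstallCallTraffic(Ptr<Node> client, Address sinkAddress)
    {
      // Audio: small packets at a constant 64 kb/s, always "on".
      OnOffHelper audio("ns3::UdpSocketFactory", sinkAddress);
      audio.SetAttribute("DataRate", DataRateValue(DataRate("64kbps")));
      audio.SetAttribute("PacketSize", UintegerValue(160)); // ~20 ms of G.711
      audio.SetAttribute("OnTime",
          StringValue("ns3::ConstantRandomVariable[Constant=1]"));
      audio.SetAttribute("OffTime",
          StringValue("ns3::ConstantRandomVariable[Constant=0]"));

      // Video: larger packets; exponential on/off gives crude burstiness.
      OnOffHelper video("ns3::UdpSocketFactory", sinkAddress);
      video.SetAttribute("DataRate", DataRateValue(DataRate("1Mbps")));
      video.SetAttribute("PacketSize", UintegerValue(1200));
      video.SetAttribute("OnTime",
          StringValue("ns3::ExponentialRandomVariable[Mean=0.5]"));
      video.SetAttribute("OffTime",
          StringValue("ns3::ExponentialRandomVariable[Mean=0.1]"));

      ApplicationContainer apps;
      apps.Add(audio.Install(client));
      apps.Add(video.Install(client));
      apps.Start(Seconds(1.0));
      apps.Stop(Seconds(10.0));
    }

The exponential on/off times on the video flow are only a crude stand-in for variable bitrate; if you need a realistic VBR trace, that's exactly where option 3 (writing your own Application) comes in.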
I'm working in AudioKit and am looking to understand how people have incorporated drums. Obviously the sampler is an option, but I'm wondering if there is a built-in option similar to some of the basic synthesis options.
There are a few options. I personally like AppleSampler/MidiSampler as in the example, but instead of using audio files you can create an EXS sampler instrument in Logic, where you can assign notes for different velocities. AppleSampler can also load AUPresets made in GarageBand and SoundFonts (SF2). The DunneAudioKit Sampler is an option if you are working with SFZ files, but I think that might be a work in progress in AudioKit 5. Loading WAV files directly into AppleSampler is also a good option if you just want one-shot sounds.
I'm assuming you're mostly talking about playback of samples, not recording.
The best built-in option I've seen (other than AppleSampler/MidiSampler) is AudioPlayer, which lets you load in a sample and play it back on demand (from an on-screen pad, etc.). MIDIListener can then help you respond to external MIDI events. It works (I have a pretty big branch in my app where I tried it), but I'm not sure it works well.
I wouldn't recommend DunneAudioKit Sampler for drums. There is no one-shot playback (so playing the same note in quick succession will cut off the previous note, even if you mess with the release). If you're trying to build a complex/realistic acoustic drum instrument, you'll also want round-robins so that variations of the same hit can be played, which Dunne also doesn't have. It can load SFZ files, but only a very limited subset of SFZ's opcodes (so again, it's missing things like round robins, mute groups, one-shot, etc).
Having gone down all those roads, I would suggest starting with AppleSampler, and I would build the EXS or aupreset file in Logic or MainStage rather than trying to build something programmatically.
If your needs are really simple, the examples in AudioKit's recently released drum pad playground are a great place to start; they load single samples onto specific notes of AppleSampler.
I am trying to build an app that lets the user record individual people speaking, save the recordings on the device, and tag each recording with the name of the person who spoke. Then there is a detection mode, in which I record someone and the app tells me their name if they're in the local database.
First of all - is this possible at all? I am very new to iOS development and not so familiar with the available APIs.
More importantly, which API should I use (ideally free) to correlate the incoming voice with the recordings I have in the local DB? This should behave something like Shazam, but much simpler, since the database I'm matching against is much smaller.
If you're new to iOS development, I'd start with the core app to record the audio and let people manually choose a profile/name to attach it to and worry about the speaker recognition part later.
You obviously have two options for the recognition side of things: You can either tie in someone else's speech authentication/speaker recognition library (which will probably be in C or C++), or you can try to write your own.
How many people are going to use your app? You might be able to create something basic yourself: if it's the difference between a man and a woman, you could probably figure that out by doing an FFT spectral analysis of the audio and seeing where the frequency peaks are. The frequencies used to enunciate different phonemes vary somewhat, though, so solving the general case for two people who sound fairly similar is probably hard. You'll need to train the system with a bunch of recorded speech and build some kind of model of frequency distributions. You could try clustering or something, but you're going to run into a fair bit of maths fairly quickly (Gaussian mixture models, et al.). There are libraries/projects that'll do this. You might be able to port this one from MATLAB, for example: https://github.com/codyaray/speaker-recognition
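To make the "find the frequency peaks" idea concrete, here's a deliberately naive, self-contained C++ sketch: a brute-force DFT over the bins where speech fundamentals live, returning the strongest one. The function name and frequency bounds are my own assumptions; a real implementation would use an FFT (vDSP, FFTW) and average over many voiced frames.

    // Estimate the dominant frequency in one analysis window.
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    double DominantFrequencyHz(const std::vector<float>& window,
                               double sampleRate)
    {
        const std::size_t n = window.size();
        if (n == 0 || sampleRate <= 0.0) return 0.0;
        // Only scan bins covering roughly 50-500 Hz, where speaking
        // fundamentals (F0) live; skip the DC bin.
        const std::size_t lo = std::max<std::size_t>(
            1, static_cast<std::size_t>(50.0 * n / sampleRate));
        const std::size_t hi =
            static_cast<std::size_t>(500.0 * n / sampleRate);
        double bestMag = 0.0;
        std::size_t bestBin = lo;
        for (std::size_t k = lo; k <= hi && k < n / 2; ++k) {
            double re = 0.0, im = 0.0;
            for (std::size_t t = 0; t < n; ++t) {
                const double phase = 2.0 * M_PI * k * t / n;
                re += window[t] * std::cos(phase);
                im -= window[t] * std::sin(phase);
            }
            const double mag = re * re + im * im;
            if (mag > bestMag) { bestMag = mag; bestBin = k; }
        }
        return bestBin * sampleRate / n;
    }

A typical male F0 sits around 85-180 Hz and a typical female F0 around 165-255 Hz, so a crude threshold near 165 Hz separates many (but by no means all) pairs of speakers; that's exactly why the general case needs the GMM-style modelling mentioned above.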
If you want to take something off-the-shelf, I'd go with a straight C library like mistral, as it should be relatively easy to call into from Objective-C.
The SpeakHere sample code should get you started for audio recording and playback.
Also, it may well take the user longer to train your app to recognise them than they'd ever save over just picking their name from a list. Unless you're intending their voice to be some kind of security passport, it might just not be worth bothering with.
I've implemented an AUGraph similar to the one in the iOS Developer Library. In my app, however, I need to be able to play back sound at different sample rates (probably two different ones).
I've been looking around Apple's documentation and haven't found a way to set the sample rate at runtime. I've been thinking of three possible workarounds:
Re-initialize the AUGraph every time I need to change the sample rate;
Initialize a different AUGraph for each different sample rate;
Convert the sample rate of every sound before playing;
These methods all seem really clunky and heavy on the processor.
What is the best way of changing sample rate of an AUGraph at runtime?
Typically #1 for continuous audio streaming scenarios.
Your program may have a special need, or may benefit from one of the other approaches you listed:
#2: when you need processing to continue and a reinitialization pause is not an option.
#3: when you need to mix and process two streams with different input sample rates at the same time, particularly if you would otherwise find yourself SRCing the signal multiple times.
But if you just need playback with SRC and the lowest latency is not a concern, you may want to try an AudioQueue instead.
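For reference, the skeleton of approach #1 looks something like this (Core Audio C API; "graph" and "ioUnit" are assumed to already exist, and every OSStatus check is omitted for brevity):

    #include <AudioToolbox/AudioToolbox.h>

    // Stop and uninitialize the graph, push the new stream format,
    // then bring everything back up. There will be an audible gap.
    void SwitchGraphSampleRate(AUGraph graph, AudioUnit ioUnit,
                               AudioStreamBasicDescription newFormat)
    {
        Boolean wasRunning = false;
        AUGraphIsRunning(graph, &wasRunning);
        if (wasRunning) AUGraphStop(graph);
        AUGraphUninitialize(graph);

        // The input scope of the output element (bus 0) is the side of
        // RemoteIO that your app feeds.
        AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Input, 0,
                             &newFormat, sizeof(newFormat));

        AUGraphInitialize(graph);
        if (wasRunning) AUGraphStart(graph);
    }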
I'm pretty sure it can't be done at runtime. Solution #2 is your best bet, along with #3. For sample rate conversion, libsamplerate can probably be adapted to your needs.
If you don't want latency from tearing down and rebuilding the audio graph, you may need to resample the sound data (for all but one sample rate).
You could either resample the sound data before starting to play it, or run a real-time resampler as part of the audio graph. Many iOS music apps do the latter as part of a built-in sampler-based synth unit, so the device has plenty of compute power to do so.
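If you go the precompute route, the simplest possible converter is linear interpolation, sketched below. This is an illustration of the idea, not a recommendation: linear interpolation aliases audibly, so for production use AudioConverter (on iOS) or a windowed-sinc library such as libsamplerate.

    #include <cstddef>
    #include <vector>

    // Resample a mono buffer from inRate to outRate by linear
    // interpolation between neighbouring input samples.
    std::vector<float> ResampleLinear(const std::vector<float>& in,
                                      double inRate, double outRate)
    {
        if (in.empty() || inRate <= 0.0 || outRate <= 0.0) return {};
        const double ratio = inRate / outRate;
        const std::size_t outLen =
            static_cast<std::size_t>(in.size() / ratio);
        std::vector<float> out(outLen);
        for (std::size_t i = 0; i < outLen; ++i) {
            const double srcPos = i * ratio;
            const std::size_t idx = static_cast<std::size_t>(srcPos);
            const double frac = srcPos - idx;
            const float a = in[idx];
            const float b = (idx + 1 < in.size()) ? in[idx + 1] : a;
            out[i] = static_cast<float>(a + (b - a) * frac);
        }
        return out;
    }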
I've been searching for examples that show how to do ADSR in iOS using audio samples (preferably WAV files with loop points, but that's secondary). I guess most people who write a sampler/synth app use an audio unit for this. Does anyone know a good code example that shows ADSR in any iOS audio library?
In the new iOS SDK 5.0 there's now a Sampler audio unit, which can do ADSR envelopes.
The LoadPresetDemo sample shows how to use the sampler:
http://developer.apple.com/library/ios/#samplecode/LoadPresetDemo/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011214
If you want to load different sound formats to play this article is helpful:
https://developer.apple.com/library/mac/#technotes/tn2283/_index.html
And here's the iOS documentation reference:
http://developer.apple.com/library/ios/#documentation/AudioUnit/Reference/AUComponentServicesReference/Reference/reference.html#//apple_ref/doc/uid/TP40007291
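For orientation, the heart of the demo is just adding the sampler to an AUGraph by its component description, roughly like this (error checks omitted; the graph is assumed to exist and not to have been opened yet). The demo then loads the .aupreset by setting kAudioUnitProperty_ClassInfo on the unit.

    #include <AudioToolbox/AudioToolbox.h>

    void AddSamplerToGraph(AUGraph graph, AUNode* samplerNode,
                           AudioUnit* samplerUnit)
    {
        AudioComponentDescription desc = {};
        desc.componentType         = kAudioUnitType_MusicDevice;
        desc.componentSubType      = kAudioUnitSubType_Sampler; // iOS 5+
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;

        AUGraphAddNode(graph, &desc, samplerNode);
        AUGraphOpen(graph); // nodes must be opened before fetching units
        AUGraphNodeInfo(graph, *samplerNode, NULL, samplerUnit);
    }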
You can find (a very basic) one in Apple's SinSynth sample. That is an AU, but it should demonstrate how one would apply an envelope to an audio buffer. I don't remember exactly; it may simply be an ASR, but adding a fourth stage is simple once you have understood the existing program. The implementation is right in the note's render function.
Envelope Generators are not platform specific.
musicdsp.org will be a better resource if you want more than a push in the right direction.
MusicDSP has source code for an example envelope follower with attack/release. If you understand this, then sustain/decay should be pretty logical. ;)
But an ADSR envelope is basically just a matter of applying gain to your output signal with a state machine. Each state has a starting value, an ending value, and a duration. Calculating the slope of that line and the value of each point along it was covered in your algebra class back in high school. ;) If you want to be really fancy, you can implement other types of curves, but the concept remains the same.
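Here's a minimal C++ sketch of that state machine with linear segments and a per-sample API. The class shape and parameter names are made up for illustration (and it assumes nonzero segment times); real code usually processes whole buffers and often prefers exponential segments.

    #include <cstddef>

    class ADSR {
    public:
        ADSR(double sampleRate, double attackS, double decayS,
             double sustainLevel, double releaseS)
            : attackInc_(1.0 / (attackS * sampleRate)),
              decayInc_((1.0 - sustainLevel) / (decayS * sampleRate)),
              sustain_(sustainLevel),
              releaseInc_(sustainLevel / (releaseS * sampleRate)) {}

        void noteOn()  { state_ = Attack; }
        void noteOff() { if (state_ != Idle) state_ = Release; }

        // Call once per sample; multiply the result into your output.
        double next()
        {
            switch (state_) {
                case Attack:
                    level_ += attackInc_;
                    if (level_ >= 1.0) { level_ = 1.0; state_ = Decay; }
                    break;
                case Decay:
                    level_ -= decayInc_;
                    if (level_ <= sustain_) { level_ = sustain_; state_ = Sustain; }
                    break;
                case Sustain:
                    break; // hold until noteOff()
                case Release:
                    // Simplification: a fixed slope, so a release started
                    // mid-attack is shorter than the nominal release time.
                    level_ -= releaseInc_;
                    if (level_ <= 0.0) { level_ = 0.0; state_ = Idle; }
                    break;
                case Idle:
                    break;
            }
            return level_;
        }

    private:
        enum State { Idle, Attack, Decay, Sustain, Release };
        State state_ = Idle;
        double level_ = 0.0;
        double attackInc_, decayInc_, sustain_, releaseInc_;
    };

Driving it is just env.noteOn() when the note starts, out[i] = sample[i] * env.next() in the render loop, and env.noteOff() when the key is released.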
I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and playback the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades etc) which is based on OpenAL. From my research on the net it seems I have two options:
Keep a record of the user's inputs (i.e. which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
Hack together my own low-level sound engine using Audio Units, specifically RemoteIO, so that I manually mix all the sounds and populate the final output buffer by hand, and hence can save that buffer to a file, which can then be shared by email etc.
I have implemented a RemoteIO callback for rendering the output buffer, in the hope that it would give me the previously played data in the buffer, but alas the buffer is always all zeros.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after I had written tons of code and read lots of documentation for third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. More than that, they're not that hard to set up and work with. If you do look for a library to do most of the work for you, look for one written on top of Core Audio, not OpenAL.
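Concretely, once your audio flows through your own RemoteIO unit, you don't have to hand-copy anything out of your mixing code: a render notification hands you the final mixed buffer after each render cycle. (It only sees audio that actually passes through that unit, which is likely why your callback on top of OpenAL saw all zeros.) A sketch, where WriteSamplesToFile is a hypothetical stand-in for your own ExtAudioFile-based writer:

    #include <AudioToolbox/AudioToolbox.h>

    static OSStatus TapRenderedOutput(void* inRefCon,
                                      AudioUnitRenderActionFlags* ioActionFlags,
                                      const AudioTimeStamp* inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList* ioData)
    {
        // PostRender fires after RemoteIO has filled ioData with the
        // same samples that are about to reach the speaker.
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            // WriteSamplesToFile(inRefCon, ioData, inNumberFrames); // hypothetical
        }
        return noErr;
    }

    // After the graph is set up, attach the tap to the RemoteIO unit:
    //   AudioUnitAddRenderNotify(remoteIOUnit, TapRenderedOutput, recorder);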
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.