Say I want to trigger a random note and velocity on an instrument every quarter note. What is the best way to achieve this in AudioKit V5?
The examples seem to use the sequencer to schedule sounds with proper timing, but then you have to add the notes to the track in advance.
One solution is to pre-generate a bar of random quarter notes with looping enabled; when the bar of random notes completes, clear it and replace it with new random notes.
I'm wondering if there's a lower level way of doing this? Some kind of callback that is called with precise timing where I can generate the values as they're needed? Or another approach?
Nothing forces you to obey the incoming note or velocity data from the sequencer. Just make your instrument respond to any note-on with a random note and velocity. That way you get the timing without worrying about anything else.
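For reference, here's a minimal sketch of that idea, assuming AudioKit 5 with an AppleSampler as the instrument (loading an actual sample or SoundFont is omitted). The sequencer track supplies only the quarter-note timing; a MIDICallbackInstrument randomizes pitch and velocity on each note-on:

```swift
import AudioKit

class RandomQuarterNotes {
    let engine = AudioEngine()
    let sampler = AppleSampler()
    let sequencer = AppleSequencer()

    lazy var callbackInstrument = MIDICallbackInstrument { [weak self] status, _, _ in
        // Only react to note-on messages; ignore the note-offs.
        guard status & 0xF0 == 0x90 else { return }
        // `play` throws in some AudioKit versions, hence the try?.
        try? self?.sampler.play(noteNumber: MIDINoteNumber.random(in: 48...72),
                                velocity: MIDIVelocity.random(in: 40...127),
                                channel: 0)
    }

    func start() throws {
        engine.output = sampler
        let track = sequencer.newTrack()
        track?.setMIDIOutput(callbackInstrument.midiIn)
        // One bar of quarter notes; the pitch/velocity written here are ignored.
        for beat in 0..<4 {
            track?.add(noteNumber: 60, velocity: 100,
                       position: Duration(beats: Double(beat)),
                       duration: Duration(beats: 0.5))
        }
        sequencer.setLength(Duration(beats: 4))
        sequencer.enableLooping()
        sequencer.setTempo(120)
        try engine.start()
        sequencer.play()
    }
}
```

Because the written notes are just clock ticks, there's no need to clear and refill the track each bar; the same looped bar keeps driving fresh random values.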
What's the best way to add portamento to a currently playing note in an app that uses AudioKit? I tried changing the note's frequency, but that didn't take effect unless I stopped and restarted the note; I tried doing that in tiny increments, but it sounded choppy.
So the question really is: what is the optimal way to do Roland-style slides (portamento that slides for a constant duration between two notes, regardless of the distance between their frequencies) using AudioKit?
I've already built a sequencer, so thanks in advance for any recommended solutions!
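One possible approach (a hedged sketch, not a verified "optimal" answer): ramp the oscillator's frequency parameter over a fixed duration, so every slide takes the same time regardless of the interval. This assumes AudioKit 5's parameter-ramping API and a SoundpipeAudioKit Oscillator:

```swift
import AudioKit
import SoundpipeAudioKit  // Oscillator lives here in recent AudioKit 5 releases

let engine = AudioEngine()
let osc = Oscillator()

func setup() throws {
    engine.output = osc
    try engine.start()
    osc.start()
}

func slide(to note: MIDINoteNumber, slideTime: Float = 0.06) {
    // Equal-tempered frequency for the target note.
    let target = AUValue(440 * pow(2, (Double(note) - 69) / 12))
    if slideTime > 0 {
        // Constant-time slide regardless of interval, TB-303 style.
        osc.$frequency.ramp(to: target, duration: slideTime)
    } else {
        osc.frequency = target
    }
}
```

One caveat: the parameter ramp is linear in frequency, not in pitch, so very wide slides will have a slightly different contour than a true exponential (pitch-linear) glide. For strictly pitch-linear slides you could step the note number yourself on a timer and convert to frequency at each step.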
This is a very common question, and I am not asking for technical details. What I am looking for is an approach or best practice guidance for the following situation:
Imagine a jump-and-run game made entirely with SceneKit. The game is played by controlling a character running to the left or right, climbing up walls, and jumping over obstacles. (It is a 2½-D style game where the character always runs in one direction or its opposite, mainly along the x-axis.)
At certain "locations" (on the scene's x-axis) I need to implement specific actions that should occur as soon as the main character walks, jumps or runs over that specific point.
What I have done so far is add simple invisible SCNPlanes with static physics bodies, with the bit masks configured to detect only contact (no collision). In the physics contact delegate I can now detect which objects the character passes through. Currently I have a large switch statement on the names of the "action walls", as I call the planes the character walks through, so I can trigger whatever specific action I want at that location.
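For illustration, the current approach might look roughly like this (a hedged sketch; the node names and bitmask values are made up):

```swift
import SceneKit

let characterCategory  = 1 << 0
let actionWallCategory = 1 << 2

func makeActionWall(named name: String, atX x: Float) -> SCNNode {
    let node = SCNNode(geometry: SCNPlane(width: 1, height: 10))
    node.name = name
    node.position = SCNVector3(x, 5, 0)
    node.opacity = 0  // invisible trigger
    node.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
    node.physicsBody?.categoryBitMask = actionWallCategory
    node.physicsBody?.collisionBitMask = 0            // contact only, no collision
    node.physicsBody?.contactTestBitMask = characterCategory
    return node
}

// In the SCNPhysicsContactDelegate:
func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    let wall = contact.nodeA.physicsBody?.categoryBitMask == actionWallCategory
        ? contact.nodeA : contact.nodeB
    switch wall.name {
    case "openDoor":  break  // trigger the action tied to this location
    case "startBoss": break
    default:          break
    }
}
```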
It works quite well this way, but I wonder if there is a better way to add such "action points", using for example GameplayKit - but there I only find information about agents, behaviors and pathfinding.
What I am looking for is a static point that triggers an associated action when the character passes it.
I have no clue about the possibilities of GameplayKit. Can anyone suggest an approach using GameplayKit, or whatever else Apple has in its magic box of useful tools, that would be better than action walls made of SCNPlanes and the physics handler?
I'm looking for a way to create an audio bars visualizer similar to this in iOS.
Every white bar will move up and down depending on the audio wave. I'm really lost, because I don't have much experience dealing with audio in Objective-C.
EDIT: What I'm seeking is what Overcast's app does in its visualizer (the group of vertical orange bars on the lower part of the podcast's image).
Can anyone help?
Thanks!
EDIT: Thanks to Tomer's answer I finally made it. First I did this tutorial to make it all clear. Then I created my own VisualizerView for my project; you can find it in this gist. Maybe it's not perfect, but it does what I needed it to do.
Generally, you have a few options if you want to get an idea of what something sounds like in iOS:
Use the simple AVAudioPlayer audio player, and then use the [audioPlayer averagePowerForChannel:] method to get the average audio level at the current moment. Check out this tutorial.
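A minimal sketch of that metering option (in Swift for brevity; the calls map one-to-one to the Objective-C API):

```swift
import AVFoundation

// Build a player with metering enabled; `audioURL` is assumed to point
// at the file being played.
func makeMeteredPlayer(for audioURL: URL) throws -> AVAudioPlayer {
    let player = try AVAudioPlayer(contentsOf: audioURL)
    player.isMeteringEnabled = true  // metering is off by default
    player.play()
    return player
}

// Call once per frame (e.g. from a CADisplayLink) to drive the bars.
func currentLevel(of player: AVAudioPlayer) -> Float {
    player.updateMeters()                        // refresh the meter values
    let db = player.averagePower(forChannel: 0)  // dBFS, roughly -160...0
    return powf(10, db / 20)                     // convert to linear 0...1
}
```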
Use the Audio Queue API, which lets you send whatever audio you want to the speaker: you read audio from your source and fill the buffers with it each time. (If you're reading from a file, use AVAssetReader.) This way you always know exactly what waveform you're playing, so you can, for example, calculate its average power or process it in other ways, such as an FFT. Then you'd update the bars accordingly.
EDIT: The standard way of doing such a thing is to use the Fast Fourier Transform (FFT) - it extracts frequency information from a sound. Here's a good example of using it on iOS (Apple's guide here). But, of course, to use it you have to know exactly what waveform you're playing every time, so you'd probably want to use a lower-level API such as Audio Queue.
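And here is a hedged Swift sketch of the FFT step itself, using Accelerate's real FFT to reduce one buffer of mono samples to bar heights (getting the raw samples out of the Audio Queue is omitted; the buffer length must be a power of two):

```swift
import Accelerate
import Foundation

// Reduce one buffer of mono samples to `barCount` bar heights.
func barMagnitudes(from samples: [Float], barCount: Int) -> [Float] {
    let n = samples.count
    guard n > 1, n & (n - 1) == 0, barCount > 0 else { return [] }
    let log2n = vDSP_Length(log2(Double(n)))
    guard let fft = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
    defer { vDSP_destroy_fftsetup(fft) }

    var real = [Float](repeating: 0, count: n / 2)
    var imag = [Float](repeating: 0, count: n / 2)
    var magnitudes = [Float](repeating: 0, count: n / 2)

    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                        imagp: imagPtr.baseAddress!)
            // Pack the real signal into split-complex form, transform in
            // place, then take squared magnitudes per frequency bin.
            samples.withUnsafeBufferPointer { buf in
                buf.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            vDSP_fft_zrip(fft, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
        }
    }

    // Average groups of bins into bars.
    let binsPerBar = max(magnitudes.count / barCount, 1)
    return (0..<barCount).map { bar in
        let start = bar * binsPerBar
        let end = min(start + binsPerBar, magnitudes.count)
        return magnitudes[start..<end].reduce(0, +) / Float(end - start)
    }
}
```

Feed each buffer of samples through this and map the returned array onto the bar heights.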
I want a lower-level representation of the touch pattern. If someone lays their hand on the screen I want to see the hand shape. Is this possible in iOS?
As asked, the answer is no. You might be able to approximate something by collecting all of the touch events in your touchesBegan method, but even that is going to be a rough guess at best, as touch reporting is very limited compared to what a palm print would require (I don't have the technical details in front of me, but it's on the order of 10 simultaneous touches).
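For what it's worth, this is roughly the most you can get (a hypothetical sketch; TouchMapView is a made-up name):

```swift
import UIKit

// Collect every active touch point. This yields a handful of discrete
// points (the system caps simultaneous touches), not anything like a
// palm print.
class TouchMapView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        isMultipleTouchEnabled = true  // single-touch only by default
    }
    required init?(coder: NSCoder) {
        super.init(coder: coder)
        isMultipleTouchEnabled = true
    }
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let points = (event?.allTouches ?? touches).map { $0.location(in: self) }
        print("active touch points:", points)
    }
}
```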
OK, so apparently XNA games can only run at 30 FPS, which is a shame, because our game on iPhone looked a lot better at 60.
At any rate, because the only way you can get information about the touch screen is to poll its current state, this effectively means you can only sample the touch screen at 30 FPS.
Even if our game has to run at 30 FPS, is there any way to get higher-resolution sampling from the touch screen? Maybe through callbacks? Or by accessing a list of touch events with timestamps?
The function you are looking for is TouchPanel.GetState. It is a simple matter of calling this function at 60 Hz.
To get 60 Hz you could set Game.TargetElapsedTime to 1/60th of a second. This will give you two updates for every one draw (according to Shawn Hargreaves' post here), assuming you are VSyncing at 30 FPS.
If you still want your game-state updates to run at 30 FPS (doing only touch input at 60 FPS), then you could put those updates on a different thread: start an update going on that thread on the first call to Game.Update, wait for it to finish on the second call, and so on.
(You should note that normally XNA input must be handled on the main thread (source). I assume this applies to the phone and to touch input as well.)
Alternately you could replace the Game class's timing yourself entirely (calling GraphicsDevice.Present yourself). It's not easy to do, but it's possible. A good place to start is to look at the Game class in Reflector.
(Disclaimer: I haven't tried any actual Phone-based development yet, so there may be some Phone-related gotchas I am unaware of.)
The 30 FPS sampling rate is set for performance reasons.
Even if you could find a way to query for touches more frequently, you still couldn't update the UI at a faster rate, so I'm not sure what benefit you'd get.
Before spending too much time trying to find a solution, I'd test on an actual device to see how acceptable 30 FPS really is.