Using AUScheduleParameterBlock from a host on AUAudioUnit v3 - iOS

I am new to AUAudioUnits and trying to get an understanding of how to use the v3 API. I am trying to schedule a parameter change on a kAudioUnitSubType_MultiChannelMixer AUAudioUnit so that I can ramp a gain change over time.
I am able to set the gain directly with a slider, and the change takes effect immediately, like this:
/*...*/
// Inside my ViewController
@IBAction func slider1Changed(sender: UISlider) {
    player.gainParameter.setValue(sender.value, originator: nil)
}
// Inside my AudioPlayerClass
guard let parameterTree = self.multichannelMixer.parameterTree else {
    print("Could not get parameter tree")
    return
}
self.gainParameter = parameterTree.parameterWithID(
    kMultiChannelMixerParam_Volume,
    scope: kAudioUnitScope_Input,
    element: 0)
But when I try to do this using the scheduleParameterBlock, by adding the following to the AudioPlayerClass above, I would expect the gain to ramp from 1 to 0 over 10 seconds, yet there is no change:
let scheduleMixerParamBlock = self.multichannelMixer.scheduleParameterBlock
let rampTime = AUAudioFrameCount(10.secondsToSampleFrames)
scheduleMixerParamBlock(AUEventSampleTimeImmediate, rampTime, self.gainParameter.address, 0)
Examples I have seen of this working in Apple's sample code look like this (without the dispatch_async part):
parameterTree.implementorValueObserver = { param, value in
    dispatch_async(dispatch_get_main_queue()) {
        print(param, value)
    }
    scheduleMixerParamBlock(AUEventSampleTimeImmediate, rampTime, param.address, value)
}
When I run this and change the gain parameter with the slider, the block runs and the param and value printed to the console look correct, but the gain of the actual audio does not change.
The examples I have seen are also on custom AUAudioUnits where the implementor has direct access to the DSP kernel function, so I might be missing something crucial there.
The other alternative I have is to calculate a series of ramp values and then set the gain parameter directly for each value, but since the scheduleParameterBlock is there, it seems like I should be able to use that. Any help would be great. Thanks.

You have to look at the AURenderEvent head in the internalRenderBlock, which operates on the audio render thread. Your scheduled parameter events will appear there for you to respond to.
For example, pass the head to this function:
void doEvents(AURenderEvent const* event)
{
    while (event != nullptr) {
        switch (event->head.eventType) {
            case AURenderEventParameter:
                doParameterEvent(event->parameter);
                break;
            default:
                break;
        }
        event = event->head.next;
    }
}
void doParameterEvent(AUParameterEvent const &event)
{
    switch (event.parameterAddress) {
        case FilterParameterAddressFoo:
            doFoo();
            break;
        case FilterParameterAddressBar:
            doBar();
            break;
        default:
            break;
    }
}
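If you are hosting the stock multichannel mixer, you do not own its internalRenderBlock, so you cannot consume the scheduled events yourself the way a custom unit can. In that case the fallback mentioned in the question, calculating a series of ramp values and setting the parameter directly, is workable. A minimal sketch, assuming it lives in the AudioPlayerClass next to gainParameter (the method name and step count are made up for illustration):
// Manually ramp the mixer's gain by scheduling small setValue steps.
// Sketch only: 100 steps is arbitrary and cancellation is not handled.
func rampGain(toValue target: AUValue, over duration: NSTimeInterval) {
    let steps = 100
    let start = self.gainParameter.value
    let delta = (target - start) / AUValue(steps)

    for step in 1...steps {
        let delay = duration * Double(step) / Double(steps)
        let when = dispatch_time(DISPATCH_TIME_NOW, Int64(delay * Double(NSEC_PER_SEC)))
        dispatch_after(when, dispatch_get_main_queue()) { [weak self] in
            self?.gainParameter.setValue(start + delta * AUValue(step), originator: nil)
        }
    }
}
Calling rampGain(toValue: 0, over: 10) then approximates the 10-second fade the question asks for, at the cost of 100 discrete steps instead of a sample-accurate ramp.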

Related

Place/City names from a list of CLLocation

Here is my situation. (Using Swift 2.2)
I have a list of coordinates (CLLocation). I need to call reverseGeocodeLocation to fetch the corresponding place/city for each. If I try to loop through the elements, there is a chance some calls will fail, since Apple suggests sending at most one call per second. So I need to add a delay between calls as well.
Is there any way to achieve this? Any help is appreciated.
(If we have multiple items with the same latitude and longitude, we should only call the API once.)
This code declares a set of locations and looks them up one by one, with at least 1 second between requests:
var locations = Set<CLLocation>()

func reverseGeocodeLocation() {
    guard let location = locations.popFirst() else {
        geocodingDone()
        return
    }
    CLGeocoder().reverseGeocodeLocation(location) { placemarks, error in
        // Do stuff here

        // Dispatch the next request in 1 second
        _ = Timer.scheduledTimer(withTimeInterval: 1, repeats: false) { _ in
            self.reverseGeocodeLocation()
        }
    }
}

func geocodingDone() {
    // Put your finish logic in here
}
FYI, I used the block syntax for the Timer, but that only works on iOS 10. If you are using iOS 9 or earlier, just use the selector version; it works the same way.
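One caveat worth adding: as far as I know, CLLocation does not compare by coordinate in isEqual, so a Set<CLLocation> deduplicates by object identity rather than by latitude/longitude. To honor the "same lat/long only once" requirement from the question, you could collapse duplicates by coordinate before populating the set. A sketch (dedupedLocations and rawLocations are made-up names):
import CoreLocation

// Collapse exact coordinate duplicates before queueing the geocode requests.
func dedupedLocations(rawLocations: [CLLocation]) -> Set<CLLocation> {
    var seenKeys = Set<String>()
    var result = Set<CLLocation>()
    for location in rawLocations {
        // Key on the coordinate pair; round the values here if near-duplicates
        // should also collapse.
        let key = "\(location.coordinate.latitude),\(location.coordinate.longitude)"
        if !seenKeys.contains(key) {
            seenKeys.insert(key)
            result.insert(location)
        }
    }
    return result
}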

RxSwift: How to use shareReplay to lazily get subscription

So I want to be able to lazily subscribe to shared data without it persisting when nobody is subscribed, and then if someone subscribes again, a new observable will be created. I would use a Variable, but I don't want it to persist when no one is subscribed (because if I were using arrays or something larger than an Int, I don't want to keep them in memory). My current implementation mostly works, except that when resubscribing, the subscriber still gets the last value, which means the value is still persisted. I'm thinking of setting the observable to nil, but I don't know where to do that. Can anyone help me complete this? The code below shows it mostly working, but it looks like the data sticks around when no one is subscribed.
var switchOneDisposable: Disposable? = nil
var switchTwoDisposable: Disposable? = nil

@IBAction func switchOneChanged(sender: UISwitch) {
    if sender.on {
        self.switchOneDisposable = currentNumber().subscribeNext { (value) in
            log.debug("Switch 1: \(value)")
        }
    } else {
        switchOneDisposable?.dispose()
    }
}

@IBAction func switchTwoChanged(sender: UISwitch) {
    if sender.on {
        self.switchTwoDisposable = currentNumber().subscribeNext { (value) in
            log.debug("Switch 2: \(value)")
        }
    } else {
        switchTwoDisposable?.dispose()
    }
}

var numberObservable: Observable<Int>? = nil

func currentNumber() -> Observable<Int> {
    if let number = numberObservable {
        return number
    }
    self.numberObservable = Observable<Int>.interval(5.0, scheduler: MainScheduler.instance).shareReplay(1)
    return self.numberObservable!
}
// Switch 1 turned on
// logs "Switch 1: 0"
// logs "Switch 1: 1"
// Switch 2 turned on
// immediately logs "Switch 2: 1"
// logs "Switch 1: 2"
// logs "Switch 2: 2"
// Switch 1 turned off
// logs "Switch 2: 3"
// Switch 2 turned off
// nothing happens here until we take action again
// Switch 1 turned on
// logs "Switch 1: 3"
// logs "Switch 1: 0"
I finally found the convenient method that does exactly what I need: shareReplayLatestWhileConnected(). On an observable, it will replay the latest value to the 2nd, 3rd, 4th, etc. subscribers, but when everyone unsubscribes, the last value is not retained.
From the example above replace this line:
self.numberObservable = Observable<Int>.interval(5.0, scheduler: MainScheduler.instance).shareReplay(1)
...with this line:
self.numberObservable = Observable<Int>.interval(5.0, scheduler: MainScheduler.instance).shareReplayLatestWhileConnected()
Update
In my case, I specifically want to get a value from disk (e.g. Core Data or NSUserDefaults), and then if someone updates that value, they can post a notification that I'll observe with rx_notification. In order for this lazy loading to truly work, I also want an initial value, so it is helpful to use startWith here, where the value given to startWith is the current value on disk. The code would be analogous to:
Observable<Int>.interval(5.0, scheduler: MainScheduler.instance).startWith(100).shareReplayLatestWhileConnected()
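A sketch of that idea, using the RxSwift 2.x-era names that match the rest of this question (the notification name and defaults key are hypothetical):
// Seed the stream with the value currently on disk, then follow updates
// posted through NSNotificationCenter. Nothing is retained once every
// subscriber is gone, thanks to shareReplayLatestWhileConnected().
let defaults = NSUserDefaults.standardUserDefaults()
let initialValue = defaults.integerForKey("currentNumber")

self.numberObservable = NSNotificationCenter.defaultCenter()
    .rx_notification("CurrentNumberDidChange", object: nil)
    .map { notification in
        (notification.object as? Int) ?? defaults.integerForKey("currentNumber")
    }
    .startWith(initialValue)
    .shareReplayLatestWhileConnected()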

How to set up GCController valueChangedHandler properly in Xcode?

I have successfully connected a SteelSeries Nimbus dual analog controller to use for testing in both my iOS and tvOS apps. But I am unsure about how to properly set up the valueChangedHandler portion of my GCController property.
I understand so far that there are microGamepad, gamepad and extendedGamepad classes of controllers and the differences between them. I also understand that you can check to see if the respective controller class is available on the controller connected to your device.
But now I am having trouble setting up valueChangedHandler, because if I set the three valueChangedHandler portions like so, then the only handler that works is the last one that was set in this sequence:
self.gameController = GCController.controllers()[0]

self.gameController.extendedGamepad?.valueChangedHandler = { (gamepad, element) -> Void in
    if element == self.gameController.extendedGamepad?.leftThumbstick {
        // Never gets called
    }
}

self.gameController.gamepad?.valueChangedHandler = { (gamepad, element) -> Void in
    if element == self.gameController.gamepad?.dpad {
        // Never gets called
    }
}

self.gameController.microGamepad?.valueChangedHandler = { (gamepad, element) -> Void in
    if element == self.gameController.microGamepad?.dpad {
        // Gets called
    }
}
If I switch them around and set self.gameController.extendedGamepad?.valueChangedHandler last, then that handler works and the gamepad and microGamepad handlers do not.
Anyone know how to fix this?
You test which profile is available and, depending on the profile, you set the valueChangedHandler.
It's important to realise that the extendedGamepad contains the most functionality and the microGamepad the least (I think the microGamepad is only used for the Apple TV remote). Therefore the checks should be ordered differently: an extendedGamepad has all the functionality of the microGamepad plus additional controls, so with your code the method would always end up in the microGamepad profile.
Apple uses the following code in the DemoBots example project:
private func registerMovementEvents() {
    /// An analog movement handler for D-pads and movement thumbsticks.
    let movementHandler: GCControllerDirectionPadValueChangedHandler = { [unowned self] _, xValue, yValue in
        // Code to handle movement here ...
    }

    #if os(tvOS)
    // `GCMicroGamepad` D-pad handler.
    if let microGamepad = gameController.microGamepad {
        // Allow the gamepad to handle transposing D-pad values when rotating the controller.
        microGamepad.allowsRotation = true
        microGamepad.dpad.valueChangedHandler = movementHandler
    }
    #endif

    // `GCGamepad` D-pad handler.
    // Will never enter here for the Apple TV remote, as the remote is a microGamepad.
    if let gamepad = gameController.gamepad {
        gamepad.dpad.valueChangedHandler = movementHandler
    }

    // `GCExtendedGamepad` left thumbstick.
    if let extendedGamepad = gameController.extendedGamepad {
        extendedGamepad.leftThumbstick.valueChangedHandler = movementHandler
    }
}
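Applied to the question's own code, checking the richest profile first and setting only one handler might look like this (a sketch; the handler bodies are placeholders):
let gameController = GCController.controllers()[0]

if let extendedGamepad = gameController.extendedGamepad {
    extendedGamepad.valueChangedHandler = { gamepad, element in
        if element == gamepad.leftThumbstick {
            // Handle thumbstick movement
        }
    }
} else if let gamepad = gameController.gamepad {
    gamepad.valueChangedHandler = { gamepad, element in
        if element == gamepad.dpad {
            // Handle D-pad presses
        }
    }
} else if let microGamepad = gameController.microGamepad {
    microGamepad.valueChangedHandler = { gamepad, element in
        if element == gamepad.dpad {
            // Handle remote D-pad presses
        }
    }
}
Because the chain stops at the richest available profile, only one handler is registered, and it is the one that actually fires.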

How to detect open/closed hand using Microsoft Kinect for Windows SDK ver 1.7 C#

I have recently started using the Microsoft Kinect for Windows SDK to program some stuff using the Kinect device.
I am busting my ass to find a way to detect whether a certain hand is closed or opened.
I saw the Kinect for Windows Toolkit, but the documentation is non-existent and I can't find a way to make it work.
Does anyone know of a simple way to detect whether the hand is open or closed? Even better if it doesn't involve the Kinect toolkit.
This is how I did it eventually:
First things first, we need a dummy class that looks somewhat like this:
public class DummyInteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId,
        InteractionHandType handType,
        double x,
        double y)
    {
        var result = new InteractionInfo();
        result.IsGripTarget = true;
        result.IsPressTarget = true;
        result.PressAttractionPointX = 0.5;
        result.PressAttractionPointY = 0.5;
        result.PressTargetControlId = 1;
        return result;
    }
}
Then, in the main application code, we need to set up the interaction stream and subscribe to its frame-ready event like this:
this.interactionStream = new InteractionStream(args.NewSensor, new DummyInteractionClient());
this.interactionStream.InteractionFrameReady += InteractionStreamOnInteractionFrameReady;
Finally, the code for the handler itself:
private void InteractionStreamOnInteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
{
    using (InteractionFrame frame = e.OpenInteractionFrame())
    {
        if (frame != null)
        {
            if (this.userInfos == null)
            {
                this.userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
            }
            frame.CopyInteractionDataTo(this.userInfos);
        }
        else
        {
            return;
        }
    }

    foreach (UserInfo userInfo in this.userInfos)
    {
        foreach (InteractionHandPointer handPointer in userInfo.HandPointers)
        {
            string action = null;
            switch (handPointer.HandEventType)
            {
                case InteractionHandEventType.Grip:
                    action = "gripped";
                    break;
                case InteractionHandEventType.GripRelease:
                    action = "released";
                    break;
            }

            if (action != null)
            {
                string handSide = "unknown";
                switch (handPointer.HandType)
                {
                    case InteractionHandType.Left:
                        handSide = "left";
                        break;
                    case InteractionHandType.Right:
                        handSide = "right";
                        break;
                }

                if (handSide == "left")
                {
                    if (action == "released")
                    {
                        // left hand released code here
                    }
                    else
                    {
                        // left hand gripped code here
                    }
                }
                else
                {
                    if (action == "released")
                    {
                        // right hand released code here
                    }
                    else
                    {
                        // right hand gripped code here
                    }
                }
            }
        }
    }
}
SDK 1.7 introduces the interaction concept called "grip". You can read about all the KinectInteraction concepts at the following link: http://msdn.microsoft.com/en-us/library/dn188673.aspx
The way Microsoft has implemented this is via events from a KinectRegion. Among the KinectRegion events are HandPointerGrip and HandPointerGripRelease, which fire at the appropriate moments. Because the event comes from the element the hand is over, you can easily take appropriate action from the event handler.
Note that a KinectRegion can be anything. The base class is a ContentControl, so you can place anything from something as simple as an image to a complex Grid layout within the region to be acted on.
You can find an example of how to use this interaction in the ControlBasics-WPF example, provided with the SDK.
UPDATE:
KinectRegion is simply a fancy ContentControl, which in turn is just a container that can have anything put inside it. Have a look at the ControlBasics-WPF example on the Kinect for Windows CodePlex and do a search for KinectRegion in the MainWindow.xaml file. You'll see that there are several controls inside it which are acted upon.
To see how Grip and GripRelease are implemented in this example, it is best to open the solution in Visual Studio and do a search for "grip". The way they do it is a little odd, in my opinion, but it is a clean implementation that flows very well.
As far as I know, the Microsoft Kinect for Windows SDK is not well suited to detecting open and closed hands. Microsoft provides tracking of 20 body parts, and the fingers of the hand are not among them. You can take advantage of the Kinect Interactions for that in an indirect way. This tutorial shows how:
http://dotneteers.net/blogs/vbandi/archive/2013/05/03/kinect-interactions-with-wpf-part-iii-demystifying-the-interaction-stream.aspx
But I think the best solution for tracking finger movements would be the OpenNI SDK.
Some of the middlewares of OpenNI allow finger tracking.
You can use something like this:
private void OnHandleHandMove(object source, HandPointerEventArgs args)
{
    HandPointer ptr = args.HandPointer;
    if (ptr.HandEventType == HandEventType.Grip)
    {
        // TODO
    }
}

Design pattern for waiting for user interaction in iOS?

I'm developing a BlackJack game for iOS. Keeping track of the current state and what needs to be done is becoming difficult. For example, I have a C++ class which keeps track of the current Game:
class Game {
    queue<Player> playerQueue;
    void hit();
    void stand();
};
Currently I'm implementing it using events (Method A):
- (void)hitButtonPress:(id)sender {
    game->hit();
}

void Game::hit() {
    dealCard(playerQueue.front());
}

void Game::stand() {
    playerQueue.pop();
    goToNextPlayersTurn();
}
As more and more options are added to the game, creating events for each one is becoming tedious and hard to keep track of.
Another way I thought of implementing it is like so (Method B):
void Game::playersTurn(Player *player) {
    dealCards(player);
    while (true) {
        string choice = waitForUserChoice();
        if (choice == "stand") break;
        if (choice == "hit")
            dealCard(player);
        // etc.
    }
    playerQueue.pop();
    goToNextPlayersTurn();
}
Here waitForUserChoice is a special function that lets the user interact with the UIViewController and only returns control to playersTurn once the user presses a button. In other words, it pauses the program until the user clicks a button.
With method A, I need to split my functions up every time I need user interaction. Method B lets everything stay a bit more in control.
Essentially the difference between method A and B is the following:
A:
function A() {
    initialize();
    // now wait for user interaction by waiting for a call to CompleteA
}
function CompleteA() {
    finalize();
}
B:
function B() {
    initialize();
    waitForUserInteraction();
    finalize();
}
Notice how B keeps the code more organized. Is there even a way to do this with Objective-C? Or is there a different method, which I haven't mentioned, that is recommended instead?
A third option I can think of is using a finite state machine. I have heard a little about them, but I'm not sure whether that will help me in this case or not.
What is the recommended design pattern for my problem?
I understand the dilemma you are running into. When I first started iOS I had a very hard time wrapping my head around relinquishing control to and from the operating system.
In general, iOS encourages you to go with method A. Usually you have variables in your ViewController which are set in method A(), and then they are checked in CompleteA() to verify that A() ran first, etc.
Regarding your question about finite state machines, I think one may help you solve your problem. The very first thing I wrote in iOS was an FSM (so this is pretty bad code), but you can take a look here, near the bottom of FlipsideViewController.m:
https://github.com/esromneb/ios-finite-state-machine
The general idea is that you put this in your .h file inside an @interface block:
static int state = 0;
static int running = 0;
And in your .m you have this:
- (void)tick {
    switch (state) {
        case 0:
            // this case only runs once for the FSM, so do one-time initializations here
            // next state
            state = 1;
            break;
        case 1:
            navBarStatus.topItem.title = @"Connecting...";
            state = 2;
            break;
        case 2:
            // if something happened we move on; if not we wait in the connecting stage
            if( something )
                state = 3;
            else
                state = 1;
            break;
        case 3:
            // respond to something
            // next state
            state = 4;
            break;
        case 4:
            // wait for user interaction
            navBarStatus.topItem.title = @"Press a button!";
            state = 4;
            globalCommand = userInput;
            // if the user did something
            if( globalCommand != 0 )
            {
                // go to state to consume the user interaction
                state = 5;
            }
            break;
        case 5:
            if( globalCommand == 6 )
            {
                // respond to command #6
            }
            if( globalCommand == 7 )
            {
                // respond to command #7
            }
            // go back and wait for user input
            state = 4;
            break;
        default:
            state = 0;
            break;
    }

    if( running )
    {
        [self performSelector:@selector(tick) withObject:nil afterDelay:0.1];
    }
}
In this example (modified from the one on GitHub), globalCommand is an int representing the user's input. If globalCommand is 0, the FSM just spins in state 4 until globalCommand becomes non-zero.
To start the FSM, simply set running to 1 and call [self tick] from the viewController. The FSM will "tick" every 0.1 seconds until running is set to 0.
In my original FSM design I had to respond to user input AND network input from a Windows computer running its own software. In that design the Windows PC was also running a similar but different FSM. For this, I built two FIFO queue objects of commands using an NSMutableArray. User interactions and network packets would enqueue commands into the queues, while the FSM would dequeue items and respond to them. I ended up using https://github.com/esromneb/ios-queue-object for the queues.
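To tie this back to the blackjack question: the same tick pattern can be recast with a Swift enum so the states are named rather than numbered. A sketch only; the state names and the userChoice property are made up for illustration:
// A named-state version of the tick-based FSM above, applied to one
// blackjack turn. tick() would be driven by a repeating timer, and the
// hit/stand button handlers just record the choice.
enum TurnState {
    case Deal           // deal the opening cards
    case WaitForChoice  // spin until a button handler records a choice
    case Resolve        // act on the recorded choice
    case NextPlayer     // advance the player queue
}

class TurnLoop {
    var state: TurnState = .Deal
    var userChoice: String?  // set to "hit" or "stand" by the button handlers

    func tick() {
        switch state {
        case .Deal:
            // dealCards(currentPlayer)
            state = .WaitForChoice
        case .WaitForChoice:
            if userChoice != nil { state = .Resolve }
        case .Resolve:
            if userChoice == "hit" {
                // dealCard(currentPlayer)
                state = .WaitForChoice
            } else {
                state = .NextPlayer
            }
            userChoice = nil
        case .NextPlayer:
            // playerQueue.pop(); goToNextPlayersTurn()
            state = .Deal
        }
    }
}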
Please comment if you need any clarification.
