I'm developing a BlackJack game for iOS. Keeping track of the current state and what needs to be done is becoming difficult. For example, I have a C++ class which keeps track of the current Game:
class Game {
    queue<Player> playerQueue;
public:
    void hit();
    void stand();
};
Currently I'm implementing it using events (Method A):
- (void)hitButtonPress:(id)sender {
    game->hit();
}

void Game::hit() {
    dealCard(playerQueue.front());
}

void Game::stand() {
    playerQueue.pop();
    goToNextPlayersTurn();
}
As more and more options are added to the game, creating events for each one is becoming tedious and hard to keep track of.
Another way I thought of implementing it is like so (Method B):
void Game::playersTurn(Player *player) {
    dealCards(player);
    while (true) {
        string choice = waitForUserChoice();
        if (choice == "stand") break;
        if (choice == "hit")
            dealCard(player);
        // etc.
    }
    playerQueue.pop();
    goToNextPlayersTurn();
}
Here waitForUserChoice is a special function that lets the user interact with the UIViewController; only once the user presses a button does it return control to the playersTurn function. In other words, it pauses the program until the user clicks a button.
With method A, I need to split my functions up every time I need user interaction. Method B keeps everything a bit more under control.
Essentially the difference between method A and B is the following:
A:
function A() {
    initialize();
    // now wait for user interaction by waiting for a call to CompleteA
}

function CompleteA() {
    finalize();
}
B:
function B() {
    initialize();
    waitForUserInteraction();
    finalize();
}
Notice how B keeps the code more organized. Is there even a way to do this with Objective-C? Or is there a different method, which I haven't mentioned, that is recommended instead?
A third option I can think of is using a finite state machine. I have heard a little about them, but I'm not sure whether that will help me in this case or not.
What is the recommended design pattern for my problem?
I understand the dilemma you are running into. When I first started iOS development, I had a very hard time wrapping my head around relinquishing control to and from the operating system.
In general, iOS encourages you to go with method A. Usually you have state variables in your view controller which are set in A() and then checked in CompleteA() to verify that A() ran first, etc.
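For example, a minimal sketch of that pattern (beginPlayerTurn, dealInitialCards and finishPlayerTurn are illustrative names, not from your code):

// State lives in the view controller: set in A(), checked in CompleteA().
@interface GameViewController : UIViewController {
    BOOL waitingForPlayerChoice;
}
@end

@implementation GameViewController

- (void)beginPlayerTurn {                  // plays the role of A()
    [self dealInitialCards];               // assumed helper
    waitingForPlayerChoice = YES;          // now wait for a button press
}

- (IBAction)standButtonPress:(id)sender {  // plays the role of CompleteA()
    if (!waitingForPlayerChoice) return;   // verify A() ran first
    waitingForPlayerChoice = NO;
    [self finishPlayerTurn];               // assumed helper
}

@end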
Regarding your question about finite state machines: I think one may help you solve your problem. The very first thing I wrote in iOS was an FSM (so this is pretty bad code), but you can take a look here, near the bottom of FlipsideViewController.m:
https://github.com/esromneb/ios-finite-state-machine
The general idea is that you declare this state at file scope near the top of your .m file:
static int state = 0;
static int running = 0;
Below that, you have this tick method:
- (void)tick {
    switch (state) {
        case 0:
            // this case only runs once for the FSM, so set up one-time initializations
            // next state
            state = 1;
            break;
        case 1:
            navBarStatus.topItem.title = @"Connecting...";
            state = 2;
            break;
        case 2:
            // if something happened we move on, if not we wait in the connecting stage
            if( something )
                state = 3;
            else
                state = 1;
            break;
        case 3:
            // respond to something
            // next state
            state = 4;
            break;
        case 4:
            // wait for user interaction
            navBarStatus.topItem.title = @"Press a button!";
            state = 4;
            globalCommand = userInput;
            // if user did something
            if( globalCommand != 0 )
            {
                // go to state to consume user interaction
                state = 5;
            }
            break;
        case 5:
            if( globalCommand == 6 )
            {
                // respond to command #6
            }
            if( globalCommand == 7 )
            {
                // respond to command #7
            }
            // go back and wait for user input
            state = 4;
            break;
        default:
            state = 0;
            break;
    }

    if( running )
    {
        [self performSelector:@selector(tick) withObject:nil afterDelay:0.1];
    }
}
In this example (modified from the one on GitHub), globalCommand is an int representing the user's input. If globalCommand is 0, the FSM just spins in state 4 until globalCommand becomes non-zero.
To start the FSM, simply set running to 1 and call [self tick] from the viewController. The FSM will "tick" every 0.1 seconds until running is set to 0.
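For example (a minimal sketch, assuming the FSM lives in your view controller):

- (void)viewDidLoad {
    [super viewDidLoad];
    running = 1;
    [self tick]; // first tick; it reschedules itself every 0.1 s while running is 1
}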
In my original FSM design I had to respond to user input AND network input from a Windows computer running its own software. In my design the Windows PC was also running a similar but different FSM. For this design, I built two FIFO queue objects of commands using an NSMutableArray. User interactions and network packets would enqueue commands into the queues, while the FSM would dequeue items and respond to them. I ended up using https://github.com/esromneb/ios-queue-object for the queues.
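A minimal FIFO wrapper over an NSMutableArray looks something like this (a sketch with assumed names, not the actual ios-queue-object implementation):

@interface CommandQueue : NSObject
- (void)enqueue:(id)command;
- (id)dequeue; // returns nil when empty
@end

@implementation CommandQueue {
    NSMutableArray *_items;
}

- (instancetype)init {
    if ((self = [super init])) {
        _items = [NSMutableArray array];
    }
    return self;
}

- (void)enqueue:(id)command {
    [_items addObject:command]; // newest commands go to the back
}

- (id)dequeue {
    if (_items.count == 0) return nil;
    id head = _items[0]; // oldest command comes off the front
    [_items removeObjectAtIndex:0];
    return head;
}

@end

The FSM's tick can then dequeue one command per tick and feed it into the switch statement above.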
Please comment if you need any clarification.
I have successfully connected a SteelSeries Nimbus dual analog controller to use for testing in both my iOS and tvOS apps, but I am unsure how to properly set up the valueChangedHandler portion of my GCController property.
I understand so far that there are microGamepad, gamepad and extendedGamepad classes of controllers, and the differences between them. I also understand that you can check which controller class is available on the controller connected to your device.
But now I am having trouble setting up valueChangedHandler, because if I set the three valueChangedHandler closures like so, the only one that works is the last one assigned in this sequence:
self.gameController = GCController.controllers()[0]

self.gameController.extendedGamepad?.valueChangedHandler = { (gamepad, element) -> Void in
    if element == self.gameController.extendedGamepad?.leftThumbstick {
        // Never gets called
    }
}

self.gameController.gamepad?.valueChangedHandler = { (gamepad, element) -> Void in
    if element == self.gameController.gamepad?.dpad {
        // Never gets called
    }
}

self.gameController.microGamepad?.valueChangedHandler = { (gamepad, element) -> Void in
    if element == self.gameController.microGamepad?.dpad {
        // Gets called
    }
}
If I switch them around and set self.gameController.extendedGamepad?.valueChangedHandler last, then those methods work and the gamepad and microGamepad handlers do not.
Anyone know how to fix this?
You test which profile is available and, depending on the profile, you set the valueChangedHandler.
It's important to realise that the extendedGamepad contains the most functionality and the microGamepad the least (I think the microGamepad is only used for the Apple TV remote). The checks should therefore be ordered accordingly: an extendedGamepad has all the functionality of the microGamepad plus additional controls, so in your code the controller always matches the microGamepad profile as well, and its handler, being assigned last, wins.
Apple uses the following code in the DemoBots example project:
private func registerMovementEvents() {
    /// An analog movement handler for D-pads and movement thumbsticks.
    let movementHandler: GCControllerDirectionPadValueChangedHandler = { [unowned self] _, xValue, yValue in
        // Code to handle movement here ...
    }

    #if os(tvOS)
    // `GCMicroGamepad` D-pad handler.
    if let microGamepad = gameController.microGamepad {
        // Allow the gamepad to handle transposing D-pad values when rotating the controller.
        microGamepad.allowsRotation = true
        microGamepad.dpad.valueChangedHandler = movementHandler
    }
    #endif

    // `GCGamepad` D-pad handler.
    // Will never enter here in case of the Apple TV remote, as that presents as a microGamepad.
    if let gamepad = gameController.gamepad {
        gamepad.dpad.valueChangedHandler = movementHandler
    }

    // `GCExtendedGamepad` left thumbstick.
    if let extendedGamepad = gameController.extendedGamepad {
        extendedGamepad.leftThumbstick.valueChangedHandler = movementHandler
    }
}
I am new to AUAudioUnits and trying to get an understanding of how to use the v3 API. I am trying to schedule a parameter change on a kAudioUnitSubType_MultiChannelMixer AUAudioUnit so that I can ramp a gain change over time.
I am able to set the gain directly with a slider, and the change takes effect immediately, like this:
/*...*/

// Inside my ViewController
@IBAction func slider1Changed(sender: UISlider) {
    player.gainParameter.setValue(sender.value, originator: nil)
}

// Inside my AudioPlayerClass
guard let parameterTree = self.multichannelMixer.parameterTree else {
    print("Param Tree")
    return
}

self.gainParameter = parameterTree.parameterWithID(
    kMultiChannelMixerParam_Volume,
    scope: kAudioUnitScope_Input,
    element: 0)
But when I try to do this using the scheduleParameterBlock (by adding the following to the above AudioPlayerClass), I would expect the gain to ramp from 1 to 0 over 10 seconds, yet there is no change:
let scheduleMixerParamBlock = self.multichannelMixer.scheduleParameterBlock
let rampTime = AUAudioFrameCount(10.secondsToSampleFrames)
scheduleMixerParamBlock(AUEventSampleTimeImmediate, rampTime, self.gainParameter.address, 0)
Examples I have seen of this working in Apple's sample code include code such as this (without the dispatch_async part):
parameterTree.implementorValueObserver = { param, value in
    dispatch_async(dispatch_get_main_queue()) {
        print(param, value)
    }
    scheduleMixerParamBlock(AUEventSampleTimeImmediate, rampTime, param.address, value)
}
When I run this and change the gain parameter with the slider, the block runs and the param and value are printed to the console with correct-looking values, but the gain is not changed in the actual audio.
The examples I have seen are also on custom AUAudioUnits, where the implementor has direct access to the dspKernel function, so I might be missing something crucial there.
The other alternative I have is to calculate a series of ramp values and then set the gain parameter directly for each value, but since the scheduleParameterBlock is there, it seems like I should be able to use that. Any help would be great. Thanks.
You have to look at the AURenderEvent head in the internalRenderBlock, which operates on the audio render thread. Your scheduled parameter events will appear there for you to respond to.
For example, pass the head to this function:
void doEvents(AURenderEvent const* event)
{
    while (event != nullptr) {
        switch (event->head.eventType) {
            case AURenderEventParameter:
                doParameterEvent(event->parameter);
                break;
            default:
                break;
        }
        event = event->head.next;
    }
}

void doParameterEvent(AUParameterEvent const& event) {
    switch (event.parameterAddress) {
        case FilterParameterAddressFoo:
            doFoo();
            break;
        case FilterParameterAddressBar:
            doBar();
            break;
        default:
            break;
    }
}
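For context, here is a minimal sketch of where such a function gets called, assuming you are implementing your own AUAudioUnit subclass (the render body itself is elided):

- (AUInternalRenderBlock)internalRenderBlock {
    return ^AUAudioUnitStatus(AudioUnitRenderActionFlags *actionFlags,
                              const AudioTimeStamp *timestamp,
                              AUAudioFrameCount frameCount,
                              NSInteger outputBusNumber,
                              AudioBufferList *outputData,
                              const AURenderEvent *realtimeEventListHead,
                              AURenderPullInputBlock pullInputBlock) {
        // Walk the linked list of events scheduled for this render cycle
        // (parameter changes, MIDI, etc.) before producing audio.
        doEvents(realtimeEventListHead);
        // ... render frameCount frames into outputData ...
        return noErr;
    };
}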
I am using a Bluno microcontroller to send and receive data from an iPhone, and everything is working as it should, but I would like to update the text of a UILabel with the real-time data that is being printed by the Serial.print(numTicks); statement. If I stop the flowmeter, the UILabel gets updated with the most current value, but I would like to update this label in real time. I am not sure if this is a C / Arduino question or more of an iOS / Objective-C question. The sketch I'm loading on my Bluno looks like the following: https://github.com/ipatch/KegCop/blob/master/KegCop-Bluno-sketch.c
And the method in question inside that sketch looks like the following:
// flowmeter stuff
bool getFlow4() {
    // call the countdown function for pouring beer
    // Serial.println(flowmeterPin);
    flowmeterPinState = digitalRead(flowmeterPin);
    // Serial.println(flowmeterPinState);
    volatile unsigned long currentMillis = millis();

    // if the predefined interval has passed
    if (millis() - lastmillis >= 250) { // Update every 1/4 second
        // disconnect flow meter from interrupt
        detachInterrupt(0); // Disable interrupt when calculating
        // Serial.print("Ticks:");
        Serial.print(numTicks);
        // numTicks = 0; // Restart the counter.
        lastmillis = millis(); // Update lastmillis
        attachInterrupt(0, count, FALLING); // enable interrupt
    }

    if (numTicks >= 475 || valveClosed == 1) {
        close_valve();
        numTicks = 0; // Restart the counter.
        valveClosed = 0;
        return 0;
    }

    return 1; // still pouring
}
On the iOS / Objective-C side of things, I'm doing the following:
- (void)didReceiveData:(NSData *)data Device:(DFBlunoDevice *)dev {
    // setup label to update
    _ticks = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
    [_tickAmount setText:[NSString stringWithFormat:@"Ticks:%@", _ticks]];
    [_tickAmount setNeedsDisplay];
    NSLog(@"ticks = %@", _ticks);
}
Basically I would like to update the value of the UILabel while the flowmeter is working.
UPDATE
I just tested the functionality again with the serial monitor within the Arduino IDE, and I got the same (or at least similar) results as what I got via Xcode and the NSLog statements. So this leads me to believe something in the sketch is preventing the label from updating in real time. :/ Sorry for the confusion.
I use MPMoviePlayerController to show video.
Below it I put a list of thumbs from the video.
When pressing a thumb I want to jump to a specific place in the video using setCurrentPlaybackTime.
I also have a timer updating the selected thumb according to the current location in the video, using currentPlaybackTime.
My problem: when calling setCurrentPlaybackTime, the player keeps reporting the seconds from before the seek; it takes the player a few seconds to reflect the new position. In the meantime the user experience is bad: pressing a thumb shows it selected for a short time, then the timer updates the selection back to the previous thumb, and only then does it jump to the thumb I selected.
I tried using (in the timer):
if (moviePlayer.playbackState != MPMoviePlaybackStatePlaying && !(moviePlayer.loadState & MPMovieLoadStatePlaythroughOK)) return;
This was meant to prevent the timer from updating the selected thumb while the player is in a transition phase between the previous thumb and the new one, but it doesn't seem to work. The playbackState and loadState values seem to be totally inconsistent and unpredictable.
To solve this issue, here is how I implemented this nasty state coverage in one of my projects. It is nasty and fragile, but it worked well enough for me.
I used two flags and two time intervals:
BOOL seekInProgress_;
BOOL seekRecoveryInProgress_;
NSTimeInterval seekingTowards_;
NSTimeInterval seekingRecoverySince_;
All of the above should be defaulted to NO and 0.0.
When initiating the seek:
// are we supposed to seek?
if (movieController_.currentPlaybackTime != seekToTime)
{   // yes->
    movieController_.currentPlaybackTime = seekToTime;
    seekingTowards_ = seekToTime;
    seekInProgress_ = YES;
}
Within the timer callback:
// are we currently seeking?
if (seekInProgress_)
{   // yes->did the playback-time change since the seeking has been triggered?
    if (seekingTowards_ != movieController_.currentPlaybackTime)
    {   // yes->we are now in seek-recovery state
        seekingRecoverySince_ = movieController_.currentPlaybackTime;
        seekRecoveryInProgress_ = YES;
        seekInProgress_ = NO;
        seekingTowards_ = 0.0;
    }
}
// are we currently recovering from seeking?
else if (seekRecoveryInProgress_)
{   // yes->did the playback-time change since the seeking-recovery has been triggered?
    if (seekingRecoverySince_ != movieController_.currentPlaybackTime)
    {   // yes->seek recovery done!
        seekRecoveryInProgress_ = NO;
        seekingRecoverySince_ = 0.0;
    }
}
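Once both flags are clear, the timer can safely move the thumb selection again; a minimal sketch (updateSelectedThumb: is an assumed helper, not from the original project):

// Only reflect currentPlaybackTime in the UI when no seek is in flight.
if (!seekInProgress_ && !seekRecoveryInProgress_)
{
    [self updateSelectedThumb:movieController_.currentPlaybackTime];
}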
In the end, MPMoviePlayerController simply is not meant for this kind of micro-management. I had to throw in at least half a dozen flags for state coverage in all kinds of situations, and I would never recommend repeating this in other projects. Once you reach this level, it is a good idea to think about using AVPlayer instead.
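For comparison, AVPlayer reports seek completion explicitly, which removes most of this flag juggling; a rough sketch:

// AVPlayer invokes the handler once the seek actually finishes.
[player seekToTime:CMTimeMakeWithSeconds(seekToTime, NSEC_PER_SEC)
 completionHandler:^(BOOL finished) {
     if (finished) {
         // safe to update the selected thumb here
     }
 }];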
I have recently started using the Microsoft Kinect for Windows SDK to program some stuff using the Kinect device.
I am busting my ass to find a way to detect whether a certain hand is closed or opened.
I saw the Kinect for Windows Toolkit, but the documentation is non-existent and I can't find a way to make it work.
Does anyone know of a simple way to detect the hand's state? Even better if it doesn't involve using the Kinect toolkit.
This is how I did it eventually:
First things first, we need a dummy class that looks somewhat like this:
public class DummyInteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId,
        InteractionHandType handType,
        double x,
        double y)
    {
        var result = new InteractionInfo();
        result.IsGripTarget = true;
        result.IsPressTarget = true;
        result.PressAttractionPointX = 0.5;
        result.PressAttractionPointY = 0.5;
        result.PressTargetControlId = 1;

        return result;
    }
}
Then, in the main application code, we need to create the interaction stream and subscribe to its frame-ready event like this:
this.interactionStream = new InteractionStream(args.NewSensor, new DummyInteractionClient());
this.interactionStream.InteractionFrameReady += InteractionStreamOnInteractionFrameReady;
Finally, the code for the handler itself:
private void InteractionStreamOnInteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
{
    using (InteractionFrame frame = e.OpenInteractionFrame())
    {
        if (frame != null)
        {
            if (this.userInfos == null)
            {
                this.userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
            }

            frame.CopyInteractionDataTo(this.userInfos);
        }
        else
        {
            return;
        }
    }

    foreach (UserInfo userInfo in this.userInfos)
    {
        foreach (InteractionHandPointer handPointer in userInfo.HandPointers)
        {
            string action = null;

            switch (handPointer.HandEventType)
            {
                case InteractionHandEventType.Grip:
                    action = "gripped";
                    break;

                case InteractionHandEventType.GripRelease:
                    action = "released";
                    break;
            }

            if (action != null)
            {
                string handSide = "unknown";

                switch (handPointer.HandType)
                {
                    case InteractionHandType.Left:
                        handSide = "left";
                        break;

                    case InteractionHandType.Right:
                        handSide = "right";
                        break;
                }

                if (handSide == "left")
                {
                    if (action == "released")
                    {
                        // left hand released code here
                    }
                    else
                    {
                        // left hand gripped code here
                    }
                }
                else
                {
                    if (action == "released")
                    {
                        // right hand released code here
                    }
                    else
                    {
                        // right hand gripped code here
                    }
                }
            }
        }
    }
}
SDK 1.7 introduces the interaction concept called "grip". You can read about all the KinectInteraction concepts at the following link: http://msdn.microsoft.com/en-us/library/dn188673.aspx
The way Microsoft has implemented this is via events from a KinectRegion. Among the KinectRegion events are HandPointerGrip and HandPointerGripRelease, which fire at the appropriate moments. Because the event comes from the element the hand is over, you can easily take the appropriate action in the event handler.
Note that a KinectRegion can be anything. The base class is a ContentControl, so you can place anything from a simple image to a complex Grid layout within the region to be acted on.
You can find an example of how to use this interaction in the ControlBasics-WPF example, provided with the SDK.
UPDATE:
KinectRegion is simply a fancy ContentControl, which in turn is just a container that can have anything put inside it. Have a look at the ControlBasics-WPF example on the Kinect for Windows CodePlex, and do a search for KinectRegion in the MainWindow.xaml file. You'll see that there are several controls inside it which are acted upon.
To see how Grip and GripRelease are implemented in this example, it is best to open the solution in Visual Studio and search for "grip". The way they do it is a little odd, in my opinion, but it is a clean implementation that flows very well.
From what I know, the Microsoft Kinect for Windows SDK is not the best way to detect open and closed hands. Microsoft provides tracking of 20 body joints and does not include the fingers of the hand. You can take advantage of the Kinect Interactions for that in an indirect way. This tutorial shows how:
http://dotneteers.net/blogs/vbandi/archive/2013/05/03/kinect-interactions-with-wpf-part-iii-demystifying-the-interaction-stream.aspx
But I think the best solution for tracking finger movements would be using the OpenNI SDK.
Some of the middleware for OpenNI allows finger tracking.
You can use something like this:
private void OnHandleHandMove(object source, HandPointerEventArgs args)
{
    HandPointer ptr = args.HandPointer;
    if (ptr.HandEventType == HandEventType.Grip)
    {
        // TODO
    }
}