Synchronization in Multiplayer in Unity3D

I am working on a Unity networked game in which I have two players with some basic moves. One player is controlled by the server and the other by the client.
To accomplish this I have made a client/server connection. After the connection is made, I can see both players on both sides of the screen. I am using the RPC method.
Now if I make a move on the server, I can see the server player move on the client side as well, so the two are synchronized. But when I make a move on the client, only the client player moves; I cannot see that move on the server side. Why doesn't this work?
I have written the code in UnityScript.
#pragma strict

var farword : boolean = false;
var backword : boolean = false;
var FirstPlayer : GameObject;
var SecondPlayer : GameObject;
var isFarword = false;
var isBackword = false;

function Update () {
    if (isFarword)
    {
        networkView.RPC("ChangePos", RPCMode.All);
        isFarword = false;
    }
}
@RPC
function ChangePos()
{
    if (isFarword)
    {
        if (Network.isServer)
        {
            FirstPlayer.transform.Translate(0, 0, 1);
            isFarword = false;
        }
        if (Network.isClient)
        {
            SecondPlayer.transform.Translate(0, 0, 1);
            isFarword = false;
        }
    }
    else if (isBackword)
    {
        if (Network.isServer)
        {
            FirstPlayer.transform.Translate(0, 0, -1);
            isBackword = false;
        }
        if (Network.isClient)
        {
            SecondPlayer.transform.Translate(0, 0, -1);
            isBackword = false;
        }
    }
}
function OnGUI()
{
    if (GUI.RepeatButton(new Rect(1000, 100, 80, 70), "Farword"))
    {
        isFarword = true;
    }
    if (GUI.RepeatButton(new Rect(850, 100, 80, 70), "second"))
    {
        isBackword = true;
    }
}

The problem is probably where you instantiate the players.
Use the network callbacks to create them: OnServerInitialized is where the server should Network.Instantiate its own player, and OnConnectedToServer is where each client should Network.Instantiate its player. Network.Instantiate assigns ownership of the spawned object to the peer that called it, so each player object ends up owned by the machine that controls it.
Also, network games have some tricky logic to think through. Prefer checking networkView.isMine to determine who owns an object and who doesn't. Remember, only the owner is supposed to move the object; the other players just listen in for coordinates.
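Here is a rough sketch of that setup under the legacy (pre-UNET) networking API the question uses; playerPrefab is an assumed prefab with a NetworkView observing its Transform, and the spawn positions are arbitrary:

#pragma strict

var playerPrefab : Transform; // assumption: assigned in the Inspector, carries a NetworkView

function OnServerInitialized () {
    // Runs on the server once it is up; the server owns this instance.
    Network.Instantiate(playerPrefab, Vector3.zero, Quaternion.identity, 0);
}

function OnConnectedToServer () {
    // Runs on each client after connecting; that client owns this instance.
    Network.Instantiate(playerPrefab, Vector3(2, 0, 0), Quaternion.identity, 0);
}

Then, in a script on the player prefab itself, gate all movement on ownership:

#pragma strict

function Update () {
    if (!networkView.isMine)
        return; // remote copies only receive state; they never move themselves
    if (Input.GetKey(KeyCode.W))
        transform.Translate(0, 0, Time.deltaTime);
}

Because each machine instantiates its own player, every object has exactly one owner, and the isMine check stops the two inputs from fighting over the same transform.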
Good luck!

Related

Swift Firebase onDisconnectRemoveValue not firing when turning off WiFi

I want to remove a connection value from my app's Firebase Realtime Database when a user loses connection unexpectedly. This does not seem to be possible from what I have tried so far.
I have tried using the goOffline function to close the sockets properly, because from what I've heard they don't close cleanly when you turn off your WiFi.
func connect() {
    let connectionsRef = self.rootRef.child("connections")
    AF.request("https://projectname.cloudfunctions.net/Connect").response { response in
        if response.data != nil {
            if self.visiblename != nil {
                connectionsRef.observeSingleEvent(of: .value, with: { snapshot in
                    for value in JSON(snapshot.value!).arrayValue {
                        if value["Address"].string! == self.visiblename {
                            let connectionRef = connectionsRef.child(String(value["Index"].int!))
                            connectionRef.keepSynced(true)
                            connectionRef.onDisconnectRemoveValue()
                        }
                    }
                })
            }
        }
    }
}

self.reachability.whenUnreachable = { _ in
    Database.database().goOffline()
}
self.reachability.whenReachable = { _ in
    Database.database().goOnline()
}
do {
    try self.reachability.startNotifier()
} catch {}
It does automatically remove the value after around 60 seconds, but I need my app to handle any internet interruption and remove the connection value quickly.
Also, if there is no way to remove the value from the client side when the client turns off their WiFi, is there a way to detect the disconnection on the server itself? I have tried comparing date.getTime() against another date.getTime() variable that is updated whenever the Connect request is invoked, with the server watching it, but that didn't work: the server stopped watching the variable as soon as the client disconnected, so it never got the chance to notice. I assume this is because the server is based on Cloud Functions and has no reason to run when no client is invoking it.
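For reference, the pattern Firebase documents for presence is to arm the onDisconnect handler from the special .info/connected location rather than after an HTTP round trip, so it is re-registered on every reconnect. A minimal sketch, assuming FirebaseDatabase is configured and using a placeholder connections/myConnectionId path:

import FirebaseDatabase

// .info/connected flips to true each time the realtime connection is
// (re)established; that is the moment to arm the server-side cleanup.
// "connections/myConnectionId" is a placeholder path, not from the question.
let connectionRef = Database.database().reference(withPath: "connections/myConnectionId")
let connectedRef = Database.database().reference(withPath: ".info/connected")

connectedRef.observe(.value) { snapshot in
    guard snapshot.value as? Bool == true else { return }
    // Register the removal first, then mark ourselves present.
    connectionRef.onDisconnectRemoveValue()
    connectionRef.setValue(true)
}

Even with this in place, an unclean drop only fires the onDisconnect once the server notices the dead socket, which matches the roughly 60 seconds observed above; faster cleanup generally needs some server-side sweep of entries whose client-refreshed timestamp has gone stale.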

Chaining multiple async functions in Swift

I'm trying to write a series of functions that will validate the user's information before asking them to confirm something. (Imagine a shopping app).
I first have to check that the user has added a card.
Then I have to check that they have sufficient balance.
Then I can ask them to confirm the payment.
I can write the async method to check the card, something like this...
func checkHasCard(completion: (Bool) -> ()) {
    // go to the inter webs
    // get the card
    // process data
    let hasCard: Bool = // the user has a card or not.
    completion(hasCard)
}
This can be run like this...
checkHasCard { hasCard in
    if hasCard {
        print("YAY!")
    } else {
        print("BOO!")
    }
}
But... now, based on that, I have to do various things. If the user does have a card, I then need to continue and check that there is sufficient balance (in much the same way). If the user does not have a card, I present a screen for them to add one.
But it gets messy...
checkHasCard { hasCard in
    if hasCard {
        // check balance
        print("YAY!")
        checkBalance { hasBalance in
            if hasBalance {
                // WHAT IS GOING ON?!
                print("")
            } else {
                // ask to top up the account
                print("BOO!")
            }
        }
    } else {
        // ask for card details
        print("BOO!")
    }
}
What I'd like instead is something along the lines of this...
checkHasCard() // if no card then show card details screen
    .checkBalance() // only run if there is a card ... if no balance ask for top up
    .confirmPayment()
This looks much more "swifty" but I'm not sure how to get closer to something like this.
Is there a way?
Asynchronous operations, ordered and with dependencies? You're describing NSOperation.
Certainly you can chain tasks using GCD:
DispatchQueue.main.async {
    // do something
    // check something...
    // and then:
    DispatchQueue.main.async {
        // receive info from higher closure
        // and so on
    }
}
But if your operations are complex, e.g. they have delegates, that architecture completely breaks down. NSOperation allows complex operations to be encapsulated in just the way you're after.
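A minimal sketch of that encapsulation using Operation dependencies; the step names (checkCard, checkBalance, confirmPayment) and their hard-coded results are placeholders, not the poster's real logic:

import Foundation

let queue = OperationQueue()

var hasCard = false
var hasBalance = false

let checkCard = BlockOperation {
    // pretend the card lookup came back positive
    hasCard = true
}
let checkBalance = BlockOperation {
    guard hasCard else { return } // no card: show the add-card screen instead
    hasBalance = true
}
let confirmPayment = BlockOperation {
    guard hasBalance else { return } // no balance: ask for a top-up instead
    print("confirm payment")
}

// Dependencies give the flat, readable ordering the question asks for.
checkBalance.addDependency(checkCard)
confirmPayment.addDependency(checkBalance)
queue.addOperations([checkCard, checkBalance, confirmPayment], waitUntilFinished: false)

Note that a BlockOperation is considered finished as soon as its block returns, so if a step does its own asynchronous networking you would wrap it in an asynchronous Operation subclass that flips isFinished when the callback arrives; the dependency wiring stays the same.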

How do I get the current user's name from Game Center

When the authentication handler fires on return from Game Center, the local player is listed with displayName = "Me" and the alias is the player's username. However, I'd like to display the user's full name instead, so I want the actual displayName, not "Me".
Is there a way of specifying that I want the full name, not "Me"?
I prefer showing the alias, especially when matching with random players, but both the alias and the displayName require the player's authentication to be complete before they start returning their proper values.
For the authentication process to start, you have to set the local player's authentication handler. Simply setting it starts the process, and the handler will be called within a few seconds. After that, the local player's alias and displayName should be the right ones.
For example:
class YourGameCenterManager: NSObject, GKGameCenterControllerDelegate, GKLocalPlayerListener
{
    var localGCAccount: GKLocalPlayer!
    var active = false

    override init()
    {
        super.init()
        localGCAccount = GKLocalPlayer.localPlayer()
        localGCAccount?.authenticateHandler = gameCenterAuthentication
    }

    func gameCenterAuthentication(gameCenterVC: UIViewController?, err: NSError?)
    {
        if gameCenterVC != nil
        {
            // Game Center wants to display a sign-on view ...
            // note: I personally never got this to actually happen
        }
        else if localGCAccount?.authenticated ?? false
        {
            if !active
            {
                active = true
                localGCAccount?.unregisterAllListeners()
                localGCAccount?.registerListener(self)
                // ... whatever else you need to do when Game Center is ready.
                // At this point localGCAccount's alias and displayName should be ok.
            }
        }
        else if active
        {
            // ... Game Center just went bad ... do what you have to do to handle it.
            active = false
        }
    }

    // Required by GKGameCenterControllerDelegate.
    func gameCenterViewControllerDidFinish(gameCenterViewController: GKGameCenterViewController)
    {
        gameCenterViewController.dismissViewControllerAnimated(true, completion: nil)
    }
}

How to set up GCController valueChangedHandler properly in Xcode?

I have successfully connected a SteelSeries Nimbus dual analog controller to use for testing in both my iOS and tvOS apps, but I am unsure how to properly set up the valueChangedHandler portion of my GCController property.
I understand so far that there are microGamepad, gamepad and extendedGamepad profiles and the differences between them. I also understand that you can check whether the respective profile is available on the controller connected to your device.
But now I am having trouble setting up the valueChangedHandler, because if I set the three handlers like so, the only one that works is the last one assigned in this sequence:
self.gameController = GCController.controllers()[0]

self.gameController.extendedGamepad?.valueChangedHandler = { (gamepad, element) -> Void in
    if element == self.gameController.extendedGamepad?.leftThumbstick {
        // Never gets called
    }
}

self.gameController.gamepad?.valueChangedHandler = { (gamepad, element) -> Void in
    if element == self.gameController.gamepad?.dpad {
        // Never gets called
    }
}

self.gameController.microGamepad?.valueChangedHandler = { (gamepad, element) -> Void in
    if element == self.gameController.microGamepad?.dpad {
        // Gets called
    }
}
If I switch them around and assign self.gameController.extendedGamepad?.valueChangedHandler last, then that handler works and the gamepad and microGamepad handlers do not.
Does anyone know how to fix this?
You test which profile is available and, depending on the profile, you set the valueChangedHandler.
It's important to realise that the extendedGamepad profile contains the most functionality and the microGamepad the least (I think the microGamepad is only used for the Apple TV remote), so the checks should be ordered accordingly: an extendedGamepad has all the functionality of a microGamepad plus additional controls, which is why your code always ends up in the microGamepad profile.
Apple uses the following code in the DemoBots example project:
private func registerMovementEvents() {
    /// An analog movement handler for D-pads and movement thumbsticks.
    let movementHandler: GCControllerDirectionPadValueChangedHandler = { [unowned self] _, xValue, yValue in
        // Code to handle movement here ...
    }

    #if os(tvOS)
    // `GCMicroGamepad` D-pad handler.
    if let microGamepad = gameController.microGamepad {
        // Allow the gamepad to handle transposing D-pad values when rotating the controller.
        microGamepad.allowsRotation = true
        microGamepad.dpad.valueChangedHandler = movementHandler
    }
    #endif

    // `GCGamepad` D-pad handler.
    // Will never enter here in case of the Apple TV remote, as the remote is a microGamepad.
    if let gamepad = gameController.gamepad {
        gamepad.dpad.valueChangedHandler = movementHandler
    }

    // `GCExtendedGamepad` left thumbstick.
    if let extendedGamepad = gameController.extendedGamepad {
        extendedGamepad.leftThumbstick.valueChangedHandler = movementHandler
    }
}

How to detect open/closed hand using Microsoft Kinect for Windows SDK ver 1.7 C#

I have recently started using the Microsoft Kinect for Windows SDK to program some stuff using the Kinect device.
I am busting my ass trying to find a way to detect whether a certain hand is closed or open.
I saw the Kinect for Windows Toolkit, but the documentation is nonexistent and I can't find a way to make it work.
Does anyone know of a simple way to detect the hand's state? Even better if it doesn't involve the Kinect toolkit.
This is how I did it eventually:
First things first, we need a dummy class that looks somewhat like this:
public class DummyInteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId,
        InteractionHandType handType,
        double x,
        double y)
    {
        var result = new InteractionInfo();
        result.IsGripTarget = true;
        result.IsPressTarget = true;
        result.PressAttractionPointX = 0.5;
        result.PressAttractionPointY = 0.5;
        result.PressTargetControlId = 1;
        return result;
    }
}
Then, in the main application code, we create the interaction stream and subscribe to its frame-ready event like this:
this.interactionStream = new InteractionStream(args.NewSensor, new DummyInteractionClient());
this.interactionStream.InteractionFrameReady += InteractionStreamOnInteractionFrameReady;
Finally, the handler itself:
private void InteractionStreamOnInteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
{
    using (InteractionFrame frame = e.OpenInteractionFrame())
    {
        if (frame != null)
        {
            if (this.userInfos == null)
            {
                this.userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
            }
            frame.CopyInteractionDataTo(this.userInfos);
        }
        else
        {
            return;
        }
    }

    foreach (UserInfo userInfo in this.userInfos)
    {
        foreach (InteractionHandPointer handPointer in userInfo.HandPointers)
        {
            string action = null;
            switch (handPointer.HandEventType)
            {
                case InteractionHandEventType.Grip:
                    action = "gripped";
                    break;
                case InteractionHandEventType.GripRelease:
                    action = "released";
                    break;
            }

            if (action != null)
            {
                string handSide = "unknown";
                switch (handPointer.HandType)
                {
                    case InteractionHandType.Left:
                        handSide = "left";
                        break;
                    case InteractionHandType.Right:
                        handSide = "right";
                        break;
                }

                if (handSide == "left")
                {
                    if (action == "released")
                    {
                        // left hand released code here
                    }
                    else
                    {
                        // left hand gripped code here
                    }
                }
                else
                {
                    if (action == "released")
                    {
                        // right hand released code here
                    }
                    else
                    {
                        // right hand gripped code here
                    }
                }
            }
        }
    }
}
SDK 1.7 introduces the interaction concept called "grip". You can read about all the KinectInteraction concepts at the following link: http://msdn.microsoft.com/en-us/library/dn188673.aspx
The way Microsoft has implemented this is via events from a KinectRegion. Among the KinectRegion events are HandPointerGrip and HandPointerGripRelease, which fire at the appropriate moments. Because the event comes from the element the hand is over, you can easily take the appropriate action in the event handler (see the sketch below).
Note that a KinectRegion can contain almost anything. The base class is ContentControl, so you can place anything from a simple image to a complex Grid layout within the region to be acted on.
You can find an example of how to use this interaction in the ControlBasics-WPF example provided with the SDK.
UPDATE:
KinectRegion is simply a fancy ContentControl, which in turn is just a container that can have anything put inside it. Have a look at the ControlBasics-WPF example on the Kinect for Windows CodePlex and search for KinectRegion in the MainWindow.xaml file; you'll see several controls inside it that are acted upon.
To see how Grip and GripRelease are implemented in this example, it is best to open the solution in Visual Studio and search for "grip". The way they do it is a little odd, in my opinion, but it is a clean implementation that flows very well.
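To make that concrete, here is a hedged sketch of wiring those two events up in code-behind; it assumes a KinectRegion named kinectRegion declared in XAML around your content and the Microsoft.Kinect.Toolkit.Controls attached-event helpers, so treat the exact calls as illustrative rather than definitive:

using Microsoft.Kinect.Toolkit.Controls;
using System.Windows;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        // Attach handlers for the routed grip events raised inside the region.
        KinectRegion.AddHandPointerGripHandler(kinectRegion, OnHandPointerGrip);
        KinectRegion.AddHandPointerGripReleaseHandler(kinectRegion, OnHandPointerGripRelease);
    }

    private void OnHandPointerGrip(object sender, HandPointerEventArgs args)
    {
        // args.HandPointer identifies which hand just closed.
    }

    private void OnHandPointerGripRelease(object sender, HandPointerEventArgs args)
    {
        // The same hand opened again.
    }
}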
As far as I know, the Microsoft Kinect for Windows SDK is not the best fit for detecting open and closed hands: Microsoft provides tracking of 20 body joints, and the fingers of the hand are not among them. You can take advantage of the Kinect Interactions for this in an indirect way. This tutorial shows how:
http://dotneteers.net/blogs/vbandi/archive/2013/05/03/kinect-interactions-with-wpf-part-iii-demystifying-the-interaction-stream.aspx
But I think the best solution for tracking finger movements would be the OpenNI SDK; some OpenNI middleware allows finger tracking.
You can use something like this:
private void OnHandleHandMove(object source, HandPointerEventArgs args)
{
    HandPointer ptr = args.HandPointer;
    if (ptr.HandEventType == HandEventType.Grip)
    {
        // TODO
    }
}
