How does Hexar.io determine what area the player has captured - Lua

I'm writing a game in Lua and am looking for an efficient way to determine what area a player has captured, like in Hexar.io.
I can't post images, so here's an example picture from Hexar.io:
https://ibb.co/iw5fdF
I managed to make a blocky grid system with movement for the players, but I'm having trouble determining which area to capture based on the first capture point and the end point of the drawing.
Any help is appreciated - Ryan
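For what it's worth, the usual approach in Hexar.io/Paper.io-style games is a flood fill from the outside: once the player's trail reconnects with their territory, mark the trail cells as owned, flood-fill inward from every border cell the player does not own, and capture every cell the fill never reaches. Filling from the outside sidesteps having to work out which side of the trail is "inside". A minimal sketch (in Swift here; the logic ports directly to Lua, and grid[y][x] holding an owner id with 0 = free is an assumed representation):

    // Call this after the player's closed trail has been merged into their
    // territory. Cells hold owner ids; 4-neighbour connectivity matches a
    // blocky (square) grid like the asker's.
    func capture(grid: inout [[Int]], player: Int) {
        let h = grid.count, w = grid[0].count
        var outside = Array(repeating: Array(repeating: false, count: w), count: h)
        var stack: [(Int, Int)] = []

        // Seed the fill with every border cell the player does not own.
        for x in 0..<w { for y in [0, h - 1] where grid[y][x] != player { stack.append((x, y)) } }
        for y in 0..<h { for x in [0, w - 1] where grid[y][x] != player { stack.append((x, y)) } }

        // Spread through all non-player cells reachable from the border.
        while let (x, y) = stack.popLast() {
            guard x >= 0, x < w, y >= 0, y < h,
                  !outside[y][x], grid[y][x] != player else { continue }
            outside[y][x] = true
            stack.append(contentsOf: [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
        }

        // Anything the outside fill never reached is enclosed: capture it.
        for y in 0..<h { for x in 0..<w where !outside[y][x] { grid[y][x] = player } }
    }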

Related

Detecting a real-world object using ARKit on iOS

I am currently playing a bit with ARKit. My goal is to detect a shelf and draw stuff onto it.
I already found ARReferenceImage, and that basically works for a very, very simple prototype, but it seems the image needs to be quite complex? Xcode always complains if I try to use something a lot simpler (like a QR-code-like image). With that marker I would know the position of an edge, and then I'd know the physical size of my shelf and how to place stuff onto it. So that would be OK, but I think small and simple markers will not work, right?
But ideally I would not need a marker at all.
I know that I can detect planes, for example, but I want to detect the shelf itself. As my shelf is open, though, it's not really a plane. Are there other ways to find an object using ARKit?
I know that my question is very vague, but maybe somebody could point me in the right direction, or tell me whether that's even possible with ARKit, or whether I need other tools, like Unity?
There are several different possibilities for positioning content in augmented reality. They are called content anchors, and they are all subclasses of the ARAnchor class.
Image anchor
Using an image anchor, you would stick your reference image on a pre-determined spot on the shelf and position your 3D content relative to it.
"it seems the image needs to be quite complex? Xcode always complains if I try to use something a lot simpler (like a QR-code-like image)"
That's correct. The image needs to have enough visual detail for ARKit to track it. Something like a simple black-and-white checkerboard pattern doesn't work very well; a complex image does.
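A minimal sketch of that setup, assuming an ARSCNView; the class name is mine, and "AR Resources" is just Xcode's default reference-image group name:

    import ARKit
    import UIKit

    class ShelfViewController: UIViewController, ARSCNViewDelegate {
        let sceneView = ARSCNView()

        // Run world tracking with image detection enabled.
        func startImageDetection() {
            sceneView.delegate = self
            let configuration = ARWorldTrackingConfiguration()
            configuration.detectionImages = ARReferenceImage.referenceImages(
                inGroupNamed: "AR Resources", bundle: nil)
            sceneView.session.run(configuration)
        }

        // When ARKit recognises the reference image it adds an ARImageAnchor,
        // and `node` is already positioned on the image, so child nodes sit
        // relative to the marker (and therefore to the shelf).
        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            guard anchor is ARImageAnchor else { return }
            let content = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                                   length: 0.1, chamferRadius: 0))
            node.addChildNode(content) // stand-in for your real shelf content
        }
    }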
Object anchor
Using object anchors, you scan the shape of a 3D object ahead of time and bundle this data file with your app. When a user uses the app, ARKit will try to recognise this object and if it does, you can position your 3D content relative to it. Apple has some sample code for this if you want to try it out quickly.
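A hedged sketch of enabling object detection; "ShelfObjects" is a hypothetical asset-catalog group holding the .arobject files produced by Apple's scanning sample app:

    import ARKit

    // Enable detection of pre-scanned reference objects on an existing view.
    func startObjectDetection(on sceneView: ARSCNView) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionObjects = ARReferenceObject.referenceObjects(
            inGroupNamed: "ShelfObjects", bundle: nil) ?? []
        sceneView.session.run(configuration)
        // A successful detection later arrives as an ARObjectAnchor in the
        // same renderer(_:didAdd:for:) delegate callback used for images.
    }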
Manually creating an anchor
Another option would be to enable ARKit plane detection, and have the user tap a point on the horizontal shelf. Then you perform a raycast to get the 3D coordinate of this point.
You can create an ARAnchor object using this coordinate, and add it to the ARSession.
Then you can again position your content relative to the anchor.
You could also implement a drag gesture to let the user fine-tune the position along the shelf's plane.
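Sketched minimally, assuming an ARSCNView already running with planeDetection = .horizontal (the function name is mine):

    import ARKit

    // Turn a tapped screen point into a world-space anchor on a detected plane.
    func placeAnchor(at screenPoint: CGPoint, in sceneView: ARSCNView) {
        // Raycast from the tapped point against existing plane geometry.
        guard let query = sceneView.raycastQuery(from: screenPoint,
                                                 allowing: .existingPlaneGeometry,
                                                 alignment: .horizontal),
              let result = sceneView.session.raycast(query).first else { return }

        // The hit's world transform is the 3D coordinate on the shelf; wrap
        // it in an anchor so content can be positioned relative to it later.
        let anchor = ARAnchor(transform: result.worldTransform)
        sceneView.session.add(anchor: anchor)
    }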
Conclusion
Which one of these placement options is best for you depends on the use case of your app. I hope this answer was useful :)
References
There are a lot of informative WWDC videos about ARKit. You could start off by watching this one: https://developer.apple.com/videos/play/wwdc2018/610
It is absolutely possible. Whether you do this in Swift or Unity depends entirely on what you are comfortable working in.
ARKit calls them object anchors: https://developer.apple.com/documentation/arkit/arobjectanchor. In other implementations they are often called mesh or model targets.
This YouTube video shows what you want to do in Swift.
But objects like a shelf might be hard to recognize since their content often changes.

Making a scrolling floor like in Street Fighter 2 in XNA (MonoGame)

How would I make a scrolling pseudo-3D floor like in Street Fighter 2 in XNA (or more specifically, MonoGame)?
https://gyazo.com/ea78954a5d96c3cb522eeac4a6ee5f21
for reference, if you aren't aware of what I'm talking about. I understand the concept of how it was done on the SNES (moving each line of the sprite separately), but how could I achieve the same effect in XNA with today's technology and libraries?
What you need is a simple viewport.
A viewport basically shows only a small scene from an overall bigger picture, like you have in side-scrollers or RPGs: you only see the current scene and not the complete level/world.
An implementation example can be found here:
http://community.monogame.net/t/simple-2d-camera/9135
It might be a bit tricky to understand everything at the beginning, but in the end you can reuse this for almost any 2D game and several effects (camera shake, rotation and so on), so it's worth the effort. The sketch below shows the core idea.
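The linked sample is C#/MonoGame; as a language-neutral illustration of the same idea, here is a compact sketch (in Swift, matching the other sketches on this page; all names are mine):

    // A 2D camera is just an offset: everything is shifted by -position
    // before drawing, so the screen becomes a movable window onto the level.
    struct Vec2 { var x = 0.0, y = 0.0 }

    struct Camera2D {
        var position = Vec2() // world-space point at the screen's top-left

        // Convert a world-space point to screen space.
        func toScreen(_ world: Vec2) -> Vec2 {
            Vec2(x: world.x - position.x, y: world.y - position.y)
        }

        // Keep a target (e.g. the player) centered, clamped to the level bounds.
        mutating func follow(target: Vec2, screen: Vec2, level: Vec2) {
            position.x = min(max(target.x - screen.x / 2, 0), level.x - screen.x)
            position.y = min(max(target.y - screen.y / 2, 0), level.y - screen.y)
        }
    }

In MonoGame this offset is usually baked into a translation matrix passed to SpriteBatch.Begin, which is essentially what the linked sample does.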

Image tracking - tracking a screen with a camera

I want to track the relative position of a camera aimed at a computer screen.
I can’t control what is displayed on the computer screen but I can receive screen dumps whenever something changes on the screen. Those screen dumps can hopefully be used to find the screen when analyzing the video from the camera.
I see many videos on YouTube for tracking faces, logos, or single-colored objects using OpenCV, but I'm unsure whether those methods would work for finding and tracking a more detailed image like a screen dump.
Maybe template matching is the way to go? But I need to find the screen even at an angle.
Basically I don’t know where to begin and need help from people with experience in this field to find the best way for achieving what I want.
Thanks
Using feature matching should do the trick (SIFT/SURF/ORB/...).

Canvas and Object for Corona

In Corona, I am trying to write a jigsaw-puzzle-like game, but the pieces are part of a video. Is there a way to run a video sequence and represent clipped segments of that video in display objects?
Any samples would be great....
Regards
If by "clipped" you mean creating a display object that plays a snippet of the video rather than the whole thing, then yes; if you mean cutting a shape the size/shape of a jigsaw-puzzle piece out of the video, then no.
If it's the former, take a look at the native.newVideo() API.

Augmented Reality Gaming

I want to develop an augmented reality game. The player will stand in a room and some cameras will capture video of him. The idea is to add a monster to that video, which the player will see through glasses or directly on an LCD. Basically, this can be done with some image-processing concepts: adding colored parts or some markers where the monster will be, plus some hard work, would do it.
But my question is how to make this monster move, so that the result is a video in which the monster looks like it is attacking the player. The actual game starts after that, but I will go step by step; the first step is to get that video with the attacking monster.
I'm completely new to this; I have only used OpenCV. So I will need some tools to achieve my goal. Where would you suggest I start? I prefer C++, but any language with some API suggestions is also accepted. I am also open to theoretical, conceptual suggestions. Thank you for reading my question.
Note: this idea came to my mind after watching the anime Sword Art Online. If you like to watch anime and virtual-reality stuff, I suggest you watch it. It is a good one.
If you want the monster to move as if it is attacking the player, you will need to know the 3D coordinates of the player, or of some parts of the player. This can be done by having the player wear recognizable markers that can be detected, so a homography can be extracted to get the 3D position.
You can start by reading this post on the topic; it is about C++ augmented reality with OpenCV.
