About collision detection in KonvaJS

In KonvaJS, how do I detect whether a point is inside an irregular figure (e.g., a pentagon), and how do I detect whether one figure collides with another while dragging? Please write an example for reference. My central goal, for arbitrary irregular shapes, is collision detection and drag limits.

As mentioned in the comments, Konva doesn't support collision detection.
For simple cases, you can implement your own collision checks: https://konvajs.github.io/docs/sandbox/Collision_Detection.html
For robust collision detection support, you can use another JS library, such as one of these:
http://wellcaffeinated.net/PhysicsJS/
http://brm.io/matter-js/
http://box2d-js.sourceforge.net/
So you would use the physics library to calculate positions, collisions, etc., and use Konva for drawing.
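The point-in-polygon half of the question is usually solved with ray casting: count how many polygon edges a horizontal ray from the point crosses; an odd count means the point is inside. Here is a minimal sketch of that test (written in Python for clarity; it ports directly to JavaScript for use inside a Konva dragBoundFunc or drag event handler, and the pentagon coordinates are just an illustration):

def point_in_polygon(px, py, vertices):
    """Ray-casting test: returns True if (px, py) lies inside the polygon.

    vertices is a list of (x, y) tuples describing the polygon outline.
    """
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does this edge straddle the horizontal line through py,
        # and is the crossing point to the right of px?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# Example: an irregular pentagon; test a point while dragging and
# reject the move (or flag a collision) when the test fails.
pentagon = [(100, 10), (190, 78), (155, 180), (45, 180), (10, 78)]
print(point_in_polygon(100, 100, pentagon))  # True
print(point_in_polygon(0, 0, pentagon))      # False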

Related

How to create a soft body in SceneKit

After many years of iOS development, I decided it was time to try making a little game for myself. I chose Apple's SceneKit, since it looks like it provides everything I need.
My problem is that I've stumbled upon a huge problem (for me), and searching on Google doesn't yield any results.
Any idea how I would go about having an object (a sphere, for that matter) that deforms itself, say, under a gravitational force? So basically it should squash on impact with the ground.
Or how do I go about deforming it when it collides with other spheres, like a soft beach ball would?
Any starting point along those lines would be helpful.
I can post my code here, but I'm afraid it has nothing to do with my problem since I really don't know where to start.
Thanks!
Update
After doing a bit more reading, I think that what I want could be doable with vertex shaders. Is that the right path to follow?
For complicated animations, you'll generally be better off using a 3D modeling tool like Blender, Maya, or Cheetah3D to build the body and construct the animation. Those tools let you think at a higher level of abstraction. Then you can export that model to Collada (DAE) format and then import it into SceneKit.
https://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Basic_Animation/Bounce has a tutorial on building a deforming, bouncing ball using Blender.
SceneKit only does physics using rigid bodies. If you want something to deform, you would have to do it yourself.
That is probably because SceneKit has no way of knowing how an object should deform. Should it just compress? Should it compress in one direction and expand in all others to preserve its volume? Should only part of the model compress while the rest stays rigid (like the tires on a car)?
What you could try is to wait for a collision to occur and then do the following:
calculate and store the velocity after the bounce
disable collision checking on the object
run an animation for the "squash"
enable collision checking on the object
apply the calculated velocity
It will be entirely up to you how real or cartoony you want to make the bounce look.
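To give a rough idea of the squash step, a common cartoon approach is volume-preserving squash-and-stretch: scale the sphere down along the impact axis and up along the other two so the product of the scales stays 1, then animate those scales over the bounce (in SceneKit you would drive them through the node's scale properties or an SCNAction). A minimal sketch of just the math, in Python for illustration; the 0.5 squash amount and the sine envelope are arbitrary choices:

import math

def squash_scales(t, max_squash=0.5):
    """Volume-preserving squash factors over a normalized time t in [0, 1].

    Squash peaks mid-bounce (t = 0.5) and returns to rest at t = 0 and 1.
    Returns (scale_y, scale_xz): vertical scale and matching horizontal
    scale so that scale_y * scale_xz**2 == 1 (volume preserved).
    """
    # Simple sine envelope: no squash -> max squash -> no squash.
    s = 1.0 - max_squash * math.sin(math.pi * t)
    return s, 1.0 / math.sqrt(s)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    sy, sxz = squash_scales(t)
    print(f"t={t:.2f}  scale_y={sy:.3f}  scale_xz={sxz:.3f}")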

Finger/Hand Gesture Recognition using Kinect

Let me explain my need before I explain the problem.
I am looking to build a hand-controlled application: navigation using the palm, and clicks using a grab/fist.
Currently, I am working with OpenNI, which looks promising and has a few examples that turned out to be useful in my case, as it has a built-in hand tracker in its samples. That serves my purpose for the time being.
What I want to ask is:
1) What would be the best approach to building a fist/grab detector?
I trained and used AdaBoost fist classifiers on extracted RGB data, which worked reasonably well, but it has too many false detections to move forward with.
So here I frame two more questions:
2) Is there any other good library capable of achieving my needs using depth data?
3) Can we train our own hand gestures, especially ones using fingers? Some papers refer to HMMs; if yes, how do we proceed with a library like OpenNI?
Yes, I tried the middleware libraries in OpenNI, like the grab detector, but they won't serve my purpose, as they are neither open source nor a match for my needs.
Apart from what I asked, anything you think could help me will be accepted as a good suggestion.
You don't need to train your fist detector, since training will complicate things.
Don't use color either, since it's unreliable (it mixes with the background and changes unpredictably depending on lighting and viewpoint).
Assuming that your hand is the closest object, you can simply:
Segment it out with a depth threshold. You can set the threshold manually, use the closest region of the depth histogram, or first run connected-component analysis on the depth map to break it into meaningful parts (and then select your object based not only on its depth but also on its dimensions, motion, user input, etc.).
Apply convexity defects from the OpenCV library to find fingers.
Track fingers rather than rediscovering them in 3D; this will increase stability. I successfully implemented such finger detection about 3 years ago.
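A minimal sketch of the first two steps with OpenCV's Python bindings; the depth format (16-bit millimeters), the 800 mm threshold, and the defect-depth cutoff are all assumptions to adapt to your setup:

import cv2
import numpy as np

# Assume `depth` is a 16-bit, single-channel depth map in millimeters
# (e.g., one frame grabbed from an OpenNI/Kinect stream).
depth = cv2.imread("depth_frame.png", cv2.IMREAD_UNCHANGED)

# 1) Segment the closest object: keep pixels nearer than a threshold.
near = np.uint8((depth > 0) & (depth < 800)) * 255  # 800 mm, illustrative

# 2) Take the largest contour and find its convexity defects;
#    deep defects correspond to the valleys between extended fingers.
contours, _ = cv2.findContours(near, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)
hull = cv2.convexHull(hand, returnPoints=False)
defects = cv2.convexityDefects(hand, hull)

fingers = 0
if defects is not None:
    for start, end, far, defect_depth in defects[:, 0]:
        if defect_depth / 256.0 > 20:  # defect depth in pixels, illustrative
            fingers += 1
print("deep defect count (roughly fingers - 1):", fingers)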
Read my paper :) http://robau.files.wordpress.com/2010/06/final_report_00012.pdf
I have done research on gesture recognition for hands and evaluated several approaches that are robust to scale, rotation, etc. You have depth information, which is very valuable, as the hardest problem for me was actually segmenting the hand out of the image.
My most successful approach is to trace the contour of the hand and, for each point on the contour, take the distance to the centroid of the hand. This gives a set of points that can be used as input for many training algorithms.
I use the image moments of the segmented hand to determine its rotation, so there is a good starting point on the hand's contour. It is very easy to determine a fist, a stretched-out hand, and the number of extended fingers.
Note that while it works fine, your arm tends to get tired from pointing into the air.
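For reference, a minimal sketch of that contour-to-centroid feature with OpenCV in Python; mask is assumed to be a binary segmentation of the hand, and the 64-sample resolution is arbitrary. The returned profile is what you would feed to a training algorithm:

import cv2
import numpy as np

def centroid_distance_profile(mask, n_samples=64):
    """Distances from points on the hand contour to the hand centroid.

    mask is a binary (0/255) single-channel segmentation of the hand.
    Returns n_samples distances, normalized so the profile is
    scale-invariant.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)[:, 0, :]  # (N, 2) points

    # Centroid from image moments (the moments also give orientation,
    # which can be used to pick a consistent starting point).
    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Sample the contour evenly and measure distance to the centroid.
    idx = np.linspace(0, len(contour) - 1, n_samples).astype(int)
    pts = contour[idx].astype(float)
    d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    return d / d.max()  # normalize for scale invariance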
It seems that you are unaware of the Point Cloud Library (PCL). It is an open-source library dedicated to the processing of point clouds and RGB-D data, which is based on OpenNI for the low-level operations and provides a lot of high-level algorithms, for instance for registration, segmentation, and recognition.
A very interesting algorithm for shape/object recognition in general is called the implicit shape model. In order to detect a global object (such as a car or an open hand), the idea is first to detect possible parts of it (e.g. wheels, trunk, etc., or fingers, palm, wrist, etc.) using a local feature detector, and then to infer the position of the global object by considering the density and the relative position of its parts. For instance, if I can detect five fingers, a palm and a wrist in a given neighborhood, there's a good chance that I am in fact looking at a hand. However, if I only detect one finger and a wrist somewhere, it could be a pair of false detections. The academic research article on this implicit shape model algorithm can be found here.
In PCL, there are a couple of tutorials dedicated to the topic of shape recognition, and luckily, one of them covers the implicit shape model, which has been implemented in PCL. I never tested this implementation, but from what I could read in the tutorial, you can specify your own point clouds for the training of the classifier.
That being said, you did not mention it explicitly in your question, but since your goal is to program a hand-controlled application, you might in fact be interested in a real-time shape detection algorithm. You would have to test the speed of the implicit shape model provided in PCL, but I think this approach is better suited to offline shape recognition.
If you do need real-time shape recognition, I think you should first use a hand/arm tracking algorithm (which is usually faster than full detection) in order to know where to look in the images, instead of trying to perform a full shape detection on each frame of your RGB-D stream. You could, for instance, track the hand location by segmenting the depth map (e.g. using an appropriate threshold on the depth) and then detecting the extremities.
Then, once you know approximately where the hand is, it should be easier to decide whether it is making a gesture relevant to your application. I am not sure what exactly you mean by fist/grab gestures, but I suggest that you define and use some app-controlling gestures which are easy and quick to distinguish from one another.
Hope this helps.
The fast answer is: yes, you can train your own gesture detector using depth data. It is really easy, but it depends on the type of gesture.
Suppose you want to detect a hand movement:
Detect the hand position (x, y, z). Using OpenNI this is straightforward, as you have one node for the hand.
Execute the gesture and collect ALL the positions of the hand during the gesture.
With the list of positions, train an HMM. For example, you can use Matlab, C, or Python.
For your own gestures, you can then test the model and detect the gestures.
Here you can find a nice tutorial and code (in Matlab). The code (test.m) is pretty easy to follow. Here is a snippet:
% Load the collected data (one set of xyz tracks per gesture)
training = get_xyz_data('data/train', train_gesture);
testing = get_xyz_data('data/test', test_gesture);
% Quantize the points into N clusters so the HMM can work on
% discrete symbols (D is the point dimensionality, e.g. 3 for x,y,z)
[centroids N] = get_point_centroids(training, N, D);
ATrainBinned = get_point_clusters(training, centroids, D);
ATestBinned = get_point_clusters(testing, centroids, D);
% Set priors (M hidden states, LR = left-right transition structure):
pP = prior_transition_matrix(M, LR);
% Train the model for up to 50 cycles:
cyc = 50;
[E, P, Pi, LL] = dhmm_numeric(ATrainBinned, pP, [1:N]', M, cyc, .00001);
Dealing with fingers is pretty much the same, but instead of detecting the hand you need to detect the fingers. As the Kinect doesn't provide finger joints, you need specific code to detect them (using segmentation or contour tracking). Some examples using OpenCV can be found here and here, but the most promising one is the ROS library that has a finger node (see the example here).
If you only need the detection of a fist/grab state, you should give Microsoft a chance. Microsoft.Kinect.Toolkit.Interaction contains methods and events that detect the grip/grip-release state of a hand. Take a look at the HandEventType of InteractionHandPointer. That works quite well for fist/grab detection, but it does not detect or report the position of individual fingers.
The next Kinect (Kinect One) detects 3 joints per hand (wrist, hand, thumb) and has 3 hand-based gestures: open, closed (grip/fist) and lasso (pointer). If that is enough for you, you should consider the Microsoft libraries.
1) If there are a lot of false detections, you could try to extend the negative sample set of the classifier and train it again. The extended negative image set should contain images where the fist was falsely detected. Maybe this will help to create a better classifier.
I've had quite a bit of success with the middleware library provided by http://www.threegear.com/. It provides several gestures (including grabbing, pinching and pointing) and 6-DOF hand tracking.
You might be interested in this paper & open-source code:
Robust Articulated-ICP for Real-Time Hand Tracking
Code: https://github.com/OpenGP/htrack
Screenshot: http://lgg.epfl.ch/img/codedata/htrack_icp.png
YouTube Video: https://youtu.be/rm3YnClSmIQ
Paper PDF: http://infoscience.epfl.ch/record/206951/files/htrack.pdf

Extract shape outline points from PNG image in iOS

I need to implement a contour detection function in my iOS game, which I'm writing using cocos2d 2.1.
For example, the user will provide me an image (a transparent PNG).
I need to detect the shape's polygon points and create a Box2D body from them, so that I can put this image into my Box2D scene.
I expect as output an NSMutableArray containing an array of points for each polygon detected in the image.
PhysicsEditor does the same thing. VertexHelper also produces a result, but it shows the wrong kind of detection, treating everything as one polygon. SpriteHelper works too, but without detecting the other parts of the image.
My question is: how can I do this? Which way is better and faster?
I was looking for a solution on Google, but I couldn't find any that fits my needs.
I guess you are looking for a Sobel edge detection filter. Check out the GPUImage framework created by Brad Larson. It has an Objective-C implementation of the Sobel edge detection filter, which might be useful for you.
I finally did this using Chipmunk's autogeometry feature. It works like a charm.
Using https://github.com/slembcke/ConcaveSprite/blob/master/ConcaveSprite/ConcaveSprite.m saved me a lot of time.
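For anyone prototyping this offline, the same idea can be sketched with OpenCV: trace the alpha channel's contours and simplify each one into a polygon, which is roughly what the autogeometry/PhysicsEditor tools do internally. A minimal Python sketch (the filename and the epsilon tolerance are assumptions):

import cv2
import numpy as np

# Load the transparent PNG with its alpha channel intact.
img = cv2.imread("sprite.png", cv2.IMREAD_UNCHANGED)
alpha = img[:, :, 3]

# Binarize the alpha channel: anything non-transparent is shape.
_, mask = cv2.threshold(alpha, 0, 255, cv2.THRESH_BINARY)

# One external contour per disconnected blob in the image.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Simplify each contour into a polygon suitable for a physics body.
polygons = []
for c in contours:
    eps = 0.01 * cv2.arcLength(c, True)  # tolerance, illustrative
    poly = cv2.approxPolyDP(c, eps, True)
    polygons.append(poly[:, 0, :].tolist())

print(f"{len(polygons)} polygon(s) detected")
# Note: Box2D requires convex polygons with a limited vertex count,
# so concave results still need convex decomposition (which tools
# like Chipmunk autogeometry or PhysicsEditor perform internally).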

Object tracking in OpenCV

I have been using the Lucas-Kanade (LK) algorithm to detect corners and interest points for tracking.
However, I am stuck at the point where I need something like a rectangular box to follow the tracked object. All I have now is a lot of points showing my moving objects.
Are there any methods or suggestions for that? Also, any idea on adding a counter to the window so that objects moving in and out of the screen can be counted as well?
Thank you.
There are lots of options! Within OpenCV, I'd suggest using CamShift as a starting point, since it is relatively easy to use. CamShift uses mean shift to iteratively search for an object in consecutive frames.
Note that you need to seed the tracker with some kind of input. You could have the user draw a rectangle around the object, or use a detector to get the initial input. If you want to track faces, for example, OpenCV has a cascade classifier and training data for a face detector included.
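A minimal CamShift loop with OpenCV's Python bindings; the video path and the seed rectangle are assumptions, and in practice you would replace the hard-coded box with a user selection or a detector result. For the counting question, you could watch for the returned track window crossing the frame border:

import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")
ok, frame = cap.read()

# Seed the tracker: initial bounding box (x, y, w, h) around the object,
# e.g. drawn by the user or produced by a detector.
track_window = (200, 150, 80, 80)
x, y, w, h = track_window

# Build a hue histogram of the seeded region as the object model.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Stop after 10 mean-shift iterations or when the shift is < 1 px.
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift returns a rotated rect plus the updated search window.
    rot_rect, track_window = cv2.CamShift(backproj, track_window, term)
    pts = cv2.boxPoints(rot_rect).astype(int)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("CamShift", frame)
    if cv2.waitKey(30) == 27:  # Esc to quit
        break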

Dynamic (Moving) Gestures using OpenCV

I can detect hands or a colored marker using OpenCV, but I'm stuck at recognizing dynamic gestures (e.g. moving the hand to the right as a "move right" gesture). I want to recognize left, right, up, down, and circle (clockwise and anticlockwise) gestures.
Can you please suggest a way of achieving the gestures described above?
Have a look at the motempl.c sample from OpenCV. It allows you to track motion history gradients.
The primary functions you will be interested in are:
updateMotionHistory
calcMotionGradient
calcGlobalOrientation
segmentMotion*
* You may not want to segment things by motion since you already have an object segmentation algorithm...
To only track the object in which you are interested, simply preprocess the video with your object detection algorithms, and then apply motion history tracking to the detected object.
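A minimal sketch of that pipeline using the Python bindings (the motion-template functions live in the opencv-contrib cv2.motempl module; the thresholds and the 0.5 s history duration are illustrative):

import time
import cv2
import numpy as np

MHI_DURATION = 0.5  # seconds of motion history to keep, illustrative

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
h, w = prev.shape
mhi = np.zeros((h, w), np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Silhouette of what moved since the last frame.
    diff = cv2.absdiff(gray, prev)
    _, silhouette = cv2.threshold(diff, 30, 1, cv2.THRESH_BINARY)
    prev = gray

    ts = time.time()
    # Requires opencv-contrib (the cv2.motempl module).
    cv2.motempl.updateMotionHistory(silhouette, mhi, ts, MHI_DURATION)
    mask, orient = cv2.motempl.calcMotionGradient(mhi, 0.25, 0.05)
    angle = cv2.motempl.calcGlobalOrientation(orient, mask, mhi,
                                              ts, MHI_DURATION)
    # A left/right/up/down gesture can be read off the dominant angle
    # once enough motion has accumulated.
    print(f"global motion direction: {angle:.0f} deg")
    if cv2.waitKey(30) == 27:
        break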
Hope that helps!
