I want to track a ping pong ball during a game using OpenCV.
I'm not really sure how to tackle this problem. I tried detecting the ball in each frame by its orange color, via cv.inRange in both the RGB and HSV color spaces, but this did not work very well. Could some kind of preprocessing help with that?
Using MOG background subtraction, the ball can be seen quite well while it is moving (although there is still a lot of noise from people moving in the scene), so I guess that was not the best way either.
Any tips on how to build a ball tracking script?
Here is an image of how the scene looks in a "bad" frame (the ball is in the middle, above the net):
I'm happy to get any tips,
thanks in advance
One possibility is to just ask people not to walk behind the table, to try to improve the performance of background subtraction/MOG. You can then crop out the players, to focus only on the table area.
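If it helps, here's a minimal sketch of that setup using OpenCV's MOG2 subtractor on a cropped region of interest; the file name and ROI coordinates are placeholders you'd replace with your own:

```python
import cv2

cap = cv2.VideoCapture("match.mp4")  # placeholder: your video file or camera index
# MOG2 keeps a running per-pixel background model; disabling shadows reduces grey noise
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25, detectShadows=False)

# Hypothetical (x, y, w, h) crop covering just the table area - adjust to your camera
TABLE_ROI = (300, 200, 700, 350)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = TABLE_ROI
    table = frame[y:y + h, x:x + w]      # ignore the players outside the table area
    fg_mask = subtractor.apply(table)    # white where pixels differ from the background model
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)  # remove speckle noise
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) == 27:             # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```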
There are also algorithms to detect people - e.g. HOG-based methods. https://thedatafrog.com/en/articles/human-detection-video/ includes a tutorial.
You could also try blob detection after subtracting the background, and filter out any blobs that are too large, since you know how big the ping-pong ball is.
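A rough sketch of that filtering step, assuming you already have a binary foreground mask (e.g. from the MOG2 step above) and OpenCV 4; the area and circularity thresholds are guesses you'd tune to your camera and resolution:

```python
import cv2

def find_ball_candidates(fg_mask, min_area=20, max_area=400):
    """Return centroids of foreground blobs roughly the size of a ping-pong ball."""
    # OpenCV 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area <= area <= max_area):
            continue  # too small (noise) or too large (a player, a paddle)
        (cx, cy), radius = cv2.minEnclosingCircle(c)
        circularity = area / (3.14159 * radius * radius + 1e-6)
        if circularity > 0.6:  # the ball should be roughly circular
            candidates.append((int(cx), int(cy)))
    return candidates
```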
Another idea is to keep track of where you think the ball should be, based on previous measurements, and restrict your search to that neighbourhood.
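A common way to do that is a constant-velocity Kalman filter: predict the ball's position each frame, and correct the filter whenever you actually detect it. A minimal sketch with cv2.KalmanFilter (the noise values are placeholders to tune):

```python
import cv2
import numpy as np

# Constant-velocity model: state = [x, y, vx, vy], measurement = [x, y]
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2     # tune: how sharply the ball can change course
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1.0  # tune: how noisy your detections are

def track_step(detection):
    """Predict where the ball should be this frame; correct when a detection is available."""
    predicted = kf.predict()
    if detection is not None:
        kf.correct(np.array([[np.float32(detection[0])],
                             [np.float32(detection[1])]]))
    return float(predicted[0, 0]), float(predicted[1, 0])
```

You would then pick the blob candidate closest to the prediction, which also helps reject leftover noise from the players.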
I am still working on my sci-fi video game, which uses my own custom game engine. Now I want to implement the combat system in the game and in the engine. While nearly everything is clear to me, I wonder how to do proper laser beams like the ones known from Star Wars, Star Trek, Babylon 5, etc.
I did some online research, but I did not find any suitable article; I am pretty sure I searched with the wrong keywords/tags. Can you give me some hints on how to implement effects such as laser beams? I think it'd be enough to know the proper techniques or terms I need for online research...
A common way is to draw three (or more) intersecting transparent planes like this, if you excuse my crude drawing:
Each of them then bears the same laser texture that fades to black near the top and bottom edges:
If you add any subtle detail, remember to scale the texture coordinates appropriately based on the length of the beam and enable wrapping.
Finally, and most importantly, use a shader that shows only the planes facing the camera and fades out the ones seen at a glancing angle; this hides the fact that we're using intersecting planes and makes the beam look smooth and plausible. The blending should be additive. You should also add some extra effects at the ends of the beam, again to hide the planes.
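The exact shader depends on your engine, but the heart of the glancing-angle fade is just the dot product between a plane's normal and the view direction. A small Python sketch of that factor (the function and parameter names are made up for illustration):

```python
def beam_plane_alpha(plane_normal, view_dir, falloff=2.0):
    """Fade a beam plane out as it turns edge-on to the camera.

    plane_normal and view_dir are unit-length (x, y, z) tuples;
    falloff controls how quickly near-edge-on planes disappear.
    """
    facing = abs(sum(n * v for n, v in zip(plane_normal, view_dir)))  # 1 = facing the camera, 0 = edge-on
    return facing ** falloff  # use this as the plane's alpha before additive blending
```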
Good afternoon, everybody!
I am creating a space game which has an effect similar to a black hole's gravitational pull. Wherever the spaceship is heading, the black hole attracts it, with all the force vectors summed together.
I know SKFieldNode has a method customFieldWithEvaluationBlock: that lets me calculate everything myself, but is there another way to do this?
I have tried almost every field type in SKFieldNode, but nothing helped. There are side effects that don't suit me (e.g. if I use an electric or magnetic field, the object is repelled from the black hole).
Moreover, moving with SKAction (e.g. for the spaceship) doesn't let me sum all the movement vectors together (I can't run - (void)moveTo:duration: actions simultaneously, even by grouping the actions).
What can you advise? Thank you in advance!
I'm working on a 2D game idea that needs directional lights. Basically, I want to add light sources that can be moved, with light rays that interact with the other bodies in the scene.
What I'm doing right now is a test where, using sensors (box2d) and ccDrawLine, I could achieve something similar to what I want. Basically, I cast a bunch of sensor rays from a certain point, detect collisions with raycasts, get the end points, and draw lines over the sensors.
I just want some opinions on whether this is a good way of doing it, or whether there are better options for building something like this.
I would also like to know how to render a light effect over this area (the sensor area) so the lighting looks better. Any ideas?
I can think of one cool-looking effect you could apply: put some particles inside the area where the light is visible, like sparks shining and falling very slowly, something like in this picture
Any approach to this problem will need collision detection anyway, so yours is pretty good provided you have a limited number of box2d objects.
Another approach I would consider when you have a lot of box2d objects is to render your screen to a texture using just solid colors (which should be fast) and perform ray tracing on that generated texture to find the pixels that will be affected by light. That way you are limited by the resolution, not by the number of box2d objects.
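A rough Python sketch of that idea, marching rays across a binary "occupancy" image built from the solid-color render; the names are illustrative, and in practice you'd do this in whatever language your engine uses:

```python
import math

def cast_ray(occupancy, origin, angle, max_dist=500.0, step=1.0):
    """March along one ray over a binary occupancy image; return where the light stops.

    occupancy: 2D numpy array, nonzero where solid geometry was rendered
    origin:    (x, y) light position in pixel coordinates
    angle:     ray direction in radians
    """
    h, w = occupancy.shape
    dx, dy = math.cos(angle), math.sin(angle)
    x, y = origin
    travelled = 0.0
    while travelled < max_dist:
        xi, yi = int(x), int(y)
        if not (0 <= xi < w and 0 <= yi < h):
            break                      # ray left the screen
        if occupancy[yi, xi]:
            return (xi, yi)            # hit solid geometry; the light is blocked here
        x += dx * step
        y += dy * step
        travelled += step
    return (int(x), int(y))            # unobstructed for the full distance (or off-screen)

# Fan rays out from the light and join the end points into the lit polygon:
# edge = [cast_ray(occupancy, (100, 120), 2 * math.pi * i / 180) for i in range(180)]
```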
There is some good source code here for dynamic and static lights in 2D space.
It's Ruby code, but it's easy to understand, so it shouldn't take long to port it to Obj-C/Cocos2D/box2D.
I really hope it will help you as it helped me.
Hm, interesting question. Cocos2D does provide some rather flexible masking effects. You could have a gradient mask that you lay over your objects, where its position depends on the position of the "light", thereby giving the effect that your objects were being coloured by the light.
I intend to make a simple Flash game where you're basically a ball rolling down a hill, and obstacles appear and have to be dodged as you progress. The ball would not actually be moving; it would stay in the center of the screen, so the hill would have to work as a kind of conveyor belt, with obstacles randomly appearing on it.
I know this is a vague question, but I can't really think of how I would implement this: how I would make the hill appear as though it were moving, and then, when an obstacle is put on it, how it would move with the hill to create the illusion of the ball moving consistently.
I'm just trying to think of how the mechanic would work, so any advice would be helpful.
Thanks
The mechanic is called parallax scrolling.
Parallax scrolling is a special scrolling technique in computer graphics, wherein background images move past the camera more slowly than foreground images, creating an illusion of depth in a 2D video game and adding to the immersion. The technique grew out of the multiplane camera technique used in traditional animation since the 1940s.
Now, this describes a side-scroller, but you use the same principle in a forward-movement game: objects further away from you move towards you more slowly than objects closer to you, and your focal point is the horizon.
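The bookkeeping is just an offset per layer, scaled by a depth factor. A small Python sketch (the layer names and factors are placeholders):

```python
# Each layer scrolls at a fraction of the ground speed; the ball itself never moves.
LAYERS = [
    {"name": "sky",       "factor": 0.1},   # far away: barely moves
    {"name": "hills",     "factor": 0.4},
    {"name": "ground",    "factor": 1.0},   # the hill/conveyor the ball rolls on
    {"name": "obstacles", "factor": 1.0},   # obstacles must match the ground exactly
]

def layer_offsets(distance_travelled, screen_width):
    """Horizontal draw offset for each layer, wrapped so tiled layers repeat seamlessly."""
    return {
        layer["name"]: -(distance_travelled * layer["factor"]) % screen_width
        for layer in LAYERS
    }

# An obstacle spawned at world position world_x is drawn at
#   screen_x = world_x - distance_travelled   (same factor as the ground),
# so it glides past the stationary ball together with the hill.
```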
You might get a better answer on the Game Development exchange site.
I have a problem detecting objects in images or video frames.
My task is to detect people (or other things) that enter the web camera's field of view, and then have my system raise an alarm.
The next step is to recognize what kind of object it is. For this phase I know I can use the Hough transform to detect lines, circles, even rectangles. But when a person comes into the camera's view, a person's outline is much more complex than a line, circle, or rectangle. How can I recognize that the object is a person and not, say, a car?
I need help with this.
thanks in advance
I suggest you look at the paper "Histograms of Oriented Gradients for Human Detection" by Dalal and Triggs. They used Histograms of Oriented Gradients to detect humans in images.
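OpenCV ships a pretrained people detector based on exactly that paper, so you can try it in a few lines; a minimal sketch (the image path is a placeholder):

```python
import cv2

# Default people detector trained with Dalal-Triggs HOG features
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.jpg")          # placeholder: one frame from your webcam
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8), scale=1.05)
for (x, y, w, h) in rects:               # one rectangle per detected person
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow("people", frame)
cv2.waitKey(0)
```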
I think one method is to use Bayesian analysis on your image and see how it matches against a database of known images. I believe some people run a wavelet transform first to emphasize the more noticeable features.