I am using Vision to detect rectangles, but it only seems to detect larger rectangles that are closer to square than elongated. Is there a way to detect longer rectangles?
You have to adjust the minimumAspectRatio of the VNDetectRectanglesRequest, which defaults to 0.5. The green rectangle seems to have a much lower aspect ratio than that.
Related
My aim is to draw a set of textures (128x128 pixels) as gapless tiles, without filtering artifacts, in XNA.
Currently, I use for example 25 x 15 fully opaque tiles (alpha is always 255) in x and y to create a background image in a game, or a similar number of semi-transparent tiles to create the game "terrain" (foreground). In both cases, the tiles are scaled and drawn using floating-point positions. As is well known, to avoid filtering artifacts (like small but visible gaps, or unwanted color overlaps at the tile borders), one has to do "edge padding": adding a fringe one pixel wide around each tile and filling it with the colors of the adjacent texture pixels. Discussions about this issue can be found for example here. An example image of this issue from our game can be found below.
However, I do not really understand how to do this - technically, and specifically in XNA.
(1) When adding a fringe of one pixel width on every side, my tiles would then be 130 x 130, and the overlapping fringes would create quite visible artifacts of their own.
(2) Alternatively, one could add the padding pixels but then not draw the full 130 x 130 pixel texture, only its 128 x 128 "center" (without the fringe), e.g. by choosing the source rectangle of this texture to be (1, 1, 128, 128). But aren't the padding pixels then simply ignored, or does the filtering hardware really use this information?
So basically, I wonder how this is done properly? :-)
Example image of filtering issue from game: Unwanted vertical gap in brown foreground tiles.
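To make option (2) concrete, here is a minimal sketch of the padding step in Python/numpy; XNA itself is C#, so this only illustrates the pixel layout, and the placeholder tile array and the resulting 130 x 130 size are assumptions, not the asker's actual code:

```python
import numpy as np

# Illustration of edge padding as described in option (2):
# replicate the border pixels of a 128x128 tile to get a 130x130 texture,
# then draw only the 128x128 center via the source rectangle (1, 1, 128, 128).

tile = np.zeros((128, 128, 4), dtype=np.uint8)   # stand-in for the original RGBA tile

# Pad one pixel on every side, repeating the edge colors (this is the "fringe").
padded = np.pad(tile, pad_width=((1, 1), (1, 1), (0, 0)), mode="edge")
assert padded.shape[:2] == (130, 130)

# Source rectangle to use when drawing, so the fringe itself is never rendered
# but is still available to the bilinear filter when it samples past the edge.
source_rect = (1, 1, 128, 128)   # (x, y, width, height)
```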
I have a question about achieving an effect like a lunar eclipse. The effect should look like the first seconds of this gif, so just like a black shadow moving over the circle. The ideal situation would be a function to which I can pass a percentage parameter to get that amount of shadow on the circle:
The problem I am facing is that my background is a gradient, so it's not possible to simply move a black circle over the moon to get the effect.
I tried something with CCClippingNode, but it does not look nice. Furthermore, the clipped edges were always a bit pixelated.
I thought about using something like a GLSL shader to achieve the effect, but I am not very familiar with GLSL and I can't find an example.
The effect is for an iPhone game. I use the cocos2d framework, version 3 (the current one).
Does somebody have an idea how to get this effect, or where I could start searching?
Thank you in advance
The physics behind this is simple: you change the light shining on the moon. So:
I would create a 1D gradient texture representing the lighting conditions
compute the lighting for each rendered pixel of the moon
You obviously have the 2D texture of the moon, so you now need to obtain the position of each pixel inside the 1D lighting texture. If the moon is fully visible, you are in sunlight; when partially eclipsed, you are in the penumbra region; and finally, during a total eclipse, you are in the umbra region. So just compute the position of the moon's midpoint, and for the rest use the relative position along the moon's motion direction.
So now just multiply the moon surface by the lighting texture and render the output (a small sketch of this step follows below).
Once that is working, you can add a curvature correction
At this point you get linearly cut moon phases, but the real phases are curved, because the lighting conditions also vary with the radial distance from the motion direction through the moon's center. To fix this you can
convert the lighting to a 2D texture,
or shift the texture coordinate by some curvature offset dependent on the radial distance.
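To make the multiply step concrete, here is a rough CPU-side sketch in Python/numpy rather than GLSL or cocos2d code; the phase parameter, the linear ramp standing in for the 1D gradient texture, and the motion direction are all assumptions, and the curvature correction is left out:

```python
import numpy as np

def eclipse_shade(moon_rgba, phase, direction=(1.0, 0.0), ramp=0.15):
    """Darken a square RGBA moon sprite by sampling a 1D lighting gradient
    along the eclipse motion direction. phase runs from 0 (full moon) to
    1 (total eclipse). The linear ramp stands in for the 1D gradient texture."""
    h, w = moon_rgba.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = direction
    # Normalized position of each pixel along the motion direction (roughly 0..1),
    # measured relative to the sprite center.
    t = ((xs - w / 2.0) * dx + (ys - h / 2.0) * dy) / float(max(w, h)) + 0.5
    # Sample the "1D lighting texture": 0 = shadow, 1 = sunlight,
    # with a soft ramp so the terminator is not a hard edge.
    light = np.clip((t - phase) / ramp + 1.0, 0.0, 1.0)
    out = moon_rgba.astype(np.float32)
    out[..., :3] *= light[..., None]      # multiply RGB, keep alpha untouched
    return out.astype(moon_rgba.dtype)
```

A fragment shader would do the same computation per pixel, with the gradient supplied as a small 1D texture instead of the clip expression.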
I need to detect squares in an image (for AR marker detection). The squares are rotated in 3D (meaning the projection I'm seeing isn't really a square but a 4-sided polygon). My problem is that the polygons I need to detect are moving, so they are subject to motion blur. The squares are black with a white margin, so there's high contrast.
My approach for detection was to detect edges (Canny for example), find contours, approximate polygons, and filter them by the number of sides and maybe some other geometric constraints.
What approach would you recommend for detecting edges in an image with motion blur?
Thanks
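For reference, the pipeline described in the question looks roughly like this in Python/OpenCV (the question does not name a language; the file name and all thresholds are illustrative and would need tuning, especially under blur):

```python
import cv2

# Sketch of the pipeline from the question: edges -> contours ->
# polygon approximation -> keep 4-sided, convex, sufficiently large shapes.
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input frame
blurred = cv2.GaussianBlur(gray, (5, 5), 0)              # mild denoise before Canny
edges = cv2.Canny(blurred, 50, 150)

# OpenCV >= 4 return signature (OpenCV 3 also returns the image first).
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

candidates = []
for cnt in contours:
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    if len(approx) == 4 and cv2.isContourConvex(approx) and cv2.contourArea(approx) > 500:
        candidates.append(approx.reshape(4, 2))
```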
I would use Harris corner detection to detect the corner points and then use the Hough transform to detect the lines. Using the positions of the corners and lines, it is possible to recover the polygons.
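A rough Python/OpenCV sketch of that combination; the answer gives no code, so the input file name and all parameter values here are illustrative assumptions:

```python
import cv2
import numpy as np

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)     # hypothetical input

# Harris corner response; threshold it to get candidate corner points.
response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())   # (row, col) pairs

# Line segments from the edge map; blurred squares still give strong edges
# thanks to the black-on-white contrast.
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=10)

# From here, group lines into quadrilaterals whose intersections lie near
# the detected corners.
```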
I want to find the shapes that look most like rectangles. The first image is the original image, with shapes that could pass for rectangles but are not exact ones. The green rectangles in the second image are what I want. So is there a way to do this with OpenCV? I've tried Hough lines but the results are not good.
The source image:
And what I want is to find the most rectangle-like shapes among these, like the rectangles in green.
What I want:
A very simple approach is, once you have a rectangular bounding box around your shape, to count the percentage of pixels inside the box which are white.
The higher the percentage of white pixels, the closer to a rectangle it is.
To get the bounding boxes you should take a look at either findContours from OpenCV or some blob-extraction algorithm; you will find plenty of questions regarding those.
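A minimal Python/OpenCV sketch of this heuristic, assuming white shapes on a black background, an OpenCV 4.x findContours signature, and an arbitrary 90% threshold:

```python
import cv2

# For each blob, measure what fraction of its bounding box is white.
img = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)      # hypothetical input
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    roi = binary[y:y + h, x:x + w]
    fill_ratio = cv2.countNonZero(roi) / float(w * h)
    if fill_ratio > 0.9:                                    # arbitrary threshold
        print("rectangle-like shape at", (x, y, w, h), "fill:", fill_ratio)
```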
Edit:
Maybe you should first get the Minimum bounding rectangles of the shapes and then do this kind of heuristic:
Shrink the rectangle dimensions until the white-pixel percentage inside the rectangle reaches some threshold defined by you (like 90% of white pixels inside the rectangle).
To get the Minimum bounding rectangle (the smallest rectangle which contains the whole shape), you might check this tutorial:
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html
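Under the same assumptions, the shrinking heuristic from the edit could look like the sketch below; the one-pixel-per-side step, the 90% target, and the use of the axis-aligned bounding box (rather than the rotated minAreaRect) are simplifications:

```python
import cv2

def shrink_to_fill(binary, x, y, w, h, target=0.9):
    """Shrink an axis-aligned box one pixel per side until the white-pixel
    ratio inside it reaches the target (or the box collapses)."""
    while w > 2 and h > 2:
        roi = binary[y:y + h, x:x + w]
        if cv2.countNonZero(roi) / float(w * h) >= target:
            break
        x, y, w, h = x + 1, y + 1, w - 2, h - 2
    return x, y, w, h
```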
One thing that might also help is comparing the minimum bounding rectangle with the maximum inner rectangle (the biggest rectangle you can fit inside the white shape). The smaller the difference between those rectangles' properties (width, height, area, center coordinates), the closer the shape is to a rectangle.
I am developing a 2D match-3 game in XNA. The core logic and animations are done. I use a RenderTarget2D to draw the entire board. The board has 8 rows and 8 columns of 64x64 textures (the tiles), which can be clicked and moved. To capture mouse intersections, I use SourceRectangles for each tile. Of course the SourceRectangles have the same size as the textures - 64x64.
I would like to scale down the entire board, using the RenderTarget2D, to support different monitor resolutions and aspect ratios. First I draw all tiles into the RenderTarget2D. Then I scale down the RenderTarget2D by a float coefficient. Finally I draw the RenderTarget2D to the screen. As a result the entire board is scaled down properly (all textures are scaled down from 64x64 to 50x50, for example), but the SourceRectangles are not scaled; they remain 64x64, and mouse intersections are not captured for the proper tiles.
Why doesn't scaling the RenderTarget2D handle this? How can I solve this problem?
You should approach this problem differently. Your source rectangles describe where to sample the textures and nothing more; don't try to use them as button rectangles, or you will get into trouble like this.
Instead, use a different Rectangle hitboxRectangle, which will be the same size as your source rectangle initially, but will scale with your game window, and check intersections against it.
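Since XNA is C#, the following is only a framework-agnostic sketch of the idea in Python; the class, the atlas-style source rectangles, and all names are made up for illustration, not XNA API:

```python
# The texture source rectangle never changes, while the hitbox is recomputed
# from the same scale factor used to draw the render target.

TILE = 64  # unscaled tile size in pixels

class Tile:
    def __init__(self, col, row):
        self.col, self.row = col, row
        self.source_rect = (col * TILE, row * TILE, TILE, TILE)  # stays 64x64

    def hitbox(self, scale, offset=(0, 0)):
        # Screen-space rectangle used only for mouse tests.
        ox, oy = offset
        size = TILE * scale
        return (ox + self.col * size, oy + self.row * size, size, size)

def contains(rect, point):
    x, y, w, h = rect
    px, py = point
    return x <= px < x + w and y <= py < y + h

# Example: at scale 50/64 a click at (120, 40) hits the tile in column 2, row 0.
board = [[Tile(c, r) for c in range(8)] for r in range(8)]
hit = [t for row in board for t in row if contains(t.hitbox(50 / 64), (120, 40))]
```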