Box2D - ActionScript, Java, AndEngine - body position

I've gotten lost in tons of information about Box2D and its ports.
But I have a very simple question, for which I can't find any information anywhere.
How do I properly calculate a body's position? Let's say I have a sprite on screen at position (10, 20).
Why does every Box2D tutorial compute this differently?
For example:
- Processing (from processing.org) uses the functions coordPixelToWorld and coordWorldToPixel to convert between world and screen coordinates, which are slightly complicated functions;
- AndEngine has similar functions for converting between world and screen coordinates;
- ActionScript - here I don't understand why, but every tutorial takes the screen coordinates and divides them by a scale factor.
I ask this because all of the above have one thing in common: screen coordinates place (0, 0) in the top-left corner. Is Box2D written differently for every port?
I will be grateful for an explanation.
Update
I don't have big troubles with Box2D in ActionScript. My problem, in short, is: why, when I set a body's position to (0, 0), is it displayed in the top-left corner of the window? In every other Box2D port (Processing, JBox2D, the AndEngine Box2D extension), if I set a body's position to (0, 0) it is displayed in the center of the window. I know the rules about pixels per meter, etc.

The reason for the variation is that Box2D, in all of its forms, uses meters as its unit of measurement. This is because it is intended as a simulation of real-world objects.
Box2D does not need to have a visual representation at all. When you create a Box2D simulation, it is up to the developer what level of detail to show in the graphical render of it. So there will be a multiplier representing the conversion of meters to pixels. In the case of Box2DFlash, the default for this ratio is 30px:1m.
So the reason every tutorial has a conversion function is that Box2D uses meters while computer displays use pixels: the function converts meters to pixels and pixels to meters.
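A minimal sketch of that conversion (Python for illustration, assuming the 30px:1m Box2DFlash default mentioned above):

```python
# Minimal sketch of the pixels <-> meters conversion every Box2D port needs.
# PIXELS_PER_METER is the ratio; 30 px : 1 m is the Box2DFlash default.
PIXELS_PER_METER = 30.0

def pixels_to_meters(px):
    return px / PIXELS_PER_METER

def meters_to_pixels(m):
    return m * PIXELS_PER_METER

# A sprite at screen x = 90 px corresponds to a body at x = 3 m:
print(pixels_to_meters(90))   # 3.0
print(meters_to_pixels(3.0))  # 90.0
```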

In ActionScript the registration point of a DisplayObject is usually its top-left corner, but a Box2D body's position coincides with its barycentric centre (the actual middle of the shape, not the top-left corner). Also, Box2D units are not pixels, but you can easily convert between the two.
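As a hedged illustration of that anchor-point difference (Python, with a hypothetical 64x64 sprite and the 30:1 ratio), placing a top-left-anchored sprite over a centre-anchored body means shifting by half the sprite size:

```python
# Hypothetical sketch: placing a top-left-anchored sprite over a
# centre-anchored Box2D body, assuming 30 px per meter and a 64x64 sprite.
PIXELS_PER_METER = 30.0

def sprite_top_left(body_x_m, body_y_m, sprite_w_px, sprite_h_px):
    # Body position (meters) -> sprite centre (pixels)...
    cx = body_x_m * PIXELS_PER_METER
    cy = body_y_m * PIXELS_PER_METER
    # ...then shift by half the sprite size, because the DisplayObject
    # is positioned by its top-left corner, not its centre.
    return (cx - sprite_w_px / 2.0, cy - sprite_h_px / 2.0)

print(sprite_top_left(3.0, 2.0, 64, 64))  # (58.0, 28.0)
```

Note that Box2DFlash examples typically keep the same y-down orientation as the Flash stage, which is why a body at (0, 0) shows up at the top-left; ports whose demos centre the origin add an extra offset (and often a Y flip) inside their world-to-screen helpers.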
I recommend having a look at the Box2DFlash Manual, especially the part on Units and UserData. Ideally you would want to read the whole manual to get your head around Box2D better, which will give you a lot more control.
If you're not interested in the nitty-gritty, then give the World Construction Kit a try. It should make it easier to set up Box2D worlds.

Related

Delphi FillPath

So first, some background. I'm developing a really simple 2D game in Delphi 10.3, FMX, which draws a random terrain at the bottom of the screen for each level of the game.
The terrain is just some random numbers which are used in TPathData, and then I use FillPath to draw this 2D "terrain".
I want to check when a "falling" object, a TRect for example, intersects with this terrain.
My idea was to get all the points of the TPathData: the Y position at every X position across the screen width. This way I could easily check when an object intersects with the terrain.
I just cannot figure out how to do it; maybe someone has another solution. I'd really appreciate any help. Thanks.
This is not really a Delphi problem but a math problem.
You should have a mathematical representation of your terrain: the polygon representing its boundary. Then you need the math to know whether a point is inside the polygon. See the point-in-polygon article on Wikipedia.
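A minimal ray-casting sketch of that point-in-polygon test (Python for illustration; the terrain outline here is a made-up example):

```python
# Ray-casting point-in-polygon test (see the Wikipedia article mentioned
# above). 'polygon' is a list of (x, y) vertices of the terrain boundary;
# returns True if (x, y) lies inside it.
def point_in_polygon(x, y, polygon):
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast from (x, y) to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

terrain = [(0, 100), (50, 60), (120, 80), (200, 100)]  # hypothetical outline
print(point_in_polygon(60, 85, terrain))  # True
```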
You may also implement it purely graphically, using a black-and-white bitmap of the same resolution as the screen. Set the entire bitmap to black and draw the terrain at the bottom in white. Then, by checking the color of a pixel in that bitmap, you'll know whether it is outside the terrain (black) or inside it (white).
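A hedged sketch of that bitmap idea (Python, with a plain 2D list standing in for the black/white bitmap and a hypothetical height_at helper describing the terrain profile):

```python
# Rasterize the terrain once into a mask the size of the screen, then every
# intersection test becomes a single pixel lookup.
WIDTH, HEIGHT = 320, 240
mask = [[False] * WIDTH for _ in range(HEIGHT)]  # False = black = outside

def rasterize_terrain(height_at):
    # height_at(x) returns the terrain's top y at column x (assumed helper).
    for x in range(WIDTH):
        for y in range(int(height_at(x)), HEIGHT):
            mask[y][x] = True  # True = white = inside the terrain

def hits_terrain(x, y):
    return mask[y][x]

rasterize_terrain(lambda x: 200 - (x % 40))  # hypothetical terrain profile
print(hits_terrain(10, 220))  # True: below the terrain surface
```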

How can I convert GPS coordinate to pixel on the screen in OpenCV?

I'm writing an application in C++ which gets the camera pose using fiducial markers, takes a lat/lon coordinate in the real world as input, and outputs a video stream with an X marker showing the location of that coordinate on the screen.
When I move my head, the X stays in the same place spatially (because I know how to move it on the screen based on the camera pose, or even hide it when I look away).
My only problem is converting the coordinate from real life to a coordinate on the screen.
I know my own GPS coordinate and the target's GPS coordinate.
I also have the screen size (height/width).
How can I translate all of these to an x,y pixel on the screen in OpenCV?
In my opinion, your question isn't very clear.
OpenCV is an image processing library.
OpenCV alone won't do this conversion for you; you need a solution with your own algorithms. So I have some advice and an experiment to explain a few things.
You can simulate showing your real-life position on screen with any programming language. Imagine you want to develop measurement software that can measure a house-plan image on screen by drawing lines along the edges of all the walls (you know the lengths of some walls from an image like the one below).
If you want to measure the wall of the WC at the bottom, you must know how many pixels correspond to how many feet. So first you should draw a line from the start to the end of a known length and see how many pixels wide it is. For example, say 12'4" equals 9 pixels of width. From then on, you can calculate the length of the WC's bottom wall using a basic proportion. Of course, this is only a basic ratio.
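The proportion described above boils down to a one-line ratio; a sketch in Python, using the numbers from the example (the measured wall span is hypothetical):

```python
# If a known reference length of 12'4" spans 9 px on screen, any other
# measured pixel span converts to feet by the same ratio.
known_feet = 12 + 4 / 12.0   # 12'4"
known_pixels = 9.0
feet_per_pixel = known_feet / known_pixels

wall_pixels = 15.0           # hypothetical measured span
print(wall_pixels * feet_per_pixel)  # estimated wall length in feet
```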
I know this is not exactly what you need, but I hope this answer gives you some ideas.

How do I get more reliable Y position tracking for the Google Tango in Unity?

We have a Unity scene that uses area learning, which has been extremely reliable and consistent about the XZ position. However, we are noticing that sometimes the Tango delta camera's Y position will "jump up" very high in the scene. When we force the Tango to relocalize (by covering the sensors for a few seconds), the Y position remains very off. At other times, the Y position varies by 0.5 - 1.5 Unity units when we first start up our Unity app on the Tango while holding it in the exact same position in the exact same room using the same ADF file. Is there a way to get more reliable Y position tracking and/or correct for these jumps?
(All the XYZ coordinates in this context are in the Unity convention: x is right, y is up, z is forward.)
The Y position should work the same as the X and Z coordinates; it relocalizes to a height based on the ADF origin.
But note that the ADF's origin is where you started learning (recording) the ADF. Let's say you started the learning session holding the device normally; then the ADF's origin might be a little higher than ground level. When you construct a virtual world to relocalize in, you should take that height difference into consideration.
Another thing to check is making sure there's no offset or original location set on the DeltaPoseController prefab. DeltaPoseController takes its initial starting transformation as an offset and adds the pose on top of it. For example, if my DeltaPoseController's starting position is (0,1,0) and my pose from the device is (0,1,1), then the actual position of DeltaPoseController in Unity will be (0,2,1). This applies to both translation and rotation.
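A tiny sketch of that offset behavior (Python tuples standing in for Unity vectors; rotation is omitted for brevity):

```python
# DeltaPoseController treats its initial transform as an offset that the
# device pose is added onto.
def apply_pose(start_offset, device_pose):
    return tuple(o + p for o, p in zip(start_offset, device_pose))

start = (0, 1, 0)   # prefab placed 1 unit above the origin
pose = (0, 1, 1)    # pose reported by the device
print(apply_pose(start, pose))  # (0, 2, 1), as in the example above
```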
Another, more advanced (and preferred) way of defining the ground level is to use the depth sensor to find the ground height. The Unity Augmented Reality example shows how to detect a plane and place a marker on it. You can easily apply the same method to the ground plane: do a PlaneFinding pass and place the ground at the right height in Unity world space.

How to find the orientation of a picture with Delphi

I need to find the orientation of corn pictures (examples below); they are tilted at different angles to the right or left. I need to turn them upright, at a 90-degree angle with the horizontal (so that they look like a water drop).
Is there any way I can do it easily?
As a starting point, find the image moments (and Hu moments for complex shapes like a pear). From the link:
"Information about image orientation can be derived by first using the second order central moments to construct a covariance matrix."
I suspect that using an image processing library like OpenCV would give more reliable results in the common case.
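A hedged sketch of that moments approach (Python with OpenCV; "corn.png" is a hypothetical binarized input image):

```python
# Estimate the orientation angle of a blob from its second order central
# moments, as the quoted article describes.
import math
import cv2

img = cv2.imread("corn.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

m = cv2.moments(binary)
# Entries of the covariance matrix of the intensity distribution:
mu20 = m["mu20"] / m["m00"]
mu02 = m["mu02"] / m["m00"]
mu11 = m["mu11"] / m["m00"]
# Orientation of the major axis, in radians:
theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
print(math.degrees(theta))
```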
From the OP I got the impression you are a rookie at this, so I'll stick to something simple:
compute the bounding box of the image
Simple enough: go through all pixels and remember the min and max of the x,y coordinates of non-background pixels (a sketch of this step follows the list below).
compute critical dimensions
Just cast a few lines through the bounding box, computing the red points' positions. Select the start points; I chose 25%, 50%, and 75% of the height. First start from the left and stop at the first non-background pixel; then start from the right and stop at the first non-background pixel.
find the axis-aligned position
Start rotating the image by some step; remember/stop at the position where the red dots are symmetric, i.e. almost the same distance from the left as from the right. The bounding box also has maximal height and minimal width in the axis-aligned position, so you can exploit that instead ...
determine the position
You have four options. If I call the distances l0,l1,l2 and r0,r1,r2:
l means from the left, r means from the right
0 is the upper (bluish) line, 1 the middle, 2 the bottom
Then your wanted position is when (l0==r0)>=(l1==r1)>=(l2==r2) and the bounding box is bigger along the y axis than along the x axis, so rotate by 90 degrees until a match is found, or determine the orientation directly from the distances and rotate just once ...
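A sketch of the bounding-box step referenced above (Python for illustration; the image is assumed to be a 2D list of booleans, True for object pixels):

```python
# Scan all pixels and remember the min/max x,y of non-background pixels.
def bounding_box(image):
    min_x = min_y = max_x = max_y = None
    for y, row in enumerate(image):
        for x, px in enumerate(row):
            if px:
                min_x = x if min_x is None else min(min_x, x)
                max_x = x if max_x is None else max(max_x, x)
                min_y = y if min_y is None else min(min_y, y)
                max_y = y if max_y is None else max(max_y, y)
    return min_x, min_y, max_x, max_y

img = [[False] * 5 for _ in range(5)]
img[1][2] = img[3][1] = True   # two object pixels
print(bounding_box(img))        # (1, 1, 2, 3)
```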
[Notes]
You will need access to the image's pixels, so I strongly recommend using Graphics::TBitmap from the VCL. Look here: gfx in C, especially the section on GDI Bitmap; also this: finding horizon on high altitude photo might help a bit.
I use C++ and VCL, so you'll have to translate to Pascal, but the VCL stuff is the same...

What is this rotation behavior in XNA?

I am just starting out in XNA and have a question about rotation. When you multiply a vector by a rotation matrix in XNA, it goes counter-clockwise. This I understand.
However, let me give you an example of what I don't get. Let's say I load a random art asset into the pipeline. I then create a variable to increment every frame by 2 degrees, in radians, when the update method runs (testRot += 0.034906585f). The main source of my confusion is that the asset rotates clockwise in this screen space. This confuses me, since a rotation matrix should rotate a vector counter-clockwise.
One other thing: when I specify my position vector as well as my origin, I understand that I am rotating about the origin. Am I to assume that there are perpendicular axes passing through this asset's origin as well? If so, where does rotation start from? In other words, does rotation start from the top of the Y-axis or from the X-axis?
The XNA SpriteBatch works in Client Space, where "up" is Y-, not Y+ (as it is in Cartesian space, projection space, and what most people usually select for their world space). This makes the rotation appear clockwise (not counter-clockwise as it would in Cartesian space). The actual coordinates the rotation produces are the same.
Rotations are relative, so they don't really "start" from any specified position.
If you are using maths functions like sin or cos or atan2, then absolute angles always start from the X+ axis as zero radians, and the positive rotation direction rotates towards Y+.
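A small sketch (Python for illustration) of why the same counter-clockwise math looks clockwise once Y points down:

```python
# The standard CCW rotation matrix applied to a 2D point.
import math

def rotate(x, y, radians):
    c, s = math.cos(radians), math.sin(radians)
    return (x * c - y * s, x * s + y * c)

# Rotate the X+ unit vector by +45 degrees:
print(rotate(1, 0, math.pi / 4))  # ~(0.707, 0.707)
# In Cartesian space (Y up) this point is up and to the right: CCW.
# In client space (Y down) the same point is *below* the axis: it looks CW.
```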
The order of operations of SpriteBatch looks something like this:
Sprite starts as a quad with its top-left corner at (0,0), its size being the same as its texture size (or SourceRectangle).
Translate the sprite back by its origin (thus placing its origin at (0,0)).
Scale the sprite.
Rotate the sprite.
Translate the sprite by its position.
Apply the matrix from SpriteBatch.Begin.
This places the sprite in Client Space.
Finally a matrix is applied to each batch to transform that Client Space into the Projection Space used by the GPU. (Projection space is from (-1,-1) at the bottom left of the viewport, to (1,1) in the top right.)
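A hedged sketch of that order of operations (Python for illustration, plain tuples standing in for XNA's vector and matrix types; the SpriteBatch.Begin and projection steps are omitted):

```python
# Apply the SpriteBatch transform chain to one point of the sprite quad.
import math

def transform_point(p, origin, scale, rotation, position):
    x, y = p
    # 1-2. start at the top-left (0,0) and translate back by the origin
    x, y = x - origin[0], y - origin[1]
    # 3. scale
    x, y = x * scale, y * scale
    # 4. rotate
    c, s = math.cos(rotation), math.sin(rotation)
    x, y = x * c - y * s, x * s + y * c
    # 5. translate by position (the Begin matrix would be applied after this)
    return (x + position[0], y + position[1])

# Where the top-left corner of a 64x64 sprite ends up when rotated 90
# degrees about its centre and drawn at (100, 100):
print(transform_point((0, 0), (32, 32), 1.0, math.pi / 2, (100, 100)))
# ~(132.0, 68.0)
```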
Since you are new to XNA, allow me to introduce a library that will greatly help you out while you learn. It is called XNA Debug Terminal, an open source project that allows you to run arbitrary code at runtime, so you can check whether your variables have the values you expect. All of this happens in a terminal display on top of your game, without pausing the game. It can be downloaded at http://www.protohacks.net/xna_debug_terminal
It is free and very easy to set up, so you really have nothing to lose.
