How to rotate a UAV system - ROS

Hello, I want to rotate a UAV system while preserving the formation. Is there any simple way or math formula to solve this problem?
Here are my UAVs:

Consider the middle UAV as the centre and the other UAVs as points on the circumference.
Now, using the formulas in https://math.stackexchange.com/questions/1384994/rotate-a-point-on-a-circle-with-known-radius-and-position, you can find the relative shift for each UAV.
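A minimal sketch of that idea, independent of ROS: rotate every UAV's position about the centre UAV by a common angle, which preserves the formation. The positions and the 45-degree angle below are made up for illustration.

```python
import math

def rotate_about(point, centre, angle_rad):
    """Rotate a 2D point about a centre by angle_rad (counterclockwise)."""
    px, py = point[0] - centre[0], point[1] - centre[1]
    qx = px * math.cos(angle_rad) - py * math.sin(angle_rad)
    qy = px * math.sin(angle_rad) + py * math.cos(angle_rad)
    return (centre[0] + qx, centre[1] + qy)

centre = (0.0, 0.0)  # middle UAV
uavs = [(2.0, 0.0), (0.0, 2.0), (-2.0, 0.0), (0.0, -2.0)]  # hypothetical formation

rotated = [rotate_about(p, centre, math.radians(45)) for p in uavs]
# The relative shift to command each UAV is (new - old):
shifts = [(n[0] - o[0], n[1] - o[1]) for n, o in zip(rotated, uavs)]
print(shifts)
```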

Related

Isometric scene - how?

I programmed a simple 2D app/game. I just noticed that an isometric scene would look gorgeous instead of the 2D one. I did not use SpriteKit or the like; it's just a simple single-view app.
Now I have drawn some nice isometric vectors of, e.g., a petrol station, which I would love to use instead of the plain 2D images. Of course I could just use them in an image view, but my idea is to animate cars driving in straight lines, so that they seem to be 3D (isometric) but are just images moving along a given path. What is the best way to do this? Can I use my isometric image as a GameScene (never used one)?
Greetings!
If you've been doing without SpriteKit so far, I don't see any need to use it now. Keep them as UIImageViews and animate their positions normally; just make sure that the point you move them to makes sense, so as not to break the illusion.
You can open up your image in Preview, click and drag along its direction to make a rectangular selection along that direction, and use the width and height of that box to form a ratio, like 20:25, which simplifies to 4:5. That means for every 4 points it moves along the x-axis, it should move 5 points along the y-axis. Store this ratio as a CGPoint somewhere in your code; note that all your isometric images should give you this same ratio.
Then you could make an abstraction that moves an image view some distance along its direction, using that ratio. Say you want to move it 100 points along its direction, and say the ratio is 4:5. The ratio is a right triangle of width a = 4 and height b = 5. Use c = sqrt(a^2 + b^2) to calculate the hypotenuse of that triangle, then find the k such that c*k = 100. Multiplying a and b by that k gives your delta x and delta y. Apply those deltas to the view's current position and you have the final position to animate to, which will be 100 points along its direction.
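A small sketch of just that geometry (the UIImageView animation itself is left out); the 4:5 ratio and the 100-point move are the hypothetical values from above:

```python
import math

ratio = (4.0, 5.0)    # points moved along x per points moved along y
distance = 100.0      # how far to move along the isometric direction

c = math.hypot(ratio[0], ratio[1])    # hypotenuse of the ratio triangle
k = distance / c                      # scale factor so that c * k == distance
dx, dy = ratio[0] * k, ratio[1] * k   # deltas to apply to the current position
print(dx, dy)                         # ~62.47, ~78.09
```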

How can I convert GPS coordinate to pixel on the screen in OpenCV?

I'm writing an application in C++ which gets the camera pose using fiducial markers. It also takes as input a lat/lon coordinate in the real world, and as output it streams a video with an X marker showing the location of that coordinate on the screen.
When I move my head, the X stays in the same place spatially (because I know how to move it on the screen based on the camera pose, or even hide it when I look away).
My only problem is converting the coordinate from real life to a coordinate on the screen.
I know my own GPS coordinate and the target GPS coordinate.
I also have the screen size (height/width).
How can I translate all of these to an x,y pixel on the screen in OpenCV?
In my opinion, your question isn't very clear.
OpenCV is an image processing library; it won't do this conversion for you. You need a solution based on your own algorithms. So I have some advice and some experience to share.
You can show your real-life position on screen with any programming language. Imagine you want to develop measurement software that can measure a house-plan image on screen by drawing lines along the edges of the walls (you know the lengths of some walls from an image like the one below).
If you want to measure the wall of the WC at the bottom, you must know how many pixels correspond to how many feet. So first draw a line from start to end of a wall of known length and note its width in pixels: for example, suppose 12'4" corresponds to a width of 9 pixels. Then you can calculate the length of the WC wall at the bottom using a basic proportion. Of course, this only gives you a basic ratio.
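The proportion itself is one line of arithmetic; a sketch with the 12'4" / 9-pixel calibration from above and a made-up measurement for the unknown wall:

```python
known_length_ft = 12 + 4 / 12.0   # 12'4" expressed in feet
known_length_px = 9.0             # its measured width on the plan, in pixels

pixels_per_ft = known_length_px / known_length_ft

wc_wall_px = 5.0                          # hypothetical measured width
wc_wall_ft = wc_wall_px / pixels_per_ft   # ~6.85 ft
print(wc_wall_ft)
```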
I know this is not exactly what you need, but I hope this answer gives you some ideas.

Recreate the 3D outlines of a City street in iOS SceneKit with OSM XML data

What is the best strategy to recreate part of a street in iOS SceneKit using .osm XML data?
Please assume part of a street is provided in the OSM XML data, containing the necessary geopoints (latitude and longitude) denoting the Nodes that describe the paths/footprints of 6 buildings (i.e. ground-floor plans that line the side of a street).
Specifically, what's the best strategy to convert the latitude and longitude Nodes in order to place these building footprints/polygons on the ground plane in a SceneKit iOS scene (i.e. running through position 0,0,0)? Thank you.
Very roughly and briefly, based on my own experience with 3D map rendering:
1. Transform the XML data from lat/long to appropriate coordinates for a 2D map (that is, project it to a plane using a map projection, then apply a 2D affine transform to get it into screen pixel coordinates). Create a 2D map that's wider and taller than the actual screen, because of what's going to happen in step 2.
2. Using a 3D coordinate system with your map vertical (i.e., set all the Z coordinates to zero), rotate the map so that it reclines at an appropriately shallow angle, as if you're in an aeroplane looking down on it; the angle might be 30 degrees from horizontal. To rotate the map you'll need to create a 3D rotation matrix. The axis of rotation is the X axis: that is, the horizontal line that forms the bottom border of your 2D map. The rotation is exactly the same as what happens when you tilt your laptop screen away from you (see the sketch after this list).
3. Supply the new 3D coordinates to your rendering system. I haven't used SceneKit, but I had a quick look at the documentation and you can use any coordinate system you like, so pick one that is convenient for the process I have just described: units the size of a screen pixel at the viewing plane, with Y going upwards, X going right, and Z going away from the viewer.
One final caveat: if you want to add extrusions giving a rough approximation of the 3D building shapes (such data is available in OSM for some areas), note that this scheme requires the tops of buildings, and indeed anything above ground level, to have negative Z coordinates.
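A rough sketch of steps 1 and 2, under assumptions the answer doesn't fix: a simple equirectangular projection (adequate at street scale), a hypothetical reference coordinate, and a 30-degree recline about the X axis:

```python
import math

LAT0, LON0 = 52.5200, 13.4050   # hypothetical scene origin (lat, lon)
M_PER_DEG_LAT = 111_320.0       # metres per degree of latitude, roughly

def latlon_to_plane(lat, lon):
    """Step 1: project lat/lon to local planar metres around (LAT0, LON0)."""
    x = (lon - LON0) * M_PER_DEG_LAT * math.cos(math.radians(LAT0))
    y = (lat - LAT0) * M_PER_DEG_LAT
    return (x, y, 0.0)          # z = 0: the map starts out flat

def recline(point, angle_deg=30.0):
    """Step 2: rotate a point about the X axis to tilt the map back."""
    a = math.radians(angle_deg)
    x, y, z = point
    return (x,
            y * math.cos(a) - z * math.sin(a),
            y * math.sin(a) + z * math.cos(a))

corner = latlon_to_plane(52.5205, 13.4060)   # a building-footprint Node
print(recline(corner))
```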
Pretty simple. First, convert your CLLocationCoordinate2D to an MKMapPoint, which is essentially the same as a CGPoint. Second, scale the MKMapPoint down by some arbitrary number so it fits with how you want it in your scene graph, say by 200. Since SceneKit's coordinate system is centred at (0,0,0), you'll need to make sure your location is recentred accordingly. Then just create your SCNVector3s from the x/y of the MKMapPoint, and you will be locked to coordinates.
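The MapKit/SceneKit calls themselves are Swift, but the scale-and-recentre arithmetic is trivial; a language-agnostic sketch, where the divisor of 200 comes from the answer above and the sample map-point values are hypothetical:

```python
SCALE = 200.0   # arbitrary divisor from the answer above

def map_point_to_scene(map_x, map_y, origin_x, origin_y):
    """Scale a projected map point down and recentre it on the scene
    origin, since the SceneKit world is centred at (0, 0, 0)."""
    x = (map_x - origin_x) / SCALE
    z = (map_y - origin_y) / SCALE
    return (x, 0.0, z)   # y is up in SceneKit; footprints sit on the ground

print(map_point_to_scene(1_500.0, 2_300.0, 1_000.0, 2_000.0))
```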

Calculate the position of the camera using the two reference points

I am trying to find the position of the camera in (x,y,z). R1 and R2 are two reference points on the floor, and R1' and R2' are their images in the image plane. 2492 pixels is the width of the view. I was able to find the distance between R1 and R2 (0.98 m), and also between R1' and R2' (895.9 pixels). The camera's viewing angle is 69 degrees, and the camera is placed to the left side of the reference points.
I am trying to build a mathematical model. If anyone could help me with this, it would be much appreciated.
I think that the problem, as you described it, is too ambiguous to be solved.
Your problem is quite similar to the general PnP problem, whose objective is to estimate the relative pose between an object and a camera based on a set of N known 3D points and their projections in the image. From what I know, the P3P problem, i.e. the PnP problem for 3 points (in practice a 4th point is used to disambiguate among the up-to-four candidate solutions; see this website for a description), is the minimal case we can solve. FYI, OpenCV implements the solvePnP function, which does this for N >= 4.
Comparatively, in your problem you do know the viewing angle of the camera, but you only have the distance between two 3D points and the distance between their two projections. I do not think this can be solved as is. However, there may be a way if you look for hidden additional constraints (e.g. the camera centre and the two 3D points lying in a known common plane, etc.) or if you intentionally add some (e.g. use more points).
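For the N >= 4 route, here is a hedged sketch of solvePnP via OpenCV's Python bindings (the question is in C++, but the semantics are identical). The 3D layout, pixel measurements, and principal point are made up; only the 2492-pixel width and 69-degree viewing angle come from the question:

```python
import numpy as np
import cv2

# Four known 3D points on the floor, in metres (hypothetical layout).
object_points = np.array([
    [0.00, 0.00, 0.0],
    [0.98, 0.00, 0.0],
    [0.98, 0.50, 0.0],
    [0.00, 0.50, 0.0],
], dtype=np.float64)

# Their projections in the image, in pixels (hypothetical).
image_points = np.array([
    [780.0, 900.0],
    [1676.0, 910.0],
    [1650.0, 1400.0],
    [800.0, 1380.0],
], dtype=np.float64)

# Intrinsics: focal length derived from the 69-degree horizontal field
# of view and the 2492-pixel image width given in the question.
width = 2492.0
fx = (width / 2.0) / np.tan(np.radians(69.0) / 2.0)
camera_matrix = np.array([
    [fx, 0.0, width / 2.0],
    [0.0, fx, 900.0],       # cy is a guess without the image height
    [0.0, 0.0, 1.0],
])
dist_coeffs = np.zeros(5)   # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)

# Camera position in world coordinates: C = -R^T * t
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec
print(camera_position.ravel())  # (x, y, z) of the camera
```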

Convert world to object coordinates

The iPhone gyroscope reports rotation data relative to some reference attitude, and that reference doesn't change (unless you multiply it by another attitude). Let's say I face the wall using my iPhone camera and rotate 45 degrees left (roll += PI/4).
Now, if I lift the phone towards the ceiling, both yaw and pitch change, since the coordinate space is fixed (world coordinate space: it doesn't move or rotate with the phone). Is there a way to determine this angle (the one between the floor plane and the camera direction vector), given roll, yaw and pitch?
Edit: Instead of opening another question I'll try here. Luc's solution works, but how do I get the other two angles of rotation? I've read the info on the posted link, but it's been years since I studied linear algebra. This might be more of a math question than a programming question, actually.
I don't really code for iPhone so I'll trust you on the "real world coordinates" frame.
In that case, you want the dot product between the two z-axis vectors. That will give you the cosine of the angle you're looking for, which is nearly the answer itself. Since an angle between planes only really makes sense as a value between 0° and 90°, that cosine actually carries all the information you need.
There is no LaTeX formatting here, otherwise I'd go into a bit more detail, but read this page if you're interested. I'll just include the final result: the rotation matrix for your three rotations (yaw α, pitch β, roll γ) is R = Rz(α) · Ry(β) · Rx(γ).
Now the z-axis vector of the horizontal plane is (0,0,1) (read this as a column vector); rotated with this matrix, it simply becomes the matrix's third column: (cos α sin β cos γ + sin α sin γ, sin α sin β cos γ − cos α sin γ, cos β cos γ).
So the dot product between that third column and our (0,0,1) vector gives cos(β)cos(γ), which is cos(pitch)*cos(roll).
In conclusion, the angle between your planes is arccos(cos(pitch)*cos(roll)). This value tells you how much your iPhone is inclined, but not in which direction. You can work the direction out from the components of that same vector (the rightmost column of the matrix).
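A minimal check of the final formula, with made-up pitch and roll values in radians:

```python
import math

pitch = math.radians(30.0)   # hypothetical pitch
roll = math.radians(10.0)    # hypothetical roll

# Angle between the floor plane and the phone's plane:
inclination = math.acos(math.cos(pitch) * math.cos(roll))
print(math.degrees(inclination))   # ~31.5 degrees
```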
