Turn an entire SceneKit scene into an image suitable for a texture - iOS

I've written a little app using CoreMotion, AV and SceneKit to make a simple panorama. When you take a picture, it maps that onto an SK rectangle and places it in front of whatever CM direction the camera is facing. This is working fine, but...
I would like the user to be able to click a "done" button and turn the entire scene into a single image. I could then map that onto a sphere for future viewing rather than re-creating the entire set of objects. I don't need to stitch or anything like that, I want the individual images to remain separate rectangles, like photos glued to the inside of a ball.
I know about snapshot and tried using that with a really wide FOV, but that results in a fisheye view that does not map back properly (unless I'm doing it wrong). I assume there is some sort of transform I need to apply? Or perhaps there is an easier way to do this?

The key is "photos glued to the inside of a ball". You have a bunch of rectangles, suspended in space. Turning that into one image suitable for projection onto a sphere is a bit of work. You'll have to project each rectangle onto the sphere, and warp the image accordingly.
If you just want to reconstruct the scene for future viewing in SceneKit, use SCNScene's built in serialization, write(to:​options:​delegate:​progress​Handler:​) and SCNScene(named:).
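A minimal sketch of that round trip (the file name, location, and error handling here are placeholders, not part of the question):

```swift
import Foundation
import SceneKit

// Write the assembled scene out as a .scn file. The file name and location
// are placeholders.
func saveScene(_ scene: SCNScene) throws -> URL {
    let documents = try FileManager.default.url(for: .documentDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: true)
    let url = documents.appendingPathComponent("panorama.scn")
    guard scene.write(to: url, options: nil, delegate: nil, progressHandler: nil) else {
        throw CocoaError(.fileWriteUnknown)
    }
    return url
}

// SCNScene(named:) loads from the app bundle; for a file saved to Documents
// like the one above, use the URL-based initializer instead.
func loadScene(from url: URL) throws -> SCNScene {
    try SCNScene(url: url, options: nil)
}
```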
To compute the mapping of images onto a sphere, you'll need some coordinate conversion. For each image, convert the coordinates of the corners into spherical coordinates, with the origin at your point of view. Change the radius of each corner's coordinate to the radius of your sphere, and you now have the projected corners' locations on the sphere.
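A minimal sketch of that conversion, assuming the point of view is at the origin (the type and function names are mine):

```swift
import Foundation
import simd

struct Spherical {
    var radius: Float   // distance from the point of view
    var theta: Float    // polar angle, measured from the +Y axis
    var phi: Float      // azimuth, measured in the X-Z plane
}

// Convert a corner given in Cartesian scene coordinates (POV at the origin).
// Assumes the corner is not at the POV itself.
func toSpherical(_ p: SIMD3<Float>) -> Spherical {
    let r = simd_length(p)
    return Spherical(radius: r,
                     theta: acos(p.y / r),
                     phi: atan2(p.z, p.x))
}

// Projecting the corner onto the sphere is just replacing the radius.
func projected(_ s: Spherical, ontoSphereOfRadius radius: Float) -> Spherical {
    Spherical(radius: radius, theta: s.theta, phi: s.phi)
}
```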
It's tempting to repeat this process for each pixel in the input rectangular image. But that will leave empty pixels in the spherical output image. So you'll work in reverse. For each pixel in the spherical output image (within the 4 corner points), compute the ray (trivially done, in spherical coordinates) from POV to that point. Convert that ray back to Cartesian coordinates, compute its intersection with the rectangular image's plane, and sample at that point in your input image. You'll want to do some pixel weighting, since your output image and input image will have different pixel dimensions.
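A sketch of that inner loop, using the same spherical convention as above and assuming each source rectangle is described by one corner plus its two edge vectors (all names here are mine):

```swift
import Foundation
import simd

// Direction of the ray through a given output pixel, from its spherical angles.
func rayDirection(theta: Float, phi: Float) -> SIMD3<Float> {
    SIMD3(sin(theta) * cos(phi),
          cos(theta),
          sin(theta) * sin(phi))
}

// Intersect a ray from the POV (at the origin) with the plane of one source
// rectangle and return (u, v) in [0, 1] if the hit lies inside the rectangle.
// `corner` is one corner of the rectangle, `uEdge` and `vEdge` its two edges.
func sampleCoordinates(rayDir d: SIMD3<Float>,
                       corner: SIMD3<Float>,
                       uEdge: SIMD3<Float>,
                       vEdge: SIMD3<Float>) -> SIMD2<Float>? {
    let normal = simd_normalize(simd_cross(uEdge, vEdge))
    let denom = simd_dot(d, normal)
    guard abs(denom) > 1e-6 else { return nil }          // ray parallel to plane
    let t = simd_dot(corner, normal) / denom
    guard t > 0 else { return nil }                       // plane is behind the POV
    let hit = t * d
    let local = hit - corner
    let u = simd_dot(local, uEdge) / simd_length_squared(uEdge)
    let v = simd_dot(local, vEdge) / simd_length_squared(vEdge)
    guard (0...1).contains(u), (0...1).contains(v) else { return nil }
    return SIMD2(u, v)                                    // sample the input image here
}
```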

Related

Easiest/most robust shape to detect with OpenCV for Intersection over Union of two objects

I am trying to measure the precision of my marker tracking algorithm via post-processing a video.
My algorithm is: find a printed planar marker in a video stream and place a virtual marker at that position. I am working with AR.
Here are two frames of such a video:
Virtual Marker on top of detected marker
Virtual Marker with offset to actual marker
I want to calculate the Intersection over Union / Jaccard Index of the actual marker and virtual marker. For the first picture it would give me ~98% and the second ~1/5th %. This will give me the quality of my algorithm, how precise and well it works.
I want to get the position and rotation of both markers in each frame with OpenCV and calculate the Jaccard Index. As you can see though, if I directly place a virtual marker on top of the paper marker, I will make it difficult for myself (with OpenCV) to detect them.
My idea is to not place a white marker on top of the actual marker, but place an easily detectable "thing" with a specific color or shape with an offset to the marker, let's say 10cm to the right maybe. Then I subtract the offset. So now, at the best case scenario, the position and rotation of the actual marker and the "thing" with the offset subtracted will be the same.
But what should I use as the easily detectable "thing"? I don't have enough experience with OpenCV to know what (colored?) shape I should use. The augmentation can go in front, behind, left, right... of the actual marker anytime during the video and it should do two things:
Not hinder the detection of the actual marker, like currently shown in the pictures
Be easily detectable itself
Help would be much appreciated!
Assuming you have enough white background around the visual marker:
You could use colored circles, for example in red, green, blue and black.
Use OpenCV blob detection [1] to detect all blobs and filter for circular ones.
Look up the average color values of the detected blobs and filter for the colors of the circles.
Alternatively you could filter the whole image for each color and do blob detection on the filtered images. But this is slower.
Find the centroid (≈ center point) of each blob using moments of the blob contours; see [2], "Center of multiple blobs in an Image".
Now you have the four pixel positions of your circles. If you know the world coordinates of your light projected circles you can use solvePnP to get a pose from this.
Knowing the correct world coordinates is tricky in your case because you project the circle with light on a surface. This involves some 3D geometry. You need to know the transformation from camera coordinate system to pattern projector coordinate system and the projection parameters of your projector.
I guess you send the projected pattern as an image to the projector. I think you can then model the projector as a camera with a certain camera matrix (basically field of view & center point). Naturally you know the pixel coordinates of the projected circles. From this you can compute rays in 3D space (in the projector coordinate system); as a starting point see [3]. Intersecting [4] them with the correct surface plane (in the projector coordinate system) gives you the 3D coordinates of the projected circle pattern in the projector coordinate system. Transform these to the camera coordinate system using your known transformation. Now use OpenCV solvePnP to determine the pose of the projected light marker.
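A rough geometric sketch of those two steps, assuming a simple pinhole model for the projector and a known rigid transform between the frames; every name and parameter below is a stand-in, and the actual solvePnP call stays in OpenCV:

```swift
import simd

// Sketch only: fx, fy, cx, cy are the assumed projector "camera" intrinsics,
// and cameraFromProjector is the assumed rigid transform between the frames.
struct ProjectorModel {
    var fx: Float, fy: Float, cx: Float, cy: Float
    var cameraFromProjector: simd_float4x4
}

// Back-project a projected-pattern pixel into a ray in the projector frame.
func projectorRay(_ model: ProjectorModel, pixel: SIMD2<Float>) -> SIMD3<Float> {
    simd_normalize(SIMD3((pixel.x - model.cx) / model.fx,
                         (pixel.y - model.cy) / model.fy,
                         1))
}

// Intersect that ray with the surface plane (given in the projector frame as
// n·x = d), then move the 3D point into the camera frame for solvePnP.
func circleCenterInCameraFrame(_ model: ProjectorModel,
                               pixel: SIMD2<Float>,
                               planeNormal n: SIMD3<Float>,
                               planeOffset d: Float) -> SIMD3<Float>? {
    let ray = projectorRay(model, pixel: pixel)
    let denom = simd_dot(n, ray)
    guard abs(denom) > 1e-6 else { return nil }           // ray parallel to plane
    let pProjector = (d / denom) * ray
    let pCamera = model.cameraFromProjector * SIMD4(pProjector, 1)
    return SIMD3(pCamera.x, pCamera.y, pCamera.z)
}
```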
How to get surface plane?
If your setup is static, you could run the visual marker detection on all recorded images and use the mean or median of the marker poses as the surface plane. Not sure what this implies for your evaluation, though.
[1] https://www.learnopencv.com/blob-detection-using-opencv-python-c/
[2] https://www.learnopencv.com/find-center-of-blob-centroid-using-opencv-cpp-python/
[3] https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
[4] https://www.cs.princeton.edu/courses/archive/fall00/cs426/lectures/raycast/sld017.htm

Recreate the 3D outlines of a City street in iOS SceneKit with OSM XML data

What is the best strategy to recreate part of a street in iOS SceneKit using .osm XML data?
Please assume part of a street is offered in the OSM XML data and contains the necessary geopoints with latitude and longitude denoting the Nodes to describe the paths/footprints of 6 buildings (i.e. ground floor plans that line the side of a street).
Specifically, what's the best strategy to convert latitude and longitude Nodes in order to locate these building footprints/polygons on the ground floor in a scene within SceneKit iOS? (i.e. running through position 0,0,0)? Thank you.
Very roughly and briefly, based on my own experience with 3D map rendering:
Transform the XML data from lat/long to appropriate coordinates for a 2D map (that is, project it to a plane using a map projection, then apply a 2D affine transform to get it into screen pixel coordinates). Create a 2D map that's wider and taller than the actual screen, because of what's going to happen in step 2:
Using a 3D coordinate system with your map vertical (i.e., set all the Z coordinates to zero), rotate the map so that it reclines at an appropriate shallow angle, as if you're in an aeroplane looking down on it; the angle might be 30 degrees from horizontal. To rotate the map you'll need to create a 3D rotation matrix. The axis of rotation will be the X axis: that is, the horizontal line that is the bottom border of your 2D map. The rotation is exactly the same as what happens when you rotate your laptop screen away from you.
Supply the new 3D coordinates to your rendering system. I haven't used SceneKit but I had a quick look at the documentation and you can use any coordinate system you like, so you will be able to use one that is convenient for the process I have just described: something that uses units the size of a screen pixel at the viewing plane, with Y going upwards, X going right, and Z going away from the viewer.
One final caveat: if you want to add extrusions giving a rough approximation of the 3D building shapes (such data is available in OSM for some areas) note that my scheme requires the tops of buildings, and indeed anything above ground level, to have negative Z coordinates.
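A rough sketch of the first two steps above, assuming a simple equirectangular projection around a reference coordinate and an arbitrary pixels-per-degree scale (all names are mine):

```swift
import Foundation
import simd

// Step 1: project lat/long onto a flat map plane (Z = 0). The reference
// coordinate and pixels-per-degree scale are arbitrary choices.
func mapPoint(latitude: Double, longitude: Double,
              refLatitude: Double, refLongitude: Double,
              pixelsPerDegree: Double) -> SIMD3<Float> {
    let x = (longitude - refLongitude) * cos(refLatitude * .pi / 180) * pixelsPerDegree
    let y = (latitude - refLatitude) * pixelsPerDegree
    return SIMD3(Float(x), Float(y), 0)
}

// Step 2: recline the map about the X axis (the bottom border of the 2D map),
// e.g. so it ends up roughly 30 degrees from horizontal.
func recline(_ p: SIMD3<Float>, byDegrees angle: Float) -> SIMD3<Float> {
    let r = angle * .pi / 180
    let rotationAboutX = simd_float3x3(rows: [
        SIMD3(1, 0, 0),
        SIMD3(0, cos(r), -sin(r)),
        SIMD3(0, sin(r),  cos(r))
    ])
    return rotationAboutX * p
}
```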
Pretty simple. First, convert your CLLocationCoordinate2D to an MKMapPoint, which is essentially the same as a CGPoint. Second, scale the MKMapPoint down by some arbitrary number so it fits how you want it in your scene graph, let's say by 200. Since SceneKit's coordinate system is centered at (0, 0, 0), you'll need to make sure your location is correct relative to that origin. Then just create your SCNVector3s from the x/y of the MKMapPoint, and your nodes will line up with the map coordinates.
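A rough sketch of that approach; the divisor of 200 and the reference coordinate are arbitrary:

```swift
import MapKit
import SceneKit

// Rough sketch: turn a geo coordinate into a SceneKit position on the X/Z
// ground plane. The scale and the reference point are arbitrary.
func scenePosition(for coordinate: CLLocationCoordinate2D,
                   relativeTo origin: CLLocationCoordinate2D,
                   scale: Double = 200) -> SCNVector3 {
    let point = MKMapPoint(coordinate)
    let originPoint = MKMapPoint(origin)
    let x = (point.x - originPoint.x) / scale
    let z = (point.y - originPoint.y) / scale
    // Keep footprints on the ground plane, running through (0, 0, 0).
    return SCNVector3(Float(x), 0, Float(z))
}
```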

Texture getting stretched across faces of a cuboid in Open Inventor

I am trying to write a little script to apply texture to rectangular cuboids. To accomplish this, I run through the scene graph, and wherever I find SoIndexedFaceSet nodes, I insert an SoTexture2 node before them. I put my image file in the SoTexture2 node. The problem I am facing is that the texture is applied correctly to 2 of the faces (say face 1 and face 2), in the Y-Z plane, but for the other 4 faces, it just stretches the texture at the boundaries of the two faces (1 and 2).
It looks something like this.
The front is how it should look, but as you can see, on the other two faces, it just extrapolates the corner values of the front face. Any ideas why this is happening and any way to avoid this?
Yep, assuming that you did not specify texture coordinates for your SoIndexedFaceSet, that is exactly the expected behavior.
If Open Inventor sees that you have applied a texture image to a geometry and did not specify texture coordinates, it will automatically compute some texture coordinates. Of course it's not possible to guess how you wanted the texture to be applied. So it computes the bounding box then computes texture coordinates that stretch the texture across the largest extent of the geometry (XY, YZ or XZ). If the geometry is a cuboid you can see the effect clearly as in your image. This behavior can be useful, especially as a quick approximation.
What you need to make this work the way you want, is to explicitly assign texture coordinates to the geometry such that the texture is mapped separately to each face. In Open Inventor you can actually still share the vertices between faces because you are allowed to specify different vertex indices and texture coordinate indices (of course this is only more convenient for the application because OpenGL doesn't support this and Open Inventor has to re-shuffle the data internally). If you applied the same texture to an SoCube node you would see that the texture is mapped separately to each face as expected. That's because SoCube defines texture coordinates for each face.

Calculating position of object so it matches screen pixels

I would like to move a 3D plane in 3D space and have the movement match the screen's pixels, so I can snap the plane to the edges of the screen.
I have played around with the focal length, camera position and camera scale, and I have managed to get a plane to match the screen pixels in terms of size; however, when moving the plane things are not correct anymore.
So basically my current status is that I feed the plane size with values assuming that I am working with standard 2D graphics. So if I set the plane size to 128x128, it is more or less viewed as a 2D square with that exact size.
I am not using and will not use an orthographic projection; I am using and will be using a perspective projection, because my application needs some perspective to it.
How can this be calculated?
Does anyone have any links to resources that I can read?
You need to grab the transformation matrices you use in the vertex shader and apply them to the point (or a few points) that represent the plane.
That will result in a set of points in the range -1,-1 to 1,1 (after dividing by w), which you will then need to map to the viewport.
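In other words, something like this generic sketch (the matrix and viewport size are whatever your renderer already uses):

```swift
import simd

// Project a world-space point with the same model-view-projection matrix the
// vertex shader uses, then map the normalized device coordinates to pixels.
func screenPosition(of worldPoint: SIMD3<Float>,
                    modelViewProjection mvp: simd_float4x4,
                    viewportSize: SIMD2<Float>) -> SIMD2<Float>? {
    let clip = mvp * SIMD4(worldPoint, 1)
    guard clip.w != 0 else { return nil }
    let ndc = SIMD3(clip.x, clip.y, clip.z) / clip.w      // now in [-1, 1] on each axis
    // Map x/y from [-1, 1] to [0, width] / [0, height]; y is flipped because
    // screen coordinates usually grow downwards.
    let x = (ndc.x + 1) * 0.5 * viewportSize.x
    let y = (1 - ndc.y) * 0.5 * viewportSize.y
    return SIMD2(x, y)
}
```

Inverting the same mapping at the plane's depth tells you how far to move the plane in world space for a one-pixel change on screen, which is what you need for snapping to the screen edges.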

Transform position of point from one perspective into another

I'm trying to convert the position of a point which was filmed with a freely moving camera (local space) into the position in an image of the same scene (global space). The position of the point is given in local space and I need to calculate it in global space. I have markers distributed all over the scene to have corresponding points in both global and local space to calculate the perspective transform.
I tried to calculate the perspective transform matrix by comparing the points of corresponding markers in global and local space with the help of JavaCV (cvGetPerspectiveTransform(localMarker, globalMarker, mmat)). Then I transform the position of the point in local space with the help of the perspective transform matrix (cvPerspectiveTransform(localFieldPoints, globalFieldPoints, mmat)).
I thought that would be enough to solve my problem, but it doesn't quite work well. I also noticed that when I calculate the perspective transform matrix from different markers in one specific image of the video, I get different perspective transform matrices. If I understood everything correctly, this shouldn't happen, because the perspective is always the same here, so I should always get the same perspective transform matrix, shouldn't I?
Because I'm quite new to all of this and this was my first attempt, I just wanted to know if the method I used is generally right, or whether it should be done differently. Maybe I just missed something?
EDIT:
Again, I have one image of the complete scene I look at and a video from a camera which moves freely in the scene. Now I take every image of the video and compare it with the image of the complete scene. (I used different cameras for making the image and the video, so the camera intrinsics actually aren't the same. Could that be the problem?)
Perspective Transform Screenshot.
On the right side I have the image of the scene, on the left one image of the video. The red circle in the left video image is the given point. The red square in the right image is the calculated point, obtained with the help of the perspective transform. As you can see, the calculated point isn't at the right position.
What I meant by "I get different perspective transform matrices" is that when I calculate a perspective transform matrix with the help of marker "0E3E", I get a different matrix than when using marker "0272".
