GLKit / OpenGL ES 2.0 - 2D Object World Space and 2D Camera - iOS

I am new to OpenGL ES 2.0 and GLKit,
and would like to ask a question.
I tried to find a good example of a 2D camera but couldn't find any,
so I hope that you guys can help me :D
--
1)
Firstly, I have an object, and I store its position in a GLKVector2.
I would like to know how to draw it in world space.
2)
I have a "2D Camera" class, storing as a CGRect with its world position and size.
Its size may change depending on the "zoom" I want.
Is there any way to easily draw the objects from world space into this 2D Camera?
Is any optimization required too? such as not drawing objects outside this 2D Camera,
and clipping objects that have some parts outside of 2D Camera.
3)
If the objects are drawn into this 2D camera, how do I apply effects like clipping/scaling so that the result fits the device screen, and then draw it on the screen?
--
I have seen many things about the model, view, and projection matrices, but I don't get them. I have only done XNA/Android bitmap drawing, which draws everything onto a Bitmap and then resizes that Bitmap onto the screen.
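For context, here is a minimal sketch (not from the original question; the helper names are hypothetical) of how such a CGRect-style camera typically maps onto an orthographic projection using GLKit's plain-C math API:

#include <GLKit/GLKMath.h>  // GLKit math types and functions (iOS)

// Hypothetical helper: the camera rect (world position + size) becomes an
// orthographic projection matrix. Changing width/height is the "zoom".
GLKMatrix4 projectionForCamera(float x, float y, float width, float height) {
    // left, right, bottom, top, nearZ, farZ
    return GLKMatrix4MakeOrtho(x, x + width, y, y + height, -1.0f, 1.0f);
}

// Hypothetical helper: a model matrix placing an object at its GLKVector2
// world position; multiply projection * model and hand the result to the shader.
GLKMatrix4 modelForPosition(GLKVector2 p) {
    return GLKMatrix4MakeTranslation(p.x, p.y, 0.0f);
}

With this setup OpenGL clips everything outside the projection automatically, so culling is only an optimization: skipping the draw calls for objects whose bounds lie entirely outside the camera rect saves CPU and GPU work.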

Related

Delphi FillPath

So first, some background. I'm developing a really simple 2D game in Delphi 10.3 with FMX, which at the bottom of the screen draws a random terrain for each level of the game.
Anyway, the terrain is just some random numbers which are used in a TPathData, and then I use FillPath to draw this 2D "terrain".
I want to check when a "falling" object, a TRect for example, intersects with this terrain.
My idea was to get all the points of the TPathData: the Y position at every X position across the screen width. This way I could easily check when an object intersects with the terrain.
I just cannot figure out how to do it; maybe someone has another solution. I'd really appreciate any help. Thanks
This is not really a Delphi problem but a math problem.
You should have a mathematical representation of your terrain: the polygon representing the boundary of the terrain. Then you need to use the math to know whether a point is inside the polygon. See the point-in-polygon article on Wikipedia.
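For illustration, a minimal C++ sketch of the standard even-odd (ray casting) point-in-polygon test; the same logic ports directly to Delphi:

#include <vector>

struct Pt { double x, y; };

// Even-odd rule: cast a horizontal ray from p and count how many polygon
// edges it crosses. An odd count means p lies inside the polygon.
bool pointInPolygon(const Pt& p, const std::vector<Pt>& poly) {
    bool inside = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        if ((poly[i].y > p.y) != (poly[j].y > p.y) &&
            p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                      (poly[j].y - poly[i].y) + poly[i].x)
            inside = !inside;  // toggle on each crossing
    }
    return inside;
}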
You may also implement it purely graphically using a B/W bitmap at the same resolution as the screen. Fill the entire bitmap with black and draw the terrain at the bottom in white. Then, by checking the color of a pixel in that bitmap, you'll know whether it is outside the terrain (black) or inside the terrain (white).
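The lookup side of that bitmap approach could look like this (a plain C++ stand-in for the bitmap; in FMX you would read the pixel color of the rendered bitmap instead):

#include <vector>

// Hypothetical screen-sized mask: true (white) where the terrain was drawn,
// false (black) everywhere else. Filled once per level.
struct TerrainMask {
    int width = 0, height = 0;
    std::vector<bool> pixels;

    bool isTerrain(int x, int y) const {
        if (x < 0 || y < 0 || x >= width || y >= height) return false;
        return pixels[static_cast<size_t>(y) * width + x];
    }
};

Checking a falling TRect then reduces to testing the pixels along its bottom edge against the mask each frame.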

DirectX shadow mapping

I have successfully implemented shadow maps in my engine, but the problem is the shadow map doesn't cover the whole scene. If I make the shadow map cover a larger area, shadow quality drops. So I'm trying to make my shadows move with the camera, which I can do if I can calculate the 8 world-space positions of the camera frustum vertices.
So how can I calculate the world-space positions of the camera frustum vertices? I'm working with DirectX, if that changes how it's calculated.
Thanks.
The frustum (near plane, far plane, FOV) is defined in view space, so transforming it with the inverse view matrix will move it into world space. If you use DirectXMath (which I recommend), you can use its BoundingFrustum object. Example code might look something like this:
#include <DirectXCollision.h>  // DirectX::BoundingFrustum

// Build the frustum in view space from the camera's projection matrix.
DirectX::BoundingFrustum frustum;
DirectX::BoundingFrustum::CreateFromMatrix(frustum, camera.getProjectionMatrix());
// The inverse view matrix maps view space back into world space.
DirectX::XMMATRIX inverseViewMatrix = DirectX::XMMatrixInverse(nullptr, camera.getViewMatrix());
frustum.Transform(frustum, inverseViewMatrix);
BoundingFrustum docs: https://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.directxmath.boundingfrustum(v=vs.85).aspx
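To then read out the 8 world-space vertices the question asks for, the corners can be extracted from the transformed frustum (a sketch continuing the snippet above):

// After frustum.Transform(...), the corners are already in world space.
DirectX::XMFLOAT3 corners[DirectX::BoundingFrustum::CORNER_COUNT];  // CORNER_COUNT == 8
frustum.GetCorners(corners);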
If your frustum is big and you have to view a large area (e.g. outdoor scene) then even a large moving shadow map might not be enough (or it takes a huge amount of memory). One technique to solve this is called cascaded shadow maps (CSM). In CSM more precise shadow maps are rendered close to the camera and less precise shadows are rendered in the distance where the low quality is not visible anyway. Here is a CSM tutorial in case you are interested: https://msdn.microsoft.com/en-us/library/windows/desktop/ee416307(v=vs.85).aspx

Turn an entire SceneKit scene into an image suitable for a texture

I've written a little app using CoreMotion, AV and SceneKit to make a simple panorama. When you take a picture, it maps that onto a SK rectangle and places it in front of whatever CM direction the camera is facing. This is working fine, but...
I would like the user to be able to click a "done" button and turn the entire scene into a single image. I could then map that onto a sphere for future viewing rather than re-creating the entire set of objects. I don't need to stitch or anything like that, I want the individual images to remain separate rectangles, like photos glued to the inside of a ball.
I know about snapshot and tried using that with a really wide FOV, but that results in a fisheye view that does not map back properly (unless I'm doing it wrong). I assume there is some sort of transform I need to apply? Or perhaps there is an easier way to do this?
The key is "photos glued to the inside of a ball". You have a bunch of rectangles, suspended in space. Turning that into one image suitable for projection onto a sphere is a bit of work. You'll have to project each rectangle onto the sphere, and warp the image accordingly.
If you just want to reconstruct the scene for future viewing in SceneKit, use SCNScene's built-in serialization, write(to:options:delegate:progressHandler:) and SCNScene(named:).
To compute the mapping of images onto a sphere, you'll need some coordinate conversion. For each image, convert the coordinates of the corners into spherical coordinates, with the origin at your point of view. Change the radius of each corner's coordinate to the radius of your sphere, and you now have the projected corners' locations on the sphere.
It's tempting to repeat this process for each pixel in the input rectangular image. But that will leave empty pixels in the spherical output image. So you'll work in reverse. For each pixel in the spherical output image (within the 4 corner points), compute the ray (trivially done, in spherical coordinates) from POV to that point. Convert that ray back to Cartesian coordinates, compute its intersection with the rectangular image's plane, and sample at that point in your input image. You'll want to do some pixel weighting, since your output image and input image will have different pixel dimensions.
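As a rough illustration of the coordinate conversions involved (a sketch, not the actual app code; plain C++ with the POV at the origin):

#include <cmath>

struct Vec3 { double x, y, z; };
struct Sph  { double r, theta, phi; };  // radius, polar angle, azimuth

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Cartesian -> spherical, with the point of view at the origin.
Sph toSpherical(const Vec3& v) {
    double r = std::sqrt(dot(v, v));
    return { r, std::acos(v.z / r), std::atan2(v.y, v.x) };
}

// Spherical -> Cartesian; with r == 1 this gives the ray direction for a pixel.
Vec3 toCartesian(const Sph& s) {
    return { s.r * std::sin(s.theta) * std::cos(s.phi),
             s.r * std::sin(s.theta) * std::sin(s.phi),
             s.r * std::cos(s.theta) };
}

// Ray from the origin along dir, intersected with the plane through p0 with
// normal n: the hit point is t * dir. Caller should reject dot(dir, n) == 0.
double intersectPlane(const Vec3& dir, const Vec3& p0, const Vec3& n) {
    return dot(p0, n) / dot(dir, n);
}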

In OpenCV's solvePnP, what should I pass for objectPoints?

OpenCV docs for solvePnP
In an augmented reality app, I detect the image in the scene, so I know the imagePoints, but the object I'm looking for (objectPoints) is a virtual marker just stored in memory to search for in the scene, so I don't know where it is in space. The book I'm reading (Mastering OpenCV with Practical Computer Vision Projects) passes it as if the marker is a 1x1 matrix and it works fine. How? Doesn't solvePnP need to know the size of the object and its projection so we know how much scale is applied?
Assuming you're looking for a physical object, you should pass the 3D coordinates of the points on the model which are mapped (by projection) to the 2D points in the image. You can use any reference frame for those model points; solvePnP then returns the rotation and translation (rvec/tvec) that map that reference frame into camera coordinates, i.e. the pose of the object in camera space.
If you instead want the camera's position and orientation in the object's reference frame, apply the inverse of the transform you got from solvePnP.
For example, for a cube object of size 2x2x2, the visible corners may be something like: {-1,-1,-1},{1,-1,-1},{1,1,-1}.....
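As an illustration (placeholder values throughout, not the book's code), a 1x1 planar marker can be described by its four corners on the z = 0 plane of its own frame:

#include <opencv2/calib3d.hpp>
#include <vector>

int main() {
    // The marker's corners in its own reference frame (z = 0 plane).
    std::vector<cv::Point3f> objectPoints = {
        {0.f, 0.f, 0.f}, {1.f, 0.f, 0.f}, {1.f, 1.f, 0.f}, {0.f, 1.f, 0.f}
    };
    // The matching corners detected in the image (placeholder pixel values).
    std::vector<cv::Point2f> imagePoints = {
        {320.f, 240.f}, {420.f, 245.f}, {415.f, 345.f}, {318.f, 340.f}
    };
    // Intrinsics from camera calibration (placeholder focal length / center).
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        800, 0, 320,   0, 800, 240,   0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64F);  // assume no distortion

    cv::Mat rvec, tvec;  // map marker-frame points into camera coordinates
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
    return 0;
}

The marker's physical size only fixes the unit of the translation; a "1x1" marker simply means the pose comes back in marker-side-length units.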
You have to pass the 3D coordinates of the real-world object that you want to map with the image. The scaling and rotation values will depend on the coordinate system that you use.
This is not as difficult as it sounds. See this blog post on head pose estimation for more details and code.

Draw line between two points (x1,y1,z1) to (x2,y2,z2)

I know how to draw a line on a 2D surface, but I can't find a way to draw a line in space.
I have written a demo, and now I want to draw a line in space to finish it.
I have already made the 2D surface rotate in space using CATransform3D, but I still don't know how to draw a line in space.
Thanks a lot.
Normal drawing on iOS is 2D. Core Animation is "2.5D", where it can draw flat images with fake 3D perspective. It doesn't let you "draw in space."
If you want real 3D perspective drawing you should use OpenGL, SceneKit, Metal, or some other 3D API.
You're trying to draw a 3D image on a 2D surface, so you need some sort of mapping.
https://en.m.wikipedia.org/wiki/3D_projection
has some options for you. Orthographic projection is probably what you want, though.
The orthographic projection equations are:

b_x = s_x * a_x + c_x
b_y = s_y * a_y + c_y

where a is the 3D point, b is the projected 2D point, s is a scaling factor, and c is an offset.
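In code, that orthographic mapping just drops the depth axis and applies the scale and offset; a minimal sketch with hypothetical names:

struct Point3 { double x, y, z; };
struct Point2 { double x, y; };

// Orthographic projection: discard depth, then scale (s) and offset (c).
Point2 projectOrtho(const Point3& a, double sx, double sy, double cx, double cy) {
    return { sx * a.x + cx, sy * a.y + cy };
}

Project both endpoints this way, then draw an ordinary 2D line between the two results.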
