I have a depth map of an image, and I want to place a new object into this image. In other words, I need to scale the object using the depth map information before placement. Can you recommend any project, paper, or idea related to this?
Thanks.
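One simple starting point: under a pinhole camera model, an object's on-screen size is inversely proportional to its depth, so you can scale the inserted object by the ratio between a reference depth and the depth-map value at the placement point. A minimal OpenCV/NumPy sketch of that idea; the file names, reference depth, and placement point are hypothetical placeholders, and it assumes a single-channel, metric depth map aligned with the image:

```python
import cv2
import numpy as np

# Placeholder inputs: a depth map aligned with the background image,
# the object sprite to insert, and a reference depth at which the
# sprite's current pixel size is "correct".
depth_map = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)
obj = cv2.imread("object.png", cv2.IMREAD_UNCHANGED)

ref_depth = 2.0   # depth (metres) at which obj has its nominal size
x, y = 320, 240   # placement point in the background image

# Under a pinhole model, apparent size scales inversely with depth.
scale = ref_depth / depth_map[y, x]
new_size = (max(1, int(obj.shape[1] * scale)),
            max(1, int(obj.shape[0] * scale)))
obj_scaled = cv2.resize(obj, new_size, interpolation=cv2.INTER_AREA)
```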
I'm looking for a guide on how to take a 2D image (JPEG/PNG) and apply it to a 3D object template programmatically.
The specific use case I'm trying to replicate is taking a picture and applying it to a 3D picture frame, similar to what Cart Magician does (https://cartmagician.com/): you upload an image, it gets applied to a picture-frame object template that they provide, and the resulting object is rendered so it can be viewed with Google AR.
Could anyone help or point me in the right direction?
Thanks in advance!
This is the AR frame with the image; the image should be interchangeable.
You can create a ViewRenderable surrounded by the 3D model of a picture frame; the 2D image is placed on the ViewRenderable.
Assume I have multiple images of an object taken from different camera perspectives (Example). The object is either a real, physical object (a non-planar surface) or a fake object (a planar surface), such as a painting or picture of the object. (Example)
So, how can I determine if the object is planar or non-planar?
I'm using OpenCV. I don't need the code, just guidance in this area. Thank you.
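One classic test, in case a sketch still helps: match features between two views and fit a homography. A single homography can explain all correspondences only if the surface is planar (or the camera merely rotated in place between shots), so a high RANSAC inlier ratio suggests a planar object, while a low one suggests a genuinely 3D object (you can also compare against a fundamental-matrix fit). A rough OpenCV sketch of the idea, with the threshold values as assumptions:

```python
import cv2
import numpy as np

def planarity_score(img1, img2, ratio=0.75):
    # Match ORB features between the two views.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = [m for m, n in matcher.knnMatch(d1, d2, k=2)
               if m.distance < ratio * n.distance]
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # A single homography explains all correspondences only if the
    # scene is planar (or the camera purely rotated between shots).
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    return mask.sum() / len(matches)  # inlier ratio; near 1.0 => planar
```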
I have images of an object that has been recognized using a neural network. With OpenCV, I was able to find the object's coordinates in the image. How can I find its depth, i.e., its distance from the camera, assuming I use two lenses (a stereo pair) to help find the depth?
Your problem is commonly known as "camera calibration".
For instance, have a look here to get the basic idea:
Camera calibration With OpenCV
Good luck
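To make the pipeline concrete: once the stereo pair is calibrated and rectified, you compute a disparity map and convert disparity to depth via similar triangles, Z = f·B/d. A minimal OpenCV sketch, where the focal length, baseline, file names, and object location are placeholder values you would take from your own calibration and detector:

```python
import cv2
import numpy as np

# Placeholder calibration values: focal length in pixels and
# baseline in metres come from your own stereo calibration.
focal_px = 700.0
baseline_m = 0.06

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                               blockSize=5)
# compute() returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Depth via similar triangles (the intercept theorem): Z = f * B / d.
# Guard against zero/invalid disparities.
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]

# Depth at the detected object's bounding-box centre (cx, cy):
cx, cy = 320, 240
print("distance (m):", depth[cy, cx])
```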
I'm developing an application that uses the SceneKit API, and I've run into a problem: I can't apply a texture to a sphere while keeping the texture's pre-defined size. I can either stretch the texture over the whole surface (SceneKit's default behavior) or tile it. What I want to achieve is something like a billiard ball:
Let's say I have a .png image of a white circle with the number "13" at its center. I want to place it like the one in the picture. In general, I want it scaled to a fixed size, not stretched over the whole surface.
I use the material.diffuse.contents property of SCNGeometry to set the texture, and I found the contentsTransform property in the documentation, which can probably help, but I couldn't find an explanation of how to use it with a sphere.
Is this possible with pure SceneKit? Any help would be much appreciated.
You need pre-modelled geometry (a polygonal sphere in your case) and a UV-mapped texture for it, created in 3D modelling software (Autodesk Maya, for instance).
Watch this short video to see how to create a UV-mapped texture.
I need to reconstruct a depth map from an image sequence of a moving object taken by a single static camera.
As far as I understand, I can calculate the depth of a point found in two images from a stereo camera using the intercept theorem. Is there any way to calculate depth information using only a single camera and matching points across multiple images instead?
Any comments and alternative solutions are welcome. Thanks in advance for your help!
There are some algorithms that help you get depth from a single image. A list of them is available here: http://make3d.cs.cornell.edu/results_stateoftheart.html
These techniques use MRFs and assume that the scene is made up of a collection of planes.
A moving object on its own does not provide depth information (unless you already know the depth of some other moving object); however, a single camera moving around the scene (e.g., orbiting the object) can provide the parallax needed to extract depth.
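For the multi-view case, a minimal structure-from-motion sketch may help: given matched points between two frames and known camera intrinsics, you can recover the relative pose and triangulate 3D points. Note that depth is recovered only up to a global scale, and the camera must actually translate between the views; pure rotation in place gives no parallax. The intrinsics matrix K and the matched point arrays are assumed inputs:

```python
import cv2
import numpy as np

def triangulate_two_views(pts1, pts2, K):
    """Recover relative pose and per-point 3D position (up to scale)
    from matched points (Nx2 float32) in two views of a static scene,
    given the camera intrinsics matrix K."""
    # Estimate the essential matrix and the relative camera motion.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Projection matrices for the two views, then triangulate.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T  # homogeneous -> Euclidean
    return pts3d                       # depths are pts3d[:, 2]
```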