I have a bunch of images of hand tools and I would like to reorient the pictures along their "intuitive" axis. So what I am trying to do is find the magenta line:
Eventually I was able to feed fitLine with the main contour of this hammer, but the middle line does not pass through the center of the hammer:
Here is the sample image for the scissors:
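For reference, the kind of pipeline I have in mind looks roughly like this (a minimal sketch with the OpenCV Java API; the file names and thresholds are placeholders, not my actual code):

    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    import java.util.ArrayList;
    import java.util.List;

    public class ToolAxis {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            Mat src = Imgcodecs.imread("hammer.png", Imgcodecs.IMREAD_GRAYSCALE);

            // Binarize (tool dark on light background here; invert if needed)
            Mat bin = new Mat();
            Imgproc.threshold(src, bin, 0, 255, Imgproc.THRESH_BINARY_INV | Imgproc.THRESH_OTSU);

            // Largest external contour = the tool
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(bin, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
            MatOfPoint main = contours.get(0);
            for (MatOfPoint c : contours) {
                if (Imgproc.contourArea(c) > Imgproc.contourArea(main)) main = c;
            }

            // Least-squares line through the contour points: output is (vx, vy, x0, y0)
            Mat line = new Mat();
            Imgproc.fitLine(new MatOfPoint2f(main.toArray()), line, Imgproc.DIST_L2, 0, 0.01, 0.01);
            double vx = line.get(0, 0)[0], vy = line.get(1, 0)[0];
            double angle = Math.toDegrees(Math.atan2(vy, vx));

            // Rotate the picture so that the fitted axis becomes horizontal
            // (flip the sign of 'angle' if the result comes out rotated the wrong way)
            Point center = new Point(src.cols() / 2.0, src.rows() / 2.0);
            Mat rot = Imgproc.getRotationMatrix2D(center, angle, 1.0);
            Mat upright = new Mat();
            Imgproc.warpAffine(src, upright, rot, src.size());
            Imgcodecs.imwrite("hammer_oriented.png", upright);
        }
    }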
I am trying to do some image processing for which I need to remove the facial features like eyes, nose, lips etc.
I have the following contour points
I am doing inpainting on this image::
Now I have to remove the facial features, namely the eyes, nose and lips, and have skin in place of them. The thing is that I don't have to do this just for this image but for any general image that the user uploads.
I am trying inpainting, but it creates some problems, especially around the lips, where the neighbouring pixels contain beard, and it gives a blackish output like this ::
I tried different contour points and shapes, but something or other causes a problem, mainly because of hair or beard. So how can I achieve what I am trying to do?
Code ::
    // Photo.inpaint(src, inpaintMask, dst, inpaintRadius, method)
    Photo.inpaint(finalImage, imageROIGRAY, imageROIDest, 8, Photo.INPAINT_NS);
I have also done dilation on the mask, but it doesn't work.
Showing the mask for one of the shapes formed using the contour points ::
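For completeness, this is roughly how the mask is built from the contour points before calling inpaint (a simplified sketch; the point coordinates and file names are placeholders, not my actual values):

    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.photo.Photo;

    import java.util.Arrays;

    public class RemoveFeature {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            Mat face = Imgcodecs.imread("face.jpg");

            // Contour points around one feature (e.g. the lips); placeholder values
            MatOfPoint lips = new MatOfPoint(
                    new Point(300, 420), new Point(340, 400), new Point(380, 400),
                    new Point(420, 420), new Point(380, 450), new Point(340, 450));

            // Single-channel mask: white inside the feature, black elsewhere
            Mat mask = Mat.zeros(face.size(), CvType.CV_8UC1);
            Imgproc.fillPoly(mask, Arrays.asList(lips), new Scalar(255));

            // Grow the mask a little so inpainting doesn't sample the feature's own border
            Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(15, 15));
            Imgproc.dilate(mask, mask, kernel);

            // Fill the masked region from the surrounding skin
            Mat result = new Mat();
            Photo.inpaint(face, mask, result, 8, Photo.INPAINT_NS);
            Imgcodecs.imwrite("face_no_lips.jpg", result);
        }
    }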
I've written a little app using CoreMotion, AV and SceneKit to make a simple panorama. When you take a picture, it maps that onto a SK rectangle and places it in front of whatever CM direction the camera is facing. This is working fine, but...
I would like the user to be able to click a "done" button and turn the entire scene into a single image. I could then map that onto a sphere for future viewing rather than re-creating the entire set of objects. I don't need to stitch or anything like that, I want the individual images to remain separate rectangles, like photos glued to the inside of a ball.
I know about snapshot and tried using that with a really wide FOV, but that results in a fisheye view that does not map back properly (unless I'm doing it wrong). I assume there is some sort of transform I need to apply? Or perhaps there is an easier way to do this?
The key is "photos glued to the inside of a ball". You have a bunch of rectangles, suspended in space. Turning that into one image suitable for projection onto a sphere is a bit of work. You'll have to project each rectangle onto the sphere, and warp the image accordingly.
If you just want to reconstruct the scene for future viewing in SceneKit, use SCNScene's built in serialization, write(to:options:delegate:progressHandler:) and SCNScene(named:).
To compute the mapping of images onto a sphere, you'll need some coordinate conversion. For each image, convert the coordinates of the corners into spherical coordinates, with the origin at your point of view. Change the radius of each corner's coordinate to the radius of your sphere, and you now have the projected corners' locations on the sphere.
It's tempting to repeat this process for each pixel in the input rectangular image. But that will leave empty pixels in the spherical output image. So you'll work in reverse. For each pixel in the spherical output image (within the 4 corner points), compute the ray (trivially done, in spherical coordinates) from POV to that point. Convert that ray back to Cartesian coordinates, compute its intersection with the rectangular image's plane, and sample at that point in your input image. You'll want to do some pixel weighting, since your output image and input image will have different pixel dimensions.
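A bare-bones sketch of those two conversions and the ray-plane intersection, in plain Java (the coordinate convention, names, and numbers are illustrative; the actual pixel sampling and weighting are left out):

    public class SphericalMapping {
        // Spherical coordinates relative to the point of view:
        // r = distance, theta = polar angle from +z, phi = azimuth around z.
        static double[] toSpherical(double x, double y, double z) {
            double r = Math.sqrt(x * x + y * y + z * z);
            return new double[] { r, Math.acos(z / r), Math.atan2(y, x) };
        }

        static double[] toCartesian(double r, double theta, double phi) {
            return new double[] {
                r * Math.sin(theta) * Math.cos(phi),
                r * Math.sin(theta) * Math.sin(phi),
                r * Math.cos(theta)
            };
        }

        // Intersect the ray t * d (from the POV at the origin) with the plane
        // containing point p0 with normal n; returns null if the ray is parallel.
        static double[] intersectPlane(double[] d, double[] p0, double[] n) {
            double denom = d[0] * n[0] + d[1] * n[1] + d[2] * n[2];
            if (Math.abs(denom) < 1e-9) return null;
            double t = (p0[0] * n[0] + p0[1] * n[1] + p0[2] * n[2]) / denom;
            return new double[] { t * d[0], t * d[1], t * d[2] };
        }

        public static void main(String[] args) {
            // Forward: project one corner of a photo rectangle onto a sphere of radius 10
            double[] corner = { 1.0, 0.5, 2.0 };
            double[] s = toSpherical(corner[0], corner[1], corner[2]);
            double[] onSphere = toCartesian(10.0, s[1], s[2]);
            System.out.printf("corner on sphere: (%.3f, %.3f, %.3f)%n",
                    onSphere[0], onSphere[1], onSphere[2]);

            // Reverse: a ray through an output pixel, intersected with the
            // rectangle's plane, gives the point to sample in the input image.
            double[] dir = toCartesian(1.0, s[1], s[2]);
            double[] planePoint = { 0.0, 0.0, 2.0 };     // a point on the photo's plane
            double[] planeNormal = { 0.0, 0.0, 1.0 };    // the photo faces the POV
            double[] hit = intersectPlane(dir, planePoint, planeNormal);
            System.out.printf("sample at: (%.3f, %.3f, %.3f)%n", hit[0], hit[1], hit[2]);
        }
    }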
Currently I am trying to read a square card using an OCR engine. But before processing the image, I want to make sure that while capturing the card the user captures only the card and not the surrounding noise. For that I looked into overlays and was able to create an overlay on the camera screen, but it is not that useful. So right now I am looking for some help on how to draw a contour / outline around a square card when the user sees it through the camera, as in this example.
For example:
Has anybody done this before?
First, use cvCanny to detect the edges in your image.
Then you can use the Standard Hough Line Transform to detect all the lines in the image.
Then you can calculate their intersections and find the 4 corner points: the leftmost and the rightmost at the top and at the bottom of the image.
You can ignore small lines near the left and right borders of the image by raising the threshold parameter.
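A minimal sketch of those steps with the OpenCV Java bindings (the Canny and Hough thresholds are illustrative and will need tuning for your camera frames):

    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    public class CardOutline {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            Mat frame = Imgcodecs.imread("camera_frame.jpg", Imgcodecs.IMREAD_GRAYSCALE);

            // 1. Edge map
            Mat edges = new Mat();
            Imgproc.Canny(frame, edges, 50, 150);

            // 2. Standard Hough transform: each row of 'lines' is (rho, theta)
            Mat lines = new Mat();
            Imgproc.HoughLines(edges, lines, 1, Math.PI / 180, 120);

            // 3. Intersect pairs of lines rho = x*cos(theta) + y*sin(theta)
            //    by solving the 2x2 linear system; keep the four intersections
            //    that form the card's corners.
            for (int i = 0; i < lines.rows(); i++) {
                for (int j = i + 1; j < lines.rows(); j++) {
                    double r1 = lines.get(i, 0)[0], t1 = lines.get(i, 0)[1];
                    double r2 = lines.get(j, 0)[0], t2 = lines.get(j, 0)[1];
                    double det = Math.cos(t1) * Math.sin(t2) - Math.sin(t1) * Math.cos(t2);
                    if (Math.abs(det) < 1e-6) continue;   // near-parallel lines
                    double x = (r1 * Math.sin(t2) - r2 * Math.sin(t1)) / det;
                    double y = (r2 * Math.cos(t1) - r1 * Math.cos(t2)) / det;
                    System.out.println("intersection: " + new Point(x, y));
                }
            }
        }
    }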
I am trying to crop a picture right along the contour. The object is detected using SURF features, and then I want to crop the image exactly as detected.
When cropping, some outside boundaries of other objects are included. I want to crop along the green line below. OpenCV has RotatedRect, but I am unsure whether it is good for cropping.
Is there a way to crop exactly along the green line?
I assume you got your example from http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html. What you can do is find the minimum axis-aligned bounding box around the green bounding box, crop it from the image, use the inverted homography matrix (H.inv()) to transform that sub-image into a new image (call cv::warpPerspective), and then crop your green bounding box (it should be axis-aligned in your new image).
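A sketch of that sequence with the OpenCV Java bindings, assuming you already have the homography H from findHomography and the four green corners in the scene (names and the output size are illustrative):

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    public class CropDetected {
        // scene:   the image the object was detected in
        // H:       homography from the object image to the scene (3x3, CV_64F)
        // corners: the four green corners in the scene, as integer points
        static Mat cropAlongGreenBox(Mat scene, Mat H, MatOfPoint corners) {
            // 1. Minimum axis-aligned bounding box around the green quadrilateral
            Rect box = Imgproc.boundingRect(corners);
            Mat sub = new Mat(scene, box);

            // Shift the homography so it maps into the cropped sub-image's coordinates
            Mat shift = Mat.eye(3, 3, CvType.CV_64F);
            shift.put(0, 2, -box.x);
            shift.put(1, 2, -box.y);
            Mat Hsub = new Mat();
            Core.gemm(shift, H, 1.0, new Mat(), 0.0, Hsub);

            // 2. Undo the perspective with the inverted homography: the green box
            //    becomes axis-aligned in 'rectified' and can be cropped directly.
            //    (box.size() is only an approximation of the object image's size.)
            Mat rectified = new Mat();
            Imgproc.warpPerspective(sub, rectified, Hsub.inv(), box.size());
            return rectified;
        }
    }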
You can get the equations of the lines from the end points of each side. Use these equations to check whether any given pixel lies within the green box or not, i.e. whether it lies between the left and right lines and between the top and bottom lines. Run this over the entire image and reset anything that doesn't lie within the box to black.
Not sure about in-built functionality to do this, but this simple methodology is guaranteed to work. For higher accuracy, you may want to consider sub-pixel checks.
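A sketch of that per-pixel check with the OpenCV Java bindings (the quadrilateral corners are assumed to be given in order; everything outside is set to black):

    import org.opencv.core.*;

    public class MaskOutsideBox {
        // Keep only pixels inside the convex quadrilateral 'quad' (corners in order),
        // resetting everything else to black, as described above.
        static void blackOutside(Mat image, Point[] quad) {
            for (int y = 0; y < image.rows(); y++) {
                for (int x = 0; x < image.cols(); x++) {
                    if (!inside(quad, x + 0.5, y + 0.5)) {
                        image.put(y, x, new double[image.channels()]); // all zeros
                    }
                }
            }
        }

        // A point is inside a convex polygon if it lies on the same side of every edge.
        static boolean inside(Point[] q, double px, double py) {
            int sign = 0;
            for (int i = 0; i < q.length; i++) {
                Point a = q[i], b = q[(i + 1) % q.length];
                double cross = (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
                int s = cross > 0 ? 1 : (cross < 0 ? -1 : 0);
                if (s == 0) continue;
                if (sign == 0) sign = s;
                else if (s != sign) return false;
            }
            return true;
        }
    }

In practice it is faster to fill the quadrilateral into a mask with Imgproc.fillConvexPoly and apply it with Core.bitwise_and, but the per-pixel version mirrors the description above.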
I am trying to create a double chin in the fattened image, as shown in my desired result image below.
I have morphed the normal face into a fat face by warping the image on a mesh and deforming the mesh.
Original image
Warped image on a mesh grid with the vertex points displaced
Current result image
I have tried many arrangements of the mesh points but could not get a result like the one shown in the first image.
Any ideas how to achieve this with OpenGL or OpenCV on iOS?
It's obvious from the first image that there is an added effect to produce the double or triple chin.
This actually looks like either a preset image blended into the original or a scaled and stretched version of the original chin blended into the warped image.
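If it is the latter, one way you could approximate it is to scale and stretch a copy of the chin region and blend it back in with OpenCV's seamlessClone. A rough sketch with the Java bindings; every coordinate and size below is a placeholder you would have to tune to the face:

    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.photo.Photo;

    public class DoubleChin {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            Mat warped = Imgcodecs.imread("fat_face.jpg");

            // Chin region of the face (coordinates are placeholders)
            Rect chinRoi = new Rect(220, 520, 200, 90);
            Mat chin = new Mat(warped, chinRoi).clone();

            // Scale and stretch the chin copy to suggest a second fold
            Mat stretched = new Mat();
            Imgproc.resize(chin, stretched, new Size(chinRoi.width * 1.2, chinRoi.height * 1.3));

            // Elliptical mask so only the middle of the patch gets cloned
            Mat mask = Mat.zeros(stretched.size(), CvType.CV_8UC1);
            Imgproc.ellipse(mask, new Point(stretched.cols() / 2.0, stretched.rows() / 2.0),
                    new Size(stretched.cols() / 2.5, stretched.rows() / 2.5),
                    0, 0, 360, new Scalar(255), -1);

            // Blend the stretched chin slightly below the original one
            Point center = new Point(chinRoi.x + chinRoi.width / 2.0,
                                     chinRoi.y + chinRoi.height * 1.4);
            Mat result = new Mat();
            Photo.seamlessClone(stretched, warped, mask, center, result, Photo.NORMAL_CLONE);
            Imgcodecs.imwrite("double_chin.jpg", result);
        }
    }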