Blending artifacts in OpenCV image stitching - opencv

I am using OpenCV to blend a set of pre-warped images. As input I have 4-channel images (*.png or *.tif) from which I can extract a BGR image and an alpha mask marking the image region (white) and the background (black). Both the image and the mask are inputs to the Blender module, cv::detail::Blender::blend.
When I use feather (alpha) blending the result is OK, but I would like to avoid ghosting effects. When I use multi-band blending, artifacts appear on the edges of the images:
The problem is similar to the one raised here, and solved here. The thing is, if the solution is to create a binary mask (which I already extract from the alpha channel), it does not work for me. If I add padding to the overlap between the two images, it picks up pixels from the background and messes up the result even more.
My guess is that it has to do with the pyrUp and pyrDown functions: the blurring used to build the Gaussian and Laplacian pyramids is probably applied to the whole image, not only to the positive-alpha region. In any case, I don't know how to fix the problem using these functions, and I cannot find another efficient solution.
When I use another implementation of multiresolution blending it works perfectly, but I am very interested in integrating the multi-band implementation of OpenCV. Any idea how to fix this issue?

The issue has already been reported and solved here:
http://answers.opencv.org/question/89028/blending-artifacts-in-opencv-image-stitching/

Related

How to do a perspective transformation of an image which is missing corners using opencv java

I am trying to build a document scanner using OpenCV, and I am trying to auto-crop an uploaded image. I have a few use cases where there is a gap in the border because the document is partially out of frame in the captured image.
Example image:
Below is the canny edge detection of the given image.
The borders are missing here, and because of this findContours does not return proper results.
How can I handle such images?
Neither automatic Canny edge detection nor dilation works in such cases, because dilation can only join small gaps.
Also, a few documents might have only 2 or 3 sides captured by the camera; how can we crop away the other areas that are not required?
Example Image:
Is there any specific technique for handling such documents?
Please suggest a few ideas.
Your problem is unusual. One way to solve it that comes to mind is to:
Add white borders around image.
https://docs.opencv.org/3.4/dc/da3/tutorial_copyMakeBorder.html
Find lines in edges
http://www.robindavid.fr/opencv-tutorial/chapter5-line-edge-and-contours-detection.html
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
Run probabilistic HoughLinesP
Crop the image along these lines. It will surely work for an image like the 1st one.
For an image like the 2nd one, you can use perpendicular and parallel lines.
Your algorithm will certainly need to be fairly complex to work well. The easiest way is to take a picture of the whole document, if that is possible.
Good luck!

Is there Inter_area texture minFilter in three.js like in opencv?

I have a three.js canvas to which I uploaded a 2730x4096-resolution image. I have given options to download different resolutions of images from the canvas. When I download a lower-resolution image, say 960x1440, I get a very blurry image with jagged edges.
I tried to increase the sharpness by setting anisotropy to the maximum (16 in my case) and also tried using map.minFilter = THREE.LinearFilter, which sharpened the image further, but the edges are still jagged.
I tried to run it through an FXAA anti-aliasing composer, but the anti-aliasing is not great. Maybe I'm not passing the correct parameters.
All the while, antialiasing in the renderer is active (renderer.antialias = true).
When I try to do the same in OpenCV, I use cv2.INTER_AREA interpolation for downsizing the 2730x4096 image, which gives me a very sharp image with absolutely no jagged edges.
So I was wondering whether implementing INTER_AREA interpolation for the minFilter, instead of THREE.LinearFilter, might yield better results. Is there something existing in three.js that I'm not utilizing, or, if I have to use this new interpolation method, how should I go about it?
Illustration:
PFA two files: one is downloaded directly from the three.js canvas at 960x1440 resolution (bottom), and the other is an image downsized from 2730x4096 to 960x1440 using OpenCV (top). In the OpenCV-downsized image, the details are sharper and the edges are cleaner than in the three.js image. I'm starting to believe this is because of the INTER_AREA interpolation used for downsizing in OpenCV. Is that replicable in three.js?
The original high resolution image can be downloaded from here
Not sure what your setup is, but by enabling THREE.LinearFilter, you're no longer using mipmapping, which is very effective at downsampling large textures. The default is LinearMipMapLinearFilter, but you have several other options, as outlined in the texture constants docs page. I've found that using anything but the default gives you crunchy textures.
Also, make sure you're enabling antialiasing with renderer.antialias = true if you need it. I'm not sure if you're already doing this step, given the limited scope of the code in your question.

OpenCV : Removing sharp lines

Before saying anything, I will share my test image:
As you can see, the forehead has a half-circle, and the boundary between the circle and the rest of the face shows a sharp transition which is quite visible.
If I want to make it smooth, how should I do it?
I have tried median blurring, inpainting, etc., but none gives good results.
Here are some of the results I got:
So what else can be used to solve this problem?
You need to use blending. The simplest way is to use a cross-fade like here, but instead of the white image use your face image.
You can also use pyramid blending as in the OpenCV sample, or Poisson blending; there are many example implementations of both available online.

Preventing JPEG Compression artefacts during binarization

I am trying to binarize images similar to the following image:
Basically, I want everything non-white to become black, but thresholding in OpenCV gives fringing (JPEG artifacts). I even tried Otsu thresholding, but it doesn't work so well for some parts of the colors.
Is there any simple way of doing this binarization properly?
Convert to greyscale, apply a 5x5 blur filter, and binarize? The blur will smooth the ringing artifacts.
After quite some trial and error, it turns out that morphological closing before thresholding with a large value is the most suitable approach for the next stage of what I am working on, though it does cause some loss of shape information.
Given that you have to use JPEG for this project, one thing you can do is use all-ones quantization tables. That is usually done through the "quality" settings; you want an encoder that will allow you to apply no quantization.

iOS Performance troubles with transparency

I just generated a gradient with transparency programmatically by adding a solid color and a gradient to an image mask. I then applied the resulting image to my UIView.layer.contents. The visual is fine, but when I scroll objects under the transparency, the app gets choppy. Is there a way to speed this up?
My initial thought was caching the resulting gradient. Another thought was to create a gradient that is only one pixel wide and stretch it to cover the desired area. Will either of these approaches help the performance?
Joe
I recall reading (though I don't remember where) that Core Graphics gradients can have a noticeable effect on performance. If you can, using a PNG for your gradient instead should resolve the issue you are seeing.
