How to remove 'wood grain' (noise) background from image? - image-processing

I have been stuck for a week or so trying to remove the background from a borehole log image (I am new to image processing). I want to eventually develop code that can automatically detect the horizontal sinusoidal features in the image (attached); I think a Hough transform could work for that. However, none of the approaches I have tried (Hough transform, edge detection, thresholding) works, because the background of the image has this 'wood grain' appearance. I also tried recreating a mask from the image gradient, but because the color values of the features I want (horizontal sinusoidal shapes) are so similar to the background I want to remove, I am having a difficult time. The ultimate goal is to take two images captured at different times (before and after a scientific experiment) and subtract them to see where the sinusoidal patterns differ. If I can get rid of this background, that should be easier.
So far I have improved the image quality by taking the FFT and applying a high-pass filter. This at least homogenizes the image and leaves me with the attached result. However, I am not having much luck removing the vertical 'wood grain'. Does anyone have a thought about how it could be done? This is driving me a little crazy.
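Roughly, the FFT/high-pass step looks like the sketch below (OpenCV/NumPy; the filename, cutoff radius and the extra notch band for the vertical grain are only guesses and would need tuning, not a working solution):

import cv2
import numpy as np

img = cv2.imread("borehole.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename

# forward FFT with the zero-frequency component shifted to the centre
f = np.fft.fftshift(np.fft.fft2(img.astype(np.float32)))
rows, cols = img.shape
cy, cx = rows // 2, cols // 2

# high-pass: zero out a small disc around the DC component
r = 10  # cutoff radius, needs tuning
y, x = np.ogrid[:rows, :cols]
f[(y - cy) ** 2 + (x - cx) ** 2 <= r ** 2] = 0

# possible notch for the vertical grain: vertical stripes concentrate their
# energy along the horizontal frequency axis, so suppress a thin band there
f[cy - 2:cy + 3, :cx - r] = 0
f[cy - 2:cy + 3, cx + r:] = 0

out = np.abs(np.fft.ifft2(np.fft.ifftshift(f)))
out = cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("filtered.png", out)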
Thank you so much!

Related

Background Subtraction in OpenCV

I am trying to subtract two images using the absdiff function to extract a moving object. It works well, but sometimes the background appears in front of the foreground.
This happens when the background and foreground colors are similar. Is there any solution to overcome this problem?
In case the description of the problem above is not enough, I attach images at the following link.
Thanks.
You can use some pre-processing techniques like edge detection and a contrast-stretching algorithm, which will give you extra information for the subtraction. The colors are the same, but the new object should still have texture features such as edges; if the edges are preserved properly, then performing the image subtraction will recover the object.
Process flow:
Use an edge detection algorithm.
Apply a contrast stretching algorithm (like histogram stretching).
Overlay the detected edges on top of the contrast-stretched image.
Now use the image subtraction algorithm from OpenCV (a rough sketch of the whole flow follows).
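A minimal sketch of that flow in Python/OpenCV (the filenames, Canny thresholds and blend weights are assumptions, not values from the question):

import cv2

# hypothetical file names for the background and current frames
bg = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

def preprocess(img):
    stretched = cv2.equalizeHist(img)          # contrast stretching
    edges = cv2.Canny(stretched, 50, 150)      # edge map preserving texture
    # overlay the edges on top of the stretched image
    return cv2.addWeighted(stretched, 0.7, edges, 0.3, 0)

# image subtraction on the pre-processed frames
diff = cv2.absdiff(preprocess(bg), preprocess(cur))
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
cv2.imwrite("object_mask.png", mask)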
There isn't enough information to formulate a complete solution to your problem, but there are some tips I can offer:
First, prefilter the input and background images using a strong median (or Gaussian) filter. This will make your results much more robust to image noise and to confusion from minor, non-essential detail (like the horizontal lines of your background image). Unless you want to detect a single moving strand of hair, you don't need to process the raw pixels.
Next, take the advice offered in the comments and test all 3 color channels as opposed to going straight to grayscale.
Then create a grayscale image from the max of the 3 absdiffs done on each channel.
Then perform your closing and opening procedure (a rough sketch of these steps follows).
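Roughly, those steps could look like this (Python/OpenCV assumed; the filenames, kernel sizes and threshold are placeholders to tune):

import cv2
import numpy as np

# hypothetical colour background and input frames
bg = cv2.imread("background.png")
cur = cv2.imread("frame.png")

# strong median prefilter to suppress noise and fine detail
bg_f = cv2.medianBlur(bg, 7)
cur_f = cv2.medianBlur(cur, 7)

# per-channel absolute difference, then take the max across B, G, R
diff = cv2.absdiff(bg_f, cur_f)
gray = np.max(diff, axis=2).astype(np.uint8)

# threshold, then close and open to clean up the mask
_, mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)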
I don't know your requirements so I can't take them into account. If accuracy is of the utmost importance, I'd use the median filter on the input image rather than a Gaussian. If speed is an issue, I'd scale down the input images for processing by at least half, then scale the result up again. If the camera is in a fixed position and you have a pre-calibrated background, then the current naive difference method should work. If the system has to determine movement in a real-world environment over an extended period of time (moving shadows, plants, vehicles, weather, etc.) then a rolling average (or Gaussian) background model will work better. If the camera is moving you will need to do a lot more processing, probably some optical flow and/or Fourier transform tests. All of these things need to be considered to provide the best solution for the application.
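For the fixed-camera, long-duration case, a rolling-average background model can be as simple as the following sketch (the video source, learning rate and threshold are assumptions):

import cv2
import numpy as np

cap = cv2.VideoCapture("scene.avi")  # hypothetical video source
acc = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    if acc is None:
        acc = gray.astype(np.float32)
        continue
    # exponentially weighted running average of the background
    cv2.accumulateWeighted(gray, acc, 0.05)
    # anything far from the running average is treated as foreground
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(acc))
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)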

Area of interest and Hough Line Detection for distorted lines accuracy

I am trying to do segmentation of book spines stacked both horizontally and vertically. I have come across a problem when the picture is too big.
Only part of the image can be seen in the window, meaning it does not process the original image it is supposed to process:
The image it processed
The image it should process instead
I cannot even view the whole image which is supposed to be processed. Hence, I tried to shrink the image just for this picture using =>
cv::resize(image, image, cv::Size2i(image.cols/6, image.rows/6) ); // resize to 1/6 of the image
which led to another problem: when the picture is small, it becomes so small that the straight lines cannot even be detected.
Hence, I tried =>
cv::resize(image, image, cv::Size2i(750, 400) );
This led to another problem. While the image above now fits in the window, for smaller pictures my Hough line detection becomes more unstable.
Does anybody have an idea how to solve this sizing problem? And also how to improve my Hough line detection, which is pretty unstable now, to separate the books? I wish to draw a line in between the stacks of books.
Hope to hear from you guys soon. Thanks!!!
It looks like you're resizing the image before you perform the Hough transform; I think what you want to do is resize afterwards. This allows you to keep enough resolution in your picture to get decent lines detected, and you can still view it on your monitor.
Secondly, you want to improve detecting the separation between the books. My advice would be to perform a bit of pre-processing on the image. There are plenty of methods to do this; mean shift segmentation, to separate the picture by colours, is one example.
Filtering the results of the transform is another approach. Only keeping lines passing through dark areas - since it is more likely to be dark between the books - is one such way. There are plenty more methods.
Also don't forget to tweak the parameters of the Hough Transform to see what works best with your test set. It may reveal some interesting results!
Good luck!
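As an illustration of detecting at full resolution and only resizing for display, something like the sketch below could work (Python/OpenCV assumed; the filename and all parameter values are guesses to tune against your images):

import cv2
import numpy as np

img = cv2.imread("books.jpg")  # hypothetical image of stacked books

# optional pre-processing: mean shift filtering to flatten the colours
smoothed = cv2.pyrMeanShiftFiltering(img, 15, 30)
edges = cv2.Canny(cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY), 50, 150)

# run the (probabilistic) Hough transform on the full-resolution image
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=200, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 3)

# only shrink the result for display, not before detection
display = cv2.resize(img, (img.shape[1] // 6, img.shape[0] // 6))
cv2.imshow("lines", display)
cv2.waitKey(0)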
IMO, first you have to improve the edge-detection output; it contains very few edges. You can use cvCanny or cvSobel for that. Also use probabilistic Hough lines, which will give better results. You can tweak the parameters of cvHoughLines, such as threshold, minLineLength and maxLineGap, since in the figure the lines are coming too close together.
Please check the details here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_lines/hough_lines.html

Matlab image processing - replacing dark pixels with neighboring pixels

I am doing some image processing of retina images. I need to replace the blood vessels with background pixels so that I can focus on other aspects of the retina. I could not figure out a way to do this. I am using MATLAB. Any suggestions?
Having worked extensively with retinal images, I can tell you that what you're proposing is a complex problem in itself. Sure, if you just want a crude method, you can use imdilate. But that will affect your entire image, and other structures in the image will change appearance, which is not desirable.
However, if you want to do it properly, you will first need to segment all the blood vessels and create a binary mask. Once you have a binary mask, it's up to you how to fill up the vessel regions. You can either interpolate from the boundaries or calculate a background image and replace the vessel regions with pixels from the background image, etc.
Segmentation of the blood vessels is a challenging problem and you will find a lot of literature concerning that on the internet. Ultimately, you will have to choose how accurate a segmentation you want and build your algorithm accordingly.
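Assuming you already have a vessel segmentation, the mask-and-fill idea might look like the following (sketched in Python/OpenCV for concreteness; MATLAB has equivalent morphology and inpainting operations, and the filenames and parameters below are placeholders):

import cv2

img = cv2.imread("retina.png")                  # hypothetical fundus image
vessel_mask = cv2.imread("vessel_mask.png", 0)  # binary vessel segmentation, assumed given

# grow the mask slightly so vessel edges are covered as well
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
vessel_mask = cv2.dilate(vessel_mask, kernel)

# option 1: interpolate inwards from the mask boundary
filled = cv2.inpaint(img, vessel_mask, 5, cv2.INPAINT_TELEA)

# option 2: replace vessel pixels with a smooth background estimate
background = cv2.medianBlur(img, 31)
filled_bg = img.copy()
filled_bg[vessel_mask > 0] = background[vessel_mask > 0]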
imdilate should do what you want, since it replaces each pixel with the maximum of its neighbors. For more detailed suggestions, I'd need to see images.

Auto-Detecting blurry regions of an image

I am working on images that are partially blurred in some sections. This is noise that should be taken care of, but here is the problem:
Are there methods to detect whether an image is blurred, or partially blurred in some sections? For instance, take a look at the sample image below:
You can see in the image that there are 3 sections that are visually blurred: bottom-left, near the center and top-right. Now, is it possible to detect that a portion of an image is blurred, either programmatically or mathematically?
As lain_b pointed out, with an image like this you can use an edge detector and look for an absence of edges. I tried it on your image and it seems to work pretty well. First I used the kernel
[0,1,0,
1,-4,1,
0,1,0]
Which is a simple edge detector. Its result was
Then I used a threshold to get
Then I closed the image and opened it to get
This is obviously not a finished version; the top-right portion was not recognized well at all. Perhaps you could improve it by blurring before thresholding, or by choosing better values for the threshold and for the radii of the opening and closing operations. A lot of the decisions you will need to make depend on the constraints you can put on your problem. I think this technique will work for you though.
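Put together, that pipeline might look roughly like this (Python/OpenCV assumed; the threshold value and structuring-element size are guesses that will need tuning, per the caveats above):

import cv2
import numpy as np

img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# the same 3x3 Laplacian kernel quoted above
kernel = np.array([[0, 1, 0],
                   [1, -4, 1],
                   [0, 1, 0]], dtype=np.float32)
edges = cv2.convertScaleAbs(cv2.filter2D(img.astype(np.float32), -1, kernel))

# threshold: strong edge responses -> white, flat (possibly blurry) areas -> black
_, mask = cv2.threshold(edges, 10, 255, cv2.THRESH_BINARY)

# close then open to merge edge responses into solid "sharp" regions;
# whatever remains black is a candidate blurry region
k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, k)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, k)
blurry = cv2.bitwise_not(mask)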
Edit
If you are looking for blur detection of arbitrary images you are going to have to investigate a wide variety of techniques. Things are much easier if you can make assumptions about your set of input images. Without any assumptions I don't know what will work best for you. Here is some reading on the topic
Image Blur Metrics
Research paper on using the Haar wavelet transform
Similar SO Question and look at the question that question links to
Blur detection is a very active research field and there is no single answer. You will just need to try all the methods you can find (these were found by googling "detect blur in image").
This paper may be of some help. It does blur estimation (mostly for out-of-focus blur, but I think it covers other blur as well) in order to recreate a similarly blurred object in the image.
I think you should be able to use it to detect the blurred areas and how blurred they are. It should be especially relevant to your problem, as it is designed to work with real-world images.

Adaptive threshold Binarization's bad effects

I implemented some adaptive binarization methods; they use a small window, and at each pixel the threshold value is calculated. There are problems with these methods:
If we select the window size too small, we get this effect (I think the reason is that the window size is too small):
(source: piccy.info)
In the upper-left corner is the original image, in the upper-right corner the global threshold result. The bottom left is an example of dividing the image into parts (but I am actually talking about analyzing a small surrounding of each pixel, for example a window of size 10x10).
You can see the result of such algorithms in the bottom-right picture: we get a black area, but it should be white.
Does anybody know how to improve an algorithm to solve this problem?
There should be quite a lot of research going on in this area, but unfortunately I have no good links to give.
An idea, which might work but I have not tested, is to try to estimate the lighting variations and then remove that before thresholding (which is a better term than "binarization").
The problem is then moved from adaptive thresholding to finding a good lighting model.
If you know anything about the light sources then you could of course build a model from that.
Otherwise a quick hack that might work is to apply a really heavy low pass filter to your image (blur it) and then use that as your lighting model. Then create a difference image between the original and the blurred version, and threshold that.
EDIT: After quick testing, it appears that my "quick hack" is not really going to work at all. After thinking about it I am not very surprised either :)
I = someImage
Ib = blur(I, 'a lot!')
Idiff = I - Ib
It = threshold(Idiff, 'some global threshold')
EDIT 2
Got one other idea which could work depending on how your images are generated.
Try estimating the lighting model from the first few rows in the image:
Take the first N rows in the image
Create a mean row from the N collected rows. You now have one row as your background model.
For each row in the image subtract the background model row (the mean row).
Threshold the resulting image.
Unfortunately I am at home without any good tools to test this.
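An untested sketch of that row-based idea, assuming Python/NumPy/OpenCV and that the first N rows really do contain only background:

import cv2
import numpy as np

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # hypothetical input

N = 10  # number of top rows assumed to be pure background
background_row = img[:N, :].mean(axis=0)  # the mean row is the lighting model

# subtract the background model row from every row, then rescale and threshold
corrected = img - background_row[np.newaxis, :]
corrected = cv2.normalize(corrected, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, binary = cv2.threshold(corrected, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)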
It looks like you're doing adaptive thresholding wrong. Your images look as if you divided your image into small blocks, calculated a threshold for each block and applied that threshold to the whole block. That would explain the "box" artifacts. Usually, adaptive thresholding means finding a threshold for each pixel separately, with a separate window centered around the pixel.
Another suggestion would be to build a global model for your lighting: in your sample image, I'm pretty sure you could fit a plane (in X/Y/brightness space) to the image using least squares, then separate the pixels into those brighter than that plane (foreground) and those darker (background). You can then fit separate planes to the background and foreground pixels, threshold using the mean between these planes again, and improve the segmentation iteratively. How well that works in practice depends on how well your lighting can be modeled with a linear model.
If the actual objects you are trying to segment are "thinner" (you said something about barcodes in a comment), you could try a simple opening/closing operation to get a lighting model (i.e. close the image to remove the foreground pixels, then use [closed image+X] as the threshold).
Or, you could try mean-shift filtering to get the foreground and background pixels to the same brightness. (Personally, I'd try that one first)
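For the per-pixel version mentioned above, OpenCV's built-in adaptive threshold is probably the quickest thing to try (shown in Python; the block size and offset are guesses to tune):

import cv2

img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# each pixel is compared against the Gaussian-weighted mean of its own
# 51x51 neighbourhood, minus a small constant offset
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 51, 10)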
You have very non-uniform illumination and fairly large objects (thus, there is no universal easy way to extract the background and correct the non-uniformity). This basically means you cannot use global thresholding at all; you need adaptive thresholding.
You want to try Niblack binarization. Matlab code is available here
http://www.uio.no/studier/emner/matnat/ifi/INF3300/h06/undervisningsmateriale/week-36-2006-solution.pdf (page 4).
There are two parameters you'll have to tune by hand: window size (N in the above code) and weight.
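A small sketch of Niblack in Python (box filters for the local statistics; the window size N and weight k are the two hand-tuned parameters mentioned above, and the values shown are only starting points):

import cv2
import numpy as np

img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # hypothetical input

N = 51    # window size, tuned by hand
k = -0.2  # weight, tuned by hand

# local mean and standard deviation via box filtering
mean = cv2.boxFilter(img, -1, (N, N))
sqmean = cv2.boxFilter(img * img, -1, (N, N))
std = np.sqrt(np.maximum(sqmean - mean * mean, 0))

# Niblack threshold surface: T = mean + k * std
T = mean + k * std
binary = np.where(img > T, 255, 0).astype(np.uint8)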
Try to apply a local adaptive threshold using this procedure:
convolve the image with a mean or median filter
subtract the original image from the convolved one
threshold the difference image
The local adaptive threshold method selects an individual threshold for each pixel.
I'm using this approach extensively and it works fine with images that have a non-uniform background.
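A compact sketch of that procedure (Python/OpenCV assumed; the filter size and threshold value are placeholders):

import cv2

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# 1. convolve with a large mean filter to estimate the background
background = cv2.blur(img, (51, 51))

# 2. subtract the original image from the convolved one (dark text becomes bright)
diff = cv2.subtract(background, img)

# 3. threshold the difference image
_, binary = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)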
