Lane detection with brightness changes and shadows on lanes? - image-processing

I am currently working on a lane detection project, where the input is an RGB road image "img" from a racing game, and the output is the same image annotated with colored lines drawn on the detected lanes.
The steps are as follows (a code sketch follows the list):
1. Convert the RGB image "img" to an HSL image, then apply a white color mask to it (only white lanes are expected in the image), discarding any parts of the image with colors outside the white range (setting their values to zero); call the output of this step "white_img".
2. Convert "white_img" to grayscale, producing "gray_img".
3. Apply Gaussian blurring to "gray_img" to make edges smoother, so fewer noisy edges are detected, producing "smoothed_img".
4. Apply edge detection to "smoothed_img", producing "edge_img".
5. Crop "edge_img" by selecting a region of interest (ROI), approximately the lower half of the image, producing "roi_img".
6. Finally, apply a Hough transform to "roi_img" to detect lines, which are taken as the detected lanes.
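Here is a minimal sketch of that pipeline in Python/OpenCV; all threshold and Hough parameter values are assumptions to be tuned, not values from the question:

import cv2
import numpy as np

img = cv2.imread("img.png")  # input road image (hypothetical file name)

# step 1: HSL conversion (OpenCV calls it HLS) + white color mask
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
mask = cv2.inRange(hls, np.array([0, 200, 0]), np.array([255, 255, 255]))
white_img = cv2.bitwise_and(img, img, mask=mask)

# steps 2-3: grayscale, then Gaussian blur
gray_img = cv2.cvtColor(white_img, cv2.COLOR_BGR2GRAY)
smoothed_img = cv2.GaussianBlur(gray_img, (5, 5), 0)

# step 4: Canny edge detection (thresholds are assumptions)
edge_img = cv2.Canny(smoothed_img, 50, 150)

# step 5: ROI = roughly the lower half of the image
roi_img = np.zeros_like(edge_img)
h, w = edge_img.shape
roi_img[h // 2:, :] = edge_img[h // 2:, :]

# step 6: probabilistic Hough transform; draw the detected lines
lines = cv2.HoughLinesP(roi_img, 1, np.pi / 180, threshold=30,
                        minLineLength=20, maxLineGap=50)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 3)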
The biggest problems I am facing now are brightness changes and shadows on the lanes. For a dark image with shadows on the lanes, the lane color can become very dark. I tried widening the accepted white color range in step 1, which worked well for this kind of image. But for a bright image without shadows on the lanes, most of the image survives step 1, which produces an output containing many things irrelevant to the lanes.
Examples of input images:
Medium brightness without shadows on lanes
Low brightness with shadows on lanes
High brightness without shadows on lanes
Any help to deal with these issues will be appreciated. Thanks in advance.

Related

Image - detect low contrast edge

I have a picture with high and low contrast transitions.
I need to detect the edges in this picture and produce a binary image. I can easily detect the black and "dark" blue edges with a Sobel operator and thresholding.
However, the edge between "light" blue and "light" yellow color is problematic.
I start by smoothing the image with a median filter on each channel to remove noise.
What I have tried already to detect edges:
Sobel operator
Canny operator
Laplace
grayscale, RGB, HSV, LUV color spaces (with multichannel spaces, edges are detected in each channel and then combined to create one final edge image)
Preprocessing the RGB image with gamma correction (the problem with preprocessing is image compression: the source image is JPEG, and with preprocessing, edge detection often ends up with a visible grid caused by JPEG macroblocks)
So far, Sobel on RGB works best, but the low-contrast line comes out low-contrast as well.
Further thresholding removes this part. I consider an edge to be everything below some gray value. If I use a high threshold value like 250, the result for the low-contrast edge is better, but the remaining edges are destroyed. I also don't like the gaps in the low-contrast edge.
So if I push the threshold further and say that everything except white is an edge, I get edges all over the place.
Do you have any other ideas for combining low- and high-contrast edge detection so that the edges have as few gaps as possible while not appearing all over the place?
Note: for testing I mostly use OpenCV; what is not available in OpenCV, I program myself.
IMO this is barely doable, if doable at all, if you want an automated solution.
Here I used binarization in RGB space, assigning every pixel to the closer of two colors representative of the blue and the yellow. (I picked isolated pixels, but picking an average over a region would be better.)
Maybe a k-means classifier could achieve that?
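A minimal sketch of that nearest-color assignment in Python/OpenCV; the two representative colors are assumed values picked by eye, as described:

import cv2
import numpy as np

img = cv2.imread("image.png").astype(np.float32)

# representative colors picked from isolated pixels (assumed values, BGR order)
blue = np.array([180, 120, 40], dtype=np.float32)
yellow = np.array([60, 200, 220], dtype=np.float32)

# squared distance of every pixel to each representative color
d_blue = ((img - blue) ** 2).sum(axis=2)
d_yellow = ((img - yellow) ** 2).sum(axis=2)

# assign each pixel to the closer color, giving a binary image
binary = np.where(d_blue < d_yellow, 255, 0).astype(np.uint8)
cv2.imwrite("binary.png", binary)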
Update:
Here is what a k-means classifier can give, with 5 classes.
All kudos and points to Yves please for coming up with a possible solution. I was having some fun playing around experimenting with this and felt like sharing some actual code, as much for my own future reference as anything. I just used ImageMagick in Terminal, but you can do the same thing in Python with Wand.
So, to get a K-means clustering segmentation with 5 colours, you can do:
magick edge.png -kmeans 5 result.png
If you want a swatch of the detected colours underneath, you can do:
magick edge.png \( +clone -kmeans 5 -unique-colors -scale "%[width]x20\!" \) -background none -smush +10 result.png
Keywords: Python, ImageMagick, wand, image processing, segmentation, k-means, clustering, swatch.
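If you would rather stay in Python without Wand, a rough equivalent with OpenCV's k-means (file names taken from the commands above) might look like this:

import cv2
import numpy as np

img = cv2.imread("edge.png")
pixels = img.reshape(-1, 3).astype(np.float32)

# cluster all pixels into 5 colors, then repaint each pixel with its cluster center
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 5, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)
quantized = centers.astype(np.uint8)[labels.flatten()].reshape(img.shape)
cv2.imwrite("result.png", quantized)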

What color feature is important for white-like tones?

I am building an image segmentation model using an SVM as a pixel-wise classifier. I am segmenting pressure ulcers, which have different kinds of tissue inside, and I want to extract the region of the ulcer. My features include the RGB intensities and the red, green, and blue chromaticities. One type of tissue inside the ulcer has white-like colors, but most of the ulcer is reddish. I am getting correct results for the reddish colors but not for the whites.
Can anyone point me to a feature or set of features that captures white texture or white color information to include in the feature vector?
Thanks in advance...
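For reference, a minimal sketch of the feature vector described above (RGB intensities plus chromaticities); the function name is hypothetical:

import numpy as np

def pixel_features(img):
    # per-pixel features: R, G, B intensities plus r, g, b chromaticities
    rgb = img.reshape(-1, 3).astype(np.float64)
    total = rgb.sum(axis=1, keepdims=True)
    total[total == 0] = 1.0          # avoid division by zero on black pixels
    chroma = rgb / total             # r = R/(R+G+B), and likewise for g, b
    return np.hstack([rgb, chroma])  # shape: (num_pixels, 6)

Note that chromaticities are intensity-normalized, so white, gray, and black pixels all map near (1/3, 1/3, 1/3); that is one reason chromaticity alone cannot separate white tones.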

How to remove the local average color from an image with OpenCV

I have an image with a gentle gradient background and sharp foreground features that I want to detect. I wish to find green arcs in the image. However, the green arcs are only green relative to the background (they are semitransparent). The green arcs are also only one or two pixels wide.
As a result, for each pixel of the image, I want to subtract the average color of the surrounding pixels. The surrounding 5x5 or 10x10 pixels would be sufficient. This should leave the foreground features relatively untouched but remove the soft background, and I can then select the green channel for further processing.
Is there any way to do this efficiently? It doesn't have to be particularly accurate, but I want to run it at 30Hz on a 1080p image.
You can achieve this by doing a simple box blur on the image and then subtracting that from the original image:
Mat blurred;
blur(img, blurred, Size(15, 15)); // box blur = local average around each pixel
Mat diff = img - blurred;         // subtract the local average from the original
I tried this on a 1920x1080 image and the blur took 25ms and the subtract took 15ms on a decent spec iMac.
If your background is not changing fast, you could calculate the blur over the space of a few frames in a second thread and keep re-using it until you recalculate it a few frames later; then you would only have the 15ms subtraction to do for each of your 30fps frames, rather than 45ms of processing.
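A minimal sketch of that caching idea in Python/OpenCV (the capture source and the 10-frame refresh interval are assumptions; a real version would move the blur onto a second thread):

import cv2

cap = cv2.VideoCapture(0)  # hypothetical frame source
blurred = None
frame_idx = 0
while True:
    ok, img = cap.read()
    if not ok:
        break
    # refresh the cached background estimate only every 10 frames
    if blurred is None or frame_idx % 10 == 0:
        blurred = cv2.blur(img, (15, 15))
    diff = cv2.subtract(img, blurred)  # per-frame cost: the subtraction only
    frame_idx += 1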
Depending on how smooth the background gradient is, edge detection followed by dilation might work well for you.
Edge detection will output the 3 lines that you've shown in the sample and discard the background gradient (again, depending on smoothness). Dilating the image will then thicken the edges so that your line contours are more significant.
You can superimpose this edge map on your original image as:
Mat src, edges;
// perform edge detection on src and output in edges
Mat result;
cvtColor(edges, edges, COLOR_GRAY2BGR); // change mask to a 3-channel image
subtract(edges, src, result);
subtract(edges, result, result);
The result should contain only the 3 lines with 3 colours. From here on you can apply color filtering to select the green one.
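For that final color-filtering step, a minimal sketch in Python/OpenCV; the HSV green range is an assumption to tune:

import cv2
import numpy as np

result = cv2.imread("result.png")  # the superimposed image from above, assumed saved to disk

# keep pixels whose hue falls in an assumed "green" band
hsv = cv2.cvtColor(result, cv2.COLOR_BGR2HSV)
green_mask = cv2.inRange(hsv, np.array([40, 40, 40]), np.array([80, 255, 255]))
green_only = cv2.bitwise_and(result, result, mask=green_mask)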

How to remove the unwanted black region from binary image?

I have binarized an image and calculated its black and white pixels. After the pixel calculation, I compute the ratio of black pixels, i.e. R = (number of black pixels / (number of white pixels + number of black pixels)) * 100. I use this result to declare whether an eye is open or closed: if R > 20%, the eye is open; otherwise it is closed. But when I calculate this ratio, it does not come out the way I want. I think there might be an error in the calculation of the black and white pixels, caused by an unwanted black region in the image, or there might be a problem with the thresholding. I am using Otsu's method to threshold the image.
While searching on this topic I also tried `openInput = bwareaopen(bw, 80)`, but it does not work well for removing the unwanted black area. Kindly help me remove the unwanted area.
close all
clear all
I=imread('op.jpg');
I=rgb2gray(I);
thres_level=graythresh(I); % find the threshold level of image
bw=im2bw(I,thres_level); % converts an image into binary
figure, imshow(bw);
totnumpix=numel(bw); % calculate total no of pixels in image
nwhite_open=sum(bw(:)); % count the white pixels in the image
nblack_open=totnumpix-nwhite_open; % count the black pixels in the image
R=(nblack_open/(nblack_open+nwhite_open))*100
Area opening erases regions whose area is smaller than the parameter you pass. In your case it will likely not help.
I think that the black/white ratio is not the solution. I would either:
Detect the iris (there are multiple effective papers/posts on the subject, mainly using the Hough transform), and if the detection fails, conclude that the eye is closed (a sketch follows this list).
If you have the original color image, use skin detection (simple color thresholding in the HSV color space); the proportion of skin pixels will tell you when the eye is closed.
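For the first option, a minimal sketch of iris detection with OpenCV's circular Hough transform; every parameter here is an assumption to tune on real eye images:

import cv2

gray = cv2.imread("op.jpg", cv2.IMREAD_GRAYSCALE)
gray = cv2.medianBlur(gray, 5)  # reduce noise before circle detection

# look for one iris-sized circle; radii and thresholds are assumptions
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=5, maxRadius=50)
eye_open = circles is not None  # no circle found -> treat the eye as closed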

Line detection against darker shades

I want to detect lines in an image of black line drawings on white paper.
It would be easy if the image were ideally 'black and white'; a histogram threshold would do.
But, as the attached image shows, some lines (e.g. in the light red circle) are a gray lighter than the shades (e.g. in the dark red circle), so a histogram threshold picks up some shades before the lighter lines.
Are there any ideas for separating lines from shades using some 'knowledge'? Thanks!
Edit:
Here are the raw images, a bit small because they are of original resolution.
Thanks :-)
For your reference, I would add another method, using a difference of Gaussians rather than erosion + dilation:
file='http://i.stack.imgur.com/oEjtT.png';
I=imread(file);
h = fspecial('gaussian',5,4);   % 5x5 Gaussian, sigma = 4 (strong blur)
I1=imfilter(I,h,'replicate');
h = fspecial('gaussian',5);     % 5x5 Gaussian, default sigma = 0.5 (mild blur)
I2=imfilter(I,h,'replicate');
I3=I1-I2;                       % difference of Gaussians highlights the lines
I3=double(I3(:,:,1));           % keep a single channel
I3=I3.*(I3>5);                  % suppress weak responses (threshold of 5)
imshow(I3)
