Parameter to isolate frames with colored lines - opencv

I'm writing code that should detect frames in a video that contain colored lines. I'm new to OpenCV and would like to know whether I should evaluate saturation, entropy, RGB intensity, etc. The lines, as shown in the pictures, come in every color and density; sometimes they are black and white, but they are all the same color inside a given frame. Any advice?
Regular frame:
Example 1:
Example 2:

You can use something like this to get the mean Saturation and see that it is lower for your greyscale image and higher for your colour ones:
#!/usr/bin/env python3
import cv2
# Open image as 3-channel BGR (avoids a stray alpha channel upsetting cvtColor)
im = cv2.imread('a.png', cv2.IMREAD_COLOR)
# Convert to HSV
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
# Get mean Saturation - index 1, because Hue is index 0 and Value is index 2
meanSat = hsv[..., 1].mean()
print(meanSat)
Results
first image (greyish): meanSat = 78
second image (blueish): meanSat = 162
third image (reddish): meanSat = 151
If it is time-critical, I guess you could just calculate for a small extracted patch since the red/blue lines are all over the image anyway.
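For instance, a minimal sketch of that patch idea (the 64-pixel patch size and the decision threshold of 100 are assumptions you would tune on your own frames):
import cv2

def frame_has_coloured_lines(frame_bgr, sat_threshold=100, patch=64):
    # Classify a frame by the mean saturation of a central patch.
    # Assumes the frame is larger than the patch; tune both parameters.
    h, w = frame_bgr.shape[:2]
    y0, x0 = (h - patch) // 2, (w - patch) // 2
    hsv = cv2.cvtColor(frame_bgr[y0:y0 + patch, x0:x0 + patch], cv2.COLOR_BGR2HSV)
    return hsv[..., 1].mean() > sat_threshold

frame = cv2.imread('a.png')
print(frame_has_coloured_lines(frame))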

Related

Is it possible automate to changing all colors in an image to random colors?

I'm trying to come up with a new image augmentation that changes colors like above automatically, but to everything not just the flower. The above is a photoshop example but I'd like to automatically switch all colors in the image to a different random color. Is this possible to create a function for this? There are augmentations like random convolution and random jitter, but those aren't quite what I want. I'd like to keep the saturations and values the same just change the color.
You can add a random 0 to 255 shift to each channel in BGR and take the result modulo 256.
In Python it would look like:
import numpy
shift = numpy.random.randint(256, size=(1, 1, 3))   # one random offset per BGR channel
color_shifted_img = img + shift                     # promotes to int, so no uint8 wrap-around yet
color_shifted_img = numpy.mod(color_shifted_img, 256).astype(numpy.uint8)

Remove color cast using libvips

I have sRGB images with color casts. To remove them manually I usually use Photoshop's Levels adjustment. Photoshop also has automatic tools for that: Auto Contrast, or even better Auto Tone, which also takes shadows, midtones & highlights into account.
If I remove the cast manually I adjust each of the RGB channels individually so that the darkest pixels are set to pure black and the lightest to pure white and then redistribute all other values (spreading the histogram). This is a simple approach but shows good results for my images.
In my node.js app I'm using sharp for image processing which uses libvips as its processing engine. I tried to remove the cast with .normalize() but this command works on all channels together and not individual for each of the RGB channels. So it doesn't work for me.
I also asked this question on the sharp project page. I tested the suggestion from lovell to try it with hist_local but the results are not useable for me.
Now I would like to find out how this could be done using the native libvips. I've played around with nip2 GUI and different commands but could not figure out how it could be achieved:
Histogram > Equalise Histogram > Global => Picture looks over saturated
Image > Levels > Scale to 0 - 255 => The channels are not all spread across 0 - 255 (I don't understand exactly what this command does)
Thanks for every hint!
Addition
Here is an example with pictures from Photoshop to show what I want.
The source image is a picture of a frame from a film negative.
Image before processing
Step 1: Invert the image
Image after inversion
Step 2: Use Auto Tone in Photoshop (works the same way as my description above of manually removing the color cast)
Image after Auto Tone
This last picture is ok for me.
nip2 has a menu item for this.
Load your image and mark a region on it containing the area you'd like to be neutral. It can be any lightness, it doesn't need to be white.
Use File / Open to get the file dialog and you should see the image loaded in your workspace as a thumbnail.
Doubleclick on the thumbnail to open an image view window.
In the view window, zoom and pan to the right spot. The user guide (press F1) has a section on image navigation.
Hold down CTRL and click and drag down and right to mark a rectangular region.
Back in the main window, click Toolkits / Tasks / Capture / White balance. You should see something like:
You can drag and resize your region to change the neutral point. Use the colour picker to set what white means. You can make other whites with (for example) Colour / New / Colour from CCT and link them together.
Click Colour / New / Colour from CCT to make a colour picker from CCT (correlated colour temperature) -- the temperature in Kelvin of that white.
Set it to something interesting, like 4800 for warm white.
Click on the formula for A5.white to edit it, and enter the cell of your CCT widget (A7 in this case).
Now you can drag the region to adjust the pixels to set the neutral from, and drag the CCT slider to set the temperature.
It can be annoying to find things in the toolkit menu. There's a thing for searching toolkits: in the main window, click View / Toolkit browser. You can enter something like "white" and it'll show related toolkit entries.
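If you'd rather script that neutral-region idea than drive the GUI, here's a rough pyvips sketch of the same operation; the crop coordinates stand in for the patch you would mark in nip2, and simple per-band scaling stands in for the full white-balance toolkit:
import pyvips

image = pyvips.Image.new_from_file('before.jpg')

# average each band over a patch that should come out neutral grey
patch = image.crop(100, 100, 50, 50)            # left, top, width, height: pick your own region
means = [band.avg() for band in patch.bandsplit()]
grey = sum(means) / len(means)

# scale each band so the patch average becomes neutral, then save
balanced = (image * [grey / m for m in means]).cast('uchar')
balanced.write_to_file('balanced.jpg')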
Here's another answer, but using pyvips and responding to the previous comments. I didn't want to delete the first answer as it still seemed useful.
This version finds the image histogram, searches for thresholds which will select 0.5% and 99.5% of pixels in each image band, then rescales the image so that those pixel values become 0 and 255.
import sys
import pyvips
# trim off this percentage of pixels from the top and bottom
trim_percent = 0.5
def percent(hist, percentage):
    """From a histogram, find the threshold above which lie
    #percentage of pixels."""
    # normalised cumulative histogram
    norm = hist.hist_cum().hist_norm()
    # column and row profile over percentage
    c, r = (norm > norm.width * percentage / 100).profile()
    return r.avg()
image = pyvips.Image.new_from_file(sys.argv[1])
# photographic negative
image = image.invert()
# find image histogram, split to set of separate bands
bands = image.hist_find().bandsplit()
# for each band, the low and high thresholds
low = [percent(band, trim_percent) for band in bands]
high = [percent(band, 100 - trim_percent) for band in bands]
# rescale image
scale = [255.0 / (h - l) for h, l in zip(high, low)]
image = (image - low) * scale
image.write_to_file(sys.argv[2])
It seems to give roughly similar results to the PS button. If I run:
$ ./autolevel.py ~/pics/before.jpg x.jpg
I see:
In the meantime I've found the Simplest Color Balance algorithm, which describes exactly the problem with color casts; there you can also find C source code.
It is exactly the same solution as John describes in his second answer, but as a small piece of C code.
I'm now trying to use it as a C/C++ addon with N-API under Node.js.
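For reference, the same per-channel clipping idea is only a few lines in Python with NumPy/OpenCV; this is a sketch of the idea rather than the paper's C code, and the 0.5 % clip fraction mirrors the pyvips example above:
import cv2
import numpy as np

def simplest_color_balance(img, clip_percent=0.5):
    # stretch each channel so its darkest/brightest clip_percent of pixels saturate to 0 and 255
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        lo, hi = np.percentile(img[..., c], (clip_percent, 100 - clip_percent))
        out[..., c] = np.clip((img[..., c] - lo) * 255.0 / max(hi - lo, 1), 0, 255)
    return out

cv2.imwrite('after.jpg', simplest_color_balance(cv2.imread('before.jpg')))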

Eliminating various backgrounds from image and segmenting object?

Let say I have this input image, with any number of boxes. I want to segment out these boxes, so I can eventually extract them out.
input image:
The background could be anything that is continuous, like a painted wall, a wooden table, or carpet.
My idea was that the gradient would be roughly constant throughout the background, so I could turn the areas where the gradient is about the same into zeros in the image.
Through edge detection, I would dilate and fill the regions where edges are detected. Essentially, my goal is to make a blob of each area where a box is. Having the blobs, I would know the exact location of the boxes and be able to crop them out of the input image.
So in this case, I should be able to have four blobs, and then I would be able to crop out four images from the input image.
This is how far I got:
segmented image:
query = imread('AllFour.jpg');
gray = rgb2gray(query);
[~, threshold] = edge(gray, 'sobel');
weightedFactor = 1.5;
BWs = edge(gray,'roberts');
%figure, imshow(BWs), title('binary gradient mask');
se90 = strel('disk', 30);
se0 = strel('square', 3);
BWsdil = imdilate(BWs, [se90]);
%figure, imshow(BWsdil), title('dilated gradient mask');
BWdfill = imfill(BWsdil, 'holes');
figure, imshow(BWdfill);
title('binary image with filled holes');
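For the final cropping step described in the question (extracting each box once a blob mask exists), a rough OpenCV/Python sketch could look like this; the file names and the minimum-area value are placeholders, and the mask is assumed to be something like BWdfill exported as an image:
import cv2

img = cv2.imread('AllFour.jpg')                        # original colour image
mask = cv2.imread('blobs.png', cv2.IMREAD_GRAYSCALE)   # binary blob mask

# one bounding box per connected blob, cropped from the original image
n, labels, stats, _ = cv2.connectedComponentsWithStats((mask > 0).astype('uint8'))
for i in range(1, n):                                  # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 500:                                     # skip tiny specks; threshold is a guess
        cv2.imwrite(f'box_{i}.png', img[y:y + h, x:x + w])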
What a very interesting problem! Here's my solution in an attempt to solve this problem for you. This is assuming that the background has the same colour distribution throughout. First, transform your image from RGB to the HSV colour space with rgb2hsv. The HSV colour space is an ideal transform for analyzing colours. After this, I would look at the saturation and value planes. Saturation is concerned with how "pure" the colour is, while value is the intensity or brightness of the colour itself. If you take a look at the saturation and value planes for the image, this is what is shown:
im = imread('http://i.stack.imgur.com/1SGVm.jpg');
out = rgb2hsv(im);
figure;
subplot(2,1,1);
imshow(out(:,:,2));
subplot(2,1,2);
imshow(out(:,:,3));
This is what I get:
Looking at some locations in the gray background, it seems that most of the saturation values are less than 0.2, while the values in the value plane are greater than 0.3. We therefore want the opposite of those pixels to get our objects, so we find those pixels whose saturation is greater than 0.2 or whose value is less than 0.3:
seg = out(:,:,2) > 0.2 | out(:,:,3) < 0.3;
This is what we get:
Almost there! There are some spurious single pixels, so I'm going to perform an opening with imopen with a line structuring element.
After this, I'll perform a dilation with imdilate to close any gaps, then use imfill with the 'holes' option to fill in the holes, and finally use erosion with imerode to shrink the shapes back to their original form. As such:
se = strel('line', 3, 90);
pre = imopen(seg, se);
se = strel('square', 20);
pre2 = imdilate(pre, se);
pre3 = imfill(pre2, 'holes');
final = imerode(pre3, se);
figure;
imshow(final);
final contains the segmented image with the 4 candy boxes. This is what I get:
Try resizing the image. When you make it smaller, it will be easier to join the edges. I tried what's shown below. You might have to tune it depending on the nature of the background.
close all;
clear all;
im = imread('1SGVm.jpg');
small = imresize(im, .25); % resize
grad = (double(imdilate(small, ones(3))) - double(small)); % extract edges
gradSum = sum(grad, 3);
bw = edge(gradSum, 'Canny');
joined = imdilate(bw, ones(3)); % join edges
filled = imfill(joined, 'holes');
filled = imerode(filled, ones(3));
imshow(label2rgb(bwlabel(filled))) % label the regions and show
If you have a recent version of MATLAB, try the Color Thresholder app in the image processing toolbox. It lets you interactively play with different color spaces, to see which one can give you the best segmentation.
If your candy covers are fixed, or you know all the covers that can appear in the scene, then template matching is best for this, as it is independent of the background in the image.
http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
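A minimal cv2.matchTemplate sketch of that idea (the file names and the 0.8 score threshold are assumptions, and one template per known cover is needed):
import cv2
import numpy as np

img = cv2.imread('AllFour.jpg')
template = cv2.imread('cover.jpg')             # one known candy cover
th, tw = template.shape[:2]

res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
for y, x in zip(*np.where(res >= 0.8)):        # keep matches above the assumed score threshold
    cv2.rectangle(img, (int(x), int(y)), (int(x) + tw, int(y) + th), (0, 255, 0), 2)
cv2.imwrite('matches.png', img)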

Automatic approach for removing colord object shadow on white background?

I am working on some leaf images using OpenCV (Java). The leaves are captured on white paper and some have shadows, like this one:
Of course, this is somewhat of an extreme case (there are milder shadows).
Now, I want to threshold the leaf and also remove the shadow (while preserving the leaf's details).
My current flow is this:
1) Converting to HSV and extracting the Saturation channel:
Imgproc.cvtColor(colorMat, colorMat, Imgproc.COLOR_RGB2HSV);
ArrayList<Mat> channels = new ArrayList<Mat>();
Core.split(colorMat, channels);
satImg = channels.get(1);
2) De-noising (median) and applying adaptiveThreshold:
Imgproc.medianBlur(satImg , satImg , 11);
Imgproc.adaptiveThreshold(satImg , satImg , 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 401, -10);
And the result is this:
It looks OK, but the shadow is causing some anomalies along the left boundary. Also, I have this feeling that I am not using the white background to my benefit.
Now, I have 2 questions:
1) How can I improve the result and get rid of the shadow?
2) Can I get good results without working on saturation channel?. The reason I ask is that on most of my images, working on L channel (from HLS) gives way better results (apart from the shadow, of course).
Update: Using the Hue channel makes thresholding better, but makes the shadow situation worse:
Update2: In some cases, the assumption that the shadow is darker than the leaf doesn't always hold. So, working on intensities won't help. I'm looking more toward a color channels approach.
I don't use OpenCV; instead, I tried the MATLAB Image Processing Toolbox to extract the leaf. Hopefully OpenCV has all the corresponding functions for you. Please see my result below. I did all the operations on channel 3 and channel 1 of your original image.
First I used your channel 3 and thresholded it at 100 (top left). Then I removed the regions on the border and the regions smaller than 100 pixels, and filled the hole in the leaf; the result is shown at top right.
Next I used your channel 1 and did the same thing as for channel 3; the result is shown at bottom left. Then I found the connected regions (there are only two, as you can see in the bottom-left figure) and removed the one with the smaller area (shown at bottom right).
Suppose the top-right image is I1 and the bottom-right image is I; the leaf is extracted by computing ~I & I1. The leaf is:
Hope it helps. Thanks
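The exact MATLAB calls aren't shown above, so here is a rough Python/scikit-image sketch of the described steps (threshold at 100, clear border regions, drop regions under 100 px, fill holes, keep the larger region in the second mask, then combine); the channel indices are my guess at the MATLAB "channel 3"/"channel 1" (blue and red in OpenCV's BGR order):
import cv2
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.measure import label
from skimage.morphology import remove_small_objects
from skimage.segmentation import clear_border

def clean(channel):
    # threshold at 100, remove border-touching and tiny regions, fill holes
    mask = clear_border(channel > 100)
    mask = remove_small_objects(mask, 100)
    return binary_fill_holes(mask)

img = cv2.imread('leaf.jpg')           # BGR order
I1 = clean(img[..., 0])                # MATLAB "channel 3" -> blue plane (assumption)
I = clean(img[..., 2])                 # MATLAB "channel 1" -> red plane (assumption)

# keep only the larger of the connected regions in I
lab = label(I)
if lab.max() > 1:
    sizes = np.bincount(lab.ravel())[1:]
    I = lab == (np.argmax(sizes) + 1)

leaf = ~I & I1                         # the "~I & I1" combination from the answer
cv2.imwrite('leaf_mask.png', leaf.astype(np.uint8) * 255)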
I tried two different things:
1. other thresholding on the saturation channel
2. try to find two contours: shadow and leaf
I use C++, so my code snippets will look a little different from yours.
Trying Otsu thresholding instead of adaptive thresholding:
cv::threshold(hsv_imgs,mask,0,255,CV_THRESH_BINARY|CV_THRESH_OTSU);
leading to the following images (just Otsu thresholding on the saturation channel):
The other thing is computing gradient information (I used Sobel; see the OpenCV documentation), thresholding that, and after an opening operator I used findContours, giving something like this; not usable yet (gradient contour approach):
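For reference, a rough Python sketch of those mechanics (Sobel gradient magnitude, Otsu threshold, opening, then findContours), assuming OpenCV 4; the kernel size is a guess:
import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread('leaf.jpg'), cv2.COLOR_BGR2GRAY)

# Sobel gradient magnitude, scaled back to 8 bit
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# threshold the gradient, clean it up with an opening, then trace contours
_, mask = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
cv2.drawContours(vis, contours, -1, (0, 0, 255), 2)
cv2.imwrite('gradient_contours.png', vis)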
I'm trying to do the same thing with photos of butterflies, but with more uneven and unpredictable backgrounds such as this. Once you've identified a good portion of the background (e.g. via thresholding, or as we do, flood filling from random points), what works well is to use the GrabCut algorithm to get all those bits you might miss on the initial pass. In python, assuming you still want to identify an initial area of background by thresholding on the saturation channel, try something like
import cv2
import numpy as np
img = cv2.imread("leaf.jpg")
sat = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:,:,1]
sat = cv2.medianBlur(sat, 11)
thresh = cv2.adaptiveThreshold(sat , 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10);
cv2.imwrite("thresh.jpg", thresh)
h, w = img.shape[:2]
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
grabcut_mask = (thresh // 255 * 3).astype(np.uint8) #background should be 0, probable foreground = 3; grabCut needs a uint8 mask
cv2.grabCut(img, grabcut_mask,(0,0,w,h),bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)
grabcut_mask = np.where((grabcut_mask ==2)|(grabcut_mask ==0),0,1).astype('uint8')
cv2.imwrite("GrabCut1.jpg", img*grabcut_mask[...,None])
This actually gets rid of the shadows for you in this case, because the edge of the shadow has high saturation levels, so it is included in the GrabCut deletion. (I would post images, but I don't have enough reputation.)
Usually, however, you can't trust shadows to be included in the background detection. In this case you probably want to compare areas in the image with the colour of the now-known background, using the chromacity distortion measure proposed by Horprasert et al. (1999) in "A Statistical Approach for Real-time Robust Background Subtraction and Shadow Detection". This measure takes account of the fact that for desaturated colours, hue is not a relevant measure.
Note that the pdf of the preprint you find online has a mistake (no + signs) in equation 6. You can use the version re-quoted in Rodriguez-Gomez et al (2012), equations 1 & 2. Or you can use my python code below:
def brightness_distortion(I, mu, sigma):
    return np.sum(I*mu/sigma**2, axis=-1) / np.sum((mu/sigma)**2, axis=-1)

def chromacity_distortion(I, mu, sigma):
    alpha = brightness_distortion(I, mu, sigma)[...,None]
    return np.sqrt(np.sum(((I - alpha * mu)/sigma)**2, axis=-1))
You can feed the known background mean & stdev as the last two parameters of the chromacity_distortion function, and the RGB pixel image as the first parameter, which should show you that the shadow is basically the same chromacity as the background, and very different from the leaf. In the code below, I've then thresholded on chromacity, and done another grabcut pass. This works to remove the shadow even if the first grabcut pass doesn't (e.g. if you originally thresholded on hue)
mean, stdev = cv2.meanStdDev(img, mask = 255-thresh)
mean = mean.ravel() #bizarrely, meanStdDev returns an array of size [3,1], not [3], so flatten it
stdev = stdev.ravel()
chrom = chromacity_distortion(img, mean, stdev)
chrom255 = cv2.normalize(chrom, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX).astype(np.uint8)[:,:,None]
cv2.imwrite("ChromacityDistortionFromBackground.jpg", chrom255)
thresh2 = cv2.adaptiveThreshold(chrom255 , 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10);
cv2.imwrite("thresh2.jpg", thresh2)
grabcut_mask[...] = 3
grabcut_mask[thresh==0] = 0 #where thresh == 0, definitely background, set to 0
grabcut_mask[np.logical_and(thresh == 255, thresh2 == 0)] = 2 #could try setting this to 2 or 0
cv2.grabCut(img, grabcut_mask,(0,0,w,h),bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)
grabcut_mask = np.where((grabcut_mask ==2)|(grabcut_mask ==0),0,1).astype('uint8')
cv2.imwrite("final_leaf.jpg", grabcut_mask[...,None]*img)
I'm afraid with the parameters I tried, this still removes the stalk, though. I think that's because GrabCut thinks that it looks a similar colour to the shadows. Let me know if you find a way to keep it.

Why is my bicubic interpolation of discrete data looking ugly?

I have a 128x128 array of elevation data (elevations from -400 m to 8000 m are displayed using 9 colors) and I need to resize it to 512x512. I did it with bicubic interpolation, but the result looks weird. In the picture you can see the original, nearest-neighbour, and bicubic versions. Note: only the elevation data are interpolated, not the colors themselves (the gamut is preserved). Are the artifacts seen in the bicubic image a result of my bad interpolation code, or are they caused by interpolating discrete (9-step) data?
http://i.stack.imgur.com/Qx2cl.png
There must be something wrong with the bicubic code you're using. Here's my result with Python:
The black border around the outside is where the result was outside of the palette due to ringing.
Here's the program that produced the above:
from PIL import Image
im = Image.open(r'c:\temp\temp.png')
# convert the image to a grayscale with 8 values from 10 to 17
levels=((0,0,255),(1,255,0),(255,255,0),(255,0,0),(255,175,175),(255,0,255),(1,255,255),(255,255,255))
img = Image.new('L', im.size)
iml = im.load()
imgl = img.load()
colormap = {}
for i, color in enumerate(levels):
    colormap[color] = 10 + i
width, height = im.size
for y in range(height):
    for x in range(width):
        imgl[x,y] = colormap[iml[x,y]]
# resize using Bicubic and restore the original palette
im4x = img.resize((4*width, 4*height), Image.BICUBIC)
palette = []
for i in range(256):
    if 10 <= i < 10+len(levels):
        palette.extend(levels[i-10])
    else:
        palette.extend((i, i, i))
im4x.putpalette(palette)
im4x.save(r'c:\temp\temp3.png')
Edit: Evidently Python's Bicubic isn't the best either. Here's what I was able to do by hand in Paint Shop Pro, using roughly the same procedure as above.
While bicubic interpolation can sometimes generate interpolated values outside the original range (can you verify whether this is happening to you?), it really seems like you may have a bug, though it is hard to say without looking at the code. As a general rule the bicubic solution should be smoother than the nearest-neighbour solution.
Edit: I take that back; I see no interpolated values outside the original range in your images. Still, I think the strange part is the "jaggedness" you get when using bicubic; you may want to double-check that.
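To see the "values outside the original range" effect independently of any image code, a tiny 1-D experiment with SciPy's cubic interpolation (an interpolating spline rather than the exact bicubic kernel, but the overshoot behaviour is similar) shows the ringing a sharp step produces:
import numpy as np
from scipy.interpolate import interp1d

x = np.arange(8)
y = np.array([0, 0, 0, 0, 9, 9, 9, 9], dtype=float)   # a sharp step, like a terrace edge

f = interp1d(x, y, kind='cubic')
fine = f(np.linspace(0, 7, 57))
print(fine.min(), fine.max())   # dips slightly below 0 and peaks slightly above 9 near the step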

Resources