Cropping the liquid region of a bottle for processing - opencv

Basically, what I want to do is filter out only the liquid region of the bottle, so that further processing applies only to that region.
I've tried various methods for months without any luck. I can filter out the region between the top liquid boundary and the top of the dark region at the bottom, but that doesn't serve my purpose, as I also need the areas at the sides of that dark region at the bottom of the bottle. I'm trying to do this in OpenCV/EmguCV.
Help please...

Could you upload the images you have already obtained (and your code), along with the regions your result failed to include? Currently I am not quite sure which part of the liquid you would like to get. I tried some simple processing and got a liquid region. Please let me know if there is a problem.
(1) Remove the regions with (i) an intensity of 255 in any of the R, G, or B channels, or (ii) an intensity below 100 in all three channels; shown upper left as I0.
(2) Convert to HSV space and remove the regions whose Hue value (normalised to [0, 1]) equals 0.5 or 1; shown upper middle as I1.
(3) Remove the regions with a Saturation value of 0.5 or more; shown upper right as I2.
(4) Compute I2 & I1, remove the regions with small areas, and fill in the holes; shown lower left as I3.
(5) Compute I0(:,:,1) & I3, where I0(:,:,1) is channel 1 of I0. Fill in the holes and smooth the edges; shown lower middle.
(6) Use (5)'s result as a mask on the original image; shown lower right.
I think you can also get the liquid region with the dark area at the bottom showing as a hole; you can use cvFloodFill() to fill the holes and get an intact liquid region.
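For reference, a minimal OpenCV-Python sketch of steps (1)-(4) and (6), with flood fill for the hole filling. The input file name, the hue tolerance, and the kernel size are assumptions; the steps above read like MATLAB, so this is a translation of the idea, not the answerer's actual code:

import cv2
import numpy as np

img = cv2.imread('bottle.jpg')                      # assumed input file
b, g, r = cv2.split(img)

# (1) drop saturated pixels and very dark pixels -> I0
i0 = ~(((b == 255) | (g == 255) | (r == 255)) |
       ((b < 100) & (g < 100) & (r < 100)))

# (2)+(3) HSV masks: hue away from 0.5 and 1.0, low saturation
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
h = hsv[:, :, 0] / 180.0                            # normalise hue to [0, 1]
s = hsv[:, :, 1] / 255.0
i1 = ~np.isclose(h, 0.5, atol=0.01) & ~np.isclose(h, 1.0, atol=0.01)
i2 = s < 0.5

# (4) combine, drop small blobs, fill holes via flood fill from the border
mask = (i1 & i2 & i0).astype(np.uint8) * 255
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
flood = mask.copy()
ff_mask = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), np.uint8)
cv2.floodFill(flood, ff_mask, (0, 0), 255)
mask |= cv2.bitwise_not(flood)                      # holes = pixels not reached from the border

# (6) apply as a mask on the original image
result = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('liquid.png', result)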

Remove color cast using libvips

I have sRGB images with color casts. To remove them manually I usually use Photoshop Levels adjustments. Photoshop also has tools for that: Auto Contrast, or even better Auto Tone, which also takes shadows, midtones & highlights into account.
If I remove the cast manually I adjust each of the RGB channels individually so that the darkest pixels are set to pure black and the lightest to pure white and then redistribute all other values (spreading the histogram). This is a simple approach but shows good results for my images.
In my node.js app I'm using sharp for image processing, which uses libvips as its processing engine. I tried to remove the cast with .normalize(), but this command works on all channels together and not individually for each of the RGB channels, so it doesn't work for me.
I also asked this question on the sharp project page. I tested the suggestion from lovell to try it with hist_local, but the results are not usable for me.
Now I would like to find out how this could be done using the native libvips. I've played around with nip2 GUI and different commands but could not figure out how it could be achieved:
Histogram > Equalise Histogram > Global => picture looks oversaturated
Image > Levels > Scale to 0 - 255 => channels are not all spread from 0 to 255 (I don't understand exactly what this command does)
Thanks for every hint!
Addition
Here is an example with pictures from Photoshop to show what I want.
The source image is a picture of a frame from a film negative.
Image before processing
Step1 Invert image
Image after inversion
Step2 Using Auto Tone in Photoshop (works the same way as my description above of manually removing the color cast)
Image after Auto Tone
This last picture is ok for me.
nip2 has a menu item for this.
Load your image and mark a region on it containing the area you'd like to be neutral. It can be any lightness, it doesn't need to be white.
Use File / Open to get the file dialog and you should see the image loaded in your workspace as a thumbnail.
Double-click on the thumbnail to open an image view window.
In the view window, zoom and pan to the right spot. The user guide (press F1) has a section on image navigation.
Hold down CTRL and click and drag down and right to mark a rectangular region.
Back in the main window, click Toolkits / Tasks / Capture / White balance. You should see something like:
You can drag and resize your region to change the neutral point. Use the colour picker to set what white means. You can make other whites with (for example) Colour / New / Colour from CCT and link them together.
Click Colour / New / Colour from CCT to make a colour picker from CCT (correlated colour temperature) -- the temperature in Kelvin of that white.
Set it to something interesting, like 4800 for warm white.
Click on the formula for A5.white to edit it, and enter the cell of your CCT widget (A7 in this case).
Now you can drag the region to adjust the pixels to set the neutral from, and drag the CCT slider to set the temperature.
It can be annoying to find things in the toolkit menu. There's a thing for searching toolkits: in the main window, click View / Toolkit browser. You can enter something like "white" and it'll show related toolkit entries.
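If you'd rather script it than use the GUI, the basic idea (scale each band so a chosen patch comes out neutral) can be sketched in pyvips. The file names and region coordinates below are placeholders, and the CCT handling is left out:

import pyvips

image = pyvips.Image.new_from_file('before.jpg')

# hypothetical patch that should come out neutral: left, top, width, height
patch = image.crop(100, 100, 50, 50)

# scale each band so the patch's band averages meet at their common mean
means = [band.avg() for band in patch.bandsplit()]
grey = sum(means) / len(means)
balanced = (image * [grey / m for m in means]).cast('uchar')

balanced.write_to_file('balanced.jpg')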
Here's another answer, but using pyvips and responding to the previous comments. I didn't want to delete the first answer as it still seemed useful.
This version finds the image histogram, searches for thresholds which will select 0.5% and 99.5% of pixels in each image band, then rescales the image so that those pixel values become 0 and 255.
import sys
import pyvips

# trim off this percentage of pixels from the top and bottom
trim_percent = 0.5

def percent(hist, percentage):
    """From a histogram, find the threshold above which lie
    `percentage` of pixels."""
    # normalised cumulative histogram
    norm = hist.hist_cum().hist_norm()
    # column and row profile over percentage
    c, r = (norm > norm.width * percentage / 100).profile()
    return r.avg()

image = pyvips.Image.new_from_file(sys.argv[1])

# photographic negative
image = image.invert()

# find image histogram, split to set of separate bands
bands = image.hist_find().bandsplit()

# for each band, the low and high thresholds
low = [percent(band, trim_percent) for band in bands]
high = [percent(band, 100 - trim_percent) for band in bands]

# rescale image
scale = [255.0 / (h - l) for h, l in zip(high, low)]
image = (image - low) * scale

image.write_to_file(sys.argv[2])
It seems to give roughly similar results to the PS button. If I run:
$ ./autolevel.py ~/pics/before.jpg x.jpg
I see:
In the meantime I've found the Simplest Color Balance Algorithm, which describes exactly the problem with color casts; you can also find C source code there.
It is exactly the same solution as John describes in his second answer, but as a small piece of C code.
I'm now trying to use it as a C/C++ addon with N-API under Node.js.

Tweaking display of quality histogram, exporting the colormap

I have a couple of questions, which tie back to a simple need: I want to use the quality histogram as a colorbar in my publication. To export it along with labels, I tried just taking a snapshot with the appropriate tool, but with an alpha or solid white background the text/colorbars are not visible. With the solid black or MeshLab background the text is white, and cannot be used directly in a publication.
My questions are as follows:
I know how to change the text color in the MeshLab window. Is there a similar function to change the text font size?
As a more demanding question, is there a way I can import the quality map file into MATLAB or some other software and plot a custom colorbar? I will append my .qmap file here, but it seems that the color field is empty, and I cannot reproduce the colors without it.
%%%%%QMAP FILE TO FOLLOW%%%%%
// COLOR BAND FILE STRUCTURE - first row: RED CHANNEL DATA - second row GREEN CHANNEL DATA - third row: BLUE CHANNEL DATA
// CHANNEL DATA STRUCTURE - the channel structure is grouped in many triples. The items of each triple represent respectively: X VALUE, Y_LOWER VALUE, Y_UPPER VALUE of each node-key of the transfer function
0;0.5;0.125;1;0.375;1;0.625;0;0.875;0;1;0;
0;0;0.125;0;0.375;1;0.625;1;0.875;0;1;0;
0;0;0.125;0;0.375;0;0.625;1;0.875;1;1;0.5;
//THE FOLLOWING 4 VALUES REPRESENT EQUALIZER SETTINGS - the first and the third values represent respectively the minimum and the maximum quality values used in histogram, the second one represents the position (in percentage) of the middle quality, and the last one represents the level of brightness as a floating point number (0 completely dark, 1 original brightness, 2 completely white)
-0.001;0.714286;0.0004;1;
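For what it's worth, the color band rows above are enough to rebuild the colormap outside MeshLab. Here is a minimal Python/matplotlib sketch; note that although the header describes (X, Y_LOWER, Y_UPPER) triples, these particular rows parse cleanly as (X, Y) pairs (Y_LOWER and Y_UPPER apparently coincide), which maps directly onto matplotlib's LinearSegmentedColormap segment data. The min/max quality values are taken from the equalizer line:

import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

# channel rows copied from the .qmap file above
rows = {
    'red':   "0;0.5;0.125;1;0.375;1;0.625;0;0.875;0;1;0;",
    'green': "0;0;0.125;0;0.375;1;0.625;1;0.875;0;1;0;",
    'blue':  "0;0;0.125;0;0.375;0;0.625;1;0.875;1;1;0.5;",
}

def parse_row(row):
    vals = [float(v) for v in row.rstrip(';').split(';')]
    pairs = zip(vals[0::2], vals[1::2])            # (X, Y) pairs
    return [(x, y, y) for x, y in pairs]           # (x, y0, y1) for matplotlib

cmap = LinearSegmentedColormap('qmap', {k: parse_row(v) for k, v in rows.items()})

# standalone colorbar over the min/max quality from the equalizer line
norm = plt.Normalize(vmin=-0.001, vmax=0.0004)
fig, ax = plt.subplots(figsize=(1.2, 4))
fig.colorbar(plt.cm.ScalarMappable(norm=norm, cmap=cmap), cax=ax)
fig.savefig('quality_colorbar.png', dpi=300, bbox_inches='tight')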

How to remove non-periodic lines from binary image

Example Image
I want to remove the lines (shown in red) as they are out of order. The lines shown in black repeat at (approximately) the same period. The period is not known beforehand. Is there any way of deleting the non-periodic lines (shown in red) automatically?
NOTE: the image is binary (black & white); the lines are shown in red only for illustration.
Of course there is a way; there is almost always some way to do something.
Unfortunately you have not stated a particular problem. The question as it stands is too broad to be answered here.
To help you get started (I highly recommend you start with pen, paper and your brain):
Detect the lines -> google or think; there are many standard ways to detect lines in an image. If your binary image has no noise, it's trivial.
Find any equidistant sets -> think.
Delete the rest -> think (you know what is good, so everything else has to go away).
I assume your lines are (almost) vertical.
The following should work:
turn the image into a column-sum histogram
try a Fourier transform on the signal (potentially padding the image appropriately)
pick the maximum/peak from the Fourier spectrum as your base period
If you need the lines themselves rather than just their positions, generate a mask with lines at the appropriate intervals (as determined by the analysis above) and apply it to the image; a sketch follows below.
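A minimal sketch of that pipeline, assuming numpy and OpenCV-Python, near-vertical lines, and white line pixels on black; the file names and the pixel tolerance are placeholders:

import cv2
import numpy as np

img = cv2.imread('lines.png', cv2.IMREAD_GRAYSCALE)
binary = (img > 127).astype(np.float64)

# column-sum histogram: peaks where the vertical lines are
profile = binary.sum(axis=0)
profile -= profile.mean()                  # remove the DC component

# dominant period from the Fourier spectrum
spectrum = np.abs(np.fft.rfft(profile))
freq = np.argmax(spectrum[1:]) + 1         # skip the zero-frequency bin
period = len(profile) / freq               # base period in pixels

# build a mask with lines at that interval, keep only pixels near the grid
phase = np.argmax(profile[:int(round(period))])  # align grid to strongest column
mask = np.zeros_like(binary)
tol = 2                                    # tolerance in pixels around each grid line
for x in np.arange(phase, binary.shape[1], period):
    x = int(round(x))
    mask[:, max(0, x - tol):x + tol + 1] = 1
periodic_only = (binary * mask * 255).astype(np.uint8)
cv2.imwrite('periodic_lines.png', periodic_only)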

Edge detection on pool table

I am currently working on an algorithm to detect the playing area of a pool table. For this purpose, I captured an image, transformed it to grayscale, and used a Sobel operator on it. Now I want to detect the playing area as a box with 4 corners located in the 4 corners of the table.
Detecting the edges of the table is quite straightforward; however, it turns out that detecting the 4 corners is not so easy, as there are pockets in the pool table. Now I just want to fit a line to each of the side edges, and from those lines I can compute the intersections, which are the corners of my table.
I am stuck here because I have not yet come up with a good way to find these lines in my image. I can see them very easily in the Sobel output. But what would be a good way of detecting them and computing the positions of the corners?
EDIT: I added some sample images
Basic Image:
Grayscale Image
Sobel Filter (horizontal only)
For a general solution, there will be many sources of noise: problems with cloth around the rails, wood texture (or no texture) on the rails, varying lighting, shadows, stains on the cloth, chalk on the rails, and so on.
When color and lighting aren't dependable, and when you want to find the edges of geometric objects, then it's best to think in terms of edge pixels rather than gray/color pixels.
A while back I was thinking of making a phone-based app to save ball positions for later review, including online, so I've thought a bit about this problem. Although I can provide some guidance for your current question, it occurs to me that you'll run into new problems at each step of the way, so I'll try to provide a more complete answer.
Convert the image to grayscale. If we can't get an algorithm to work in grayscale, we'll inevitably run into problems with color. (See below)
[TBD] Do some preprocessing to reduce noise.
Find edge points using Sobel or (if you must) Canny.
Run Hough lines detection, but with a few caveats and parameterizations as described below.
Find the lines that describe a keystone-shaped quadrilateral. (This will likely be the inner of two quadrilaterals: one inside the rail on the bed, and the other, slightly larger, at the cloth/wood rail edge on top.)
(Optional) Use the side pockets to help determine the orientation of the quadrilateral.
Use a perspective transform (a homography; an affine map cannot undo keystone distortion) to map the perspective-distorted table bed to a rectangle of [thankfully] known relative dimensions. We know the bed sizes in advance, so you can remap the distorted rectangle to a proper rectangle. (We'll ignore some optical effects for now.)
Remap the color image to the perspective-corrected rectangle. You'll probably need to tweak the positions of some balls.
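As a concrete starting point for the grayscale / edge points / Hough steps above, a minimal OpenCV-Python sketch; the file name and every threshold here are assumptions that will need tuning for your table and lighting:

import cv2
import numpy as np

img = cv2.imread('table.jpg')                    # assumed file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)         # mild noise reduction

edges = cv2.Canny(gray, 50, 150)

# probabilistic Hough: only long-ish segments, so pocket arcs are rejected
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=20)
if lines is not None:
    for line in lines:
        x1, y1, x2, y2 = line[0]
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite('hough_lines.png', img)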
General notes:
Filtering by color in the general sense can be difficult. It's tempting to think of the cloth as being simply green, blue, or red (or some other color), but when you look at the actual RGB values and try to separate colors you'll begin to appreciate what a nightmare working in color can be.
Optical distortion might throw off some edges.
The far short rail may be difficult to detect, but you can do this: find the inside lines of the two long rails, then search vertically between them for the first strong horizontal-ish edge at the far side of the image. That'll be the far short rail.
Although you probably want to use your phone camera for convenience, using a Kinect camera or similar (preferably smaller) device would make the problem easier. Not only would you have both color data and 3D data, but you would eliminate some problems with lighting since the depth data wouldn't depend on visible lighting.
For your app, consider limiting the search region for rail edges to a perspective-distorted rectangle. The user might be able to adjust the search region. This could greatly simplify the processing, and could help you work around problems if the table isn't lit well (as can be the case).
If color segmentation (as suggested by @Dima) works, get the outline of the blob using contour following. Then simplify the outline to a quadrilateral (or a polygon of few sides) using the Douglas-Peucker algorithm. You should find the four table edges this way.
For more accuracy, you can refine the edge location by local search of transitions across it and perform line fitting. Then intersect the lines to get the corners.
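A minimal OpenCV-Python sketch of this contour-plus-simplification idea; it assumes a binary cloth mask from the segmentation step and OpenCV 4's two-value findContours return, and the epsilon factor is a guess to tune:

import cv2

# assumed: a binary mask of the cloth from the color segmentation step
mask = cv2.imread('cloth_mask.png', cv2.IMREAD_GRAYSCALE)

# outline of the largest blob = the playing area
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
blob = max(contours, key=cv2.contourArea)

# simplify the outline with Douglas-Peucker until few vertices remain
eps = 0.02 * cv2.arcLength(blob, True)
quad = cv2.approxPolyDP(blob, eps, True)
print(quad.reshape(-1, 2))                       # ideally the 4 table corners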
The following answer assumes you have already found the positions of the lines in the image. This, however, can be done "easily" by looking directly at the pixels and seeing if they form a "line". Usually it is easier to detect this if the image has been deskewed first, i.e. rotated so the rectangle (pool table) looks more like this: [] than like /=/. Then it is just a case of scanning the pixels, and if there are pixels of similar colour alongside each other, assuming a line runs between them.
The code works by looping over the lines found in the image. Whenever the end points of a horizontal and a vertical line fall within a tolerance of each other in both the x and y coordinates, they are marked as a corner. Once a corner is found, I take the average of the two end points to find where the corner lies. For example:
A horizontal line ending at 10, 10 and a vertical line starting at 12, 12 will be found to be a corner if the tolerance is 2 or more. The corner found will be at 11, 11.
NOTE: This only finds top-left corners, but it can easily be adapted to find all of them. The reason it has been done like this is that in the application where I use it, it is faster to sort each array first into an order where relevant values are found first; see: Why is processing a sorted array faster than an unsorted array?
Also note that my code finds only the first corner for each line, which might not be applicable for you; this is mainly for performance reasons. However, the code can easily be adapted to find all the corners along all the lines, and then either select the "most likely" corner or average over them.
Also note my answer is written in C#.
private IEnumerable<Point> FindTopLeftCorners(IEnumerable<Line> horizontalLines, IEnumerable<Line> verticalLines)
{
    List<Point> TopLeftCorners = new List<Point>();
    Line[] laHorizontalLines = horizontalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
    Line[] laVerticalLines = verticalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();

    foreach (Line verticalLine in laVerticalLines)
    {
        foreach (Line horizontalLine in laHorizontalLines)
        {
            if (verticalLine.StartPoint.X <= (horizontalLine.StartPoint.X + _nCornerTolerance) && verticalLine.StartPoint.X >= (horizontalLine.StartPoint.X - _nCornerTolerance))
            {
                if (horizontalLine.StartPoint.Y <= (verticalLine.StartPoint.Y + _nCornerTolerance) && horizontalLine.StartPoint.Y >= (verticalLine.StartPoint.Y - _nCornerTolerance))
                {
                    // corner position: average of the two start points
                    int nX = (verticalLine.StartPoint.X + horizontalLine.StartPoint.X) / 2;
                    int nY = (verticalLine.StartPoint.Y + horizontalLine.StartPoint.Y) / 2;
                    TopLeftCorners.Add(new Point(nX, nY));
                    break;
                }
            }
        }
    }

    return TopLeftCorners;
}
Where Line is the following class:
public class Line
{
    public Point StartPoint { get; private set; }
    public Point EndPoint { get; private set; }

    public Line(Point startPoint, Point endPoint)
    {
        this.StartPoint = startPoint;
        this.EndPoint = endPoint;
    }
}
And _nCornerTolerance is an int of a configurable amount.
A playing area of a pool table typically has a distinctive color, like green or blue. I would try a color-based segmentation approach first. The Color Thresholder app in MATLAB gives you an easy way to try different color spaces and thresholds.

OpenCV: Generating points from image after thinning

I've run into an issue concerning generating floating-point coordinates from an image.
The original problem is as follows:
The input image is handwritten text. From this I want to generate a set of points (just x, y coordinates) that make up the individual characters.
At first I used findContours to generate the points. Since this finds the edges of the characters, the image first needs to be run through a thinning algorithm, because I'm not interested in the shape of the characters, only the lines, or, as in this case, points.
Input:
thinning:
So, I run my input through the thinning algorithm and all is fine; the output looks good. Running findContours on this, however, does not work out so well: it skips a lot of stuff and I end up with something unusable.
The second idea was to generate bounding boxes (with findContours), use these bounding boxes to grab the characters from the thinning result, grab all non-white pixel indices as "points", and offset them by the bounding box position. This generates even worse output and seems like a bad method.
Horrible code for this:
Mat temp = new Mat(edges, bb);
byte roi_buff[] = new byte[(int) (temp.total() * temp.channels())];
temp.get(0, 0, roi_buff);
int COLS = temp.cols();
List<Point> preArrayList = new ArrayList<Point>();
for (int i = 0; i < roi_buff.length; i++)
{
    if (roi_buff[i] != 0)
    {
        // offset the pixel index by the bounding box's top-left corner
        Point tempP = bb.tl();
        tempP.x += i % COLS;
        tempP.y += i / COLS;
        preArrayList.add(tempP);
    }
}
Are there any alternatives, or am I overlooking something?
UPDATE:
I overlooked the fact that I need the points (pixels) to be ordered. In the method above I simply use a scanline approach to grab all the pixels. If you look at the 'o', for example, it would first grab the point on the left-hand side, then the one on the right-hand side. I need them ordered by their neighbouring pixels, since I want to draw paths with the points later on (outside of OpenCV).
Is this possible?
You should look into implementing your own connected components labelling. The concept is very simple: you scan the first line and assign unique labels to each horizontally connected strip of pixels. You basically check for every pixel if it is connected to its left neighbour and assign it either that neighbour's label or a new label. In the second row you do the same, but you also check against the pixels above it. Sometimes you need a label merge: two strips that were not connected in the previous row are joined in the current row. The way to deal with this is either to keep a list of label equivalences or use pointers to labels (so you can easily do a complete label change for an object).
This is basically what findContours does, but if you implement it yourself you have the freedom to go for 8-connectedness and even bridge a single-pixel or two-pixel gap. That way you get "almost-connected components labelling". It looks like you need this for the "w" in your example picture.
Once you have the image labelled this way, you can push all the pixels of a single label into a vector and order them something like this: find the top-left pixel, push it to a new vector and erase it from the original vector. Now find the pixel in the original vector closest to it, push it to the new vector and erase it from the original. Continue until all pixels have been transferred.
It will not be very fast this way, but it should be a start.
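A minimal Python sketch of the labelling-plus-ordering idea. Note that this substitutes OpenCV's built-in connectedComponents for the hand-rolled labelling described above, so it won't bridge pixel gaps, and the greedy nearest-neighbour walk is quadratic; it's a starting point, not a fast solution:

import cv2
import numpy as np

# assumed: the thinned binary image, white strokes on black
thinned = cv2.imread('thinned.png', cv2.IMREAD_GRAYSCALE)
n, labels = cv2.connectedComponents((thinned > 0).astype(np.uint8), connectivity=8)

paths = []
for label in range(1, n):                           # label 0 is the background
    ys, xs = np.nonzero(labels == label)
    pts = list(zip(xs.tolist(), ys.tolist()))
    current = min(pts, key=lambda p: (p[1], p[0]))  # top-left-most pixel
    pts.remove(current)
    ordered = [current]
    while pts:                                      # greedy nearest-neighbour walk
        current = min(pts, key=lambda p: (p[0] - current[0]) ** 2 +
                                         (p[1] - current[1]) ** 2)
        pts.remove(current)
        ordered.append(current)
    paths.append(ordered)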
