I just started working in the image processing field and have run into a problem. I hope somebody can suggest a way to approach it.
Input: 1 image or a 3D stack of images.
Output: The same image or stack, but with the white lines (boundaries) made continuous.
Just for a broader perspective, the general task is to segment the image into black "grains" and white "boundaries".
Segmentation of similar pictures whose boundaries are already continuous is working just fine.
Please let me know your thoughts on this. Any references to algorithms, software, or even theory are very welcome.
I am trying to do segmentation of book spines stacked both horizontally and vertically, and I have come across a problem when the picture is too big.
Only part of the image can be seen in the window, meaning it does not process the original image it is supposed to process:
The image it processed
The image it should process instead
I cannot even view the whole image that is supposed to be processed. Hence, I tried to shrink the image just for this picture using:
cv::resize(image, image, cv::Size2i(image.cols/6, image.rows/6) ); // resize to 1/6 of the image
which led to another problem: when the picture is small, it becomes so small that the straight lines cannot even be detected.
Hence, I tried:
cv::resize(image, image, cv::Size2i(750, 400) );
This led to yet another problem: while the image above now fits in the window, for smaller pictures my Hough line detection becomes more unstable.
Does anybody have an idea how to solve this sizing problem? And how can I improve my Hough line detection, which is quite unstable at the moment, so that it separates the books? I want to draw a line between each book in the stack.
Hope to hear from you guys soon. Thanks!!!
It looks like you're resizing the image before you perform the Hough transform; I think what you want to do is resize it afterwards. That way you keep enough resolution in your picture to get decent lines detected, and you can still view the result on your monitor.
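For example, a minimal sketch of that order of operations, assuming lines holds the output of your existing Hough step (the function and window names are mine, not from your code):

#include <opencv2/opencv.hpp>

// Detect on the full-resolution image; shrink only a copy used for display.
void showLines(const cv::Mat& image, const std::vector<cv::Vec4i>& lines) {
    cv::Mat display = image.clone();
    for (const cv::Vec4i& l : lines)              // draw at full resolution
        cv::line(display, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                 cv::Scalar(0, 0, 255), 2);
    // resize only the display copy, as in your own resize call
    cv::resize(display, display, cv::Size2i(display.cols / 6, display.rows / 6));
    cv::imshow("books", display);
    cv::waitKey(0);
}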
Secondly, you want to improve the detection of the separation between the books. My advice would be to do a bit of pre-processing on the image first. There are plenty of methods for this; mean shift segmentation, which groups the picture by colour, is one example.
Filtering the results of the transform is another approach. Keeping only the lines that pass through dark areas (since it is more likely to be dark between the books) is one such way, as in the sketch below. There are plenty more methods.
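A rough illustration of both ideas; the function names are mine, and the parameter values (21, 51, and the 60.0 darkness cutoff) are untuned guesses:

#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>

// True if the mean grayscale intensity under the line segment is dark.
static bool isDarkLine(const cv::Mat& gray, const cv::Vec4i& l,
                       double maxMean = 60.0) {
    cv::LineIterator it(gray, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]));
    double sum = 0.0;
    for (int i = 0; i < it.count; ++i, ++it)
        sum += **it;                                   // 8-bit pixel under the line
    return it.count > 0 && sum / it.count < maxMean;
}

// Mean-shift pre-processing, then discard lines that cross bright regions.
void keepDarkLines(const cv::Mat& bgr, std::vector<cv::Vec4i>& lines) {
    cv::Mat smoothed, gray;
    cv::pyrMeanShiftFiltering(bgr, smoothed, 21, 51);  // flatten colour regions
    cv::cvtColor(smoothed, gray, cv::COLOR_BGR2GRAY);
    lines.erase(std::remove_if(lines.begin(), lines.end(),
                               [&](const cv::Vec4i& l) { return !isDarkLine(gray, l); }),
                lines.end());
}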
Also don't forget to tweak the parameters of the Hough Transform to see what works best with your test set. It may reveal some interesting results!
Good luck!
IMO, first you have to improve the edge-detection output: it contains very few edges. You can use cvCanny or cvSobel for that. Also use probabilistic Hough lines, which will give better results. You can tweak the parameters of cvHoughLines, such as threshold, minLineLength, and maxLineGap, since in the figure the lines are coming too close together.
Please check the details here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_lines/hough_lines.html
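A minimal sketch of that pipeline using the C++ API; the threshold, minLineLength, and maxLineGap values below are starting points to tweak, not known-good settings:

#include <opencv2/opencv.hpp>

// Canny edge detection followed by the probabilistic Hough transform.
std::vector<cv::Vec4i> detectSpineLines(const cv::Mat& bgr) {
    cv::Mat gray, edges;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);   // suppress noise first
    cv::Canny(gray, edges, 50, 150);

    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180,
                    /*threshold=*/80, /*minLineLength=*/100, /*maxLineGap=*/10);
    return lines;
}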
I am trying to copy a small image into a bigger image. Both images are from the same scene and are aligned very well. I am using Laplacian blending, which makes the result look seamless.

There is one problem I have not been able to solve yet: illumination. Both photos are of the same scene and were taken with a very small time difference, but there are still some color changes because of lighting differences. I tried to solve this with the ExposureCompensator class from the OpenCV stitching module, but I couldn't get it to work; it is poorly documented, and when I searched for it I found similar questions on Stack Overflow, none of them answered yet.

So it seems I need to develop my own solution to this illumination problem, and I don't know where to start. Please tell me where to start.
Source Image
Destination Image
Result Image with problem
Exclude the region that has changed (the stamp) and do a histogram matching to map the histogram of the source image to that of the destination. The histogram matching will make the colors in the two images match.
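OpenCV has no single built-in call for histogram matching, but a minimal single-channel sketch via cumulative histograms could look like the following (the function name is mine, and masking out the excluded region is omitted for brevity):

#include <opencv2/opencv.hpp>

// Map 'src' intensities so its cumulative histogram matches that of 'ref'
// (both 8-bit single channel).
cv::Mat matchHistogram(const cv::Mat& src, const cv::Mat& ref) {
    CV_Assert(src.type() == CV_8UC1 && ref.type() == CV_8UC1);
    double srcCdf[256] = {0}, refCdf[256] = {0};
    for (int i = 0; i < src.rows; ++i)                 // raw histograms
        for (int j = 0; j < src.cols; ++j) srcCdf[src.at<uchar>(i, j)]++;
    for (int i = 0; i < ref.rows; ++i)
        for (int j = 0; j < ref.cols; ++j) refCdf[ref.at<uchar>(i, j)]++;
    for (int v = 1; v < 256; ++v) {                    // cumulative sums
        srcCdf[v] += srcCdf[v - 1];
        refCdf[v] += refCdf[v - 1];
    }
    for (int v = 0; v < 256; ++v) {                    // normalize to [0, 1]
        srcCdf[v] /= static_cast<double>(src.total());
        refCdf[v] /= static_cast<double>(ref.total());
    }
    cv::Mat lut(1, 256, CV_8U);
    int r = 0;
    for (int v = 0; v < 256; ++v) {   // smallest ref level with CDF >= src CDF
        while (r < 255 && refCdf[r] < srcCdf[v]) ++r;
        lut.at<uchar>(v) = static_cast<uchar>(r);
    }
    cv::Mat out;
    cv::LUT(src, lut, out);
    return out;
}
// For color images, split with cv::split, match each channel, then cv::merge.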
I'm trying to do an application which, among other things, is able to recognize chess positions on a computer screen from screenshots. I have very limited experience with image processing techniques and don't wish to invest a great amount of time in studying this, as this is just a pet project of mine.
Can anyone recommend me one or more image processing techniques that would yield me a good result?
The conditions are:
The image is always crisp and clean, with no noise, poor lighting conditions, etc. (since it's a screenshot)
I'm expecting a very low impact on computer performance while processing 1 image per second
I've thought of two modes to start the process:
Feed the piece shapes to the program (so that it knows what a queen, king etc. looks like)
Just feed the program an initial image containing the starting position, from which the program can (after it recognizes the layout of the board) extract each chess piece
The process should be relatively easy to understand, as I don't have a very good grasp of image processing techniques (yet)
I'm not interested in using any specific technology, so technology-agnostic documentation would be ideal (C/C++, C#, Java examples would also be fine).
Thanks for taking the time to read this, and I hope to get some good answers.
It's an interesting problem, but you need to specify a lot more than in your original question in order to get an acceptable answer.
On the input images: "screenshots" is quite a vague category. Can you assume that the chessboard will always be entirely in view? Will you have multiple views of the same board? Can you assume that no pieces will be partially or completely occluded in all views?
On the imaged objects and the capture system: will the same chessboard and pieces be used, under very similar illumination? Will the same lens/camera/digitization pipeline be used?
Hi Andrei,
I have written a coin-counting algorithm that works from a picture, so the process should be helpful.
The algorithm is called the generalized Hough transform:
Make the picture black and white; it is easier that way
Take the image of one piece and "slide it over the screenshot"
For each position, calculate the number of common pixels in the two images
Where you have the largest number, there you have the piece (see the sketch below)
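In OpenCV terms, these steps amount to template matching (a sliding cross-correlation); matchTemplate does the sliding and scoring for you. A minimal sketch, with made-up file names:

#include <opencv2/opencv.hpp>

int main() {
    // File names are made up for illustration.
    cv::Mat board = cv::imread("screenshot.png", cv::IMREAD_GRAYSCALE);
    cv::Mat piece = cv::imread("queen.png", cv::IMREAD_GRAYSCALE);

    // Slide the piece template over the screenshot; this performs the
    // "count common pixels at every offset" step described above.
    cv::Mat scores;
    cv::matchTemplate(board, piece, scores, cv::TM_CCOEFF_NORMED);

    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(scores, nullptr, &maxVal, nullptr, &maxLoc);
    // maxLoc = top-left corner of the best match; maxVal = score in [-1, 1].
    return 0;
}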
Hope this helps.
Yeah, go with the answer above.
Convert the picture to greyscale
Slice it into 64 squares and store them in an array
Using Matlab you can identify the pieces easily
Color can be obtained by calculating the percentage of dark (black) pixels:
ratio = no. of black pixels / (no. of black pixels + no. of white pixels)
If the ratio is above the threshold, the piece is BLACK; otherwise it is WHITE (a sketch follows below)
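A sketch of the slicing and the dark-pixel ratio test, in C++ rather than Matlab; the function name, the 128 binarization level, and the 0.5 cutoff are my own placeholder choices:

#include <opencv2/opencv.hpp>

// Classify each of the 64 squares by its dark-pixel ratio.
// 'board' is an 8-bit grayscale image cropped to exactly the chessboard.
void classifySquares(const cv::Mat& board) {
    const int cell = board.rows / 8;              // assumes a square board image
    for (int r = 0; r < 8; ++r) {
        for (int c = 0; c < 8; ++c) {
            cv::Mat sq = board(cv::Rect(c * cell, r * cell, cell, cell));
            cv::Mat dark;
            cv::threshold(sq, dark, 128, 255, cv::THRESH_BINARY_INV); // dark -> 255
            double ratio = cv::countNonZero(dark) / static_cast<double>(sq.total());
            // ratio = black pixels / (black + white pixels), as above;
            // 0.5 is a placeholder cutoff -- tune it on real screenshots.
            bool isBlackPiece = ratio > 0.5;
            (void)isBlackPiece;                   // use the result as needed
        }
    }
}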
I'm working on a similar project in C#. Finding which piece is which isn't the hard part for me. The first step is to find a rectangle that contains just the board and cuts everything else out. I first hard-coded it to search for the colors of the squares, but I would like to make it more robust and reliable regardless of the color scheme, so I am trying to make it find squares of pixels that match within a certain threshold and extrapolate the board location from that.
I'm looking for an algorithm to detect circles in an image. The image is black and white. The background is white, and the circles don't overlap each other, or any other element in the image.
The image includes some other shapes and some text.
If there is some open source .NET library to do this, I would also like to know about it.
Maybe the "Hough Transform" is useful for you. You have to know the circle's size in advance to make it efficient though.
http://www.cis.rit.edu/class/simg782/lectures/lecture_10/lec782_05_10.pdf
http://en.wikipedia.org/wiki/Hough_Transform
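OpenCV ships this as HoughCircles (and is usable from .NET through wrappers such as Emgu CV). A minimal sketch, where every numeric parameter is a rough guess to tune, including the radius range the transform needs:

#include <opencv2/opencv.hpp>

// Detect circles in an 8-bit grayscale image with the Hough circle transform.
std::vector<cv::Vec3f> findCircles(const cv::Mat& gray) {
    cv::Mat blurred;
    cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2); // reduce false detections
    std::vector<cv::Vec3f> circles;                     // each entry: (x, y, radius)
    cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT,
                     /*dp=*/1, /*minDist=*/20,
                     /*param1=*/100, /*param2=*/30,
                     /*minRadius=*/10, /*maxRadius=*/100);
    return circles;
}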
There was a similar question yesterday, where the "Hough Transform", and some image processing libraries (though not for .NET) were proposed:
Image Processing Programming
I was looking for the same thing, and what I have found to work best so far is Matlab (the Image Processing Toolbox). It has a good number of options that let you try different processing algorithms, threshold levels, and ranges for the circle radius.
So I'm using OpenCV to do square recognition on this image. I compiled the squares.c sample and ran it on an image that I took; here are the results:
http://www.learntobe.org/urs/index1.php
The image on the left is the original and on the right is the image that is a result of running the square detection.
The results aren't bad, but I really need this to detect ALL of the squares, and I'm really new to OpenCV and image processing. Does anyone know how I can edit the squares.c file to make the detection more inclusive, so that all of the squares are highlighted?
Thanks a lot ahead of time.
All the whitish colors are tough to detect; nothing separates them from the page itself. Try doing some kind of edge detection (check cvCanny or cvSobel).
You should also "pre-process" the image. That is, increase the contrast, make the colors more saturated, etc.; a rough sketch follows.
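For instance (the contrast gain, saturation scale, and Canny thresholds below are guesses to experiment with, and the function name is mine):

#include <opencv2/opencv.hpp>

// Boost contrast and saturation, then run Canny edge detection.
cv::Mat preprocess(const cv::Mat& bgr) {
    cv::Mat boosted;
    bgr.convertTo(boosted, -1, /*alpha=*/1.5, /*beta=*/0); // contrast gain

    cv::Mat hsv;                                           // saturate colors
    cv::cvtColor(boosted, hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);
    ch[1] = ch[1] * 1.3;                                   // scale the S channel
    cv::merge(ch, hsv);
    cv::cvtColor(hsv, boosted, cv::COLOR_HSV2BGR);

    cv::Mat gray, edges;
    cv::cvtColor(boosted, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);
    return edges;
}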
Also check this article: http://www.aishack.in/2010/01/an-introduction-to-contours/ It talks about how the squares.c sample works. Then you'll understand a bit about how to improve the detection in your case.
Hope this helps!