I want to achieve a coloring-picture effect on a photo and am asking for help with computer vision techniques, as I am a newbie in this area. I started out with a bilateral filter and a min filter, but I am stuck on what to do next. I use OpenCV with Python for prototyping. Are there any patterns that might help me achieve this? I appreciate your help and time.
One of the possible solutions:
Convert image to grayscale
Apply Canny edge detector or Sobel filter
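A minimal sketch of that pipeline in OpenCV/Python, combined with the bilateral filter you already started with (file names and parameter values are placeholders to tune):

```python
import cv2

img = cv2.imread("photo.jpg")  # placeholder input path

# Smooth colors while keeping edges sharp (the bilateral filter
# you already started with).
smooth = cv2.bilateralFilter(img, 9, 75, 75)

# Step 1: convert to grayscale; step 2: Canny edge detector.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Invert the edge map and use it to darken edges in the
# smoothed image, giving a cartoon-like look.
mask = cv2.cvtColor(255 - edges, cv2.COLOR_GRAY2BGR)
cartoon = cv2.bitwise_and(smooth, mask)

cv2.imwrite("cartoon.jpg", cartoon)
```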
Hope it helps.
What you are looking for is called "non-photorealistic rendering". In its simplest form, it is what Astor suggested. But there are many studies in this field, ranging from automatic creation of sketches to complex cartoon-like effects and more.
Here is a paper on this topic, and another one here, but you can find more by googling "non-photorealistic rendering".
Please consider the sample image shown below.
The cave paintings are only vaguely visible here. Can you please suggest a probable image processing technique I can use to extract the regions of the painting?
I tried Otsu thresholding (which chooses the threshold automatically), but it did not work. Something as simple as color segmentation could be looked into. Apart from that, any pointers, please?
You can use decorrelation stretching for this. Take a look at this. You'll find the pre-processing techniques they use in combination with decorrelation stretching to segment rock paintings. In my blog post here you'll find an implementation of decorrelation stretching using OpenCV.
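For reference, here is a minimal sketch of decorrelation stretching in Python with NumPy/OpenCV (not the blog post's exact code; the target sigma and file name are assumptions):

```python
import cv2
import numpy as np

def decorrelation_stretch(img):
    """Decorrelate the color channels, equalize their variances,
    then map back; this exaggerates subtle color differences."""
    pixels = img.reshape(-1, 3).astype(np.float64)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)

    # Eigendecomposition of the channel covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(cov)

    # Scale each principal axis to a common target variance.
    target_sigma = 50.0  # assumed value; adjust to taste
    scale = np.diag(target_sigma / np.sqrt(eigvals + 1e-10))
    transform = eigvecs @ scale @ eigvecs.T

    stretched = (pixels - mean) @ transform.T + mean
    return np.clip(stretched, 0, 255).reshape(img.shape).astype(np.uint8)

img = cv2.imread("cave_painting.jpg")  # hypothetical file name
cv2.imwrite("stretched.jpg", decorrelation_stretch(img))
```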
I'm trying to figure out how to do this programmatically, but despite all of my Googling I cannot figure out how it is done.
The lens blur is different from the Gaussian blur, which looks very computer-generated.
Thanks for the help!
I found an interesting blog post on the subject. I haven't read through the whole thing, but it seems quite descriptive and might be of some help.
You don't state what language you're after, but Java can do a lot of image processing; check out this link:
jhlabs blurring examples
It even includes the lens blur effect you are after.
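If you end up prototyping in Python/OpenCV instead, a rough approximation of a lens blur is convolution with a disc-shaped (aperture) kernel rather than a Gaussian. A minimal sketch (the radius and file names are assumptions):

```python
import cv2
import numpy as np

def disc_kernel(radius):
    """Disc-shaped (aperture) kernel; convolving with it
    approximates the out-of-focus look of a lens blur."""
    size = 2 * radius + 1
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (x * x + y * y <= radius * radius).astype(np.float64)
    return kernel / kernel.sum()

img = cv2.imread("photo.jpg")  # hypothetical input
blurred = cv2.filter2D(img, -1, disc_kernel(9))
cv2.imwrite("lens_blur.jpg", blurred)
```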
I am trying to identify static hand signs. I am confused about the libraries and algorithms I can use for the project.
What I need is to identify hand signs and convert them into text. I managed to get the hand contour.
Can you please tell me what the best method to classify hand signs is?
Is it a Haar classifier, AdaBoost classifier, convex hull, orientation histograms, SVM, the SIFT algorithm, or anything else?
Also, please give me some examples as well.
I tried OpenCV and Emgu CV for image processing. Which is better for a real-time system: C++ or C#?
Any help is highly appreciated.
Thanks
I implemented hand tracking for web applications in my master's degree. Basically, you should follow these steps (a sketch follows this list):
1 - Detect features on skin color within a region of interest. Basically, draw a frame on the screen and ask the user to place their hand inside it.
2 - You should have an implementation of the Lucas-Kanade tracker method. Basically, this algorithm will ensure that your features are not lost across frames.
3 - Try to acquire more features every 3 frames.
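A minimal sketch of steps 1 and 2 using OpenCV's pyramidal Lucas-Kanade implementation (the ROI coordinates and parameter values are assumptions):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Step 1: detect features only inside an ROI where the user
# is asked to place their hand (hypothetical 200x200 box).
roi_mask = np.zeros_like(gray)
roi_mask[100:300, 100:300] = 255
points = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                 minDistance=5, mask=roi_mask)

# Step 2: track the features with Lucas-Kanade so they
# are not lost across frames.
while True:
    ok, frame = cap.read()
    if not ok or points is None or len(points) == 0:
        break
    next_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(gray, next_gray,
                                                     points, None)
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
    gray = next_gray
    for x, y in points.reshape(-1, 2):
        cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
```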
People use many approaches, so I cannot point you to a single one. You could do some research using Google Scholar with the keywords "hand sign", "recognition" and "detection".
You may also find some code with the help of Google. One example is HandVu: http://www.movesinstitute.org/~kolsch/HandVu/HandVu.html
The Haar classifier (the Viola-Jones method) helps to detect hands, not to recognize them.
Good luck in your research!
I have made the following with OpenCV. Algorithm:
Skin detection done in HSV
Thinning (if a pixel has a zero neighbor, then set it to zero)
Thickening (if a pixel has a nonzero neighbor, then set it to nonzero)
See this Wikipedia page for the details of these.
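A minimal sketch of this pipeline, approximating the thinning/thickening steps with morphological erosion and dilation (the HSV bounds are a rough assumption and usually need tuning for your lighting and skin tones):

```python
import cv2
import numpy as np

img = cv2.imread("hand.jpg")  # hypothetical input

# Skin detection in HSV; these bounds are assumed, not universal.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
skin = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))

kernel = np.ones((3, 3), np.uint8)
# Thinning step: erosion removes isolated pixels.
skin = cv2.erode(skin, kernel)
# Thickening step: dilation fills small holes in the mask.
skin = cv2.dilate(skin, kernel)

cv2.imwrite("skin_mask.jpg", skin)
```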
You can find a trained cascade for detecting hands with OpenCV on GitHub:
https://github.com/Aravindlivewire/Opencv/blob/master/haarcascade/aGest.xml
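A minimal sketch of loading that cascade with OpenCV (assuming you downloaded aGest.xml locally; the detection parameters are just a starting point):

```python
import cv2

# Load the aGest.xml cascade linked above (local path after download).
cascade = cv2.CascadeClassifier("aGest.xml")

img = cv2.imread("frame.jpg")  # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a box around each detection.
for (x, y, w, h) in hands:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```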
Good luck...
First, my final goal is to process the following image with Tesseract:
http://ubuntuone.com/72m0ujsL9RhgfMIlugRDWP
(I wiped out the second and the third column...)
However, Tesseract has problems with the dotted background, so my idea is to pre-process the image with OpenCV. It would be best if I could somehow detect each line, because I need to remove the dotted background by applying a different threshold to odd lines than to even lines. Is there any solution to this problem? So far I have found the Hough transform and maybe segmentation, but the results weren't very good (maybe because of wrong parameters). But I'm not sure whether these are viable approaches and where I should best invest my time.
Column detection would be fine, too, because the second column contains only numbers and the third only characters. Passing this "knowledge" to Tesseract could improve its detection rate even more.
I would be really thankful if somebody could give me some hints on how to solve this issue and which OpenCV functions are best used, with which parameters. Some snippets that give me a fair idea of the different steps would be helpful, too.
Thanks in advance!
Kind regards.
I would suggest you use something like erosion, as the dots seem to be rather small compared to the width of the letters.
Alternatively, I would use Canny edge detection with proper thresholds, so as to discard the rather short and thin edges of the dots.
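A minimal sketch of the erosion idea, implemented as a morphological opening (the kernel size is an assumption you will need to tune to your dot size):

```python
import cv2
import numpy as np

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Binarize, then erode: dots smaller than the kernel vanish,
# while the thicker letter strokes survive.
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = np.ones((2, 2), np.uint8)
eroded = cv2.erode(binary, kernel)
# Dilate back so the surviving strokes regain their width
# (erode + dilate = morphological opening).
cleaned = cv2.dilate(eroded, kernel)

# Invert back to black text on white for Tesseract.
cv2.imwrite("cleaned.png", 255 - cleaned)
```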
Hope this helps, have fun!
I want to make a program for checking printed paper for errors.
PDF file: please refer to the second page, top-right picture.
As you see, that system could identify the errors made by the printer.
I want to know how this was achieved. What existing documents are there about this?
Or any ideas you have?
Thank you
This can be very easy or very difficult.
If your images are black and white and your scan is quite precise, you can try a simple subtraction between the images (scanned and pattern).
If your scan reads the image with a possible deformation or translation, then you will first need an image registration algorithm.
If your scan presents background noise, you will have some trouble with the subtraction, and then it becomes very difficult.
Maybe some image samples could help us suggest a more specific algorithm.
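For the simple case, a minimal sketch of the subtraction approach (file names and the threshold value are assumptions):

```python
import cv2

# Hypothetical file names for the reference pattern and the scan.
pattern = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)
scanned = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)

# Pixel-wise difference; large values mark candidate printing errors.
diff = cv2.absdiff(pattern, scanned)
_, errors = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)

cv2.imwrite("errors.png", errors)
```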
I think you need to somehow compare the two images in a way that is robust to deformation. As mentioned before, subtracting the two images can be a first step. Another, more sophisticated way is to use the distance transform (or chamfering-based methods for template matching) to compare how similar the two images are in the presence of some deformation. More sophisticated solutions can use methods like shape contexts.
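A minimal sketch of the distance-transform (chamfer) comparison (file names and Canny thresholds are assumptions):

```python
import cv2
import numpy as np

# Hypothetical inputs: edge maps of the pattern and the scan.
pattern_edges = cv2.Canny(cv2.imread("pattern.png", 0), 50, 150)
scan_edges = cv2.Canny(cv2.imread("scan.png", 0), 50, 150)

# Distance transform of the pattern: each pixel holds the
# distance to the nearest pattern edge (edges are the zero pixels).
dist = cv2.distanceTransform(255 - pattern_edges, cv2.DIST_L2, 3)

# Chamfer score: average distance from scan edges to pattern edges;
# a high score means the scan deviates from the pattern.
ys, xs = np.nonzero(scan_edges)
score = dist[ys, xs].mean()
print("chamfer distance:", score)
```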