Elegant way of getting grayscale pixels from a scanned image - image-processing

I will have scanned images with gray-toned handwriting on a white background.
What is an elegant way of selecting and getting the pixels of the gray-level (non-white) contiguous areas?
Which image processing library should I use?
So far I have researched classes and methods in Leptonica, but found method names like seedfill. I do not want to fill the area; I want the pixel coordinates that make up the contiguous area.
Can you also share the class name along with the library name?
Thanks for reading and for any response.

You could use OpenCV. Maybe the findContours function is what you want.
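A minimal sketch in Python with OpenCV, assuming a near-white background; the file name and the 200 threshold below are placeholders to tune:

import cv2
import numpy as np

# Load the scan as grayscale; "scan.png" is a placeholder path.
img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)

# Treat anything darker than near-white as handwriting.
# The 200 threshold is an assumption; tune it for your scans.
_, mask = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY_INV)

# findContours gives the outlines of the contiguous non-white regions.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# For the coordinates of every pixel inside each contiguous area,
# connected-component labeling is closer to what you describe.
num_labels, labels = cv2.connectedComponents(mask)
for label in range(1, num_labels):        # label 0 is the background
    ys, xs = np.nonzero(labels == label)  # pixel coordinates of one area
    print(f"region {label}: {len(xs)} pixels")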

Related

Does anyone have an idea about palm line detection in iOS Swift?

Currently, I am applying a colour filtering operation, then median filtering, and then the Canny edge detection algorithm.
Then I read the pixels in a for loop and draw lines from them, but I am not getting a proper result for scanning a palm and showing the lines on a human palm.
If anybody has any ideas regarding this, please let me know.
Currently I am getting this type of result:
but I need this type of output:
I see your problem. You can do this with the following steps:
1. Process your hand image with the Canny edge detection algorithm; let's name the result cannyImage.
2. Create a bitmap of cannyImage and replace its black pixels with transparent ones. Black only, because the Canny output fills the background with black and draws the object's lines in white. You have now extracted an image with the palm lines in white; let's name it palmLineImage.
3. The main part is MASKING: you need to mask palmLineImage onto the original image.
These three steps will give you your desired output (a rough sketch follows below).
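The answer targets iOS and GPUImage, but as a rough illustration of the three steps, here is a sketch in Python with OpenCV; the file name and the Canny thresholds are assumptions:

import cv2
import numpy as np

# Step 1: Canny edge detection on the hand image.
# "hand.jpg" and the 50/150 thresholds are placeholders to tune.
original = cv2.imread("hand.jpg")
gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
canny_image = cv2.Canny(gray, 50, 150)   # black background, white edge lines

# Step 2: treat the black pixels as transparent by keeping only a mask
# of the white palm-line pixels.
palm_line_mask = canny_image > 0

# Step 3: mask the palm lines onto the original image,
# e.g. by painting them in a visible color.
result = original.copy()
result[palm_line_mask] = (0, 0, 255)     # draw the detected lines in red (BGR)

cv2.imwrite("palm_lines.png", result)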
For tools, you can use BradLarson's awesome GPUImage library: https://github.com/BradLarson/GPUImage2
For separating the palm image from the background, which I'm sure you will need in the future, you can use the GrabCut algorithm (a quick sketch follows the links).
Link - https://github.com/naver/grabcutios
Apple has also added support for this: photos captured in Portrait Mode on iOS 12 contain an embedded person segmentation matte that makes it easy to create visual effects like background replacement.
Links - https://developer.apple.com/videos/play/wwdc2019/260/ , https://developer.apple.com/videos/play/wwdc2019/225/
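For a feel of what GrabCut does, here is a minimal sketch using OpenCV's Python binding rather than the iOS port linked above; the file name and rectangle coordinates are placeholders:

import cv2
import numpy as np

img = cv2.imread("hand.jpg")             # placeholder file name
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# A rectangle roughly enclosing the hand; the coordinates are assumptions.
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked definite/probable foreground form the refined palm cutout.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
cutout = img * fg[:, :, None]
cv2.imwrite("palm_cutout.png", cutout)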
Looks like you need to use something like the Douglas-Peucker algorithm to reduce the number of data points and smooth the lines. Link - https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
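OpenCV ships this algorithm as approxPolyDP; a tiny sketch with made-up sample points and an arbitrary epsilon:

import cv2
import numpy as np

# A noisy polyline of detected palm-line points (made-up sample data).
points = np.array([[0, 0], [1, 1], [2, 1], [3, 2], [4, 2], [10, 3]],
                  dtype=np.float32).reshape(-1, 1, 2)

# epsilon is the maximum allowed deviation from the simplified curve;
# larger values remove more points. 1.0 is an arbitrary starting value.
simplified = cv2.approxPolyDP(points, epsilon=1.0, closed=False)
print(simplified.reshape(-1, 2))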

OpenCV: Separate the background and foreground of an image based on red and green markers

I have just started exploring the computer vision field, and I'm trying to create something like this (the image shows what I'm trying to achieve, not what I've already achieved).
My approach (just a logical solution; I haven't tried it yet) is as follows, with a rough sketch after the questions:
Color detection
First, get the pixel positions of the lines with red and green color, then add all those values to arrayRed and arrayGreen.
Segmentation
Get the base image from the cache, then label every pixel whose value is close to arrayRed as background. Do the same for arrayGreen.
Convert the color space to RGBA and set the alpha of the background-labeled pixels to 0.
My questions:
Am I on the right path?
Is this possible to achieve with the OpenCV library?
If my approach is wrong, what is the efficient and actually correct approach (in pseudo-code or Python) to achieve the goal?
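Not a definitive answer, but a minimal sketch of the color-detection and alpha steps with OpenCV in Python, assuming the markers are roughly pure red and green; every threshold, kernel size, and file name below is a placeholder:

import cv2
import numpy as np

img = cv2.imread("marked.png")           # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Hue ranges for the red and green strokes; these bounds are assumptions.
red1 = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
red2 = cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
red_mask = red1 | red2                   # red wraps around the hue axis
green_mask = cv2.inRange(hsv, (40, 100, 100), (80, 255, 255))

# Label pixels near the red strokes as background, e.g. by dilating the
# stroke mask so it covers the surrounding region.
kernel = np.ones((15, 15), np.uint8)
background = cv2.dilate(red_mask, kernel)

# Convert to BGRA and make the background transparent.
bgra = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
bgra[background > 0, 3] = 0
cv2.imwrite("foreground.png", bgra)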

Cropping an image by selecting an object and color matching

We are developing an app where we need to crop an image according to a selected object's area. The user will draw a line, and we need to select the object and crop it. The crop needs to work like it does in the app YourMoji.
So far we have tried getting the color of the pixels along the line, comparing those with the color of every pixel in the image, and making a path from that to clip the image, but this is going almost nowhere.
Is it possible to crop an image this way, or are we going about it the wrong way? Can anyone provide a way to do this, or suggest how to modify what we have done so far? Any advice and suggestions will be greatly appreciated!
Thanks in advance.
I guess what you want is the image segmentation algorithm called Graph Cut.
Here are two GitHub repositories; hope these help:
GraphCut
GrabCutIOS
I'm not exactly clued up on image manipulation, but the first algorithm that comes to mind is something like this (a sketch of the thresholding step follows the list):
Take the average colour of the pixels along the line (as you have).
Since you appear to want faces, you might want to weight reds and blues over green; there is not much green in faces of any skin tone.
For each pixel, if its colour falls outside a given threshold around your selected average, remove it / make it transparent.
Perhaps the threshold becomes less strict the closer a pixel is to the original line (or its centroid).
I'd then provide the user with some tools for:
Sensitivity: how large the threshold is
Eraser: to remove parts of the image that the algorithm missed
Paintbrush: to restore parts of the image that the algorithm incorrectly removed.
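A rough sketch of that thresholding step in Python with NumPy/OpenCV (not iOS code); the average colour, sensitivity value, and file name are made up:

import cv2
import numpy as np

img = cv2.imread("photo.png")                     # placeholder file name
bgra = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)

# Average colour sampled along the user's line (made-up value here).
line_avg = np.array([180.0, 140.0, 120.0])        # B, G, R

# Euclidean distance of every pixel from the line average.
dist = np.linalg.norm(img.astype(np.float32) - line_avg, axis=2)

# Pixels further than the sensitivity threshold become transparent.
sensitivity = 60.0                                # the user-tunable threshold
bgra[dist > sensitivity, 3] = 0

cv2.imwrite("cutout.png", bgra)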

flood fill performance issue on iPad

I am using the 4-way flood fill algorithm.
I have a transparent image with a black outline.
That is the starting-point image (without color).
And after filling the color in this image, it looks like this.
Please help me and let me know what I can do to get a proper fill.
I have used and implemented FloodFill myself in other projects; the algorithm goes through the whole drawing, looking for closed spaces, and then fills inside (or outside) them.
Your problem happens with every fill tool in the world, and the cause is the same: the spaces are not 100% closed.
The flood fill algorithm goes pixel by pixel and stops when it detects a black pixel. For example, the arm of the scuba diver is not thick enough, or it has holes in it, and the flood fill manages to go through it instead of treating it as a boundary.
Nobody here can tell you why without taking your project and analysing it, so the best I can offer is a guideline about where your error could be.
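One common remedy, offered here only as a suggestion, is to thicken or morphologically close the outline before filling so that small gaps disappear. A sketch in Python with OpenCV; the kernel size and seed point are assumptions:

import cv2
import numpy as np

# Outline image: black line art on white (file name is a placeholder).
img = cv2.imread("outline.png", cv2.IMREAD_GRAYSCALE)

# Binarize so the outline pixels become white (255) and can be grown.
_, lines = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)

# Close small holes in the outline; the 5x5 kernel size is an assumption.
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(lines, cv2.MORPH_CLOSE, kernel)

# Flood fill from a seed inside the region; the outline now blocks leaks.
fill_canvas = cv2.bitwise_not(closed)             # regions white, lines black
mask = np.zeros((fill_canvas.shape[0] + 2, fill_canvas.shape[1] + 2), np.uint8)
cv2.floodFill(fill_canvas, mask, seedPoint=(100, 100), newVal=128)  # seed is a placeholder
cv2.imwrite("filled.png", fill_canvas)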
I tried the code with an image that has a very precisely defined border around it (from here), and it seems to work OK with that image. I suggest that if you zoom into your image, there may be some grey aliasing around the edges which won't get filled. Perhaps the algorithm has a threshold function that can be tweaked?
Try setting the andTolerance value (I tried 4, which seemed to improve my example):
// Call the flood fill function and get a new image with the filled color
UIImage *image1 = [self.image floodFillFromPoint:tpoint withColor:newcolor andTolerance:4];

How to fill a solid color into the shape of a transparent PNG file with ActionScript 3?

For example, I have a transparent PNG file whose shape is a car.
In the PNG file, I only drew the white border of the shape.
Outside and inside the border, everything is transparent.
I want to use ActionScript 3 code to show the car object in different colors; that means filling color only inside the border, while keeping the outside of the border transparent.
How can I do that?
So far, the simplest workaround is to prepare many images with Photoshop, but that is not good enough for me. When I have many shapes and use many colors, I have to prepare many, many images.
More details:
(Because I'm using a white border, you may not see the base PNG file if your browser background is white.)
I have changed the border of the shape to black; I hope this helps to clarify my question.
Since you're working with loaded images/pixels, you can make use of BitmapData's floodFill(), which pretty much does what you need. There's an example below the method description as well.
In some cases it might not be perfect, though; it's worth having a look at Jan's article on optimizing the floodFill() method, which goes more in depth.
A simple solution is to use multiple layers. The top layer would contain just the border; the lower layer would contain just the car with no border. You can adjust the colour of the car layer using a ColorTransform or ColorMatrixFilter. (A sketch of this layered idea follows below.)
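The question is about ActionScript 3, but the layered idea is easy to sketch in Python with OpenCV: keep the border layer untouched and recolor a separate fill layer, roughly what a ColorTransform does. The file names and tint values are made up:

import cv2
import numpy as np

# Two RGBA layers exported from the art tool (placeholder file names):
# border.png holds only the car outline, fill.png only the car body shape.
border = cv2.imread("border.png", cv2.IMREAD_UNCHANGED)  # BGRA
fill = cv2.imread("fill.png", cv2.IMREAD_UNCHANGED)      # BGRA

# Recolor the fill layer, the equivalent of AS3's ColorTransform:
# scale each channel toward the desired car color (BGR tint is made up).
tint = np.array([0.2, 0.4, 1.0])                          # mostly red
recolored = fill.copy()
recolored[:, :, :3] = (fill[:, :, :3] * tint).astype(np.uint8)

# Composite the untouched border layer over the recolored fill layer.
alpha = border[:, :, 3:4] / 255.0
out = recolored.copy()
out[:, :, :3] = (border[:, :, :3] * alpha +
                 recolored[:, :, :3] * (1 - alpha)).astype(np.uint8)
out[:, :, 3] = np.maximum(border[:, :, 3], recolored[:, :, 3])
cv2.imwrite("colored_car.png", out)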
