How to use ImageSearch with a transparent image? - image-processing

I'm trying to find a partially transparent image on the screen, but ImageSearch can't find it because its edges are transparent. Is there a solution, perhaps a library or a Win API call? Maybe my usage is wrong.
The image file used:
CoordMode, Pixel, Screen
ImageSearch, FoundX, FoundY, 0, 0, A_ScreenWidth, A_ScreenHeight, *50 *TransBlack C:\Users\PC\Desktop\light_PNG14440.png
if (ErrorLevel = 2)
    MsgBox Could not conduct the search.
else if (ErrorLevel = 1)
    MsgBox Icon could not be found on the screen.
else
    MsgBox The icon was found at %FoundX%x%FoundY%.
I expect the output to be "The icon was found at 100x100.", but the actual output is "Icon could not be found on the screen."

Instead of using the entire image, search for a part of the image that has no transparency.
In this case the central part of the image looks like a good candidate.
Once you have the position of that part, getting the position of the entire image is a simple offset calculation (see the sketch below).
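For illustration, here is that offset calculation sketched in Python with OpenCV template matching rather than ImageSearch (the file names, crop offsets, and patch size are assumptions; in the AutoHotkey script you would subtract the same offsets from FoundX and FoundY):

    import cv2

    # assumed: a capture of the screen and the original image with transparent edges
    screen = cv2.imread("screenshot.png")
    full = cv2.imread("light_PNG14440.png", cv2.IMREAD_UNCHANGED)

    # crop a fully opaque patch out of the centre of the source image
    # (offsets and size are assumptions; pick a region with no transparent pixels)
    off_x, off_y, w, h = 40, 40, 64, 64
    patch = full[off_y:off_y + h, off_x:off_x + w, :3]

    # find the opaque patch on screen, then shift back to the full image's top-left
    result = cv2.matchTemplate(screen, patch, cv2.TM_CCOEFF_NORMED)
    _, score, _, (px, py) = cv2.minMaxLoc(result)
    found_x, found_y = px - off_x, py - off_y
    print(f"match score {score:.2f}; full image top-left at ({found_x}, {found_y})")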

As an alternative to ImageSearch, you could use the FindText library.

Related

Draw a horizontal line with just a click using Adobe Photoshop

I am looking for a way to draw a horizontal line of full page width on a PNG file with a single click. The thing is, I am trying to draw a lot of lines, and if I have to click and drag each one it becomes extremely inefficient.
I am working on a Windows platform.
How can I achieve this using Adobe Photoshop? Any ideas?
Thanks
Here is a sample page. The green lines are the ones drawn.
Draw a line the way you want it on a big new canvas. Trim the image when done. Then create a brush from the open file (Edit > Define Brush Preset), save it, done. Now you can use your line as a brush.

OpenCV Separate background and foreground of an image based on red and green marker

I am just starting to explore the computer-vision field and I'm trying to create something like this (this image is what I'm trying to achieve, not what I've already achieved).
My approach (just a logical solution, I haven't tried it yet; a sketch follows the steps below):
Color detection.
First, get the pixel positions of the red and green lines, then add all of those values to arrayRed and arrayGreen.
Segmentation.
Get the base image from the cache, then find every pixel whose value is close to those in arrayRed and label it as background. Do the same for arrayGreen.
Convert the color space to RGBA and set the alpha of the background-labeled pixels to 0.
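A minimal sketch of those steps in Python with OpenCV (the file names, HSV marker ranges, and the nearest-mean labeling are assumptions standing in for a real classifier):

    import cv2
    import numpy as np

    marked = cv2.imread("marked.png")   # assumed: image with the red/green scribbles
    base = cv2.imread("base.png")       # assumed: clean base image from cache

    # 1. color detection: find the pixels under the red and green markers
    hsv = cv2.cvtColor(marked, cv2.COLOR_BGR2HSV)
    red_mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))     # assumed HSV range
    green_mask = cv2.inRange(hsv, (50, 120, 120), (70, 255, 255))  # assumed HSV range

    # these play the role of arrayRed / arrayGreen from the steps above
    red_vals = base[red_mask > 0].astype(np.float32)
    green_vals = base[green_mask > 0].astype(np.float32)

    # 2. segmentation: label each pixel by whichever sampled set it is closer to
    # (nearest-mean is a crude stand-in for "value that is close to arrayRed")
    dist_red = np.linalg.norm(base.astype(np.float32) - red_vals.mean(axis=0), axis=2)
    dist_green = np.linalg.norm(base.astype(np.float32) - green_vals.mean(axis=0), axis=2)
    background = dist_red < dist_green

    # 3. convert to a four-channel image and zero the alpha of the background label
    out = cv2.cvtColor(base, cv2.COLOR_BGR2BGRA)
    out[background, 3] = 0
    cv2.imwrite("result.png", out)

A real implementation would probably need something sturdier than the per-pixel nearest-mean test, but the code lines up one-to-one with the steps.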
My questions:
Am I on the right path?
Is this possible to achieve with the OpenCV library?
If my approach is wrong, what is an efficient and correct approach (in pseudo-code or Python) to achieve the goal?

Cropping image By selecting Object and color matching

We are developing an app where we need to crop an image according to a selected object's area. The user will draw a line, and we need to select the object and crop it. The crop needs to work like the app YourMoji.
So far we have tried to get the color of the pixels along the line, compare those with the color of every pixel in the image, and build a path from that to clip the image. But this is going almost nowhere.
Is it possible to crop an image this way, or are we going about it the wrong way? Can anyone provide a way to do this, or suggest how to modify what we have done so far? Any advice and suggestions will be greatly appreciated!
Thanks in advance.
I guess what you want is the image segmentation algorithm called Graph Cut.
Here are two Github repositories, hope these would help:
GraphCut
GrabCutIOS
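To get a feel for what graph-cut segmentation does, here is a minimal Python sketch using OpenCV's GrabCut (the file name and the rough rectangle are assumptions; the repositories above implement the same idea natively for iOS):

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg")               # assumed file name
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # scratch arrays required by grabCut
    fgd_model = np.zeros((1, 65), np.float64)

    rect = (50, 50, 300, 400)                   # rough box around the object (assumed)
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

    # keep pixels graded as (probable) foreground, make everything else transparent
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    out = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
    out[:, :, 3] = fg
    cv2.imwrite("cropped.png", out)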
I'm not exactly clued up on image manipulation, but the first algorithm that comes to mind is something like this (a rough code sketch follows below):
Take the average of the pixels in the line (as you have)
Since you appear to want faces, you might want to weight reds and blues over green. Not much green in faces of any skin tone.
For each pixel, if the colour differs from your selected average by more than a given threshold, remove it / make it transparent.
Perhaps the closer to the original line (or centroid), the less strict the threshold becomes.
I'd then provide the user with some tools for:
Sensitivity: how large the threshold is
Eraser: to remove parts of the image that your algorithm missed
Paintbrush: to replace parts of the image that your algorithm incorrectly removed.
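A rough sketch of that algorithm in Python with NumPy and Pillow (the file name, line pixels, channel weights, and threshold are all assumptions, with the threshold doubling as the sensitivity control):

    import numpy as np
    from PIL import Image

    img = np.array(Image.open("face.png").convert("RGBA"), dtype=np.float32)  # assumed
    line_pts = [(120, 80), (121, 81), (122, 83)]  # pixels under the user's line (assumed)

    # average of the pixels under the line
    avg = np.array([img[y, x, :3] for x, y in line_pts]).mean(axis=0)

    # weight reds and blues over green when measuring distance to that average
    weights = np.array([1.0, 0.5, 1.0])
    dist = np.sqrt((((img[:, :, :3] - avg) * weights) ** 2).sum(axis=2))

    threshold = 60.0                  # the "sensitivity" tool would adjust this
    img[dist > threshold, 3] = 0      # outside the threshold: make transparent

    Image.fromarray(img.astype(np.uint8)).save("cut_out.png")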

UIImage sliding indicator size

I have two UIImages, one on top of the other; the top one slides to reveal the coloured one. The issue is that I want the indicator (the purple line, which is a view) to show only on the left side, ending at the edge of the image (the image's background is clear).
What I want is to resize that purple line so it is visible on the left side and over the image, but not on the right side as it is now.
Any ideas of what I can try? I have no idea where to start. Thank you!
What I have so far:
What I want:
It is possible to implement this programmatically using CGImageCreateWithMask, but it is very hard, and such a solution will not be very fast if you want to do it in real time.
I suggest the following instead: create a third image, as shown below. That picture must be a white mask, which will limit your line. Place that picture above pics 1 and 2. The pink line should also be placed below that picture.
Sorry if anything is unclear.
Thanks kelin, the idea you gave me was interesting, but I did not have time for that, so I found another way of doing this. I used a third image for the mask, which is an image with the lady's shape cut out and a white background, but only on the right half of the picture (the left half, including the lady's shape, is transparent), and the line crosses on top of the pictures shown previously and under the mask image I added...
Simple and efficient, and it uses far less memory for real-time display with an animation.
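For anyone wanting to prototype the layering trick outside of UIKit, here is a small Pillow sketch of the same idea (file names and geometry are assumptions): the line is drawn over the photo, then the partially transparent mask image is composited on top, hiding the line wherever the mask is opaque.

    from PIL import Image, ImageDraw

    base = Image.open("photo.png").convert("RGBA")  # assumed: the stacked images, flattened

    # draw the indicator line across the full width
    line = Image.new("RGBA", base.size, (0, 0, 0, 0))
    ImageDraw.Draw(line).line([(0, 200), (base.width, 200)],
                              fill=(255, 0, 255, 255), width=4)

    # mask layer: opaque white on the right half, fully transparent on the left (assumed)
    mask = Image.new("RGBA", base.size, (0, 0, 0, 0))
    ImageDraw.Draw(mask).rectangle([base.width // 2, 0, base.width, base.height],
                                   fill=(255, 255, 255, 255))

    # composite bottom-up: photo, then line, then mask on top
    result = Image.alpha_composite(Image.alpha_composite(base, line), mask)
    result.save("result.png")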

flood fill performance issue on iPad

I am using the 4-way flood-fill algorithm.
I have a transparent image with a black outline.
That is the starting-point image (without color).
And after filling the color in this image it looks like this.
Please help me and let me know what I can do to get a proper fill.
I have used and implemented FloodFill myself in other projects; the algorithm goes through the whole drawing, looking for closed spaces, and then draws inside (or outside) them.
Your problem happens with every fill tool in the world, and the cause is the same: the spaces are not 100% closed.
The flood-fill algorithm goes pixel by pixel and stops when it detects a black pixel. For example, the arm of the scuba diver is not thick enough, or it has holes in it, and the flood fill manages to leak through it instead of treating it as a boundary.
Nobody here can tell you exactly why without taking your project and analysing it, so the best I can offer is a guideline about where your error could be.
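To make the boundary test concrete, here is a minimal 4-way flood fill in Python with a tolerance parameter (a bare sketch, not your project's code; the flat pixel-list representation is an assumption):

    from collections import deque

    def flood_fill(pixels, width, height, start, fill, tolerance):
        """pixels: flat list of (r, g, b) tuples, row-major."""
        sx, sy = start
        target = pixels[sy * width + sx]
        seen = [False] * (width * height)
        queue = deque([(sx, sy)])
        while queue:
            x, y = queue.popleft()
            if not (0 <= x < width and 0 <= y < height) or seen[y * width + x]:
                continue
            seen[y * width + x] = True
            p = pixels[y * width + x]
            # stop at any pixel further than `tolerance` from the start colour:
            # the black outline blocks the fill, but grey anti-aliased pixels
            # near the edge are left unfilled unless tolerance is raised, and
            # any gap of near-target pixels in the outline lets the fill leak
            if max(abs(a - b) for a, b in zip(p, target)) > tolerance:
                continue
            pixels[y * width + x] = fill
            # 4-way: visit only the four edge-adjacent neighbours
            queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])

For example, flood_fill(pixels, w, h, (100, 100), (255, 0, 0), 4) fills from a tap point with a tolerance of 4, much like the andTolerance suggestion in the next answer.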
I tried the code with an image that has a very precisely defined border around it (from here) and it seems to work OK with that image. I suspect that if you zoom into your image there is some grey aliasing around the edges which won't get filled. Perhaps the algorithm has a threshold function that can be tweaked?
Try setting the andTolerance value (I tried 4, which seemed to improve my example).
//Call function to flood fill and get new image with filled color
UIImage *image1 = [self.image floodFillFromPoint:tpoint withColor:newcolor andTolerance:4];
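For comparison, OpenCV's floodFill exposes the same knob through its loDiff/upDiff arguments (a Python sketch; the file name, seed point, and tolerance are assumptions):

    import cv2
    import numpy as np

    img = cv2.imread("outline.png")            # assumed: the black-outline drawing
    h, w = img.shape[:2]
    mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a 2-pixel-larger mask

    seed = (100, 100)                          # tap point inside the region (assumed)
    tol = (4, 4, 4)                            # per-channel tolerance, like andTolerance
    cv2.floodFill(img, mask, seed, (0, 0, 255), tol, tol)

    cv2.imwrite("filled.png", img)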
