Xcode, iOS - Image line/shape recognition

I want to identify squares/rectangles inside my UIImageView (or UIImage).
I looked at "Very simple image recognition on iOS", but that's not quite what I'm looking for.
At the moment I have a UIImageView which is given a UIImage from time to time.
Most of the UIImages have black squares/rectangles like this:
But the corners may (or may not) have rounded edges.
How can I identify the first black square/rectangle's size?
The end result would be to resize my UIImageView to make the first black square in the UIImage fill the screen. Like so:

If your images will always be sharp black squares in a horizontal row, you could use corner detection to identify the rectangles, then pick out the four leftmost corners. I have three variants of corner detectors in my open source GPUImage framework based on the Harris, Noble, and Shi-Tomasi corner detection methods.
Running a GPUImageHarrisCornerDetectionFilter against your boxes with a threshold of 0.4 and sensitivity of 4.0 yields the following result:
They're a little hard to see, but red crosshairs mark where the detector found the corners of your boxes. Again, you just need to take the leftmost four points to find your target rectangle, and then simply scale your image or view so that this rectangle now fills your view.
An example of how to run such feature detection can be found in either the FilterShowcase or FeatureExtractionTest example within my framework. I describe the process by which I do this in this answer over at Signal Processing.
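For a still image, the pattern from those examples looks roughly like the sketch below. This assumes the Objective-C GPUImage (property names may differ in later versions of the framework); the threshold and sensitivity values are the ones quoted above, inputImage stands for your UIImage, and the detected corner coordinates arrive normalized to [0, 1]:

#import "GPUImage.h"

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageHarrisCornerDetectionFilter *cornerFilter = [[GPUImageHarrisCornerDetectionFilter alloc] init];
cornerFilter.threshold = 0.4;
cornerFilter.sensitivity = 4.0;
cornerFilter.cornersDetectedBlock = ^(GLfloat *cornerArray, NSUInteger cornersDetected, CMTime frameTime) {
    // cornerArray holds (x, y) pairs normalized to [0, 1];
    // keep the four leftmost points to get the first rectangle.
    for (NSUInteger i = 0; i < cornersDetected; i++) {
        NSLog(@"Corner at (%f, %f)", cornerArray[i * 2], cornerArray[i * 2 + 1]);
    }
};
[stillImageSource addTarget:cornerFilter];
[stillImageSource processImage];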

It seems the easiest solution would be (a quick sketch follows the list):
1. sum up all pixel values in each column into a single top-most row (like totalling columns in an Excel table)
2. the columns with the smallest/biggest sums are your "gap" regions
3. the width can be derived from (2).
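A minimal sketch of that idea, assuming an opaque image redrawn into a known RGBA8888 buffer. Each entry in the returned profile is the summed darkness of one pixel column, so runs of low values are the white gaps between the squares:

#import <UIKit/UIKit.h>

static NSArray<NSNumber *> *ColumnDarknessProfile(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Redraw into a known RGBA8888 buffer so the byte layout is predictable.
    uint8_t *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8,
                                             width * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    NSMutableArray<NSNumber *> *profile = [NSMutableArray arrayWithCapacity:width];
    for (size_t x = 0; x < width; x++) {
        NSUInteger darkness = 0;
        for (size_t y = 0; y < height; y++) {
            const uint8_t *p = pixels + (y * width + x) * 4;
            // Approximate inverse luminance; black pixels contribute the most.
            darkness += 255 - (p[0] + p[1] + p[2]) / 3;
        }
        [profile addObject:@(darkness)];
    }

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return profile;
}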

From what I understand of your question, you need to implement the Canny edge detection algorithm for detecting the edges of the black borders in your image.
For this you could use the image processing framework available at the following links:
Google
Github
Use the ImageWrapper *Image::cannyEdgeExtract(float tlow, float thigh) function from the Image.m file.
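Going only by the signature quoted above, a call from an Objective-C++ (.mm) file would look something like the sketch below. Note that only cannyEdgeExtract itself comes from the framework; the createImage factory and toUIImage conversion here are assumptions about its surrounding API, so check Image.m for the actual names:

// Hypothetical usage sketch; tlow/thigh are the low and high hysteresis
// thresholds of the Canny detector.
ImageWrapper *source = Image::createImage(inputImage, width, height); // assumed factory
ImageWrapper *edges  = source->image->cannyEdgeExtract(0.3f, 0.7f);   // from Image.m
UIImage *edgeImage   = edges->image->toUIImage();                     // assumed helper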

Related

Does anyone have an idea about palm line detection in iOS (Swift)?

Currently, I am doing an image COLOUR filtering operation, then MEDIAN filtering, then the CANNY EDGE DETECTION ALGORITHM.
Then I read the pixels using a for loop and draw lines from them, but I am not getting a proper result for scanning the palm and showing the lines on a human palm.
So if anybody has any kind of idea regarding this, please let me know.
Currently I am getting this type of result:
but I need this type of output:
Oh, I get your problem. You can do this with the following steps:
1. Process your hand image with the Canny edge detection algorithm; let's name the result cannyImage.
2. Now create a bitmap of cannyImage and replace its black pixels with transparent pixels (black only, because a Canny image is filled with black, with the object's lines in white, once you process the image through the algorithm). You have now extracted an image with the palm lines in white; let's name it palmLineImage.
3. Now the main part is MASKING: you need to mask the palmLineImage onto the original image.
These three steps will give you your desired output.
As a tool you can use the awesome GPUImage library by BradLarson for this: https://github.com/BradLarson/GPUImage2
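For example, with the earlier Objective-C GPUImage (class names differ slightly in the Swift GPUImage2 linked above), a screen blend of the Canny output over the original approximates steps 2 and 3, because screen blending effectively treats the black background of the line image as transparent. A minimal sketch, where handImage stands for your input photo:

#import "GPUImage.h"

GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:handImage];
GPUImageCannyEdgeDetectionFilter *canny = [[GPUImageCannyEdgeDetectionFilter alloc] init];
GPUImageScreenBlendFilter *blend = [[GPUImageScreenBlendFilter alloc] init];

// Step 1: original -> Canny gives white lines on black.
[source addTarget:canny];
// Steps 2-3: blend the line image over the untouched original;
// screen blend leaves the original unchanged wherever the line image is black.
[canny addTarget:blend];
[source addTarget:blend];

[blend useNextFrameForImageCapture];
[source processImage];
UIImage *palmLineImage = [blend imageFromCurrentFramebuffer];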
For separating the palm image from the background, which I'm sure you will have to do in the future, you can use the GrabCut algorithm.
LINK - https://github.com/naver/grabcutios
Also, Apple has announced that photos captured in Portrait Mode on iOS 12 contain an embedded person segmentation matte, which makes it easy to create visual effects like background replacement.
Links - https://developer.apple.com/videos/play/wwdc2019/260/ , https://developer.apple.com/videos/play/wwdc2019/225/
Looks like you also need to use something like the Ramer-Douglas-Peucker algorithm to reduce the number of data points and smooth the lines. Link - https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
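Since the question mentions Swift but the other snippets here are Objective-C, here is a compact recursive sketch of that simplification in Objective-C, over CGPoints wrapped in NSValue; epsilon is the maximum deviation (in pixels) you are willing to smooth away:

#import <UIKit/UIKit.h>
#include <math.h>

static CGFloat PerpendicularDistance(CGPoint p, CGPoint a, CGPoint b) {
    CGFloat dx = b.x - a.x, dy = b.y - a.y;
    CGFloat len = hypot(dx, dy);
    if (len == 0) return hypot(p.x - a.x, p.y - a.y);
    return fabs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

static NSArray<NSValue *> *SimplifyRDP(NSArray<NSValue *> *points, CGFloat epsilon) {
    if (points.count < 3) return points;
    CGPoint first = points.firstObject.CGPointValue;
    CGPoint last = points.lastObject.CGPointValue;

    // Find the point farthest from the segment joining the endpoints.
    NSUInteger index = 0;
    CGFloat maxDistance = 0;
    for (NSUInteger i = 1; i + 1 < points.count; i++) {
        CGFloat d = PerpendicularDistance(points[i].CGPointValue, first, last);
        if (d > maxDistance) { maxDistance = d; index = i; }
    }

    // Everything within tolerance collapses to a single segment.
    if (maxDistance < epsilon) return @[points.firstObject, points.lastObject];

    // Otherwise simplify both halves recursively and join them.
    NSArray<NSValue *> *head = SimplifyRDP([points subarrayWithRange:NSMakeRange(0, index + 1)], epsilon);
    NSArray<NSValue *> *tail = SimplifyRDP([points subarrayWithRange:NSMakeRange(index, points.count - index)], epsilon);
    return [[head subarrayWithRange:NSMakeRange(0, head.count - 1)] arrayByAddingObjectsFromArray:tail];
}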

Detect digits rectangle then crop using ImageMagick or CoreImage in iOS

I'm developing an OCR app that reads digits and copies them to the clipboard automatically, instead of the user typing them manually...
I'm using (TesseractOCR)... but before recognition, while manipulating the image, I improve it for better recognition.
I used the ImageMagick library, and the filtered image looks like this:
But the output of recognition is:
446929231986789 //The first and last numbers (4 & 9) were added
So I want to detect only the white box, so I can crop to it...
I know that OpenCV would do the trick, but unfortunately it's a C++ library and I don't speak that language :(
And I know that iOS 8 has a new CIDetector of type rectangle, but I don't want to neglect earlier versions of iOS.
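For reference, the iOS 8+ detector route mentioned above looks roughly like this, where filteredImage stands for the ImageMagick output (earlier iOS versions would still need a manual approach like the pixel scan suggested in the answer below):

#import <CoreImage/CoreImage.h>

CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeRectangle
                                          context:nil
                                          options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
CIImage *ciImage = [[CIImage alloc] initWithImage:filteredImage];
CIRectangleFeature *box = (CIRectangleFeature *)[detector featuresInImage:ciImage].firstObject;
if (box) {
    // Note: Core Image coordinates are flipped vertically relative to UIKit.
    CIImage *cropped = [ciImage imageByCroppingToRect:box.bounds];
}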
My ImageMagick filter code:
//Starting
MagickWandGenesis();
MagickWand *magick_wand = NewMagickWand();
//Reading the image
NSString *tempFilePath = //Path of image
MagickReadImage(magick_wand,
                [tempFilePath cStringUsingEncoding:NSASCIIStringEncoding]);
// Monochrome image
MagickQuantizeImage(magick_wand, 2, GRAYColorspace, 1, MagickFalse, MagickFalse);
// Write to temporary file
MagickWriteImage(magick_wand,
                 [tempFilePath cStringUsingEncoding:NSASCIIStringEncoding]);
DestroyMagickWand(magick_wand); //Free up memory
// Load UIImage from temporary file
UIImage *imgObj = [UIImage imageWithContentsOfFile:tempFilePath];
// Display on device
Many thanks ..
I would go with a simple pixel search. Since you want to crop the white area with the digits, all you need to do is find the left, right, top and bottom borders of the rectangle. Provided that the rectangle is axis-aligned and has enough white space around the digits, you should find the first row or column that has a continuous run of white pixels.

For example, to find the left border (which I guess would be around the 78th column), start searching from column 0 and go right. For each column, count the continuous white pixels (a single for-loop from top to bottom). By continuous I mean a series that is not interrupted by a black pixel. If the count reaches, say, 80% of the height, you have your left border. Do the rest accordingly, starting from the right side, top or bottom and moving in the opposite direction.

I guess there are some fancy procedures to detect the rectangle, but your input has quite distinguishable characteristics, so instead of linking to some lib I suggest DIY. To speed things up you could step through the rows/columns by 2 or more, or you could scale your image down and threshold it to 2 colors.
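A minimal sketch of that left-border scan, assuming the image is already thresholded to pure black and white in an RGBA8888 buffer (the other three borders follow the same pattern with the loops reversed):

#import <Foundation/Foundation.h>

static NSInteger LeftBorderColumn(const uint8_t *pixels, size_t width, size_t height) {
    for (size_t x = 0; x < width; x++) {
        size_t run = 0, longestRun = 0;
        for (size_t y = 0; y < height; y++) {
            const uint8_t *p = pixels + (y * width + x) * 4;
            BOOL isWhite = p[0] > 200;   // thresholded image: one channel suffices
            run = isWhite ? run + 1 : 0; // count *continuous* white pixels
            if (run > longestRun) longestRun = run;
        }
        if (longestRun >= height * 0.8) return (NSInteger)x; // 80% of the height
    }
    return -1; // no border found
}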
There is also one more way to do this. Flood-fill with white starting from one of the corners.

How can I show image in laptop screen, with appropriate image rotation and perspective?

I have been searching the internet for the last two days; I have checked many source codes, but none of them produces the result I want.
The rotated image should have perspective, yet there would still be no change in the heights of the left and right sides of the image.
I want to set image inside the laptop screen
Please help me out, Thanks.
So you want a 2D perspective drawing of a laptop screen (on an iOS device?) and to put a 2D image on that screen, with the image transformed so its perspective looks correct on the laptop screen, right?
What you need to do is add an image view on top of your laptop image view. Let's call it laptopScreenImageView.
Then apply a CATransform3D to the laptopScreenImageView's layer.
The trick to getting 3D perspective out of a CALayer is to modify the .m34 value of the transform. Typically you set the .m34 value to a very small negative number, somewhere around -1/200 to -1/500. (The denominator of the fraction is the z coordinate of the "eye position" for viewing the perspective image, in pixels, or how many pixels "above" the image the viewer's eye should seem to be. I don't fully understand it, to be honest; I fiddle with the .m34 value until I get something that looks right.)
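A minimal sketch of that technique; the rotation angle and the -1/500 eye distance are just starting values to fiddle with, as noted above:

#import <QuartzCore/QuartzCore.h>

CATransform3D transform = CATransform3DIdentity;
transform.m34 = -1.0 / 500.0; // "eye position" ~500 points above the layer
// Rotate around the y-axis so the image recedes like a tilted laptop screen.
transform = CATransform3DRotate(transform, 30.0 * M_PI / 180.0, 0.0, 1.0, 0.0);
laptopScreenImageView.layer.transform = transform;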
Alternately you could try adding a CATransformLayer to your laptop image view's layer, and then adding a CALayer containing your image as a sublayer of the CATransformLayer. I haven't used CATransformLayers before, but the docs say they are supposed to support layers with 3D perspective, giving you the same effect as modifying the .m34 component of a layer's transform.

flood fill performance issue on iPad

I am using the 4-way flood fill algorithm.
I have a transparent image with a black outline.
That is the starting-point image (without color):
And after filling the color in this image, it looks like this:
Please help me and let me know what I can do to get a proper fill.
I have used and implemented flood fill myself in other projects; the algorithm goes through the whole drawing, looking for closed spaces, and then draws inside (or outside) them.
Your problem happens with every tool in the world that fills a drawing, and the cause is the same: the spaces are not 100% closed.
The flood fill algorithm goes pixel by pixel, and when it detects a black pixel, it stops. For example, the arm of the scuba diver is not thick enough, or it has holes in it, so the flood fill manages to leak through it instead of treating it as a boundary.
Nobody here can tell you why unless we take your project and analyse it, so the best I can offer is a guideline about where your error could be.
I tried the code with an image that has a very precisely defined border around it (from here) and it seems to work OK with that image. I suspect that if you zoom into your image, there is some grey aliasing around the edges which won't get filled. Perhaps the algorithm has a threshold function that can be tweaked?
Try setting the andTolerance value (I tried 4, which seemed to improve my example).
//Call function to flood fill and get new image with filled color
UIImage *image1 = [self.image floodFillFromPoint:tpoint withColor:newcolor andTolerance:4];
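For reference, a 4-way flood fill with a tolerance test boils down to something like the sketch below, operating on a raw RGBA8888 buffer. An explicit stack replaces recursion (deep recursion overflows the call stack on large regions), and tolerance plays the same role as andTolerance above: widening it lets the fill cover grey anti-aliased pixels near the outline.

#import <Foundation/Foundation.h>
#include <stdlib.h>
#include <string.h>

typedef struct { int x, y; } FillPoint;

static BOOL MatchesWithinTolerance(const uint8_t *p, const uint8_t *seed, int tolerance) {
    return abs(p[0] - seed[0]) <= tolerance &&
           abs(p[1] - seed[1]) <= tolerance &&
           abs(p[2] - seed[2]) <= tolerance;
}

static void FloodFill4(uint8_t *pixels, int width, int height,
                       int startX, int startY, const uint8_t fill[4], int tolerance) {
    uint8_t seed[4];
    memcpy(seed, pixels + (startY * width + startX) * 4, 4);
    // Bail out if the fill colour matches the seed, or the loop never ends.
    if (MatchesWithinTolerance(fill, seed, tolerance)) return;

    // Each pixel pushes at most 4 neighbours, once, when it gets filled.
    FillPoint *stack = malloc(sizeof(FillPoint) * ((size_t)width * height * 4 + 1));
    size_t top = 0;
    stack[top++] = (FillPoint){startX, startY};

    while (top > 0) {
        FillPoint pt = stack[--top];
        if (pt.x < 0 || pt.y < 0 || pt.x >= width || pt.y >= height) continue;
        uint8_t *p = pixels + (pt.y * width + pt.x) * 4;
        if (!MatchesWithinTolerance(p, seed, tolerance)) continue; // hit the outline
        memcpy(p, fill, 4);
        stack[top++] = (FillPoint){pt.x + 1, pt.y}; // 4-way connectivity:
        stack[top++] = (FillPoint){pt.x - 1, pt.y}; // no diagonal neighbours,
        stack[top++] = (FillPoint){pt.x, pt.y + 1}; // so diagonal gaps in the
        stack[top++] = (FillPoint){pt.x, pt.y - 1}; // outline still leak.
    }
    free(stack);
}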

Drawing a non rectangular part of a picture in delphi canvas

Can anyone share a sample code to draw a non-rectangular part of a picture in delphi canvas?
You're looking for GDI paths. Start here, which explains what paths are in this context, and provides links on the left to explain the functionality available with them.
Google can turn up lots of examples of using paths in Delphi. If you can't find them, post a comment back here and I'll see what I can turn up for you.
Your question is pretty vague. But I suspect what you are looking for is clipping regions. Read up on them. Set the clipping region on the target device to the shape you want, and then draw the image onto the device. Only the part of the image that would be within the clipping region will be drawn.
Canvas.Ellipse(0, 0, 10, 20); // not a rectangle
I use so-called runlists for this feature (generalized shapes and blitting them). I've seen them called warplists too. A shape is encoded as a runlist by defining it as a set of horizontal lines, where each line is two integer values (skip n pixels, copy n pixels).
This means you can draw entire lines, leaving you with only "height" draw operations.
So a rectangle is defined like this: the first "skip" run moves from the top-left corner of the bitmap to the rectangle's origin (xorg, yorg). The rectangle is width_rect wide, and skipping width_pixels advances one full line; width_pixels can be wider than the width of the picture (alignment bytes):
(yorg*width_pixels+xorg , width_rect),
(width_pixels-width_rect , width_rect),
(width_pixels-width_rect , width_rect),
(width_pixels-width_rect , width_rect),
..
..
This way you can make your drawing routines pretty generic, and for simple, regular shapes (rects, circles) it takes only minor math to precalculate these lists. It simplified my shape handling enormously.
However I draw directly to bitmaps, not to canvases, so I can't help with that part. A primitive that efficiently draws a row, and a way to extract a row from a graphic, should be enough.
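The blit loop itself is tiny. A sketch in C (the original context is Delphi), assuming one byte per pixel and that source and destination share the same layout, matching the (skip n, copy n) pairs above:

#include <string.h>

typedef struct { int skip; int copy; } Run;

// Walk the runlist: each entry skips n pixels, then copies n pixels,
// so a whole shape is blitted in roughly "height" copy operations.
static void BlitRunlist(const unsigned char *src, unsigned char *dst,
                        const Run *runs, int runCount) {
    int offset = 0;
    for (int i = 0; i < runCount; i++) {
        offset += runs[i].skip;
        memcpy(dst + offset, src + offset, (size_t)runs[i].copy);
        offset += runs[i].copy;
    }
}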
