Trim and find position of result with rmagick - ruby-on-rails

I'm working on a jigsaw puzzle webapp, and one of the requirements is automatically generating puzzle pieces from any image. I'm using RMagick for the image processing. I've got some sets of blank puzzle pieces to use as masks, and I can handle that part, but then I need to trim the whitespace (er, transparent space) out of the resulting images.
Now, I know I can use trim for this - I might have to put a one-pixel border on it to make sure all four corners are the right color, but that's easy and I can just subtract one pixel from the final number. The only problem is that I also need to record the position of the piece. According to the documentation on trim, the function will "retain the offset information", which sounds like exactly what I need. But I can't find anything about how to retrieve the offset information! Does anyone know how to do that?
If worst comes to worst, I suppose I could always just loop through pixel by pixel, find the boundaries myself, and use crop to trim the picture, but that wouldn't exactly be good for performance.

Aha, found it. image.page.x and image.page.y give the upper left corner, and then image.rows and image.columns have the height and width.
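For anyone reading along in a different stack: the same page geometry is exposed by ImageMagick's other bindings too. Here's a minimal sketch using Python's Wand binding (this is not RMagick itself; the filename "piece.png" is a made-up example):

    from wand.image import Image

    # Trim the transparent border. ImageMagick keeps the "page" (offset)
    # geometry of the trimmed region relative to the original canvas.
    with Image(filename="piece.png") as img:
        img.trim()
        print("offset:", img.page_x, img.page_y)   # upper-left corner
        print("size:  ", img.width, img.height)    # trimmed dimensions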

Related

Draw text along a path on a map (e.g. a street name)

I am currently trying to label lines that I draw on my map (in my iOS app, but I guess it applies to all maps).
What I currently do is simplify my path so that I get rid of most small curves, and then just draw my glyphs along that line. Currently that looks like this:
On some parts of the line that's already OK, where the line is quite straight and the corners aren't too spiky.
But in some parts you just can't read anything... So what are strategies to make that look nicer?
Does anybody know an algorithm or a strategy for making my path look like the red line here:
I am happy about any ideas on how to improve my drawing :)
I do it, in my commercial map rendering system, by finding a portion of the line without sharp corners (a rough sketch of that step follows below). There is no way to make the label look good if it turns a corner of a right angle or greater. If there's no section long enough, I abbreviate the label (e.g., Link Road becomes Link Rd) or split it onto two lines. If there's still nowhere to draw the label, I don't draw it.
Another thing that's important is to adjust the spacing so that ascenders and descenders don't clash, so you need to look at the bounding box of each adjacent pair of letters as you draw the text and add small amounts of space as necessary.
I don't bother to smooth my lines, as you suggest with your red line. It really doesn't seem to matter, at least with street labelling.
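Not the renderer's actual code, but the "find a portion without sharp corners" step can be sketched like this in Python (the 30° threshold is an invented placeholder, and runs are measured in vertices rather than arc length to keep it short):

    import math

    def longest_smooth_run(points, max_turn_deg=30.0):
        """Indices (start, end) of the longest contiguous run of points
        whose interior turns all stay below max_turn_deg. A label drawn
        along such a run never has to bend around a sharp corner."""
        def turn(a, b, c):
            # absolute change of heading at vertex b, in degrees
            h1 = math.atan2(b[1] - a[1], b[0] - a[0])
            h2 = math.atan2(c[1] - b[1], c[0] - b[0])
            d = abs(h2 - h1)
            return math.degrees(min(d, 2 * math.pi - d))

        best, start = (0, 1), 0
        for i in range(1, len(points) - 1):
            if turn(points[i - 1], points[i], points[i + 1]) > max_turn_deg:
                start = i  # sharp corner: any usable run must begin here
            if (i + 1) - start > best[1] - best[0]:
                best = (start, i + 1)
        return best

A real implementation would compare the run's arc length against the rendered label width, and fall back to abbreviating or dropping the label as described above.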

Trim transparency of a UIImage

I was wondering what would be the best way to trim the "canvas" of a UIImage (pretty much like any image editor out there allows).
Now, the previous example is not a single UIImage; it's actually 2 UIViews. So clipping the superview against the blue box would do the trick, but I am looking for the best possible way to do this, given that there could be several blue boxes in the "canvas".
Is there a faster way than going through every pixel?
Thanks!
Thinking about it algorithmically, I would say no. You need to find the pixel that extends furthest to the left, right, top and bottom. Unless you look at every pixel from each direction you could miss non-transparent pixels.
You could speed things up if you figure out how to map your image into memory and then index into memory directly rather than using a high level function that fetches pixels. I would suggest searching from the top down (which would be sequential memory accesses) until you find a non-clear pixel. Then search from the end of the image backwards, which would give you the bottom-most pixel.
You would then want to limit your search from each side to only look starting at the first non-transparent pixel from the top and ending at the last non-transparent pixel on the bottom.
For anything other than a very large image this should take a fraction of a second.
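The question is about UIImage, but the scan itself is language-agnostic. A sketch in Python with NumPy and Pillow (assuming the image can be loaded as RGBA) shows how cheap the search is once you index into the pixel buffer directly:

    import numpy as np
    from PIL import Image

    def opaque_bbox(path, threshold=0):
        """Bounding box (left, top, right, bottom) of every pixel whose
        alpha exceeds threshold, or None if the image is fully clear."""
        alpha = np.array(Image.open(path).convert("RGBA"))[:, :, 3]
        rows = np.flatnonzero(alpha.max(axis=1) > threshold)  # top-down scan
        cols = np.flatnonzero(alpha.max(axis=0) > threshold)  # left-right scan
        if rows.size == 0:
            return None
        return cols[0], rows[0], cols[-1] + 1, rows[-1] + 1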
OK, I was being dumb. The union of the subviews is all I really needed, so it's just a simple loop over the subviews, doing a CGRect union against their frames.

Cropping an image by selecting an object and color matching

We are developing an app where we need to crop an image according to a selected object's area. The user will draw a line, and we need to select the object and crop to it. The crop needs to work like it does in the app YourMoji.
So far we have tried to get the color of the pixels along the line, compare those with the color of every pixel in the image, and build a path from the matches to clip the image. But that is getting us almost nowhere.
Is it possible to crop an image this way, or are we going about it the wrong way? Can anyone provide a way to do this, or suggest how to modify what we have tried so far? Any advice and suggestions will be greatly appreciated!
Thanks in advance.
I guess what you want is the image segmentation algorithm called Graph Cut.
Here are two GitHub repositories; hope these help:
GraphCut
GrabCutIOS
I'm not exactly clued up on image manipulation, but the first algorithm that comes to mind is something like this (a rough sketch in code follows the lists below):
Take the average of the pixels along the line (as you have).
Since you appear to want faces, you might want to weight reds and blues over green; there's not much green in faces of any skin tone.
For each pixel, if the colour falls outside a given threshold around your selected average, remove it / make it transparent.
Perhaps the closer to the original line (or centroid), the less strict the threshold becomes.
I'd then provide the user with some tools for:
Sensitivity: how large the threshold is
Eraser: to remove parts of the image that your algorithm missed
Paintbrush: to replace parts of the image that your algorithm incorrectly removed.
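Here's the promised sketch of the first three steps in Python/NumPy (the channel weights and the default sensitivity are invented placeholders, and the distance-to-line falloff is left out):

    import numpy as np

    def color_mask(pixels, line_pixels, sensitivity=60.0):
        """pixels: (H, W, 3) RGB array; line_pixels: (N, 3) samples taken
        along the user's line. Returns an (H, W) boolean mask of pixels
        whose colour is within `sensitivity` of the line's average."""
        weights = np.array([1.2, 0.6, 1.2])  # favour red/blue over green
        avg = line_pixels.astype(float).mean(axis=0)
        dist = np.linalg.norm((pixels.astype(float) - avg) * weights, axis=2)
        return dist <= sensitivity

The returned mask is what the Sensitivity slider would regenerate, and what the Eraser and Paintbrush tools would edit directly.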

Finding a word's bounding box in a low-quality image

I'm trying to get a bounding box for the word "ЛИЛИЯ" in this image, using opencv.
(source: litprom.ru)
I have been experimenting with cv::findContours() and different thresholding algorithms for a couple of days, but I cannot get any satisfying results.
So, what do I know about this word:
letters are of similar size;
letters' height is in the range 40–90 px;
the word is oriented horizontally (±5°);
there is one and only one word in this image;
this word does not intersect the image's border (it's fully visible);
different parts of the image may have different luminosity;
hotspots (totally white areas) may be present in the image.
English is not my native language, so I'm sorry if the question is not properly explained.
If someone needs more images to answer this question, I have at least a dozen more.
Check out the stroke width transform. It is used for text detection.
You can preprocess your image with adaptiveThreshold. You should use a block size a little bigger than your biggest character; I tried 91 on your image and it gave good results. Then you can use findContours and filter the blobs/contours by their height. Note that the letters will still be connected to one another, so you cannot really filter by width.
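A sketch of that pipeline in Python (OpenCV 4 API; the 91 block size and the 40–90 px height filter come from this thread, while the constant C=10 and the filename are guesses):

    import cv2

    img = cv2.imread("word.jpg", cv2.IMREAD_GRAYSCALE)

    # Block size a little above the tallest expected character; must be odd.
    binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 91, 10)

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # Keep blobs whose height matches the stated letter range, then take
    # the union of their bounding rectangles as the word's bounding box.
    boxes = [cv2.boundingRect(c) for c in contours]
    boxes = [b for b in boxes if 40 <= b[3] <= 90]
    if boxes:
        x0 = min(x for x, y, w, h in boxes)
        y0 = min(y for x, y, w, h in boxes)
        x1 = max(x + w for x, y, w, h in boxes)
        y1 = max(y + h for x, y, w, h in boxes)
        print("word bbox:", (x0, y0, x1 - x0, y1 - y0))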

Checking for overlapping images with a hole in an image

I have two image views. They are "puzzle pieces", and I want to test whether one fits inside the other, not whether the frames overlap. I guess it's a CGRect thing... but those seem to test only the outer boundaries. Any ideas would be appreciated. Thanks.
Just brainstorming here... Maybe this will get you thinking of something that will work for you. If the images do not overlap, then drawing image A on top of image B will result in the same image as drawing image B on top of image A. If they overlap, the results will differ. So you could draw image A, then B, and checksum the result; then draw A again on top and checksum that. If the checksums match, drawing A on top changed nothing, so the pieces don't overlap and the puzzle piece fits.
If you have a 1-bit mask that represents each image, then ORing them together and XORing them together will have the same result if they don't overlap and different results if they do.
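In mask form the test is a one-liner. A sketch with NumPy boolean arrays standing in for the 1-bit masks:

    import numpy as np

    def overlaps(mask_a, mask_b):
        """mask_a, mask_b: equal-shape boolean arrays, True where the
        piece is opaque. OR and XOR agree exactly when no pixel is set
        in both masks, i.e. when the pieces do not overlap."""
        return not np.array_equal(mask_a | mask_b, mask_a ^ mask_b)
        # equivalently: (mask_a & mask_b).any()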
Do you know the correct order of the pieces beforehand? Maybe it's better to assign a tag to each UIImageView representing the image's index number. Then you just create a kind of grid and check in which cell the piece was placed. If the cell number and the UIImageView tag match, then this is the right place.
If you have only two images and one must fit into a specific area of the other, you could store the frame of this hole and check whether the piece is placed somewhere around the centre of that frame. That will be more user-friendly, because when you're checking pixels or bit masks you require the user to be extremely precise; otherwise your comparison code has to allow some shift, which makes it very complicated.
But if you don't want to hardcode the hole's frame, you could calculate it dynamically (just find the transparent areas in the image). Either way, this solution will be more effective than checking bit-mask matches on the fly.
