Slicing an image in code - iOS

I have seen some apps that work like a puzzle. The app first asks the user to select an image, then slices it into a 4x4 or 5x5 grid of squares arranged randomly. One square is left empty, and the user can rearrange the pieces by sliding them into the empty slot.
I know how to slide the pieces. But the main task is: how do I slice the image into smaller images?
Is it possible?

Matt Long discusses this in his article Subduing CATiledLayer.

Not sure if this is the most efficient way, but I think it should work.
Load the image into a UIImageView of the size you want each slice to be. Let's use a 480x320 image as an example and slice it into 9 parts, a 3x3 grid. That makes each piece 160x106 (roughly; 320 does not divide evenly by 3).
So create your UIImageView with the size 160x106 and set its image property to your full-sized image. Set the image view's contentMode to UIViewContentModeScaleAspectFill so that the regions outside the view are clipped. I haven't tried it, but I assume this would give you the top-left corner of your grid, position 1,1.
Now here is where you make a choice depending on memory efficiency and performance; you will have to test which is best.
Option 1
Continue to do this for each slice, setting the frame so that the correct portion of the image is displayed and the rest is clipped. With 160x106 pieces, the frame for position 1,2 would look like CGRectMake(160, 0, 160, 106), for position 2,2 it would be CGRectMake(160, 106, 160, 106), and so forth (a sketch follows below).
This results in the full image being loaded into memory 9 times, which may or may not be a problem. You will have to do some analysis on memory usage.
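One way to realize this is to make each slice a small clipping container holding a full-size copy of the image, offset so only that slice's region shows. A minimal sketch, assuming a 3x3 grid of the 480x320 example; "board" and "fullImage" are illustrative names:
CGSize slice = CGSizeMake(160, 106);
for (int row = 0; row < 3; row++) {
    for (int col = 0; col < 3; col++) {
        // The tile is the slice-sized window, positioned in the 3x3 grid.
        UIView *tile = [[UIView alloc] initWithFrame:CGRectMake(col * slice.width, row * slice.height, slice.width, slice.height)];
        tile.clipsToBounds = YES; // hide everything outside the tile
        // The full image sits inside, offset so only this tile's region is visible.
        UIImageView *full = [[UIImageView alloc] initWithImage:fullImage];
        full.frame = CGRectMake(-col * slice.width, -row * slice.height, 480, 320);
        [tile addSubview:full];
        [board addSubview:tile];
    }
}
Note that clipsToBounds does the actual clipping here; content scaled or positioned beyond a view's bounds is not clipped by default.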
Option 2
Once you create your "slice" in the UIImageView, render that view to an image and save it to a file that you will later load into the appropriate UIImageView.
The following code demonstrates one way to render a UIView into a UIImage.
UIGraphicsBeginImageContext(yourView.frame.size);                   // open an offscreen context the size of the view
[[yourView layer] renderInContext:UIGraphicsGetCurrentContext()];   // render the view's layer into it
UIImage *imageOfView = UIGraphicsGetImageFromCurrentImageContext(); // snapshot the context as a UIImage
UIGraphicsEndImageContext();                                        // balance the Begin call
You would then save this UIImage to disk with an appropriate name so it can be loaded into the appropriate slice later.
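For example, a slice could be written to the Documents directory like this (the file-naming scheme is just an illustration):
NSData *pngData = UIImagePNGRepresentation(imageOfView);
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
NSString *path = [docs stringByAppendingPathComponent:@"slice_1_1.png"]; // encode the grid position in the name
[pngData writeToFile:path atomically:YES];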

Related

Get pixel values of image and crop the black part of the image : Objective C

Can I get the pixel values of an image and crop its black part? For instance, I have an image whose useful content is surrounded by black (example images omitted), and I want the same image without the black part.
Any possible solution on how to do this? Any libraries/code?
I am using Objective-C.
I have seen a solution to a similar question, but I don't understand it in detail. Please provide detailed steps. Thanks.
Probably the fastest way of doing this is to iterate through the image and find the border pixels which are not black, then redraw the image to a new context, clipping to the rect defined by those border pixels.
By border pixels I mean the left-most, top-most, bottom-most and right-most non-black pixels. You can get the raw RGBA buffer from the UIImage, iterate through its width and height, and update the border values as appropriate. For instance, to get leftMostPixel you would first set it to some large value (such as the image width) and then, during the iteration, if a pixel is not black and leftMostPixel > x, set leftMostPixel = x.
Now that you have the 4 bounding values, you can create a frame from them. To redraw just the target rectangle you can use various context tools, but probably the easiest is to create a view with the size of the bounding rect, add an image view with the size of the original image to it, set the image view's origin to minus the origin of the bounding rect (so the excess hangs off-screen), and take a screenshot of the view.
You may encounter some issues with the orientation of the image, though. If the image has an orientation other than up, the raw data will not respect it. So you need to take that into account when creating the bounding rect... or redraw the image first so that it is oriented correctly... or you can even create a sub-buffer of the RGBA data, create a CGImage from that data, and apply the same orientation to the output UIImage as the input had.
So after getting the bounds there are quite a few possible procedures. Some are slower, some take more memory, some are simply hard to code and have edge cases. A sketch of one variant follows.
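Here is a minimal sketch of the scan-then-crop idea, assuming RGBA data and an arbitrary threshold of 30 per channel for "black". It crops the raw CGImage with CGImageCreateWithImageInRect rather than the screenshot trick, and carries the orientation over to the output as suggested above. The method name is illustrative.
- (UIImage *)imageByCroppingBlackBorder:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;

    // Render the raw CGImage into an RGBA buffer we can iterate over.
    // Note this works on raw pixels, so UIImage orientation is ignored here.
    uint8_t *pixels = calloc(height, bytesPerRow);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);

    // Find the left-most, top-most, right-most and bottom-most non-black pixels.
    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            uint8_t *p = pixels + y * bytesPerRow + x * 4;
            if (p[0] > 30 || p[1] > 30 || p[2] > 30) { // threshold for "black" is a guess
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    free(pixels);

    if (minX >= maxX || minY >= maxY) return image; // entirely black: nothing to crop

    // Crop the raw CGImage and apply the input's orientation to the output.
    CGRect crop = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
    CGImageRef cropped = CGImageCreateWithImageInRect(cgImage, crop);
    UIImage *result = [UIImage imageWithCGImage:cropped scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(cropped);
    return result;
}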

Objective-C: How does Snapchat make the text on top of an image/video so sharp and not pixelated?

In my app, users can place text on top of images, like Snapchat, and then save the image to their device. I simply add the text view on top of the image and snapshot the image view using this code:
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I compare the text on my image to the text on a Snapchat image... it is significantly different. Snapchat's text on top of an image is significantly sharper than mine; mine looks very pixelated. I am not compressing the image at all, just saving it as-is using ALAssetLibrary.
Thank you
When you use UIGraphicsBeginImageContext, it defaults to a 1x scale (i.e. non-retina resolution). You probably want:
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
Which will use the same scale as the screen (probably 2x). The final parameter is the scale of the resulting image; 0 means "whatever the screen's scale is".
If your imageView is scaled down to the size of the screen, then I think your JPEG will also be limited to that resolution. If setting the scale in UIGraphicsBeginImageContextWithOptions does not give you enough resolution, you can do your drawing in a larger offscreen image. Something like:
UIGraphicsBeginImageContext(imageSize);                                 // offscreen context at full image resolution
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)]; // draw the photo at full size
CGFloat scale = imageSize.width / textOverlay.bounds.size.width;        // map the screen-sized overlay up to image size
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);         // enlarge everything rendered from here on
[textOverlay.layer renderInContext:UIGraphicsGetCurrentContext()];      // render the text overlay, now scaled up
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The "scale" value maps the textOverlay view, which is probably at screen size, up to the offscreen image size; the width ratio above is one way to compute it.
Alternatively, and probably simpler, you can start with a larger UIImageView but put it inside another UIView that scales it down to fit on screen. Do the same with your text overlay view. Then your code for creating the composite should work at whatever resolution you choose for the UIImageView. A sketch follows.
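A rough sketch of that container idea ("canvas", "photo", "fullImage", "fullSize" and "screenWidth" are illustrative names; the overlay is laid out in full-resolution coordinates):
UIView *canvas = [[UIView alloc] initWithFrame:CGRectMake(0, 0, fullSize.width, fullSize.height)];
UIImageView *photo = [[UIImageView alloc] initWithImage:fullImage];
photo.frame = canvas.bounds;
[canvas addSubview:photo];
[canvas addSubview:textOverlay]; // overlay positioned in full-resolution coordinates
CGFloat fit = screenWidth / fullSize.width;
canvas.transform = CGAffineTransformMakeScale(fit, fit); // shrink for display only
Since renderInContext: does not apply the root layer's own transform, rendering canvas.layer into a full-size context should still produce the composite at full resolution.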

iOS: Keep real photo resolution when making screen capture with UIGraphicsGetImageFromCurrentImageContext

I want to do basic photo editing in my application, and now I need to be able to add text over a photo. The original photo is >2000 pixels in width and height, so it is scaled down to fit on screen without changing its aspect ratio.
So I put the image in a UIImageView, dragged a label over it, and then saved the on-screen result with UIGraphicsGetImageFromCurrentImageContext. The problem is that I get a small image (320 x some height).
What is the best approach to accomplish this without shrinking the resolution?
Thanks a lot!
I had this exact same problem in an app.
The thing I realised is that you can't do this with a screen capture. In turn, this means that dragging labels and text onto the image can't really be done with UILabels etc. (it can, but bear with me).
What you need to do is keep a track of everything that's going on data-wise.
At the moment you have the frame of your UIImageView. This, in reality, is irrelevant; it is purely there to show the user a representation of what is going on.
This is the same for the UILabel. It has a frame too. Again, this is irrelevant to the final image.
What you need is to store the data behind it in terms that are not absolute, and then convert those values into frames for display on the device.
So, say you have an image that is 3200x4800 pixels (just making the numbers easy) and it is displayed on the device shrunk down to 320x480. Now the user places a label with a frame of 10, 10, 100, 21 with the text "Hello, world" at a particular font size.
Storing the frame 10, 10, 100, 21 is useless, because what you need when the image is output is... 100, 100, 1000, 210 (i.e. ten times the size).
So, really you should be storing information in the background like...
frame = 0.031, 0.021, 0.312, 0.044
// these are all fractions of the displayed size
Now, you have percentage values of where the label should be and how big it should be based on the size of the image.
So, for the shrunk on-screen size it will give back 10, 10, 100, 21, and for the full image size it will give 100, 100, 1000, 210, so the label will look the same relative size when rendered out.
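As a sketch (the function names are illustrative, not from any framework):
static CGRect normalizedFrame(CGRect frame, CGSize displaySize) {
    // store the label frame as fractions of the displayed image size
    return CGRectMake(frame.origin.x / displaySize.width,
                      frame.origin.y / displaySize.height,
                      frame.size.width / displaySize.width,
                      frame.size.height / displaySize.height);
}
static CGRect denormalizedFrame(CGRect normalized, CGSize targetSize) {
    // multiply back out for whichever output size you render at
    return CGRectMake(normalized.origin.x * targetSize.width,
                      normalized.origin.y * targetSize.height,
                      normalized.size.width * targetSize.width,
                      normalized.size.height * targetSize.height);
}
With the numbers above, normalizedFrame(CGRectMake(10, 10, 100, 21), CGSizeMake(320, 480)) gives roughly {0.031, 0.021, 0.312, 0.044}, and denormalizing that against 3200x4800 gives back {100, 100, 1000, 210}.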
You could create a compound UIView by having a UIView contain a UIImageView and a UILabel; then you just have to resize it to the full image size before rendering it. That would be the easy but naive way of approaching it initially.
Or you could create a UIView with CALayers backing it that display the image and text.
Or you could render out the image and text with some sort of draw method.
Either way, you can't just use a screen capture.
And yes, this is a lot more complex than it first appears.

Image Drawing on UIView

I'm trying to create an application where I can draw a lot of pictures, each at a specific point, on one view. For each image I have the coordinates at which to draw it, plus its width and height.
For example:
I have 2 billion JPEG images. For each image I have a specific origin point and size.
Each second I need to draw 20-50 images onto the view at specific points.
I have already tried to solve this in the following way:
UIGraphicsBeginImageContextWithOptions(self.previewScreen.bounds.size, YES, 0);
[self.previewScreen.image drawAtPoint:CGPointMake(0, 0)]; // redraw the whole existing composite every time
[image drawAtPoint:CGPointMake(nRect.left, nRect.top)];   // draw the new image on top of it
UIImage *imagew = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self.previewScreen setImage:imagew];
but with this solution I get very high latency when displaying images, and high CPU usage.
WBR
Maxim Tartachnik
So I guess your question is: how to make it faster?
Why draw the images using an image context at all? You could just add UIImageViews containing your images to your main view and position them as needed, for example:
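A minimal sketch; nRect is the per-image rect from the question, and its width/height fields are assumed to exist alongside left/top:
UIImageView *iv = [[UIImageView alloc] initWithImage:image];
iv.frame = CGRectMake(nRect.left, nRect.top, nRect.width, nRect.height); // position it; no redrawing needed
[self.previewScreen addSubview:iv]; // Core Animation composites the layers on the GPU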

Editing an already-edited UIImage again in an iOS app

In my iOS app, I am putting several UIImages on one background UIImage and then saving the overall image, with all the subimages added to it, by taking a screenshot programmatically.
Now I want to change the positions of the subimages in that saved image. So I want to know how to detect the positions of the subview images, given that I captured the whole thing as one screenshot.
Record their frames converted to window coordinates. The pixels of the image should match the frame origin for non-retina screens, or double it for retina. The screenshot is of the whole screen, so its dimensions are equivalent to the window frame. UIView has convenience methods to convert arbitrary view frames to another view's (or the window's) coordinates.
EDIT: to deal with aspect-fit content, you have to do the math yourself. You know the frame of the image view, and you can ask the image for its size. Knowing the aspect ratio of each lets you determine the dimension in which the image exactly fills the view, and then you can compute the other dimension (which will be smaller than the corresponding view dimension). Divide the difference between the view dimension and the image dimension by two, and that gives you the offset of the image inside the view. Now you can save the frame of the image as it is displayed in the view.
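A sketch of that math, assuming UIViewContentModeScaleAspectFit (the function name is illustrative):
static CGRect imageRectForAspectFit(CGSize imageSize, CGRect viewBounds) {
    // the smaller ratio is the one at which the image fits entirely in the view
    CGFloat scale = MIN(viewBounds.size.width / imageSize.width,
                        viewBounds.size.height / imageSize.height);
    CGSize drawn = CGSizeMake(imageSize.width * scale, imageSize.height * scale);
    // center the drawn image: half the leftover space on each side
    return CGRectMake((viewBounds.size.width - drawn.width) / 2.0,
                      (viewBounds.size.height - drawn.height) / 2.0,
                      drawn.width, drawn.height);
}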
