Image Drawing on UIView - iOS

I'm trying to create an application where I can draw many pictures on a single view, each at a specific point determined per image. For each image I have the coordinates where it should be drawn, plus its width and height.
For example:
I have 2 billion JPEG images. For each image I have a specific origin point and size.
Every second I need to draw 20-50 images onto the view at their specific points.
I have already tried to solve it this way:
UIGraphicsBeginImageContextWithOptions(self.previewScreen.bounds.size, YES, 0);
[self.previewScreen.image drawAtPoint:CGPointMake(0, 0)]; // redraw the existing contents
[image drawAtPoint:CGPointMake(nRect.left, nRect.top)];   // composite the new image on top
UIImage *imagew = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self.previewScreen setImage:imagew];
but with this solution I get very high latency when displaying the images, and heavy CPU usage.
WBR
Maxim Tartachnik

So I guess your question is, how to make it faster?
Why draw the images using an image context at all? You could just add UIImageViews containing your images to your main view and position them as needed.
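A minimal sketch of that idea, assuming each image comes with its own origin and size (ImageRecord and its properties are hypothetical placeholders for whatever model you have):
for (ImageRecord *record in imageRecords) {
    // One lightweight UIImageView per image; Core Animation handles the compositing
    UIImageView *imageView = [[UIImageView alloc] initWithImage:record.image];
    imageView.frame = CGRectMake(record.origin.x, record.origin.y,
                                 record.size.width, record.size.height);
    [self.view addSubview:imageView];
}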

Related

App crashed when displaying a large image in a UIImageView

I set a 10000 x 10000 pixel image on a UIImageView, loaded from the network with SDWebImage, and the app crashed because it allocated too much memory. I tried to resize the image after SDWebImage loaded it, so I added the code below:
UIGraphicsBeginImageContext(size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClearRect(context, CGRectMake(0, 0, size.width, size.height));
// Note: the second argument is a CGInterpolationQuality enum, not a float
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
[self drawInRect:drawRect blendMode:kCGBlendModeNormal alpha:1];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Although the resulting image was smaller, my app crashed for the same reason.
It seems some rendering happens during the resize: memory rose to about 600 MB and fell back to 87 MB a little while later.
How can I resize an image without rendering the full-size bitmap?
Displaying the same image in a UIWebView does not seem to have this problem. How does that work?
Any help and suggestions will be highly appreciated.
Resolution:
https://developer.apple.com/library/ios/samplecode/LargeImageDownsizing/
The resolution above works for JPEG but not for PNG.
You can't unpack the whole image into memory because it's too big. This is what image tiling is for: you download a set of tiles for the image based on the part you're currently looking at (the zoom position and scale).
That is, if you're looking at the whole image, you get one tile that is zoomed out and therefore low quality and small in size. As you zoom in, you get back other small images that each show less of the image's area.
The web view is likely using the image format to download a relatively small image that is a scaled-down version of the whole thing, so it doesn't need to unpack the full image into memory. It can do this because it knows your image is 10000 x 10000 but that it is going to be displayed on the page at, say, 300 x 300.
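If the goal is just to decode a scaled-down version without materializing the full bitmap, ImageIO's thumbnail API is one common technique; this is a sketch added for illustration, not what the web view is confirmed to do, and fileURL and the 1024-pixel cap are assumptions:
#import <ImageIO/ImageIO.h>

// Decode a downsampled version straight from the file on disk
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)fileURL, NULL);
NSDictionary *options = @{
    (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
    (id)kCGImageSourceCreateThumbnailWithTransform : @YES, // respect EXIF orientation
    (id)kCGImageSourceThumbnailMaxPixelSize : @1024        // longest side of the result
};
CGImageRef scaledRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
UIImage *scaledImage = [UIImage imageWithCGImage:scaledRef];
CGImageRelease(scaledRef);
CFRelease(source);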
Did you try using UIImageJPEGRepresentation (or UIImagePNGRepresentation)?
You can make your image's file smaller with it.
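For reference, a minimal usage sketch (bigImage and path are placeholders; note this shrinks the encoded file, not the decoded pixel dimensions, so it may not help with an in-memory crash):
// Re-encode with heavier JPEG compression; 0.5 is the compression quality
NSData *jpegData = UIImageJPEGRepresentation(bigImage, 0.5);
[jpegData writeToFile:path atomically:YES];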

Objective-C: How does Snapchat make the text on top of an image/video so sharp and not pixelated?

In my app, users can place text on top of images, Snapchat-style, and then save the result to their device. I simply add the text view on top of the image and snapshot the combination using the code:
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I compare the text on my image to the text on a Snapchat image, they are significantly different. Snapchat's text on top of an image is significantly sharper than mine; mine looks very pixelated. I am not compressing the image at all, just saving it as-is using ALAssetLibrary.
Thank You
When you use UIGraphicsBeginImageContext, it defaults to a 1x scale (i.e. non-retina resolution). You probably want:
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
Which will use the same scale as the screen (probably 2x). The final parameter is the scale of the resulting image; 0 means "whatever the screen is".
If your imageView is scaled to the size of the screen, then I think your jpeg will also be limited to that resolution. If setting the scale on UIGraphicsBeginImageContextWithOptions does not give you enough resolution, you can do your drawing in a larger offscreen image. Something like:
UIGraphicsBeginImageContext(imageSize);
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
// Scale the context so the screen-sized overlay fills the full-size image
CGFloat scale = imageSize.width / textOverlay.bounds.size.width;
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
[textOverlay.layer renderInContext:UIGraphicsGetCurrentContext()];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You need to set the scale value so that the textOverlay view, which is probably at screen size, is scaled up to the offscreen image size (the computation above assumes the widths are proportional).
Alternatively, and probably simpler, you can start with a larger UIImageView but put it within another UIView that scales it down to fit on screen. Do the same with your text overlay view. Then your code for creating the composite should work at whatever resolution you chose for the UIImageView.
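A sketch of that alternative, assuming fullSize is the photo's native pixel size and photo and textOverlay already exist:
// Build the composite at full resolution; scale only the on-screen presentation
UIImageView *canvas = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, fullSize.width, fullSize.height)];
canvas.image = photo;
[canvas addSubview:textOverlay]; // overlay laid out in full-resolution coordinates
CGFloat fit = self.view.bounds.size.width / fullSize.width;
canvas.transform = CGAffineTransformMakeScale(fit, fit); // shrinks the display, not the bounds
[self.view addSubview:canvas];
// When saving, render canvas.layer as before: its bounds are still fullSize,
// so the snapshot comes out at full resolution.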

iOS Drawing image with objects (arrows) and saving to disk

I am creating an iOS drawing application that allows the user to place arrows on the page and position/stretch/rotate them. When the user saves the image, I must modify the background image with the PNG arrows at the exact coordinates they were dragged to.
The 3 draggable parts will be UIImages with cap insets to keep each arrow looking normal when stretched. The canvas UIImageView will have the same aspect ratio as the actual image, ensuring no black space is visible.
My question is this: after a draggable part has been stretched, how do I save the result of its UIImageView to disk/memory so I can later composite it onto the background image (the background UIImageView canvas)? It is important that the image I use to modify the canvas looks exactly like the UIImageView that represented it on screen, including how the cap insets were applied.
I am ignoring rotation of the draggable parts for now because I suspect that will be the easy part.
Try using this code to get a layer screenshot from a desired view:
- (UIImage *)imageFromLayer:(CALayer *)layer {
    // Scale 0 means "match the screen scale"; plain UIGraphicsBeginImageContext
    // would render at 1x and lose Retina sharpness
    UIGraphicsBeginImageContextWithOptions([layer frame].size, NO, 0);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}

iOS - How to properly scale UIButton with varying image sizes (like Facebook News Feed)?

I am attempting to replicate Facebook's News Feed for an app, and will have images that should be displayed. The images will be a variety of sizes, so how should I properly scale them to fit the cell without looking weird?
The cell contains a UIButton which then has the image set to it. I am using a button as I want to open the full size image when it is pressed. If it is better to change to a UIImageView with a tap gesture I can do that too, just thought the button was simpler.
Ideas?
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Use this to scale the image based on size.
I assume that you are going to be using a UITableView in order to show these images.
This is going to create problems when you scroll the table view, because the images get resized as the rows load; that takes too much processing time and keeps the scrolling from being smooth.
There are two ways around this:
Create images that are appropriately sized when you create the full-sized images, and save them to disk that way. If these are static photos, you would create the thumbnails in an image editing program (e.g. Photoshop) and include them along with the full-resolution photos. If the photos are downloaded or generated dynamically, you would need to create the smaller version when you save the full-size image. This is actually more complicated than it sounds, and I can provide some code if this is the route you need to go. The key point is that you will not need to resize the images while the table view rows are being loaded, saving a lot of time.
Resize the images in a background thread and add them to the table view rows as the processing finishes (a sketch follows below). There are numerous examples of doing this, including the following example project from Apple: https://developer.apple.com/library/ios/#samplecode/LazyTableImages/Introduction/Intro.html
The best possible performance would be to do both, but from my experience, only one or the other is really needed.
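A minimal sketch of the background-thread option using GCD, reusing the imageWithImage:scaledToSize: helper above (Resizer, fullImage, cellSize, and indexPath are placeholders for your own class and variables):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Do the expensive resize off the main thread
    UIImage *thumb = [Resizer imageWithImage:fullImage scaledToSize:cellSize];
    dispatch_async(dispatch_get_main_queue(), ^{
        // cellForRowAtIndexPath: returns nil if the row has scrolled offscreen,
        // so a recycled cell never receives a stale image
        UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
        cell.imageView.image = thumb;
    });
});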
Why not just handle all the image sizing through the built-in behavior of UIImageView, and then put a transparent button over it with the same size (frame)? (To make a UIButton transparent, just set the background color to clear and the button type to Custom; I think the default is rounded rect or something.)
On a different note, if you want to detect taps on a UIImageView without putting a transparent UIButton over it, use touchesBegan and check whether the touch landed on one of the image views in your array.
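A sketch of the transparent-button approach (imageTapped: is a hypothetical action method):
UIButton *tapButton = [UIButton buttonWithType:UIButtonTypeCustom]; // Custom draws no background
tapButton.frame = imageView.frame;
tapButton.backgroundColor = [UIColor clearColor];
[tapButton addTarget:self action:@selector(imageTapped:)
    forControlEvents:UIControlEventTouchUpInside];
[imageView.superview addSubview:tapButton];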
You can use CGAffineTransformMakeScale from the Quartz 2D (Core Graphics) framework to scale a view very easily.
#define X_SCALE_FACTOR 2.5
#define Y_SCALE_FACTOR 2.5
UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"myimage.png"]];
imageView.transform = CGAffineTransformMakeScale(X_SCALE_FACTOR, Y_SCALE_FACTOR);

Slicing an image in code

I have seen some apps that are something like a puzzle. They first ask the user to select an image, then slice it into a 4x4 or 5x5 grid and arrange the squares randomly. One square is left empty, and the user rearranges the pieces by sliding a tile into the empty slot.
I know how to slide the tiles. But the main task is: how do I slice the image into smaller images?
Is it possible?
Matt Long discusses this in his article Subduing CATiledLayer.
Not sure if this is the most efficient way, but I think it should work.
Load the image into a UIImageView of the size that you want each slice to be. Let's use a 480x320 image as an example and slice it into 9 parts, a 3x3 grid. That makes each piece 160x106 (roughly; 320 does not divide evenly by 3).
So create your UIImageView with size 160x106 and set its image property to your full-sized image. Set the image view's contentMode to UIViewContentModeScaleAspectFill so that the regions outside the view are clipped. I haven't tried it, but I assume this would give you the top-left corner of your grid, position 1,1.
Now here is where you make a choice depending on memory efficiency/performance, you will have to test which is best.
Option 1
Continue to do this for each slice, but set each UIImageView's frame so that the correct portion of the image is displayed and the rest is clipped. With pieces 160 points wide and 106 points tall, position 1,2 would look like CGRectMake(160, 0, 160, 106), position 2,2 would be CGRectMake(160, 106, 160, 106), and so forth.
This results in the full image being loaded into memory 9 times, which may or may not be a problem. You will have to do some analysis of memory usage.
Option 2
Once you create your "slice" in the UIImageView, save that rendering off as an image that you can later load into the appropriate UIImageView.
The following code demonstrates one way to save a UIView's context to a UIImage.
UIGraphicsBeginImageContext(yourView.frame.size);
[[yourView layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *imageOfView = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You would then save off this UIImage to disk with an appropriate name to be loaded into the appropriate slice later.
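As an alternative to clipping views at all, Core Graphics can crop the slices directly out of the bitmap with CGImageCreateWithImageInRect; a minimal sketch (the method name and signature are illustrative):
- (NSArray *)slicesFromImage:(UIImage *)image rows:(NSInteger)rows cols:(NSInteger)cols {
    NSMutableArray *slices = [NSMutableArray array];
    // Slice dimensions in pixels of the underlying CGImage
    CGFloat w = CGImageGetWidth(image.CGImage) / (CGFloat)cols;
    CGFloat h = CGImageGetHeight(image.CGImage) / (CGFloat)rows;
    for (NSInteger row = 0; row < rows; row++) {
        for (NSInteger col = 0; col < cols; col++) {
            CGImageRef sliceRef = CGImageCreateWithImageInRect(image.CGImage,
                CGRectMake(col * w, row * h, w, h));
            [slices addObject:[UIImage imageWithCGImage:sliceRef]];
            CGImageRelease(sliceRef);
        }
    }
    return slices;
}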
