I am creating an iOS drawing application that allows the user to place arrows on the page and position/stretch/rotate them. When the user saves the image, I must modify the background image with the PNG arrows at the exact coordinates they were dragged to.
The three draggable parts will be UIImages with cap insets to keep the arrow looking normal when stretched. The canvas UIImageView will have the same aspect ratio as the actual image, ensuring no black space is visible.
My question is this: after a draggable part has been stretched, how do I save the rendered result of its UIImageView (the smaller draggable part) to disk/memory for later compositing onto the background image (the background UIImageView canvas)? It is important that the image I use to modify the canvas looks exactly the same as the UIImageView draggable part that represented it, meaning it must capture exactly how the image is displayed, including the cap insets.
I am ignoring rotation of the draggable parts for now because I suspect that would be the easy part.
Try using this code to get a layer screenshot from a desired view:
- (UIImage *)imageFromLayer:(CALayer *)layer {
    // Use the screen's scale (0) so the capture is not limited to 1x resolution,
    // and a non-opaque context so any transparency in the layer is preserved.
    UIGraphicsBeginImageContextWithOptions([layer frame].size, NO, 0);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
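A minimal sketch of how this could be applied to the arrow case (arrowView, canvasImageView, and canvasImage are hypothetical names for one draggable part, the canvas view, and the full-size background image):

// Capture the stretched arrow exactly as it is displayed (cap insets included)
UIImage *arrowSnapshot = [self imageFromLayer:arrowView.layer];

// Convert the arrow's frame from canvas-view points to background-image coordinates
// (assumes the canvas view and the image share the same aspect ratio)
CGFloat factor = canvasImage.size.width / canvasImageView.bounds.size.width;
CGRect arrowRect = CGRectMake(arrowView.frame.origin.x * factor,
                              arrowView.frame.origin.y * factor,
                              arrowView.frame.size.width * factor,
                              arrowView.frame.size.height * factor);

// Composite the snapshot onto the full-size background
UIGraphicsBeginImageContextWithOptions(canvasImage.size, NO, 1.0);
[canvasImage drawInRect:CGRectMake(0, 0, canvasImage.size.width, canvasImage.size.height)];
[arrowSnapshot drawInRect:arrowRect];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();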
My app allows users to place text on top of images, like Snapchat, and then save the result to their device. I simply add the text view on top of the image view and capture it using this code:
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* savedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But when I compare the text on my image to the text on a Snapchat image, it is significantly different: Snapchat's text on top of the image is significantly sharper than mine, which looks very pixelated. I am not compressing the image at all, just saving it as-is using ALAssetsLibrary.
Thank You
When you use UIGraphicsBeginImageContext, it defaults to a 1x scale (i.e. non-retina resolution). You probably want:
UIGraphicsBeginImageContextWithOptions(imageView.layer.bounds.size, YES, 0);
Which will use the same scale as the screen (probably 2x). The final parameter is the scale of the resulting image; 0 means "whatever the screen is".
If your imageView is scaled to the size of the screen, then I think your jpeg will also be limited to that resolution. If setting the scale on UIGraphicsBeginImageContextWithOptions does not give you enough resolution, you can do your drawing in a larger offscreen image. Something like:
// Offscreen context at the full image size (1x scale, so points map 1:1 to pixels)
UIGraphicsBeginImageContext(imageSize);
// Draw the full-resolution image first
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
// Scale the CTM so the screen-sized overlay is enlarged to the image size
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
[textOverlay.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You need to set the "scale" value to scale the textOverlay view, which is probably at screen size, to the offscreen image size.
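For example, assuming the overlay covers the same width as the image (names taken from the snippet above):

CGFloat scale = imageSize.width / textOverlay.bounds.size.width;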
Alternatively, probably simpler, you can start with a larger UIImageView, but put it within another UIView to scale it to fit on screen. Do the same with your text overlay view. Then, your code for creating composite should work, at whatever resolution you choose for the UIImageView.
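A rough sketch of that container approach (fullImage and the view names are illustrative):

// Full-resolution image view inside a container that is shrunk to fit on screen
UIImageView *fullSizeImageView = [[UIImageView alloc] initWithImage:fullImage];
UIView *containerView = [[UIView alloc] initWithFrame:self.view.bounds];
[containerView addSubview:fullSizeImageView];
[self.view addSubview:containerView];

// Shrink for display only; renderInContext: renders in the layer's own
// coordinate space, so the on-screen shrink does not reduce capture resolution.
CGFloat fit = containerView.bounds.size.width / fullSizeImageView.bounds.size.width;
fullSizeImageView.transform = CGAffineTransformMakeScale(fit, fit);
fullSizeImageView.center = CGPointMake(CGRectGetMidX(containerView.bounds),
                                       CGRectGetMidY(containerView.bounds));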
I'm trying to create a feature in my app that allows the user to extract a specified area of an existing image and save it as a PNG with alpha enabled.
I've put a UIView on top of a UIImageView: the image view displays the image, while the user draws their mask on the transparent view. For drawing, I'm using UIBezierPath. The user is able to draw around the object, and the inside is temporarily filled in with black.
The user picks the image from the photo roll and it's presented in the underlying UIImageView, as shown in the left image. When the user has drawn a shape (which closes automatically) on the overlying UIView, it looks like the right image:
This is working as expected, but when the user then clicks "Crop", the magic should start. So far, I've only been able to create a "mask" and save it as an image on the roll, as displayed here (never mind the aspect ratios, I'll fix that later):
This is just a normal image, created from the path/shape, with colors (black on white, not black on transparent).
What I need is some way to use this "image" as the alpha channel for the original image.
I know that these are two completely separate things and that an alpha channel isn't the same as an image, but I have the shape, so I would like to know if there's any possible way to "crop" or "alpha out" with my data. What I want to end up with is a PNG of this cat's face, with the surroundings 100% transparent (or not even there), so that I can change the background, like this:
It's important to note that I'm not talking about showing a UIImage in a UIImageView with applied mask, I'm talking about creating a new image, based on an existing image, combined with another image that I want to somehow convert to the first image's alpha channel, thus saving one image like above, with transparency.
I'm not too familiar with handling the data of images, so I was wondering if anyone know how to create a new image based on two images, one acting as alpha for the other, when neither of the images necessarily have an alpha channel to begin with.
The method below will take your original image and the mask image (the black shape) and return a UIImage that includes only the content of the image covered by the mask:
+ (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)mask
{
    CGImageRef imageReference = image.CGImage;
    CGImageRef maskReference = mask.CGImage;

    CGImageRef imageMask = CGImageMaskCreate(CGImageGetWidth(maskReference),
                                             CGImageGetHeight(maskReference),
                                             CGImageGetBitsPerComponent(maskReference),
                                             CGImageGetBitsPerPixel(maskReference),
                                             CGImageGetBytesPerRow(maskReference),
                                             CGImageGetDataProvider(maskReference),
                                             NULL, // decode is NULL
                                             YES   // should interpolate
                                             );

    CGImageRef maskedReference = CGImageCreateWithMask(imageReference, imageMask);
    CGImageRelease(imageMask);

    UIImage *maskedImage = [UIImage imageWithCGImage:maskedReference];
    CGImageRelease(maskedReference);
    return maskedImage;
}
The areas outside the mask will be transparent. You can then combine the resulting UIImage with a background color.
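A brief usage sketch (assuming originalImage is the photo, shapeMask is the black-on-white mask image, and MaskingHelper is whatever class the method above lives on; outputPath is hypothetical):

// Cut out the masked area; everything outside the black shape becomes transparent
UIImage *cutout = [MaskingHelper maskImage:originalImage withMask:shapeMask];

// Saving as PNG preserves the alpha channel
NSData *pngData = UIImagePNGRepresentation(cutout);
[pngData writeToFile:outputPath atomically:YES];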
I am attempting to replicate Facebook's News Feed for an app, and will have images that should be displayed. The images will be a variety of sizes, so how should I properly scale them to fit the cell without looking weird?
The cell contains a UIButton which then has the image set to it. I am using a button as I want to open the full size image when it is pressed. If it is better to change to a UIImageView with a tap gesture I can do that too, just thought the button was simpler.
Ideas?
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Use this to scale the image based on size.
I assume that you are going to be using a UITableView in order to show these images.
This is going to create problems when you scroll the table view because the images get resized as the rows are loaded and it takes too much processing time, causing the scrolling to not be smooth.
There are two ways around this:
Create images that are appropriately sized when you create the full-sized images, and save them to disk that way. If these are static photos, then you would create the thumbnails in an image editing program (e.g. Photoshop) and include them along with the full-resolution photos. If the photos are downloaded/generated dynamically, then you would need to create the smaller version when you save the full-size image. This is actually more complicated than it sounds, and I can provide some code if this is the route that you need to go. The key point is that you will not need to resize the images while the table view rows are being loaded, saving a lot of time.
Resize the images in a background thread and add them to the table view rows as the processing finishes (a minimal sketch follows below). There are numerous examples around of doing this, including the following example project from Apple: https://developer.apple.com/library/ios/#samplecode/LazyTableImages/Introduction/Intro.html
The best possible performance would be to do both, but from my experience, only one or the other is really needed.
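For the second approach, a minimal GCD sketch (SomeHelperClass stands for whatever class holds the scaling method above; fullImage, targetSize, and kImageButtonTag are hypothetical):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Do the expensive resize off the main thread
    UIImage *thumbnail = [SomeHelperClass imageWithImage:fullImage scaledToSize:targetSize];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Only update the cell if it is still visible for this row
        UITableViewCell *visibleCell = [tableView cellForRowAtIndexPath:indexPath];
        if (visibleCell) {
            UIButton *imageButton = (UIButton *)[visibleCell viewWithTag:kImageButtonTag];
            [imageButton setImage:thumbnail forState:UIControlStateNormal];
        }
    });
});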
Why not just handle all the image sizing through the built-in functionality of a UIImageView and then put a transparent button over it that is the same size (frame) as the image view? (To make a UIButton transparent, just set the background color to clear and the button type to Custom; I think the default is rounded rect or something.)
On a different note, if you wanted to just detect taps on a UIImageView without putting a transparent UIButton over the images, use touchesBegan and detect whether the touch landed on one of the image views in your array, as in the sketch below.
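A rough sketch of that touchesBegan approach (assumes the image views are direct subviews of self.view; self.feedImageViews and showFullSizeImageForImageView: are hypothetical):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self.view];
    // Check whether the touch landed on one of the feed image views
    for (UIImageView *imageView in self.feedImageViews) {
        if (CGRectContainsPoint(imageView.frame, point)) {
            [self showFullSizeImageForImageView:imageView];
            break;
        }
    }
}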
You can use CGAffineTransformMakeScale from the Quartz 2D framework to scale a view very easily.
#define X_SCALE_FACTOR 2.5
#define Y_SCALE_FACTOR 2.5
UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"myimage.png"]];
imageView.transform = CGAffineTransformMakeScale(X_SCALE_FACTOR, Y_SCALE_FACTOR);
I have a very simple UIView containing a few black and white UIImageViews. If I take a screenshot via the physical buttons on the device, the resulting image looks exactly like what I see (as expected) - if I examine the image at the pixel level it is only black and white.
However, if I use the following snippet of code to perform the same action programmatically, the resulting image has what appears to be anti-aliasing applied: all the black pixels are surrounded by faint grey halos. There is no grey in my original scene; it's pure black and white, and the dimensions of the "screenshot" image are the same as the one I am generating programmatically, but I cannot seem to figure out where the grey haloing is coming from.
UIView *printView = fullView;
UIGraphicsBeginImageContextWithOptions(printView.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[printView.layer renderInContext:ctx];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
UIGraphicsEndImageContext();
I've tried adding the following before the call to renderInContext in an attempt to prevent the antialiasing, but it has no noticeable effect:
CGContextSetShouldAntialias(ctx, NO);
CGContextSetAllowsAntialiasing(ctx, NO);
CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
Here is a sample of the two different outputs - the left side is what my code produces and the right side is a normal iOS screenshot:
Since I am trying to send the output of my renderInContext to a monochrome printer, having grey pixels causes some ugly artifacting due to the printer's dithering algorithm.
So, how can I get renderInContext to produce the same pixel-level output of my views as a real device screenshot - i.e. just black and white as is what is in my original scene?
It turns out the problem was related to the resolution of the underlying UIImage being used by the UIImageView. The UIImage was a CGImage created using a data provider. The CGImage dimensions were specified in the same units as the parent UIImageView; however, I am using an iOS device with a retina display.
Because the CGImage dimensions were specified in non-retina size, renderInContext was upscaling the CGImage and apparently this upscaling behaves differently than what is done by the actual screen rendering. (For some reason the real screen rendering upscaled without adding any grey pixels.)
To fix this, I created my CGImage with double the dimension of the UIImageView, then my call to renderInContext produces a much better black and white image. There are still a few grey pixels in some of the white area, but it is a vast improvement over the original problem.
I finally figured this out by changing the call to UIGraphicsBeginImageContextWithOptions() to force it to do a scaling of 1.0 and noticed the UIImageView black pixel rendering had no grey halo anymore. When I forced UIGraphicsBeginImageContextWithOptions() to a scale factor of 2.0 (which is what it was defaulting to because of the retina display), then the grey haloing appeared.
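One way to express that fix in code, as a sketch (doubleSizeCGImage stands for the CGImage created at twice the point dimensions of the UIImageView):

// Tell UIKit the CGImage is 2x (retina) so no upscaling happens at render time
UIImage *retinaImage = [UIImage imageWithCGImage:doubleSizeCGImage
                                           scale:2.0
                                     orientation:UIImageOrientationUp];
imageView.image = retinaImage;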
I would try setting printView.layer.magnificationFilter and printView.layer.minificationFilter to kCAFilterNearest.
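In code (printView is the view being rendered, as in the snippet above):

printView.layer.magnificationFilter = kCAFilterNearest;
printView.layer.minificationFilter = kCAFilterNearest;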
Are the images displayed in UIImageView instances? Is printView their superview?
My application allows users to composite images (add and then rotate, scale, move) on top of a background image and save the final edit to the camera roll.
The main editing ViewController's view is the top of a UIView hierarchy. To this view I add as a subview a UIImageView with an image from the camera roll or from the camera. The user then adds additional characters (dogs, cats, etc.), each implemented as a separate UIView which can be rotated, moved and scaled.
When the user is done editing, I want to save the entire scene to the camera roll at the highest resolution available, which is the resolution of the background image (the UIImageView): 1936x2592 pixels on an iPhone 4.
At the moment I use the code below; however, this only gives me the display resolution of the scene and not the full resolution of the image in memory. Any suggestions? (I tried increasing the context size, in the code below, to the full background size, but no luck there.)
// create a CG context
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, [UIScreen mainScreen].scale);
// render into the new context
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
// get the image out of the context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The answer seems to be simple enough and is also explained in this thread https://stackoverflow.com/a/11177322/207616
Simply scale up the drawing context before calling renderInContext to achieve the desired output image size:
CGContextScaleCTM(context, scaleFactor, scaleFactor);
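Putting it together, a rough sketch (backgroundImage is the full-resolution UIImage and view is the on-screen editing view; names are illustrative):

// Output at the background image's full pixel size
CGSize outputSize = backgroundImage.size; // e.g. 1936x2592
UIGraphicsBeginImageContextWithOptions(outputSize, NO, 1.0);
CGContextRef context = UIGraphicsGetCurrentContext();

// Scale the screen-sized view hierarchy up to the output size
CGFloat scaleFactor = outputSize.width / view.bounds.size.width;
CGContextScaleCTM(context, scaleFactor, scaleFactor);

[view.layer renderInContext:context];
UIImage *fullResImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();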