Best performance solution for background image of 100 UIButtons - ios

I have a UICollectionView with approximately 100 cells, with a rounded button inside each cell. There are 5 cells per row, so I have to scroll up and down to select the buttons.
When a button is selected, its background image changes. I've done this in several ways, which I describe below. Maybe it is not a very demanding view, but I was wondering which is the least expensive approach in terms of performance.
One solution I found is using an extension of UIImage and setting the button.layer.cornerRadius like this:
extension UIImage {
    class func imageWithColor(color: UIColor?) -> UIImage! {
        // Draws a 1 x 1 image filled with the given color (white if nil).
        let rect = CGRectMake(0.0, 0.0, 1.0, 1.0)
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0)
        let context = UIGraphicsGetCurrentContext()
        if let color = color {
            color.setFill()
        } else {
            UIColor.whiteColor().setFill()
        }
        CGContextFillRect(context, rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
and then setting the button background image with:
button.layer.cornerRadius = (cell.bounds.width - 8) / 2
button.clipsToBounds = true
button.setBackgroundImage(UIImage.imageWithColor(UIColor.greenColor()), forState: UIControlState.Selected)
I've heard that setting the layer.cornerRadius is pretty expensive.
Another approach would be just designing a square image, in Photoshop or similar, with a circle in the middle, leaving the rest transparent, and setting it as the button background.
Another option, which I still haven't tried, could be making a 1 x 1 pixel image with a color and tiling it as the background fill (I still haven't checked the code for this one). But I think this is pretty similar to the first way.
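For reference, the tiling idea usually goes through UIColor(patternImage:); a minimal sketch, reusing the imageWithColor helper above:
// UIColor(patternImage:) repeats the 1 x 1 image to fill whatever it paints.
let tile = UIImage.imageWithColor(UIColor.greenColor())
button.backgroundColor = UIColor(patternImage: tile)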
Would you solve this question by measuring the performance with some tool, or just by knowing the Swift language more deeply?

Try applying a mask to the image. It's better because you do it once per image, so it should not affect scroll performance. All you need is a mask image: a square image with a white background and a black circle in the middle. Here you can find an example (Obj-C). Swift:
extension UIImage {
    func maskedImage(mask: UIImage) -> UIImage {
        // Build a Core Graphics mask from the mask image's bitmap.
        let maskImgRef = mask.CGImage
        let maskRef = CGImageMaskCreate(CGImageGetWidth(maskImgRef),
                                        CGImageGetHeight(maskImgRef),
                                        CGImageGetBitsPerComponent(maskImgRef),
                                        CGImageGetBitsPerPixel(maskImgRef),
                                        CGImageGetBytesPerRow(maskImgRef),
                                        CGImageGetDataProvider(maskImgRef), nil, false)
        if let maskedRef = CGImageCreateWithMask(self.CGImage, maskRef) {
            // Redraw the masked image into a fresh context so it owns its bitmap.
            let maskedIm = UIImage(CGImage: maskedRef)
            UIGraphicsBeginImageContext(maskedIm.size)
            maskedIm.drawInRect(CGRect(origin: CGPointZero, size: maskedIm.size))
            let img = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()  // balance the Begin call above
            return img
        }
        return self
    }
}
UPD: the code above is helpful if your images aren't monochrome. If they are monochrome, you can use UIButtons with the system type instead of image views: just disable user interaction, set the circled image on these buttons, and manipulate tintColor.
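A minimal sketch of that monochrome idea (the "circle" asset name here is a placeholder):
// System-type buttons re-render template images in their tintColor.
let button = UIButton(type: .System)
button.userInteractionEnabled = false
let circle = UIImage(named: "circle")?.imageWithRenderingMode(.AlwaysTemplate)
button.setImage(circle, forState: .Normal)
button.tintColor = UIColor.greenColor() // change this to "recolor" the circle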

Related

UINavigationBar Custom Color Hairline Border

First of all, I did search this before posting, but if there is an answer out there, it is buried under the millions of questions about how to remove the default bottom border of a navigation bar.
I do NOT want to remove the bottom border ("shadow") of the navigation bar.
I am trying to "theme" my app by the usual method of using appearance proxies.
I can globally change most visual attributes of UINavigationBar with code like the following:
let navigationBarProxy = UINavigationBar.appearance()
navigationBarProxy.isTranslucent = false
navigationBarProxy.barTintColor = myBarBackgroundColor
navigationBarProxy.tintColor = myBarTextColor
Regarding the 'hairline' bottom border of the bar (or, as it is known, the "shadow"), I can either keep the default one by doing nothing or by specifying nil:
navigationBarProxy.shadowImage = nil
...or I can specify a custom color by assigning a solid image of the color I'm after:
navigationBarProxy.shadowImage = UIImage.withColor(myBorderColor)
(uses helper extension:)
extension UIImage {
    public static func withColor(_ color: UIColor?, size: CGSize = CGSize(width: 1, height: 1)) -> UIImage? {
        let rect = CGRect(origin: CGPoint.zero, size: size)
        UIGraphicsBeginImageContext(rect.size)
        let context = UIGraphicsGetCurrentContext()
        let actualColor = color ?? .clear
        context?.setFillColor(actualColor.cgColor)
        context?.fill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
However, the approach above gives me (on retina devices) a 1pt, 2px border, whereas the default, light gray one is actually 0.5pt, 1px (a.k.a. "hairline").
Is there any way to achieve a 0.5 pt (1px), custom-colored bottom border (shadow) for UINavigationBar?
I guess I could use a runtime-generated background image that is for the most part solid, but has a 1px border of my chosen color "baked in" at the bottom. But this seems inelegant at best, and I'm not sure how it would behave when the navigation bar height changes: is the image sliced, or simply stretched, or what?
Based on the chosen answer found here (with small changes because it was old):
How to change the border color below the navigation bar?
// in viewDidLoad
UIView *navBorder = [[UIView alloc] initWithFrame:CGRectMake(0,
                     self.navigationController.navigationBar.frame.size.height, // <-- same height, not - 1
                     self.navigationController.navigationBar.frame.size.width,
                     1 / [UIScreen mainScreen].scale)]; // <-- 5/5S/SE/6 will be 0.5, 6+/X will be 0.33
// custom color here
[navBorder setBackgroundColor:customColor];
[self.navigationController.navigationBar addSubview:navBorder];
Credit here for finding scale programmatically:
Get device image scale (e.g. @1x, @2x and @3x)
NOTE:
iPhone 6+/X are @3x, so 1px height will be 0.33pt.
iPhone 5/5S/SE/6 are @2x, so 1px height will be 0.5pt.
Tested in the simulator; may need to verify on actual devices.
Visually the same as the default nav bar, but with a custom color.
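For completeness, a rough Swift equivalent of the snippet above (a sketch, assuming it runs in a view controller's viewDidLoad and that customColor is defined elsewhere):
if let bar = navigationController?.navigationBar {
    // Same height as the bar, so the hairline sits just below its bottom edge.
    let hairline = UIView(frame: CGRect(x: 0,
                                        y: bar.frame.height,
                                        width: bar.frame.width,
                                        height: 1 / UIScreen.main.scale))
    hairline.backgroundColor = customColor
    bar.addSubview(hairline)
}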
I believe you want to remove the shadow. This should help with that.
[[UINavigationBar appearance] setShadowImage:[UIImage new]];
If you want a differently coloured shadow, you can create an image with your desired colour and use it instead of
[UIImage new]
You can use something like this for generating such an image yourself:
+ (UIImage *)imageWithColor:(UIColor *)color {
    CGRect rect = CGRectMake(0.0f, 0.0f, 1.0f, 1.0f);
    const CGFloat alpha = CGColorGetAlpha(color.CGColor);
    const BOOL opaque = alpha == 1;
    UIGraphicsBeginImageContextWithOptions(rect.size, opaque, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

UIImage size doubled when converted to CGImage?

I am trying to crop part of an image taken with the iPhone's camera via the cropping(to:) method on a CGImage, but I am encountering a weird phenomenon: my UIImage's dimensions are doubled when converted with .cgImage, which, obviously, prevents me from doing what I want.
The flow is:
1. A picture is taken with the camera and goes into a full-screen imageContainerView.
2. A "screenshot" of this imageContainerView is made with a UIView extension, effectively resizing the image to the container's dimensions.
3. imageContainerView's .image is set to now be the "screenshot":
let croppedImage = imageContainerView.renderToImage()
imageContainerView.image = croppedImage
print(imageContainerView.image!.size) //yields (320.0, 568.0)
print(imageContainerView.image!.cgImage!.width, imageContainerView.image!.cgImage!.height) //yields (640, 1136) ??
extension UIView {
    func renderToImage(afterScreenUpdates: Bool = false) -> UIImage {
        let rendererFormat = UIGraphicsImageRendererFormat.default()
        rendererFormat.opaque = isOpaque
        let renderer = UIGraphicsImageRenderer(size: bounds.size, format: rendererFormat)
        let snapshotImage = renderer.image { _ in
            drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
        }
        return snapshotImage
    }
}
I have been wandering around here with no success so far and would greatly appreciate a pointer or a suggestion on how/why the image size is suddenly doubled.
Thanks in advance.
This is because print(imageContainerView.image!.size) prints the size of the image object in points, while print(imageContainerView.image!.cgImage!.width, imageContainerView.image!.cgImage!.height) prints the size of the underlying image in pixels.
On the iPhone you are using, there are 2 pixels for every point, both horizontally and vertically. The UIImage scale property will give you the factor, which in your case will be 2.
See this link iPhone Resolutions
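To illustrate the point/pixel relationship (a sketch, not from the original answer; it reuses the question's names):
let image = imageContainerView.image!
let scale = image.scale                            // 2 on an @2x device
// Pixel dimensions are point dimensions multiplied by the scale factor.
assert(CGFloat(image.cgImage!.width) == image.size.width * scale)

// So a crop rect expressed in points must be scaled up before calling
// cropping(to:), which works in pixels.
let cropInPoints = CGRect(x: 10, y: 10, width: 100, height: 100)
let cropInPixels = cropInPoints.applying(CGAffineTransform(scaleX: scale, y: scale))
let croppedCG = image.cgImage!.cropping(to: cropInPixels)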

renderInContext does not work correctly when changing layer Z position

My application allows the user to move UIImageViews backward and forward, and then the user can capture that screen. Here is my code to capture the screen into a UIImage:
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 2.0f);
    // I even tried view.layer.presentationLayer but still not working
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
I do not use [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES] because it is slower and sometimes (when the view is not visible) draws the screenshot in black.
But the problem with renderInContext is the Z position that changes between UIImageViews (imageView.layer.zPosition = 0.01; etc.). On the phone's screen this works correctly when I assign a value to zPosition, but in the captured screen it turns out wrong.
Is there any way I can resolve this problem? Thanks in advance.
Edited:
Here is what I tried to do before capturing the screenshot. I use this code to make one image view display in front of another one:
- (void)bringOneLevelUp:(UIImageView *)imageView
{
    // _imgArray is sorted by Z position order (small to big)
    NSUInteger currentObjectIndex = [_imgArray indexOfObject:imageView];
    if (currentObjectIndex + 1 < _imgArray.count) {
        UIImageView *upperImageView = [_imgArray objectAtIndex:currentObjectIndex + 1];
        CGFloat currentZIndex = imageView.layer.zPosition;
        CGFloat upperZIndex = upperImageView.layer.zPosition;
        imageView.layer.zPosition = upperZIndex;
        upperImageView.layer.zPosition = currentZIndex;
        // swap position in array
        [_imgArray exchangeObjectAtIndex:currentObjectIndex withObjectAtIndex:currentObjectIndex + 1];
    }
}
And like I explained earlier, the result of this code on the phone screen is correct: the newImageView is in front of the lastImageView. But when I capture a screenshot via renderInContext, they are not.
Swift 4.2
The answer of @RameshVel worked for me after translating it to Swift; maybe someone needs this:
UIGraphicsBeginImageContext(drawingCanvas.frame.size)
self.renderInContext(ct: UIGraphicsGetCurrentContext()!)
guard let image = UIGraphicsGetImageFromCurrentImageContext() else {return}
UIGraphicsEndImageContext()
And in the renderInContext function:
func renderInContext(ct: CGContext) {
    ct.beginTransparencyLayer(auxiliaryInfo: nil)
    // ascending list of views by z-position, so the back-most renders first
    let orderedLayers = self.view.subviews.sorted(by: {
        $0.layer.zPosition < $1.layer.zPosition
    })
    for l in orderedLayers {
        l.layer.render(in: ct)
    }
    ct.endTransparencyLayer()
}
The same can be done with CALayers if you are drawing and adding layers:
UIGraphicsBeginImageContext(drawingCanvas.frame.size)
//Canvas is a custom UIView for drawing
self.renderInContext(ct: UIGraphicsGetCurrentContext()!, canvas: canvas)
guard let image = UIGraphicsGetImageFromCurrentImageContext() else {return}
UIGraphicsEndImageContext()
renderInContext function:
func renderInContext(ct: CGContext, canvas: CustomCanvas) {
    ct.beginTransparencyLayer(auxiliaryInfo: nil)
    let layers: [CALayer] = canvas.layer.sublayers!
    let orderedLayers = layers.sorted(by: {
        $0.zPosition < $1.zPosition
    })
    for v in orderedLayers {
        v.render(in: ct)
    }
    ct.endTransparencyLayer()
}
I had this issue as well. I wanted text to draw on top of an image after using renderInContext(). I solved this by positioning my views when adding them to the window rather than using the layer to set the z-position.
I used to have a text field that would set its z-position upward:
layer?.zPosition = 1
I removed this code and instead, when adding my image to the window, I used:
addSubview(image, positioned: .Below, relativeTo: nil)
This solved the problem.
I faced the same issue and it's a bummer.
I know this question was asked about 2 years ago, but here's the answer anyway in case it helps anyone.
Changing subviews' zPosition renders perfectly fine in the app but in fact affects renderInContext when used to create a screenshot. I don't know why; I assume it's a bug.
So in order to get the screenshot to appear correctly, as it renders in the app, we need to manually call renderInContext on all subviews in the correct order as specified by zPosition.
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[self renderInContext:ctx];
UIImage *screenShotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
and
- (void)renderInContext:(CGContextRef)ctx {
    CGContextBeginTransparencyLayer(ctx, NULL);
    // self.layers is a desc ordered list of views based on its zPosition
    for (UIView *view in self.layers) {
        [view.layer renderInContext:ctx];
    }
    CGContextEndTransparencyLayer(ctx);
}
This is an annoying bug of Apple's, but the solution is easy: simply reorder the subviews of the view you are snapshotting (and their subviews, and so on) recursively, based on their zPositions, using a comparator.
Just call the following method, passing the view you are about to snapshot, directly before snapshotting it.
/// UIGraphicsGetCurrentContext doesn't obey layer zPositions, so we have to adjust the view hierarchy (integer index) to match zPositions (float values) for all nested subviews.
- (void)recursivelyAdjustHeirarchyOfSubviewsBasedOnZPosition:(UIView *)parentView {
    NSArray *sortedSubviews = [[parentView subviews] sortedArrayUsingComparator:
        ^NSComparisonResult(UIView *view1, UIView *view2)
    {
        // Return a proper NSComparisonResult (a bare > comparison only
        // ever yields Same or Descending, which breaks the sort).
        float zpos1 = view1.layer.zPosition;
        float zpos2 = view2.layer.zPosition;
        if (zpos1 < zpos2) return NSOrderedAscending;
        if (zpos1 > zpos2) return NSOrderedDescending;
        return NSOrderedSame;
    }];
    for (UIView *childView in sortedSubviews) {
        [parentView bringSubviewToFront:childView];
        [self recursivelyAdjustHeirarchyOfSubviewsBasedOnZPosition:childView];
    }
}

Get an image from two imageViews in Swift

I'm developing an app where I have an area where there is an image in the background, and another image that I can move, like a sticker.
My goal is to create and save an image with the background image and the "sticker" above it, using Swift. Here's my function that allows me to do what I want (my background image view is called "imageView", my sticker image view is called "jacques"):
let newSize = CGSizeMake(imageView.image!.size.width, imageView.image!.size.height)
UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0);
imageView.image!.drawInRect(CGRectMake(0,0,newSize.width,newSize.height))
let jacquesX = ((jacques.frame.origin.x - imageView.frame.origin.x) * (imageView.image?.size.width)!) / UIScreen.mainScreen().bounds.width
let jacquesY = ((jacques.frame.origin.y - imageView.frame.origin.y) * (imageView.image?.size.height)!) / UIScreen.mainScreen().bounds.height
let jacquesWidth: CGFloat = jacques.image!.size.width
let jacquesHeight: CGFloat = jacques.image!.size.height
jacques.image!.drawInRect(CGRectMake(jacquesX, jacquesY, jacquesWidth, jacquesHeight))
let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(newImage, self, "image:didFinishSavingWithError:contextInfo:", nil)
Unfortunately, my sticker isn't the right size. I think it's well placed in the image, but its size is way too small. I don't know if my solution is the best one; I'm open to all suggestions. And I'm new, so if you have good practices to share, I'm all ears :)
You could do this in the storyboard. First you could size the picture however you want. To create a sticker effect you could just add the imageView, size it to the background size, then put jacques on the storyboard, and finally resize them.
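For what it's worth, the sticker likely comes out small because jacquesX/jacquesY are converted into the image's coordinate space while jacquesWidth/jacquesHeight are not. A sketch of scaling the size by the same ratio (reusing the question's names, and assuming jacques overlaps imageView on screen):
// Ratio between the backing image and the on-screen container.
let scaleX = imageView.image!.size.width / imageView.frame.width
let scaleY = imageView.image!.size.height / imageView.frame.height

// Convert the sticker's on-screen frame into image coordinates, size included.
let jacquesX = (jacques.frame.origin.x - imageView.frame.origin.x) * scaleX
let jacquesY = (jacques.frame.origin.y - imageView.frame.origin.y) * scaleY
let jacquesWidth = jacques.frame.width * scaleX
let jacquesHeight = jacques.frame.height * scaleY
jacques.image!.drawInRect(CGRectMake(jacquesX, jacquesY, jacquesWidth, jacquesHeight))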

Adding a layer of color over a UIImage

I was hoping to make my UIImage "highlight" briefly upon being tapped. Not sure of the color yet, but let's say blue for argument's sake. So you tap the image, it briefly looks blue, and then it navigates you to a details page to edit something on another screen.
From some initial reading it seems the right course of action is to use the Quartz framework and do this:
imageView.layer.backgroundColor = UIColor.blueColor().CGColor
imageView.layer.opacity = 0.7
I guess the idea would be you change the background of the layer behind the image, and then by setting the opacity of the image, the blue "bleeds through" a little bit, giving you a slightly blue image?
When I try the above, however, a blue border goes around the image itself, and based upon the opacity, the blue is either dark or light. The actual image does not become any more blue, but it does react to the opacity (meaning if I set it to something like .1, the image is very faded and barely visible). Why does the image react correctly, but not show blue?
Thanks so much!
As far as I know, changing the opacity will change the opacity of the WHOLE view, meaning not just the UIImage that the UIImageView holds. So instead of fading to reveal the UIImageView's background color, the opacity of the whole view is just decreased, as you're seeing.
Another way you could do it though would be to add an initially transparent UIView on top of your UIImageView and change its opacity instead:
UIView *blueCover = [[UIView alloc] initWithFrame:myImageView.frame];
blueCover.backgroundColor = [UIColor blueColor];
blueCover.layer.opacity = 0.0f;
[self.view addSubview:blueCover];
[UIView animateWithDuration:0.2f animations:^{
    blueCover.layer.opacity = 0.5f;
}];
Here's how I use tint and tint opacities in iOS 9 with Swift:
// apply a color to an image
// ref - http://stackoverflow.com/questions/28427935/how-can-i-change-image-tintcolor
// ref - https://www.captechconsulting.com/blogs/ios-7-tutorial-series-tint-color-and-easy-app-theming
func getTintedImage() -> UIImageView {
    var image: UIImage
    var imageView: UIImageView
    image = UIImage(named: "someAsset")!
    let size: CGSize = image.size
    let frame: CGRect = CGRectMake((UIScreen.mainScreen().bounds.width - 86) / 2, 600, size.width, size.height)
    let redCover: UIView = UIView(frame: frame)
    redCover.backgroundColor = UIColor.redColor()
    redCover.layer.opacity = 0.75
    imageView = UIImageView()
    imageView.image = image.imageWithRenderingMode(UIImageRenderingMode.Automatic)
    imageView.addSubview(redCover)
    return imageView
}
Is this tap highlight perhaps something you could do with a UIButton? UIButton has all these states off the bat and might be a bit easier to work with, especially if there's something that actually needs to happen after you tap it. Worst-case scenario, you have a UIButton that does not trigger any method when tapped.
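A minimal sketch of that suggestion (the asset names are placeholders):
// UIButton swaps images per state automatically on touch.
let button = UIButton(type: .Custom)
button.setImage(UIImage(named: "photo"), forState: .Normal)
button.setImage(UIImage(named: "photo-blue"), forState: .Highlighted)
// Or let UIKit dim the image while the tap is held down:
button.adjustsImageWhenHighlighted = true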
You can try changing the tint color instead:
UIImage *image = [UIImage imageNamed:@"yourImageAsset"];
image = [image imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
imageView.tintColor = [UIColor blueColor];
imageView.image = image;
