View offset by custom navigationBar setBackgroundImage - iOS

I set a custom navigationBar via KVO in iOS 8, and call setBackgroundImage: on that custom navigationBar.
I found that viewController.view.frame.origin.y is 64, where viewController is the navigation controller's rootViewController.
Why is viewController.view.frame.origin.y 64 in iOS 8?
The following is the demo code:
@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    UINavigationBar *temp = [[UINavigationBar alloc] init];
    [temp setBackgroundImage:[UIImage imageNamed:@"navbar_bg"] forBarMetrics:UIBarMetricsDefault];
    // replace the navigation controller's bar via KVO
    [self.navigationController setValue:temp forKey:@"navigationBar"];
}

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    NSLog(@"view : %@", self.view); // prints: <UIView: 0x7ff8fa72cfa0; frame = (0 64; 375 603); autoresize = RM+BM; layer = <CALayer: 0x7ff8fa72b2b0>>
}
If I remove [temp setBackgroundImage:[UIImage imageNamed:@"navbar_bg"] forBarMetrics:UIBarMetricsDefault];, then view.origin.y is 0 as expected.
What should I do to set a custom navigationBar with setBackgroundImage: while keeping self.view.origin at (0, 0)?

Please check the image assets.
The image "navbar_bg@2x" should exist in the assets if you are testing on a device or simulator with a UIKit scale factor of 2.0, such as an iPhone 8.
The image "navbar_bg@3x" should exist if you are testing on an iPhone 8 Plus, and so on.
It should then be OK, according to Apple's demo code Customizing Your App's Navigation Bar.
There are two cases in which your issue happens.
Case one: in my experiment it does not work when converting a color to an image.
At first I thought it was an image size issue.
According to Debug View Hierarchy, the BackgroundImageView's size is 414 x 88, tested in the iPhone XR simulator.
While debugging, I thought maybe the image should be smaller than the image view's size.
It still did not work after many rounds of adjusting the image size.
The code for converting a color to an image:
public static func color(_ color: UIColor, width w: CGFloat, height h: CGFloat) -> UIImage {
    let rect = CGRect(x: 0.0, y: 0.0, width: w, height: h)
    UIGraphicsBeginImageContextWithOptions(rect.size, false, UIScreen.main.scale)
    let ctx = UIGraphicsGetCurrentContext()
    ctx?.setFillColor(color.cgColor)
    ctx?.fill(rect)
    var image = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    // round-trip through JPEG data (suspected to change the color space; see the discussion below)
    let imageData = UIImageJPEGRepresentation(image, 1.0)!
    image = UIImage(data: imageData)!
    return image
}
Case two: the @2x image ("navbar_bg@2x") doesn't exist in the assets when testing in the iPhone XR simulator, which has a UIKit scale factor of 2.0.
In my experiment it doesn't work in that case either.
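To narrow down which of the two cases applies, here is a small diagnostic sketch in Swift (the checks and names are mine, not from the answer) that logs what UIKit actually loaded:

// Diagnostic sketch: which variant did UIKit load, and in what color space is it?
if let bg = UIImage(named: "navbar_bg") {
    // scale should match the device's UIKit scale factor (2.0 or 3.0);
    // a scale of 1.0 suggests the @2x/@3x variant is missing from the assets (case two)
    print("scale: \(bg.scale), point size: \(bg.size)")
    // a grayscale model instead of RGB points at the converted-color case (case one)
    let model = bg.cgImage?.colorSpace?.model
    print("rgb color space: \(model == .rgb)")
} else {
    print("navbar_bg not found in the asset catalog")
}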

You can set the title view of navigationController.navigationItem instead of setting up a UINavigationBar and injecting it with KVO.
- (void)viewDidLoad {
    [super viewDidLoad];
    UIImageView *img = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"navbar_bg"]];
    // Here you can create your own custom view and provide it as the title view
    // of self.navigationController.navigationItem
    [self.navigationController.navigationItem setTitleView:img];
}

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    NSLog(@"view : %@", self.view); // prints: <UIView: 0x79ea0a30; frame = (0 0; 375 603); autoresize = W+H; layer = <CALayer: 0x79ea0ac0>>
}

I did an experiment too.
I think the problem is caused by the color space.
Following the answer above, I created three white images with ImageMagick
and used the white @2x image in the iPhone XR simulator.
The issue happened there too.
The example image from Apple's demo code Customizing Your App's Navigation Bar is sRGB, while my color image is simply grayscale.
I think that is the difference.
From the ImageMagick community:
-colorspace changes the way the image is stored in memory.
-set colorspace just sets the colorspace of the image without conversion.
Many image formats (like JPG) do not use different colorspaces without using some type of color profile; as such, the image is converted back to the RGB colorspace unless a color profile has been defined.
That is why you don't see a difference. It is only different IN MEMORY!
Maybe iOS did some optimization because of this.
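If the grayscale-versus-sRGB difference really is the cause, one hedged workaround is to generate the solid-color image with UIGraphicsImageRenderer (iOS 10+), which tags the result with the sRGB color space and avoids the JPEG round-trip entirely; a sketch:

import UIKit

// Sketch: solid-color image rendered in sRGB, no JPEG re-encode
func solidImage(_ color: UIColor, size: CGSize = CGSize(width: 1, height: 1)) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        color.setFill()
        context.fill(CGRect(origin: .zero, size: size))
    }
}

// hypothetical usage:
// navigationController?.navigationBar.setBackgroundImage(solidImage(.red), for: .default)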

UINavigationBar Custom Color Hairline Border

First of all, I did search this before posting this question but if there is an answer out there, it is buried under the millions of questions about how to remove the default bottom border of a navigation bar.
I do NOT want to remove the bottom border ("shadow") of the navigation bar.
I am trying to "theme" my app by the usual method of using appearance proxies.
I can globally change most visual attributes of UINavigationBar with code like the following:
let navigationBarProxy = UINavigationBar.appearance()
navigationBarProxy.isTranslucent = false
navigationBarProxy.barTintColor = myBarBackgroundColor
navigationBarProxy.tintColor = myBarTextColor
Regarding the 'hairline' bottom border of the bar (or as it is known, the "shadow"), I can either set the default one by doing nothing or specifying nil:
navigationBarProxy.shadowImage = nil
...or I can specify a custom color by assigning a solid image of the color I'm after:
navigationBarProxy.shadowImage = UIImage.withColor(myBorderColor)
(uses helper extension:)
extension UIImage {
    public static func withColor(_ color: UIColor?, size: CGSize = CGSize(width: 1, height: 1)) -> UIImage? {
        let rect = CGRect(origin: CGPoint.zero, size: size)
        UIGraphicsBeginImageContext(rect.size)
        let context = UIGraphicsGetCurrentContext()
        let actualColor = color ?? .clear
        context?.setFillColor(actualColor.cgColor)
        context?.fill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
However, the approach above gives me (on retina devices) a 1pt, 2px border, whereas the default, light gray one is actually 0.5pt, 1px (a.k.a. "hairline").
Is there any way to achieve a 0.5 pt (1px), custom-colored bottom border (shadow) for UINavigationBar?
I guess I could use a runtime generated, background image that is for the most part solid, but has a 1px border of my choice color "baked in" at the bottom. But this seems inelegant at best, and I'm not sure how it would work when that navigation bar height changes: is the image sliced, or simply stretched, or what?
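For reference, here is a hedged Swift sketch of that runtime-generated background idea: a mostly solid image with the hairline baked into the bottom edge, made resizable with cap insets so only the solid part stretches when the bar height changes (the helper name and usage are hypothetical):

import UIKit

// Sketch: bar background with a 1px hairline pinned to the bottom edge
func barBackground(fill: UIColor, hairline: UIColor) -> UIImage {
    let hairlineHeight = 1 / UIScreen.main.scale   // 1px in points (0.5 on @2x, ~0.33 on @3x)
    let size = CGSize(width: 3, height: 3)         // tiny template image
    let image = UIGraphicsImageRenderer(size: size).image { context in
        fill.setFill()
        context.fill(CGRect(origin: .zero, size: size))
        hairline.setFill()
        context.fill(CGRect(x: 0, y: size.height - hairlineHeight,
                            width: size.width, height: hairlineHeight))
    }
    // cap insets keep the hairline at the bottom; only the middle stretches,
    // so a taller bar gets more solid fill, never a thicker hairline
    return image.resizableImage(withCapInsets: UIEdgeInsets(top: 1, left: 1, bottom: 1, right: 1),
                                resizingMode: .stretch)
}

// hypothetical usage with the appearance proxy from above:
// UINavigationBar.appearance().setBackgroundImage(barBackground(fill: myBarBackgroundColor,
//                                                               hairline: myBorderColor),
//                                                 for: .default)
// UINavigationBar.appearance().shadowImage = UIImage()   // suppress the default shadow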
Based on the chosen answer found here (with small changes because it was old):
How to change the border color below the navigation bar?
// in viewDidLoad
UIView *navBorder = [[UIView alloc] initWithFrame:CGRectMake(0,
                     self.navigationController.navigationBar.frame.size.height, // <-- same height, not - 1
                     self.navigationController.navigationBar.frame.size.width,
                     1 / [UIScreen mainScreen].scale)]; // <-- 5/5S/SE/6 will be 0.5, 6+/X will be 0.33
// custom color here
[navBorder setBackgroundColor:customColor];
[self.navigationController.navigationBar addSubview:navBorder];
Credit here for finding scale programmatically:
Get device image scale (e.g. @1x, @2x and @3x)
*NOTE:
iPhone 6+/X are @3x, so a 1px height will be 0.33pt
iPhone 5/5S/SE/6 are @2x, so a 1px height will be 0.5pt
Tested in simulator, may need to verify on actual devices.
Visually same as default nav bar with a custom color.
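A Swift variant of the same idea using Auto Layout, so the hairline tracks the bar if its width or height changes (hedged sketch; customColor is whatever color you are theming with):

// Sketch: pin a 1px view to the bottom edge of the navigation bar
if let bar = navigationController?.navigationBar {
    let border = UIView()
    border.backgroundColor = customColor
    border.translatesAutoresizingMaskIntoConstraints = false
    bar.addSubview(border)
    NSLayoutConstraint.activate([
        border.leadingAnchor.constraint(equalTo: bar.leadingAnchor),
        border.trailingAnchor.constraint(equalTo: bar.trailingAnchor),
        border.topAnchor.constraint(equalTo: bar.bottomAnchor),
        border.heightAnchor.constraint(equalToConstant: 1 / UIScreen.main.scale)
    ])
}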
I believe you want to remove the shadow. This should help with that.
[[UINavigationBar appearance] setShadowImage:[UIImage new]];
If you want a differently coloured shadow, you can create an image with your desired colour and use it instead of
[UIImage new]
You can use something like this to generate the image yourself:
+ (UIImage *)imageWithColor:(UIColor *)color {
    CGRect rect = CGRectMake(0.0f, 0.0f, 1.0f, 1.0f);
    const CGFloat alpha = CGColorGetAlpha(color.CGColor);
    const BOOL opaque = alpha == 1;
    UIGraphicsBeginImageContextWithOptions(rect.size, opaque, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
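A short usage sketch (Swift, reusing the question's UIImage.withColor helper and its color variables): per the UINavigationBar documentation, a custom shadow image is only displayed when a custom background image is set as well, so the two usually go together.

// Sketch: colored shadow via the appearance proxy
let proxy = UINavigationBar.appearance()
proxy.shadowImage = UIImage.withColor(myBorderColor)
// the custom shadow image is ignored unless a custom background image is also set
proxy.setBackgroundImage(UIImage.withColor(myBarBackgroundColor), for: .default)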

Fit image in UIImageView using UIViewContentModeScaleAspectFit

I'm facing a really weird problem with UIImageView. I was trying to set an image, created by taking a screenshot of the current view, on an image view whose content mode is UIViewContentModeScaleAspectFit.
It worked fine when I set the image in Interface Builder in the xib file, or when I set an image created with [UIImage imageNamed:]. Both worked fine with UIViewContentModeScaleAspectFit.
But when I take a snapshot of a view and set that image on the image view, the image does not fit the UIImageView. I've tried all the solutions I found on here, like clipsToBounds = YES, but they didn't work at all. I'm really confused by now.
Here's the code where I take the screenshot and create the UIImage:
- (UIImage *)screenshotWithRect:(CGRect)captureRect
{
    CGFloat scale = [[UIScreen mainScreen] scale];
    UIImage *screenshot;
    UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, scale);
    CGContextClipToRect(UIGraphicsGetCurrentContext(), captureRect);
    {
        if (UIGraphicsGetCurrentContext() == nil)
        {
            NSLog(@"UIGraphicsGetCurrentContext is nil. You may have a UIView (%@) with no real frame (%@)", [self class], NSStringFromCGRect(self.frame));
        }
        else
        {
            [self.layer renderInContext:UIGraphicsGetCurrentContext()];
            screenshot = UIGraphicsGetImageFromCurrentImageContext();
        }
    }
    UIGraphicsEndImageContext();
    return screenshot;
}
And here is where I set the image on the image view:
// start snapshot
UIView *superView = [self.view superview];
CGRect cutRect = [superView convertRect:self.cutView.frame fromView:_viewToCut];
UIImage *snap = [superView screenshotWithRect:cutRect];
[self.view addSubview:self.editCutFrameView];
// end snapshot -> show edit view
[self.editCutFrameView setImage:snap];
Here's a picture comparing the 2 results:
Many thanks for your help.
UPDATE: As @Saheb Roy mentioned the size, I checked the image size: it's about 400x500 px, and thumbnail.png's size is 512x512 px, so I don't think it's about the size of the image.
This is because in the second case the snapshot image itself is exactly that size, as you can see, so the image is not being stretched or fitted.
The earlier images fit the screen because they were bigger than the image view (whether with a different aspect ratio or the same one).
But in the case where it does not fit the image view, the image itself is only that big, i.e. smaller than the image view, and hence it is not being fitted to the bounds.
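One way to check this explanation is to log the snapshot's point size against the image view's bounds right after assigning it; a Swift sketch using the names from the question (snap, editCutFrameView):

// Sketch: what does aspect-fit actually have to work with?
print("snapshot: \(snap.size) at scale \(snap.scale)")
print("image view bounds: \(editCutFrameView.bounds.size)")
// note: any transparent padding baked into the snapshot (e.g. from clipping inside a
// full-view-sized context) counts as part of the image when it is fitted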

Place a full sized image to fit the entire screen in a CALayer

I have an image (png) which must fill the entire screen of my app. I'm using CALayers and doing everything programmatically, and although this sounds like something that should be trivial, I can't get it to work. I have two versions of the image: a retina version (2048px x 1536px) and a non-retina version (1024px x 768px). The image is listed as a universal image in the asset catalogue.
The code is simple enough I think:
// CREATE FULL SCREEN CALAYER
CALayer *myLayer = [[CALayer alloc] init];
[myLayer setBounds:CGRectMake(0, 0, bounds.size.width, bounds.size.height)];
[myLayer setPosition:CGPointMake(bounds.size.width/2, bounds.size.height/2)];
[self.view.layer addSublayer:myLayer];
// LOAD THE IMAGE INTO THE LAYER —— AM EXPECTING IT TO FILL THE LAYER
UIImage *layerImage = [UIImage imageNamed:@"infoScreen"];
CGImageRef image = [layerImage CGImage];
[myLayer setContents:(__bridge id)image];
[myLayer setContentsGravity:kCAGravityCenter]; /* IT WORKS FINE IF I USE setContentsGravity:kCAGravityResizeAspectFill */
This code works fine on a non-retina iPad. However, on the retina iPad the image is always loaded at twice its actual size (so it appears zoomed in). I'm using the Simulator and iOS 8. What am I doing wrong?
Begin your image processing with
func UIGraphicsBeginImageContextWithOptions(size: CGSize, opaque: Bool, scale: CGFloat)
The last parameter in the above function determines the scaling for the graphics. You can set
this value by retrieving the scale property of the main screen. In Swift I would do it this way:
var screen = UIScreen.mainScreen()
var scale = screen.scale
Hope it helps.
Edit: code for doing this in Swift; you can modify it to suit your needs (yourImage stands in for the image you are drawing):
UIGraphicsBeginImageContextWithOptions(rect.size, true, 0.0) // scale 0.0 means the main screen's scale
yourImage.drawInRect(rect)
let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
I had this same problem, was solved by setting the contentsScale value on the CALayer - for some reason the default scale on CALayers is always 1.0, even on Retina devices.
i.e.
layer.contentsScale = [UIScreen mainScreen].scale;
Also, if you're drawing a shape using CAShapeLayer and wondering why its edges look a little jagged on retina devices, try:
shapeLayer.rasterizationScale = [UIScreen mainScreen].scale;
shapeLayer.shouldRasterize = YES;
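Putting the two points together, here is a Swift sketch of the question's layer setup with the scale applied (infoScreen is the asset name from the question):

// Sketch: full-screen layer whose contents are rendered at the device's scale
let bounds = view.bounds
let imageLayer = CALayer()
imageLayer.frame = bounds
imageLayer.contents = UIImage(named: "infoScreen")?.cgImage
imageLayer.contentsGravity = .center                    // same gravity as in the question
imageLayer.contentsScale = UIScreen.main.scale          // default is 1.0 even on retina
view.layer.addSublayer(imageLayer)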

Translucent Modal ViewController - how to handle rotation

I would like to display a UIViewController modally and be able to see a blurred version of the view that presented it.
Following a number of similar questions such as this:
iOS 7 Translucent Modal View Controller
I have added a background to my controller's view that is based on the captured view of the presenting controller. The problem I am facing is that my app supports multiple orientations and when the modal view is presented and rotated, the underlying background image no longer matches.
I tried grabbing a fresh snapshot of the presenting viewController in didRotateFromInterfaceOrientation: of the modal viewController, but it appears that the UI of the presenting viewController is not being updated and the resulting image is still the wrong orientation. Is there any way to force redrawing of a view that is being hidden by the modal one?
After long consideration, I have come up with a passable way to handle it. How well it works will depend a bit on the type of content you have in the presenting viewController.
The general idea is to take not one, but two screenshots before presenting a new viewController - one for portrait, one for landscape. This is achieved by changing the frames of the top viewController and navigation bar (if applicable) to emulate a different orientation, taking the screenshot of the result, and changing it back. The user never sees this change on device, but the screen grab still displays a new orientation.
The exact code will depend on where you are calling it from, but the main logic is the same. My implementation runs from AppDelegate because it is reused by several subclasses of UIViewController.
The following is the code that will grab the appropriate screenshots.
// get references to the views you need a screenshot of
// this may vary depending on your app hierarchy
UIView *container = [self.window.subviews lastObject]; // UILayoutContainerView
UIView *subview = container.subviews[0]; // UINavigationTransitionView
UIView *navbar = container.subviews[1]; // UINavigationBar
CGSize originalSubviewSize = subview.frame.size;
CGSize originalNavbarSize = navbar.frame.size;
// compose the current view of the navbar and subview
UIImage *currentComposed = [self composeForeground:navbar withBackground:subview];
// rotate the navbar and subview
subview.frame = CGRectMake(subview.frame.origin.x, subview.frame.origin.y, originalSubviewSize.height, originalSubviewSize.width);
// the navbar has to match the width of the subview, height remains the same
navbar.frame = CGRectMake(navbar.frame.origin.x, navbar.frame.origin.y, originalSubviewSize.height, originalNavbarSize.height);
// compose the rotated view
UIImage *rotatedComposed = [self composeForeground:navbar withBackground:subview];
// change the frames back to normal
subview.frame = CGRectMake(subview.frame.origin.x, subview.frame.origin.y, originalSubviewSize.width, originalSubviewSize.height);
navbar.frame = CGRectMake(navbar.frame.origin.x, navbar.frame.origin.y, originalNavbarSize.width, originalNavbarSize.height);
// assign the variables depending on actual orientations
UIImage *landscape; UIImage *portrait;
if (originalSubviewSize.height > originalSubviewSize.width) {
    // current orientation is portrait
    portrait = currentComposed;
    landscape = rotatedComposed;
} else {
    // current orientation is landscape
    portrait = rotatedComposed;
    landscape = currentComposed;
}
CustomTranslucentViewController *vc = [CustomTranslucentViewController new];
vc.backgroundSnap = portrait;
vc.backgroundSnapLandscape = landscape;
[rootVC presentViewController:vc animated:YES completion:nil];
The method composeForeground:withBackground: is a convenience method that generates an appropriate background image based on two input views (navigation bar + view controller). Aside from composing the two view together, it does a bit more magic to make the result look more natural when rotating the presented viewController. Specifically, it extends the screenshot to a 1024x1024 square and fills the extra space with a mirrored copy of the composed image. In many cases, once blurred this looks good enough since the animation of the views re-drawing for the orientation change is not available.
- (UIImage *)composeForeground:(UIView *)frontView withBackground:(UIView *)backView {
    UIGraphicsBeginImageContextWithOptions(backView.frame.size, NO, 0);
    [backView.layer renderInContext:UIGraphicsGetCurrentContext()];
    // translation is necessary to account for the extra 20 taken up by the status bar
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), frontView.frame.origin.x, frontView.frame.origin.y);
    [frontView.layer renderInContext:UIGraphicsGetCurrentContext()];
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -frontView.frame.origin.x, -frontView.frame.origin.y);
    // this is the core image, would have left it at this if we did not need to use fancy mirrored tiling
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // add mirrored sections
    CGFloat addition = 256; // 1024 - 768
    if (newImage.size.height > newImage.size.width) {
        // portrait, add a mirrored image on the right
        UIImage *horizMirror = [[UIImage alloc] initWithCGImage:newImage.CGImage scale:newImage.scale orientation:UIImageOrientationUpMirrored];
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(newImage.size.width + addition, newImage.size.height), NO, 0);
        [horizMirror drawAtPoint:CGPointMake(newImage.size.width, 0)];
    } else {
        // landscape, add a mirrored image at the bottom
        UIImage *vertMirror = [[UIImage alloc] initWithCGImage:newImage.CGImage scale:newImage.scale orientation:UIImageOrientationDownMirrored];
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(newImage.size.width, newImage.size.height + addition), NO, 0);
        [vertMirror drawAtPoint:CGPointMake(0, newImage.size.height)];
    }
    // combine the mirrored extension with the original image
    [newImage drawAtPoint:CGPointZero];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // for iOS 6, crop off the top 20px
    if (SYSTEM_VERSION_LESS_THAN(@"7")) {
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(newImage.size.width, newImage.size.height - 20), NO, 0);
        [newImage drawAtPoint:CGPointMake(0, -20)];
        newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return newImage;
}
The resulting landscape and portrait images can be blurred and tinted as desired, and set as the background for the presented viewController. Use the willRotateToInterfaceOrientation:duration: method of that viewController to select the appropriate image (see the sketch after the note below).
Note: I have tried to reduce the amount of work done on images and graphics contexts as much as possible, but there is still a slight delay when generating the background (around 30-90 ms per composeForeground:withBackground: iteration, depending on the content, on a vintage slow iPad 2). If you know of a way to further optimize or simplify the above solution, please share!
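To close the loop, here is a sketch of the orientation hook inside CustomTranslucentViewController (Swift; it assumes a backgroundImageView property showing the snapshot, and uses the same pre-size-classes rotation callback as the rest of this answer):

// Sketch: swap in the pre-rendered background that matches the new orientation
override func willRotate(to toInterfaceOrientation: UIInterfaceOrientation,
                         duration: TimeInterval) {
    super.willRotate(to: toInterfaceOrientation, duration: duration)
    // backgroundImageView: assumed image view displaying the blurred snapshot
    backgroundImageView.image = toInterfaceOrientation.isLandscape
        ? backgroundSnapLandscape
        : backgroundSnap
}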

resizableImageWithCapInsets images always misaligned?

Is it possible to create a properly aligned UIImageView that has a resizable image?
I tried everything (e.g. images power of two, etc) and cannot make it work.
The coordinates of the UIImageView are:
po _Background
(UIImageView *) $1 = 0x14c42bd0
<UIImageView: 0x14c42bd0;
frame = (16 38; 992 672);
opaque = NO;
autoresize = LM+RM+TM+BM;
userInteractionEnabled = NO;
layer = <CALayer: 0x14c42ba0>> - (null)
The UIImage itself is 96x96 (retina, i.e. png is 192x192 with a scale of 2). The resizable UIImage is created using:
UIEdgeInsets edgeInsets = UIEdgeInsetsMake(32, 32, 32, 32);
UIImage* resizableImage = [originalImage resizableImageWithCapInsets:edgeInsets];
[_Background setImage:resizableImage];
When turning on 'Color misaligned images' in the simulator the UIImageView _Background is highlighted with yellow. Removing its UIImage in the debugger:
[_Background setImage:nil];
removes the yellow highlight, i.e. no image -> no mis-alignment.
Anyone knows what is going on and how I can make sure it aligns?
thanks.
According to this answer: What does yellow tinting represent when using "color misaligned images" on iPhone/iOS, the yellow highlight indicates the image is stretched. Since that's what you're asking for with the resizable image, it makes sense that it's always the case.
