Change UIImage render mode at runtime - iOS

In an app I am working on I have some UIImageViews that may (or may not) need to be customised. How can I change the rendering mode of an image that was loaded as original to template at runtime?

You can initialize a new UIImage from the original image's CGImage, then render it with whatever mode you need.
Example code (Swift)
let originalImage = UIImage(named: "TimeClock2Filled")?.withRenderingMode(.alwaysOriginal)
if let original = originalImage?.cgImage {
    let image2 = UIImage(cgImage: original).withRenderingMode(.alwaysTemplate)
}
Example code (Objective-C)
UIImage *image = [[UIImage imageNamed:@"TimeClock2Filled"] imageWithRenderingMode:UIImageRenderingModeAlwaysOriginal];
if (image.CGImage != nil) {
    UIImage *image2 = [[UIImage imageWithCGImage:image.CGImage] imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
}
This works just fine; it was tested.
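For context, a template image discards its own colors and is drawn in the tintColor of the view displaying it. A minimal Swift usage sketch (the image name is the one above; iconView is a placeholder):
let icon = UIImage(named: "TimeClock2Filled")?.withRenderingMode(.alwaysTemplate)
let iconView = UIImageView(image: icon)
iconView.tintColor = .red // the template image now renders in solid red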

Related

Get the vertical length of a URL image in Swift 4 [duplicate]

I'm using SDWebImage to show images inside cells, but the image just gets stretched to match the frame of the UIImageView that I set in the code below:
NSString *s = [NSString stringWithFormat:@"url of image to show"];
NSURL *url = [NSURL URLWithString:s];
[cell.shopImageView sd_setImageWithURL:url];
My UIImageView is size 50x50.
For example, an image from a URL is 990x2100, and it does not display well in the given frame.
In this case, where the height is bigger, I want to resize my image by the proper height ratio to match the width of 50.
Is there a way to check an image's size from a URL without downloading it and allocating memory in an awful way?
You can fetch this data from the image's headers; in Swift 3.0, use the code below:
import ImageIO

if let imageSource = CGImageSourceCreateWithURL(url! as CFURL, nil) {
    if let imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as Dictionary? {
        let pixelWidth = imageProperties[kCGImagePropertyPixelWidth] as! Int
        let pixelHeight = imageProperties[kCGImagePropertyPixelHeight] as! Int
        print("the image width is: \(pixelWidth)")
        print("the image height is: \(pixelHeight)")
    }
}
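Once you have the pixel dimensions, you can compute the height that matches the 50-point width before any full image is decoded; a small sketch (targetWidth is an assumed name):
let targetWidth: CGFloat = 50
let scaledHeight = targetWidth * CGFloat(pixelHeight) / CGFloat(pixelWidth)
// e.g. a 990x2100 image gives 50 x ~106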
Try messing with the different contentMode options to get the look you want. One example could be cell.shopImageView.contentMode = UIViewContentModeScaleAspectFit; that will make the image fit nicely, but won't actually resize the image view.
Here are your contentMode options: UIViewContentModes
An alternative could be something along the lines of this:
NSData *data = [[NSData alloc] initWithContentsOfURL:URL];
UIImage *image = [[UIImage alloc] initWithData:data];
CGFloat height = image.size.height;
CGFloat width = image.size.width;
You can then set your imageView height/width depending on the ratio of the image's height/width.
I don't know how to get the size of your image from the URL without downloading the image.
But I can give you a code snippet that makes your UIImageView's frame proportional after the image is downloaded.
NSData *data = [[NSData alloc] initWithContentsOfURL:URL]; // -- avoid this.
If you use the above method to download the image, it will block your UI, so please avoid it.
[cell.shopImageView ....]; // -- avoid this method.
As you're using SDWebImage, I guess it has dedicated methods to download the image first, so use one of those instead of the UIImageView category method you used above.
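For example, the same category you're already calling has a variant that takes a completion block handing you the downloaded image; a sketch in Swift, assuming a recent SDWebImage version:
cell.shopImageView.sd_setImage(with: url) { image, error, cacheType, imageURL in
    guard let image = image else { return }
    // image.size is now known; resize the image view here
}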
After you've downloaded the image, try something like the snippet below.
Code snippet
Assume the image has been downloaded into 'theImage', and 'imageView' is your cell's image view.
float imageRatio = theImage.size.width / theImage.size.height;
float widthWithMaxHeight = imageView.frame.size.height * imageRatio;
float finalWidth, finalHeight;
if (widthWithMaxHeight > imageView.frame.size.width) {
    // Too wide: pin to the frame width and scale the height down
    finalWidth = imageView.frame.size.width;
    finalHeight = imageView.frame.size.width / imageRatio;
} else {
    // Fits: pin to the frame height and scale the width
    finalHeight = imageView.frame.size.height;
    finalWidth = imageView.frame.size.height * imageRatio;
}
// xOffset/yOffset are your layout origin
[imageView setFrame:CGRectMake(xOffset, yOffset, finalWidth, finalHeight)];
[imageView setImage:theImage];
let imageArray = ((arrFeedData[indexPath.row] as AnyObject).value(forKey: "social_media_images") as? NSArray)!
var imageURL: String = (imageArray[0] as? String)!
imageURL = imageURL.addingPercentEscapes(using: String.Encoding.ascii)!
let imageUrl = URL(string: imageURL)
if let imageData = try? Data(contentsOf: imageUrl!), let image = UIImage(data: imageData) {
    print("image height: \(image.size.height)")
    print("image width: \(image.size.width)")
}

iOS 10: CIKernel's ROI function did not allow tiling

In my iPhone app, I've always used the following function to horizontally mirror an image.
-(UIImage *)mirrorImage:(UIImage *)img
{
    CIImage *coreImage = [CIImage imageWithCGImage:img.CGImage];
    coreImage = [coreImage imageByApplyingTransform:CGAffineTransformMakeScale(-1, 1)];
    img = [UIImage imageWithCIImage:coreImage scale:img.scale orientation:UIImageOrientationUp];
    return img;
}
With iOS 10.0.1 though, this function still runs with no errors, but when I try to use the UIImage from this function, the following warning appears, and the image just doesn't seem to be there.
Failed to render 921600 pixels because a CIKernel's ROI function did not allow tiling.
This error actually appears in the output window when I attempt to use the UIImage (on the second line of this code):
UIImage* flippedImage = [self mirrorImage:originalImage];
UIImageView* photo = [[UIImageView alloc] initWithImage:flippedImage];
After calling mirrorImage, the flippedImage variable does contain a value, it's not nil, but when I try to use the image, I get that error message.
If I were to not call the mirrorImage function, then the code works fine:
UIImageView* photo = [[UIImageView alloc] initWithImage:originalImage];
Is there some new quirk in iOS 10 that would prevent my mirrorImage function from working?
Just to add: in the mirrorImage function, I tried checking the size of the image before and after the transformation (since the error complains about having to tile the image), and the sizes are identical.
I fixed it by converting CIImage -> CGImage -> UIImage:
let ciImage: CIImage = myCIImage // your CIImage, e.g. the transformed coreImage from above
let cgImage: CGImage = {
    let context = CIContext(options: nil)
    return context.createCGImage(ciImage, from: ciImage.extent)!
}()
let uiImage = UIImage(cgImage: cgImage)
Never mind.
I don't know what iOS 10 has broken, but I managed to fix the problem by replacing my function with this:
-(UIImage *)mirrorImage:(UIImage *)img
{
    UIImage *flippedImage = [UIImage imageWithCGImage:img.CGImage
                                                scale:img.scale
                                          orientation:UIImageOrientationUpMirrored];
    return flippedImage;
}
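For reference, the same fix in Swift is a short sketch (assuming img has a backing CGImage):
func mirrorImage(_ img: UIImage) -> UIImage {
    guard let cg = img.cgImage else { return img }
    return UIImage(cgImage: cg, scale: img.scale, orientation: .upMirrored)
}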

difference between UIImageView.image = ... and UIImageView.layer.contents = ...

Two ways of setting a UIImage on a UIImageView:
First:
self.imageview.image = [UIImage imageNamed:@"clothing.png"];
Second:
self.imageview.layer.contents = (__bridge id _Nullable)([[UIImage imageNamed:@"clothing.png"] CGImage]);
What is the difference between the two ways?
Which one is better?
In fact, what I want to do is display part of one PNG in a UIImageView.
There are two ways:
First:
UIImage *image = [UIImage imageNamed:@"clothing.png"];
CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, rect); // rect is in pixels here
self.imageview.image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // CGImageCreateWithImageInRect follows the Create rule
Second:
self.imageview2.layer.contents = (__bridge id _Nullable)([[UIImage imageNamed:@"clothing.png"] CGImage]); // way 2
self.imageview2.layer.contentsRect = rect; // note: contentsRect is in unit coordinates (0..1), not pixels
Which one is better? Why? Thanks!
First:
self.imageview.image = [UIImage imageNamed:@"clothing.png"];
With this you can assign your image directly to any UIImageView, while in the second:
self.imageview.layer.contents = (__bridge id _Nullable)([[UIImage imageNamed:@"clothing.png"] CGImage]);
you cannot assign the image directly to the layer; you need to put a CGImage into it.
So the first is best. Thank you.
The first option is better; the __bridge cast is only there to hand the CGImage across the Core Foundation/Objective-C boundary under ARC.
Of course the better way is the first one :
self.imageview.image = [UIImage imageNamed:#"clothing.png"];
Indeed, this is what a UIImageView is built for.
For clarification, every view (UIView, UIImageView, etc.) has a .layer property that holds its visual content; here you're putting an image into it directly. You could achieve the same result with a plain UIView, but in terms of performance and clarity you should use the .image property.
Edit :
Even with your new edit, the first option is still the better one.
You can do that, but then it's your responsibility to modify the image so that image.CGImage draws correctly with respect to imageOrientation, contentMode, and so on. If you set the image with imageView.image = image;, it's Apple's responsibility to do that.
Here is an example that causes the problem:
Because image.CGImage doesn't carry the image orientation, if you set it directly and the source image is not UIImageOrientationUp, the image will be drawn rotated, unless you "fix the orientation" of the source image like this. (You can get an image that is not UIImageOrientationUp just by taking a photo with your iPhone.)
// UIImage category method: redraws the image so the result's orientation is UIImageOrientationUp
- (UIImage *)fixOrientation {
    if (self.imageOrientation == UIImageOrientationUp) return self;
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [self drawInRect:(CGRect){0, 0, self.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}
Here are two screenshots: the first sets image.CGImage directly on layer.contents, the second sets the image via imageView.image.
So always use imageView.image unless you know what you're doing.
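One subtlety if you do take the layer route: layer.contentsRect is measured in the unit coordinate space (0...1), unlike CGImageCreateWithImageInRect, which works in pixels. A Swift sketch showing the top-left quarter of the image (imageView is a placeholder):
if let cg = UIImage(named: "clothing.png")?.cgImage {
    imageView.layer.contents = cg
    imageView.layer.contentsRect = CGRect(x: 0, y: 0, width: 0.5, height: 0.5)
}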

Screenshot for AVPlayer and Video

I am trying to take a screenshot of an AVPlayer inside a bigger view. I want to build a testing framework only, so private APIs or any other method is fine, because the framework will not be included when releasing to the App Store.
I have tried:
UIGetScreenImage(): works well on the simulator but not on a device.
snapshotViewAfterScreenUpdates: shows the view, but I cannot create a UIImage from it.
drawViewHierarchyInRect: and renderInContext: do not work with AVPlayer.
I don't want to use AVAssetImageGenerator to get an image from the video; it is hard to get the right coordinates when the video player is a subview of other views.
I know you don't want to use the AVAssetImageGenerator, but I've also researched this extensively and I believe the only solution currently is the AVAssetImageGenerator. It's not as difficult as you say to get the right coordinate, because you can get the current time of your player. In my app the following code works perfectly:
-(UIImage *)getAVPlayerScreenshot
{
    AVURLAsset *asset = (AVURLAsset *)self.playerItem.asset;
    AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    imageGenerator.requestedTimeToleranceAfter = kCMTimeZero;
    imageGenerator.requestedTimeToleranceBefore = kCMTimeZero;
    CGImageRef thumb = [imageGenerator copyCGImageAtTime:self.playerItem.currentTime
                                              actualTime:NULL
                                                   error:NULL];
    UIImage *videoImage = [UIImage imageWithCGImage:thumb];
    CGImageRelease(thumb);
    return videoImage;
}
AVPlayer renders video on the GPU, so you cannot capture it using Core Graphics methods.
However, it is possible to capture images with AVAssetImageGenerator; you need to specify a CMTime.
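For reference, a CMTime is a value plus a timescale; in Swift you can build one from seconds (snapshotTime is an assumed name; 600 is a common video timescale):
import CoreMedia

let snapshotTime = CMTime(seconds: 1.5, preferredTimescale: 600)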
Update:
Forget taking a screenshot of the entire screen; AVPlayerItemVideoOutput is my final choice, and it supports video streams.
Here is my full implementation: https://github.com/BB9z/ZFPlayer/commit/a32c7244f630e69643336b65351463e00e712c7f#diff-2d23591c151edd3536066df7c18e59deR448
Swift version of Bob's answer above. I'm using AVQueuePlayer, but it should work for a regular AVPlayer too.
public func getImageSnapshot() -> UIImage? {
    guard let asset = player.currentItem?.asset else { return nil }
    let imageGenerator = AVAssetImageGenerator(asset: asset)
    imageGenerator.requestedTimeToleranceAfter = CMTime.zero
    imageGenerator.requestedTimeToleranceBefore = CMTime.zero
    do {
        let thumb = try imageGenerator.copyCGImage(at: player.currentTime(), actualTime: nil)
        return UIImage(cgImage: thumb)
    } catch {
        print("⛔️ Failed to get video snapshot: \(error)")
    }
    return nil
}
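Usage is then a one-liner wherever you need the current frame (screenshotView is an assumed name):
if let snapshot = getImageSnapshot() {
    screenshotView.image = snapshot
}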
Here is code for taking a screenshot of your entire screen, including the AVPlayer. You only need to add a UIImageView on top of your video player; it stays hidden until we take the screenshot, and afterwards we hide it again.
func takeScreenshot() -> UIImage? {
    // 1 Hide all UI you do not want on the screenshot
    self.hideButtonsForScreenshot()
    // 2 Create a screenshot from your AVPlayer
    if let url = (self.overlayPlayer?.currentItem?.asset as? AVURLAsset)?.url {
        let asset = AVAsset(url: url)
        let imageGenerator = AVAssetImageGenerator(asset: asset)
        imageGenerator.requestedTimeToleranceAfter = CMTime.zero
        imageGenerator.requestedTimeToleranceBefore = CMTime.zero
        if let thumb: CGImage = try? imageGenerator.copyCGImage(at: self.overlayPlayer!.currentTime(), actualTime: nil) {
            let videoImage = UIImage(cgImage: thumb)
            // Note: create an image view on top of your video player in the exact dimensions,
            // and display it before taking the screenshot; mine is created in the storyboard.
            // 3 Put the image from the screenshot in your screenshotPhotoView and unhide it
            self.screenshotPhotoView.image = videoImage
            self.screenshotPhotoView.isHidden = false
        }
    }
    // 4 Take the screenshot
    let bounds = UIScreen.main.bounds
    UIGraphicsBeginImageContextWithOptions(bounds.size, true, 0.0)
    self.view.drawHierarchy(in: bounds, afterScreenUpdates: true)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // 5 Show all UI again that you didn't want on your screenshot
    self.showButtonsForScreenshot()
    // 6 Now hide the screenshotPhotoView again
    self.screenshotPhotoView.isHidden = true
    self.screenshotPhotoView.image = nil
    return image
}
If you want to take a screenshot of the current screen, just call the following method from any action event; it gives you a UIImage object.
-(UIImage *)screenshot
{
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *sourceImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Now we will position the image, X/Y away from the top-left corner, to get the portion we want
    UIGraphicsBeginImageContext(sourceImage.size);
    [sourceImage drawAtPoint:CGPointMake(0, 0)];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // To write the image to the device:
    // UIImageWriteToSavedPhotosAlbum(croppedImage, nil, nil, nil);
    return croppedImage;
}
Hope this will help you.
CGRect grabRect = CGRectMake(0, 0, 320, 568); // size of the region you want to capture
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
    UIGraphicsBeginImageContextWithOptions(grabRect.size, NO, [UIScreen mainScreen].scale);
} else {
    UIGraphicsBeginImageContext(grabRect.size);
}
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, -grabRect.origin.x, -grabRect.origin.y);
[self.view.layer renderInContext:ctx];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
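On iOS 10 and later the same region grab can be written with UIGraphicsImageRenderer, which picks up the screen scale automatically; a sketch, not part of the original answer:
let grabRect = CGRect(x: 0, y: 0, width: 320, height: 568)
let renderer = UIGraphicsImageRenderer(size: grabRect.size)
let image = renderer.image { ctx in
    ctx.cgContext.translateBy(x: -grabRect.origin.x, y: -grabRect.origin.y)
    view.layer.render(in: ctx.cgContext)
}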

Why doesn't imageWithCGImage:scale:orientation: work?

This is my question:
I want to rotate my image once I have the degree it should rotate by.
My code is here:
UIImage *image = imageView.image;
UIImage *originalImage = imageView.image;
CGAffineTransform transform = imageView.transform;
if (CGAffineTransformEqualToTransform(transform, CGAffineTransformRotate(CGAffineTransformIdentity, M_PI_2))) {
    image = [UIImage imageWithCGImage:originalImage.CGImage scale:originalImage.scale orientation:UIImageOrientationRight];
} else if (CGAffineTransformEqualToTransform(transform, CGAffineTransformRotate(CGAffineTransformIdentity, M_PI))) {
    image = [UIImage imageWithCGImage:originalImage.CGImage scale:originalImage.scale orientation:UIImageOrientationDown];
} else if (CGAffineTransformEqualToTransform(transform, CGAffineTransformRotate(CGAffineTransformIdentity, M_PI_2 * 3))) {
    image = [UIImage imageWithCGImage:originalImage.CGImage scale:originalImage.scale orientation:UIImageOrientationLeft];
} else if (CGAffineTransformEqualToTransform(transform, CGAffineTransformRotate(CGAffineTransformIdentity, M_PI * 2))) {
    image = originalImage; // UIImageOrientationUp
}
I expected the image to show up rotated, but after rotating, the image is still what it was. It seems the method imageWithCGImage:scale:orientation: didn't work.
Can someone tell me why? Thanks.
You use
UIImage *image = imageView.image;
to get a handle on imageView.image, but then you assign your changed image to the local variable image, which doesn't touch imageView.image at all. So if you want imageView.image to change, you must set imageView.image again.
Add this at the end, after your code:
imageView.image = image;
You need to set the image again (after all that code):
imageView.image = image;
This is something you need to fundamentally understand: when you get a pointer to another object and then point that pointer at something else, it doesn't affect the original object. Think of it this way:
Lunch *myLunch = myFriend.currentLunch;
This means that your lunch is currently your friend's lunch.
myLunch = [MisoRamen new];
Now your lunch is a different lunch.
[myLunch insertWeirdSauce];
You just inserted a sauce into your own lunch, while your friend's lunch remains safe. If you want to change your friend's lunch, you have to do this:
Lunch *newLunch = [MisoRamen new];
[newLunch insertWeirdSauce];
myFriend.currentLunch = newLunch;
Now your friend will gasp in shock as he/she eats a lunch with weird sauce in it.
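The same point in Swift, applied to the rotation code above (a sketch; imageView is the name from the question): creating the rotated UIImage changes nothing on screen until you assign it back.
if let original = imageView.image, let cg = original.cgImage {
    let rotated = UIImage(cgImage: cg, scale: original.scale, orientation: .right)
    imageView.image = rotated // without this assignment the view keeps the old image
}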
