Apple has deprecated the init(image:) method of MPMediaItemArtwork in iOS 10. What is the new alternative?
The class interface shows the method below as available in the new OS version:
public init(boundsSize: CGSize, requestHandler: @escaping (CGSize) -> UIImage)
Does anyone know how to use it?
Also, question 2, related to the first: does showing now-playing metadata on the lock screen and Control Center using MPNowPlayingInfoCenter work in the simulator?
You can use the following code:
let image = UIImage(named: "logo")!
let artwork = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { (size) -> UIImage in
    return image
})
And, yes, "now playing" metadata shows in Control Center in the simulator.
I was wondering the same and ended up finding Apple's explanation for this.
They say we shouldn't do any expensive resizing work on the image when the handler is called, but instead simply return the closest-matching image from the ones already available to us.
The following WWDC 2017 video is where they mention it. It's about tvOS, but at least we get some insight. Starts at 07:20: https://developer.apple.com/videos/play/wwdc2017/251/?time=440
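In practice, that means the request handler should be a cheap lookup. Here's a sketch of that guidance; the sizes and asset names are made up:

import MediaPlayer
import UIKit

// Hypothetical pre-rendered artwork at a few known sizes.
let availableArtwork: [CGFloat: UIImage] = [
    60:  UIImage(named: "artwork60")!,
    120: UIImage(named: "artwork120")!,
    600: UIImage(named: "artwork600")!
]

let artwork = MPMediaItemArtwork(boundsSize: CGSize(width: 600, height: 600)) { size in
    // Cheap lookup, no drawing: return the pre-rendered image whose
    // width is closest to the size the system asked for.
    availableArtwork.min { abs($0.key - size.width) < abs($1.key - size.width) }!.value
}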
I saw this just now and I'm confused too, but I guess this is the right way:
self.remoteArtwork = [[MPMediaItemArtwork alloc] initWithBoundsSize:CGSizeMake(600, 600) requestHandler:^UIImage * _Nonnull(CGSize size) {
    UIImage *lockScreenArtworkApp = [UIImage imageNamed:@"lockScreenLogo"];
    return [self.manager resizeImageWithImage:lockScreenArtworkApp scaledToSize:size];
}];
The resize method (in my case, in a singleton "Manager" class):
- (UIImage *)resizeImageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // Pass 0.0 as the scale to use the current device's pixel scaling factor
    // (and thus account for Retina resolution); pass 1.0 to force exact pixel size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Minimum code:
MPMediaItemArtwork(boundsSize: image.size) { _ in image }
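To actually get the artwork onto the lock screen or Control Center (question 2), assign it into MPNowPlayingInfoCenter. A minimal sketch, where artwork is the object built above and the title string is just an example value:

import MediaPlayer

var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
info[MPMediaItemPropertyTitle] = "Track title"   // example value
info[MPMediaItemPropertyArtwork] = artwork       // from the code above
MPNowPlayingInfoCenter.default().nowPlayingInfo = info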
I am using AVCapture with the camera in portrait mode to capture an image. When I display the captured image, it looks fine. But when I convert it to a JPEG representation, encode that as a base64 string, and send it to the server, the stored image is "rotated".
So I checked the image orientation before sending it to the server: it is UIImageOrientation.Right. (So is there any way to capture an image in portrait mode so that the captured image's orientation is Up? After some digging, I doubt it.) The server did not do anything after receiving the image; I guess it just ignored the orientation metadata.
Since the image I captured looks fine, I want to preserve how it looks, but I also want to set the image orientation to Up. If I just set the orientation flag, the image no longer looks right.
So, is there a way to set the orientation without causing the image to be rotated? Or, after setting the orientation to Up, how do I rotate the actual pixels so the image looks right again?
- (UIImage *)removeRotationForImage:(UIImage *)image {
    // Nothing to do if the pixel data already matches the orientation flag.
    if (image.imageOrientation == UIImageOrientationUp) return image;

    // Redrawing honors the orientation flag, so the resulting bitmap
    // is physically rotated and tagged as "up".
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:(CGRect){0, 0, image.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}
Swift version of oldrinmendez's answer:
func removeRotationForImage(image: UIImage) -> UIImage {
    // Nothing to do if the pixel data already matches the orientation flag.
    if image.imageOrientation == .up {
        return image
    }
    // Redrawing honors the orientation flag, producing a bitmap tagged .up.
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    image.draw(in: CGRect(origin: .zero, size: image.size))
    let normalizedImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return normalizedImage
}
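Usage before upload might look like this. A sketch: capturedImage stands in for whatever came out of your AVCapture pipeline, and 0.9 is just an example compression quality:

let normalized = removeRotationForImage(image: capturedImage)
// The redrawn bitmap is tagged .up, so a server that ignores EXIF
// orientation will still store it the right way around.
let jpegData = UIImageJPEGRepresentation(normalized, 0.9)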
I use this method for taking a snapshot:
UIView *snapshotView = [someView snapshotViewAfterScreenUpdates:NO];
This gives me a UIView that I can play with.
But, I need the UIImage of it and not the UIView.
This is the method I use for converting a UIView to an UIImage:
- (UIImage *)snapshotOfView:(UIView *)view
{
    UIImage *snapshot;
    UIGraphicsBeginImageContext(view.frame.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}
It doesn't work: the snapshot is a blank UIImage, because the snapshotView isn't rendered.
The obvious thing to do is to take the snapshot of the view directly, instead of first taking it in the form of a UIView and then converting it.
The problem with the obvious approach is that I'm using a WKWebView, which has a huge bug that prevents you from taking screenshots. And that's true: I even reported the bug, and they said they are trying to fix it.
So, how can I take the snapshot (UIImage) of the snapshotView without rendering it?
drawViewHierarchyInRect: will work for you. You can use it directly on your WKWebView.
There is some useful detail in this Apple Technical Q&A. I also touch on it here in an answer to a slightly different question.
I use it in a category on UIView:
@implementation UIView (ImageSnapshot)

- (UIImage *)imageSnapshot {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, self.contentScaleFactor);
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

@end
I don't know what you mean by 'the obvious method', but I have tested this on a WKWebView and it works.
Since the accepted answer didn't work for me (UIGraphicsGetImageFromCurrentImageContext() returned nil all the time), I found a different solution for iOS 10+:
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let image = renderer.image { _ in view.drawHierarchy(in: view.bounds, afterScreenUpdates: true) }
P.S. You may want to call updateConstraintsIfNeeded() and layoutIfNeeded() before rendering.
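Put together, that might look like this (a sketch, with view being whatever you want to capture):

view.updateConstraintsIfNeeded()
view.layoutIfNeeded()
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let image = renderer.image { _ in
    // Render the whole hierarchy after pending layout has been applied.
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
}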
I used a method to get rounded pictures in my iOS app, which worked perfectly fine on the iPhone 3. My problem is that as soon as I try it on an iPhone 4 or above, the pictures come out at poor quality.
Is there any way I can rework my code to get high-resolution rounded pictures?
- (void)setRoundedView:(UIImageView *)imageView picture:(UIImage *)picture toDiameter:(float)newSize {
    UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 1.0);
    [[UIBezierPath bezierPathWithRoundedRect:imageView.bounds cornerRadius:100.0] addClip];
    CGRect frame = imageView.bounds;
    frame.size.width = newSize;
    frame.size.height = newSize;
    [picture drawInRect:frame];
    imageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Many thanks for your help!
The issue is how you're defining your image context. You are specifying a 1.0 scale, but on Retina screens it should be 2.0.
Instead, you can use 0.0 to default to the device's native scale:
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0.0);
I had a C++ binarization routine that I used for a later OCR operation.
However, I found that it produced unnecessary slanting of the text.
Searching for alternatives, I found GPUImage of great value, and it solved the slanting issue.
I am using GPUImage code like this to binarize my input images before applying OCR.
However, a single threshold value does not cover the range of images I get.
See two samples from my input images:
I can't handle both with the same threshold value.
A low value seems fine for the latter, and a higher value is fine for the first one.
The second image seems especially difficult: I never get all the characters binarized right, no matter what threshold value I set. On the other hand, my C++ binarization routine seems to handle it, but I don't have the insight to experiment with it the way I can with GPUImage's simple threshold value.
How should I handle that?
UPDATE:
I tried GPUImageAverageLuminanceThresholdFilter with the default multiplier = 1. It works fine with the first image, but the second image continues to be a problem.
Some more diverse inputs for binarization:
UPDATE II:
After going through this answer by Brad, I tried GPUImageAdaptiveThresholdFilter (also incorporating GPUImagePicture, because earlier I was only applying it to a UIImage).
With this, I got the second image binarized perfectly. However, the first one seems to have a lot of noise after binarization when I set the blur size to 3.0, and OCR adds extra characters to the result. With a lower blur size, the second image loses precision.
Here it is:
+ (UIImage *)binarize:(UIImage *)sourceImage
{
    UIImage *grayScaledImg = [self toGrayscale:sourceImage];
    GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:grayScaledImg];
    GPUImageAdaptiveThresholdFilter *stillImageFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
    stillImageFilter.blurSize = 3.0;
    [imageSource addTarget:stillImageFilter];
    [imageSource processImage];
    UIImage *imageWithAppliedThreshold = [stillImageFilter imageFromCurrentlyProcessedOutput];
    return imageWithAppliedThreshold;
}
For a pre-processing step, you need adaptive thresholding here.
I got these results using OpenCV's grayscale and adaptive thresholding methods. With the addition of low-pass noise filtering (Gaussian or median), it should work like a charm.
I used Provisia (it's a UI to help you process images fast) to find the block size I needed: 43 for the image you supplied here. The block size may change if you take the photo from closer or further away. If you want a generic algorithm, you need to develop one that searches for the best size (searching until numbers are detected).
EDIT: I just saw the last image. It is untreatably small. Even if you apply the best pre-processing algorithm, you are not going to detect those numbers. Upsampling would not be a solution, since noise would come along with it.
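If you go the GPUImage route instead of OpenCV, a sketch of that "search for the best size" idea might look like this. It uses GPUImage's blur radius in place of OpenCV's block size; recognizeDigits(_:) is a hypothetical hook for whatever OCR engine you use, and the blur-size range is arbitrary:

import GPUImage
import UIKit

// Hypothetical OCR hook: plug in your engine; return nil when no digits are found.
func recognizeDigits(_ image: UIImage) -> String? {
    return nil
}

// Try increasing blur sizes until the OCR step actually finds digits.
func binarizeSearchingBlurSize(_ source: UIImage) -> UIImage? {
    for blurSize in stride(from: 1.0, through: 12.0, by: 1.0) {
        guard let picture = GPUImagePicture(image: source) else { return nil }
        let filter = GPUImageAdaptiveThresholdFilter()
        filter.blurRadiusInPixels = CGFloat(blurSize)
        picture.addTarget(filter)
        filter.useNextFrameForImageCapture()
        picture.processImage()
        guard let candidate = filter.imageFromCurrentFramebuffer() else { continue }
        if recognizeDigits(candidate) != nil {
            return candidate
        }
    }
    return nil
}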
I finally ended up exploring on my own, and here is my result with a GPUImage filter:
+ (UIImage *)doBinarize:(UIImage *)sourceImage
{
    // First, convert the image to grayscale (a plain UIKit drawing pass, see below).
    UIImage *grayScaledImg = [self grayImage:sourceImage];
    GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:grayScaledImg];
    GPUImageAdaptiveThresholdFilter *stillImageFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
    stillImageFilter.blurSize = 8.0;
    [imageSource addTarget:stillImageFilter];
    [imageSource processImage];
    UIImage *retImage = [stillImageFilter imageFromCurrentlyProcessedOutput];
    return retImage;
}
+ (UIImage *)grayImage:(UIImage *)inputImage
{
    // Create a graphics context.
    UIGraphicsBeginImageContextWithOptions(inputImage.size, NO, 1.0);
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);
    // Draw the image with the luminosity blend mode.
    // On top of a white background, this gives a black-and-white image.
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];
    // Get the resulting image.
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
I achieve almost 90% accuracy using this. I am sure there must be better options, but I experimented with blurSize as far as I could, and 8.0 is the value that works with most of my input images.
For anyone else, good luck with your attempts!
SWIFT3
SOLUTION 1
extension UIImage {

    func doBinarize() -> UIImage? {
        guard let grayScaledImg = self.grayImage(),
              let imageSource = GPUImagePicture(image: grayScaledImg) else {
            return nil
        }
        let stillImageFilter = GPUImageAdaptiveThresholdFilter()
        stillImageFilter.blurRadiusInPixels = 8.0
        imageSource.addTarget(stillImageFilter)
        stillImageFilter.useNextFrameForImageCapture()
        imageSource.processImage()
        guard let retImage = stillImageFilter.imageFromCurrentFramebuffer(with: UIImageOrientation.up) else {
            print("unable to obtain UIImage from filter")
            return nil
        }
        return retImage
    }

    func grayImage() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(self.size, false, 1.0)
        let imageRect = CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height)
        self.draw(in: imageRect, blendMode: .luminosity, alpha: 1.0)
        let outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return outputImage
    }
}
The result would be:
SOLUTION 2
Use GPUImageLuminanceThresholdFilter to achieve a 100% black-and-white effect, without any gray:
let stillImageFilter = GPUImageLuminanceThresholdFilter()
stillImageFilter.threshold = 0.9
For example, I needed to detect a flashlight, and this works for me:
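A fuller sketch of applying it to a still image (sourceImage is assumed to be the frame or photo you want to test):

let stillImageFilter = GPUImageLuminanceThresholdFilter()
stillImageFilter.threshold = 0.9
// One-shot convenience: run the single filter over one still image.
let binarized = stillImageFilter.image(byFilteringImage: sourceImage)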
I've created a UIImageView which contains a high-resolution image. The problem is that the image's resolution is far too high for the image view. The size of the image view is 92 x 91 (so it's small), and the high-resolution UIImage looks ugly in it.
So how can I reduce the resolution of that UIImage?
My code for the UIImageView:
UIImageView *myImageView = [[UIImageView alloc] initWithImage:[UIImage imageWithContentsOfFile:pngFilePath]];
myImageView.frame = CGRectMake(212.0, 27, 92,91);
Have a look at this:
https://stackoverflow.com/a/2658801
It will help you resize your image according to your needs.
Add the method to your code and call it like this:
UIImage *myImage = [UIImage imageWithContentsOfFile:pngFilePath];
UIImage *newImage = [UIImage imageWithImage:myImage scaledToSize:CGSizeMake(92, 91)];
You can resize an image using this method, which returns a resized image:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // newSize is the size you need; scale 0.0 uses the device's native scale.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Hope it helps you.
Try this:
myImageView.contentMode = UIViewContentModeScaleToFill;
You need to downsize the image, which will reduce the resolution and make it easier to store. I actually wrote a class (building on some stuff from SO) that does just that. It's on my GitHub; take a look:
https://github.com/pavlovonline/UIImageResizer
The main method is
-(UIImage*)resizeImage:(UIImage*)image toSize:(CGFloat)size
so you give this method the size to which you want to downsize your image. If the height is greater than the width, it will auto-calculate the middle and give you a perfectly centered square; the same goes when the width is greater than the height. If you need an image that is not square, make your own adjustments.
You'll get back a downsized UIImage which you can put into your UIImageView, saving some memory too.
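If you'd rather not pull in the class, here's a sketch of the same centered-square idea (my own approximation, not the library's actual code):

import UIKit

func resizeToCenteredSquare(_ image: UIImage, side: CGFloat) -> UIImage? {
    // Crop the longer axis around the middle to get a centered square...
    let squareSide = min(image.size.width, image.size.height)
    let cropRect = CGRect(x: (image.size.width - squareSide) / 2,
                          y: (image.size.height - squareSide) / 2,
                          width: squareSide, height: squareSide)
    guard let croppedCG = image.cgImage?.cropping(to: cropRect) else { return nil }
    // ...then redraw at the requested size (scale 0.0 = native scale).
    UIGraphicsBeginImageContextWithOptions(CGSize(width: side, height: side), false, 0.0)
    UIImage(cgImage: croppedCG).draw(in: CGRect(x: 0, y: 0, width: side, height: side))
    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return result
}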