I am generating a QR code image using the CIQRCodeGenerator filter available from CIFilter. The image is generated fine and when it's displayed I can read the image using AVCaptureSession. However, when I try to scan the QR code using a different platform (Android, BlackBerry, iOS 6) then it doesn't recognize the image. According to Apple's documentation the generated image is compliant with the ISO/IEC 18004:2006 standard. Is the problem that I need something that is compliant with ISO 18004:2000?
Here is the code I'm using to generate the image:
NSData *stringData = [stringToEncode dataUsingEncoding:NSISOLatin1StringEncoding];
CIFilter *qrFilter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
[qrFilter setValue:stringData forKey:@"inputMessage"];
[qrFilter setValue:@"M" forKey:@"inputCorrectionLevel"];
CIImage *qrImage = qrFilter.outputImage;
return [UIImage squareUIImageFromCIImage:qrImage withSize:size];
Here is a sample QR code:
Does anybody know if there is a way to generate a more universally recognized QR code image using CIFilter? I'd really prefer not to go back to using ZXing.
Thank you!
I'm not sure if the slight change makes a difference, but here's a snippet from my recent project that generates a QR code an iPad camera scans successfully in under a second:
CIFilter *filter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
[filter setDefaults];
NSData *data = [accountNumber dataUsingEncoding:NSUTF8StringEncoding];
[filter setValue:data forKey:@"inputMessage"];
CIImage *outputImage = [filter outputImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:outputImage
fromRect:[outputImage extent]];
UIImage *barcode = [UIImage imageWithCGImage:cgImage
scale:1.
orientation:UIImageOrientationUp];
// EDIT:
CFRelease(cgImage);
You are using the ISO-8859-1 character set, but different QR code readers assume different things about the character encoding depending on which version of the standard they're following. UTF-8 seems to be more common than ISO-8859-1.
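If it helps, here is a minimal sketch of the question's generation code with the message encoded as UTF-8 instead of Latin-1 (stringToEncode and the "M" correction level are carried over from the question; nothing else changes):
// Encode the payload as UTF-8 so readers that assume UTF-8 decode it correctly.
NSData *stringData = [stringToEncode dataUsingEncoding:NSUTF8StringEncoding];
CIFilter *qrFilter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
[qrFilter setValue:stringData forKey:@"inputMessage"];
[qrFilter setValue:@"M" forKey:@"inputCorrectionLevel"]; // same error-correction level as before
CIImage *qrImage = qrFilter.outputImage;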
Related
I am using the following code in my application to repeatedly draw a CIImage, as received from AVCaptureOutput -didOutputSampleBuffer, on a GLKView. It was performing quite well while I was on iOS <= 10.1.*.
After updating the device to iOS 10.2.1 it has stopped working: after calling it for just a few frames, the app crashes with a low memory warning, whereas on iOS 10.1.1 and below the app runs smoothly even on older devices like the iPhone 5S.
[_glkView bindDrawable];
if (self.eaglContext != [EAGLContext currentContext])
[EAGLContext setCurrentContext:self.eaglContext];
glClearColor(0.0, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
if (ciImage) {
[_ciContext drawImage:ciImage inRect:gvRect fromRect:dRect];
}
[_glkView display];
This is how I am making the CIImage.
- (CIImage*)ciImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer ofSampleBuffer:(CMSampleBufferRef)sampleBuffer {
CIImage *croppedImage = nil;
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
if (attachments)
CFRelease(attachments);
croppedImage = ciImage;
CIFilter *scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[scaleFilter setValue:croppedImage forKey:@"inputImage"];
[scaleFilter setValue:[NSNumber numberWithFloat:self.zoom_Resize_Factor == 1 ? 0.25 : 0.5] forKey:@"inputScale"];
[scaleFilter setValue:[NSNumber numberWithFloat:1.0] forKey:@"inputAspectRatio"];
croppedImage = [scaleFilter valueForKey:@"outputImage"];
NSDictionary *options = @{(id)kCIImageAutoAdjustRedEye : @(NO)};
NSArray *adjustments = [ciImage autoAdjustmentFiltersWithOptions:options];
for (CIFilter *filter in adjustments) {
[filter setValue:croppedImage forKey:kCIInputImageKey];
croppedImage = filter.outputImage;
}
CIFilter *selectedFilter = [VideoFilterFactory getFilterWithType:self.selectedFilterType]; //This line needs to be removed from here
croppedImage = [VideoFilterFactory applyFilter:selectedFilter OnImage:croppedImage];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return croppedImage;
}
Here is an Imgur link (http://imgur.com/a/u6Vyo) to the VM Tracker and OpenGL ES instruments results, in case it helps. Thanks.
Your GLKView rendering implementation looks fine; the issue seems to come from the amount of processing you are doing on the pixel buffer after converting it into a CIImage.
Also, the Imgur link you shared shows that the GLKView is unable to prepare its video texture object correctly, most probably because of the memory pressure created on each iteration. You need to optimise this CIFilter processing.
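One plausible direction, sketched below under assumptions (the _scaleFilter ivar is hypothetical, not from your code): build the filter once and reuse it on every frame, instead of recreating the filter chain and re-running autoAdjustmentFiltersWithOptions: (which analyses the whole frame) inside every callback:
// Once, at setup time (hypothetical ivar _scaleFilter):
_scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[_scaleFilter setValue:@(1.0) forKey:@"inputAspectRatio"];
// Per frame: update only the inputs that change, then read outputImage.
[_scaleFilter setValue:ciImage forKey:kCIInputImageKey];
[_scaleFilter setValue:@(self.zoom_Resize_Factor == 1 ? 0.25 : 0.5) forKey:@"inputScale"];
CIImage *scaled = _scaleFilter.outputImage;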
I am creating a blurred image for one of my app's screens; for this I am using the following code:
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:5] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
blurrImage = [UIImage imageWithCGImage:cgImage];
self.blurrImageView.image = blurrImage;
CGImageRelease(cgImage);
From the above code I am getting the correct blurred image, but the problem is at the line CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];.
Up to this line the memory usage shown is normal, but after it the memory usage increases abnormally.
Here is a screenshot of the memory usage before the execution; memory usage keeps increasing over the course of this method. This is before:
and this is after execution of the line CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
Is this common behaviour? I searched for an answer but didn't find one, so if anyone has faced the same problem, please help me with this.
One thing: I am not using ARC.
I experience the same memory consumption problems with Core Image.
If you're looking for alternatives, in iOS 7 you can use the UIImage+ImageEffects category, which is available as part of the iOS_UIImageEffects project on the WWDC 2013 sample code page. It provides a few new methods:
- (UIImage *)applyLightEffect;
- (UIImage *)applyExtraLightEffect;
- (UIImage *)applyDarkEffect;
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius tintColor:(UIColor *)tintColor saturationDeltaFactor:(CGFloat)saturationDeltaFactor maskImage:(UIImage *)maskImage;
These don't suffer from the memory consumption issues that you experience with Core Image. (Plus, it's a much faster blurring algorithm.)
This technique is illustrated in the WWDC 2013 video Implementing Engaging UI on iOS.
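Once the category is in your project, usage is a couple of lines; here is a minimal sketch that reuses the snapshot code from the question (the radius, tint, and saturation values are illustrative assumptions):
#import "UIImage+ImageEffects.h"

UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// CPU-side blur; no CIContext is created, so the Core Image memory spike is avoided.
UIImage *blurred = [image applyBlurWithRadius:5
                                    tintColor:[UIColor colorWithWhite:1.0 alpha:0.3]
                        saturationDeltaFactor:1.8
                                    maskImage:nil];
self.blurrImageView.image = blurred;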
The fact that you are using a screenshot can affect the memory usage; on a Retina display it can be more than on a non-Retina device. The doubling is OK in my opinion, because you have both the original UIImage and the blurred image living in memory, and the context will probably keep some memory too. My guess:
You are using a lot of autoreleased objects; they will stay in memory
until the pool is drained, so try wrapping the code in an
@autoreleasepool block
@autoreleasepool {
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:5] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
blurrImage = [UIImage imageWithCGImage:cgImage];
self.blurrImageView.image = blurrImage;
CGImageRelease(cgImage);
}
I'm developing an RTSP player using the FFmpeg library, and I must adjust the image contrast for every frame of video. Searching the web, I found this code for editing contrast:
- (UIImage*)contrast
{
CIImage *beginImage = [CIImage imageWithCGImage:[self CGImage]];
CIContext *context = [CIContext contextWithOptions:nil];
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
keysAndValues: kCIInputImageKey, beginImage,
@"inputIntensity", [NSNumber numberWithFloat:0.8], nil];
CIImage *outputImage = [filter outputImage];
CGImageRef cgimg =
[context createCGImage:outputImage fromRect:[outputImage extent]];
UIImage *newImg = [UIImage imageWithCGImage:cgimg];
self = newImg;
CGImageRelease(cgimg);
return self;
}
It works perfectly, but on the iPad I lose performance, and while decoding video a lot of noise shows up on screen. Is there a way to modify the contrast of an image with better performance?
Yes, there is a better way: use OpenGL ES 2.0 shaders, so this heavy work is done on the GPU.
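A minimal sketch of such a shader, assuming a standard textured-quad pipeline (the uniform names and the mid-gray contrast formula are my assumptions, not from the question):
// GLSL fragment shader as an Objective-C string; contrast = 1.0 is a no-op,
// values > 1.0 increase contrast by scaling colors around mid-gray (0.5).
static NSString *const kContrastFragmentShader =
    @"varying highp vec2 textureCoordinate;\n"
    @"uniform sampler2D inputTexture;\n"
    @"uniform lowp float contrast;\n"
    @"void main() {\n"
    @"    lowp vec4 color = texture2D(inputTexture, textureCoordinate);\n"
    @"    gl_FragColor = vec4((color.rgb - 0.5) * contrast + 0.5, color.a);\n"
    @"}";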
I want to use AVFoundation to perform face recognition on an image taken from the camera roll.
Is this possible, in the first place?
I've found many tutorials on how to do this using a live camera stream as input. I studied them and found no way to bind an AVCaptureDeviceInput to an image.
P.S.
I don't want to use CoreImage. Actually I want to ditch CoreImage and use AVFoundation face recognition.
Thanks a lot.
What's the problem with CoreImage? It uses the same detector:
NSURL *url = [NSURL URLWithString:@"..."];
CIImage *img = [CIImage imageWithContentsOfURL:url]; // uiImageObj.CIImage
CIContext *context = [CIContext contextWithOptions:nil];
NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:CIDetectorAccuracyLow, CIDetectorAccuracy, nil];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:detectorOptions];
NSArray *features = [detector featuresInImage:img];
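And a sketch of consuming the results; featuresInImage: returns CIFaceFeature objects whose coordinates are in Core Image's bottom-left-origin space (the logging here is just illustrative):
for (CIFaceFeature *face in features) {
    NSLog(@"face bounds: %@", NSStringFromCGRect(face.bounds));
    if (face.hasLeftEyePosition)
        NSLog(@"left eye at %@", NSStringFromCGPoint(face.leftEyePosition));
    if (face.hasMouthPosition)
        NSLog(@"mouth at %@", NSStringFromCGPoint(face.mouthPosition));
}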
I am trying to update a UIImage with geotag information. I looked at Saving Geotag info with photo on iOS4.1, which is where I found a reference to the NSMutableDictionary+ImageMetadata category. However, I don't want to save to the photo album, but have a UIImage to pass on.
The following code seemed like it was making too many copies of the image and required all these frameworks linked: CoreImage, AssetsLibrary, CoreMedia, ImageIO.
Is there something more efficient than UIImage -> CIImage -> CGImage -> UIImage that can take the properties NSDictionary needed for setting the EXIF data?
- (UIImage *)updateImage:(UIImage *)image location:(CLLocation *)location dateOriginal:(NSDate *)dateOriginal
{
NSMutableDictionary *properties = [NSMutableDictionary dictionaryWithDictionary:[image.CIImage properties]];
// uses https://github.com/gpambrozio/GusUtils
[properties setLocation:location];
[properties setDateOriginal:dateOriginal];
CIImage *tempImage = [[CIImage alloc] initWithImage:image options:properties];
CIContext *context = [CIContext contextWithOptions:nil];
UIImage *updatedImage = [UIImage imageWithCGImage:[context createCGImage:tempImage fromRect:tempImage.extent]];
return updatedImage;
}
Have a look at libexif: http://libexif.sourceforge.net/
You'll probably need to pass the image's byte data to the library using
[UIImageJPEGRepresentation(image, quality) bytes]
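For example, a minimal sketch of that hand-off, assuming libexif has been built into the project (exif_data_new_from_data, exif_data_dump, and exif_data_unref are libexif API; the 0.9 quality is an arbitrary choice):
#include <libexif/exif-data.h>

NSData *jpeg = UIImageJPEGRepresentation(image, 0.9);
// Parse the EXIF block out of the JPEG bytes.
ExifData *exif = exif_data_new_from_data(jpeg.bytes, (unsigned int)jpeg.length);
if (exif) {
    exif_data_dump(exif); // inspect the existing entries
    // ... add or modify entries here, then re-serialize ...
    exif_data_unref(exif);
}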