Swift: UIImageJPEGRepresentation returns nil while looping - ios

I've been searching for a solution to this problem for a few days already, still with no answer. I'm trying to send multiple images to the server, so I need to convert each UIImage to NSData first. But the NSData comes out nil for every image except the first, which always converts successfully. Here is the code:
for image in images {
    var imageTemp: NSData?
    if let image_data = UIImageJPEGRepresentation(image, 1) {
        imageTemp = image_data
    }
    if imageTemp == nil {
        print("nil")
        return
    }
    i = i + 1
}

Related

Swift: Firebase Storage Code Block Not Executing

I have the following function to pull images from a Firebase Storage database.
For some reason, the print(imageRef) line works fine, but the imageRef.getData() code block is skipped completely: neither of the print statements ("error updating..." or "Got the image") executes.
What could be causing this?
func updateCurrentUser() {
    var downloadedImages: [UIImage?] = []
    for i in 0...8 {
        let storageRef = Storage.storage().reference()
        let imageRef = storageRef.child(self.currentUser.userid + "/img" + String(i) + ".jpg")
        print(imageRef)
        // Download in memory with a maximum allowed size of 1MB (1 * 1024 * 1024 bytes)
        imageRef.getData(maxSize: 1 * 1024 * 1024) { data, error in
            if let error = error {
                print("error updating returning user: \(error.localizedDescription)")
            } else {
                // Data for the image is returned
                let image = UIImage(data: data!)
                downloadedImages[i] = image
                print("Got the image")
            }
        }
    }
    self.currentUser.images = downloadedImages
}
Firebase is asynchronous, and Firebase data is only valid within the closure following the Firebase function. In a nutshell, code is faster than the internet, and it takes time for data to download.
Here's an abbreviated version of your code; note the comments:
func updateCurrentUser() {
    var downloadedImages: [UIImage?] = []
    for i in 0...8 {
        imageRef.getData(maxSize: 1 * 1024 * 1024) { data, error in
            // Firebase data is ONLY valid here, so append each image to an array
            self.currentUser.images.append(downloadedImage)
        }
    }
    // The following line will execute WAY before the images are downloaded,
    // so self.currentUser.images will always be empty
    self.currentUser.images = downloadedImages
}
You're probably going to want to use a completion handler as well. See my answer to this question for further reading.
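As a hedged sketch of that completion-handler pattern (not the asker's actual code; imageRef is elided here just as in the abbreviated snippet above), a DispatchGroup can signal when all of the downloads have finished:

```swift
func updateCurrentUser(completion: @escaping ([UIImage]) -> Void) {
    var downloadedImages: [UIImage] = []
    let group = DispatchGroup()
    for i in 0...8 {
        group.enter()
        // imageRef would be built from i, as in the question
        imageRef.getData(maxSize: 1 * 1024 * 1024) { data, error in
            // Firebase data is only valid inside this closure
            if let data = data, let image = UIImage(data: data) {
                downloadedImages.append(image)
            }
            group.leave() // balance every enter(), even on error
        }
    }
    // Runs once every enter() has a matching leave()
    group.notify(queue: .main) {
        completion(downloadedImages)
    }
}
```

The caller then assigns the images only after they all exist: updateCurrentUser { images in self.currentUser.images = images }.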

iOS app crashes while converting multiple images to base64

The app has a feature for capturing multiple photos and uploading them to the server. When I loop through 100 images and convert them to base64, it works fine. But if I increase the number of images, it crashes, and it doesn't go into the catch block. Below is the code snippet:
List<UIImage> capturedImage = new List<UIImage>();
foreach (var photo in capturedImage) {
    var photoImage = photo;
    using (NSData imgData = photoImage.AsJPEG(0.5f)) {
        var strinng = imgData.GetBase64EncodedString(NSDataBase64EncodingOptions.None);
        imageBase64.Add(strinng);
    }
}

Has anyone tried the Vision API (VNHomographicImageRegistrationRequest) in iOS 11?

I am studying currency recognition problems, which relate to the Vision SDK of iOS 11.
I'm having trouble handling VNHomographicImageRegistrationRequest, which determines the perspective warp matrix needed to align the content of two images. But I couldn't find out how to pass the two image parameters to this API; can anyone help me?
Apple's Vision framework flow is always the same: Request -> Handler -> Observation
Example:
// referenceAsset & asset2 can be:
// CGImage - CIImage - URL - Data - CVPixelBuffer
// Check initializers for more info
let request = VNHomographicImageRegistrationRequest(targetedCGImage: asset2, options: [:])
let handler = VNSequenceRequestHandler()
try! handler.perform([request], on: referenceAsset)

if let results = request.results as? [VNImageHomographicAlignmentObservation] {
    print("Perspective warp found: \(results.count)")
    results.forEach { observation in
        // A matrix with 3 rows and 3 columns.
        print(observation.warpTransform)
    }
}
- (matrix_float3x3)predictWithVisionFromImage:(UIImage *)imageTarget toReferenceImage:(UIImage *)imageRefer {
    UIImage *scaledImageTarget = [imageTarget scaleToSize:CGSizeMake(224, 224)];
    CVPixelBufferRef bufferTarget = [imageTarget pixelBufferFromCGImage:scaledImageTarget];
    UIImage *scaledImageRefer = [imageRefer scaleToSize:CGSizeMake(224, 224)];
    CVPixelBufferRef bufferRefer = [imageRefer pixelBufferFromCGImage:scaledImageRefer];
    VNHomographicImageRegistrationRequest *request = [[VNHomographicImageRegistrationRequest alloc] initWithTargetedCVPixelBuffer:bufferTarget completionHandler:nil];
    VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:bufferRefer options:@{}];
    [handler performRequests:@[request] error:nil];
    NSArray *resultsArr = request.results;
    VNImageHomographicAlignmentObservation *firstObservation = [resultsArr firstObject];
    return firstObservation.warpTransform;
}

How to read QR code from static image

I know that you can use AVFoundation to scan a QR code using the device's camera. Now here comes the problem: how can I do this from a static UIImage object?
Swift 4 version of @Neimsz's answer
func detectQRCode(_ image: UIImage?) -> [CIFeature]? {
    if let image = image, let ciImage = CIImage(image: image) {
        var options: [String: Any]
        let context = CIContext()
        options = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
        let qrDetector = CIDetector(ofType: CIDetectorTypeQRCode, context: context, options: options)
        if ciImage.properties.keys.contains((kCGImagePropertyOrientation as String)) {
            options = [CIDetectorImageOrientation: ciImage.properties[(kCGImagePropertyOrientation as String)] ?? 1]
        } else {
            options = [CIDetectorImageOrientation: 1]
        }
        let features = qrDetector?.features(in: ciImage, options: options)
        return features
    }
    return nil
}
How to use
if let features = detectQRCode(#imageLiteral(resourceName: "qrcode")), !features.isEmpty {
    for case let row as CIQRCodeFeature in features {
        print(row.messageString ?? "nope")
    }
}
And during execution this doesn't produce the "Finalizing CVPixelBuffer 0x170133e20 while lock count is 1" warning.
I used the following QRCode Image (QRCode = https://jingged.com)
(Tested on iPhone 6 simulator with iOS version 11.2)
Output:
2018-03-14 15:31:13.159400+0530 TestProject[25889:233062] [MC] Lazy loading NSBundle MobileCoreServices.framework
2018-03-14 15:31:13.160302+0530 TestProject[25889:233062] [MC] Loaded MobileCoreServices.framework
https://jingged.com
The iOS API provides the CIDetector class from the CoreImage framework.
CIDetector lets you find specific patterns in images, like faces, smiles, eyes, or, in our case, QR codes.
Here is the code to detect a QRCode in a UIImage in Objective-C:
- (NSArray *)detectQRCode:(UIImage *)image
{
    @autoreleasepool {
        NSLog(@"%@ :: %@", NSStringFromClass([self class]), NSStringFromSelector(_cmd));
        NSCAssert(image != nil, @"**Assertion Error** detectQRCode : image is nil");
        CIImage *ciImage = image.CIImage; // assuming the underlying data is a CIImage
        //CIImage *ciImage = [[CIImage alloc] initWithCGImage:image.CGImage];
        // to use if the underlying data is a CGImage
        NSDictionary *options;
        CIContext *context = [CIContext context];
        options = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh }; // Slow but thorough
        //options = @{ CIDetectorAccuracy : CIDetectorAccuracyLow }; // Fast but superficial
        CIDetector *qrDetector = [CIDetector detectorOfType:CIDetectorTypeQRCode
                                                    context:context
                                                    options:options];
        if ([[ciImage properties] valueForKey:(NSString *)kCGImagePropertyOrientation] == nil) {
            options = @{ CIDetectorImageOrientation : @1 };
        } else {
            options = @{ CIDetectorImageOrientation : [[ciImage properties] valueForKey:(NSString *)kCGImagePropertyOrientation] };
        }
        NSArray *features = [qrDetector featuresInImage:ciImage
                                                options:options];
        return features;
    }
}
The returned NSArray* will contain CIFeature* objects if a QRCode is present and detected. If there is no QRCode, the NSArray* will be nil. If QRCode decoding fails, the NSArray* will have no elements.
To obtain the encoded string:
if (features != nil && features.count > 0) {
    for (CIQRCodeFeature *qrFeature in features) {
        NSLog(@"QRFeature.messageString : %@ ", qrFeature.messageString);
    }
}
As in @Duncan-C's answer, you can then extract the QRCode corners and draw an enclosing bounding box around the QRCode on the image.
Note:
Under iOS 10.0 beta 6, the call to [qrDetector featuresInImage:ciImage options:options], when using images coming from the camera sample buffer, logs the internal warning below (it runs smoothly but spams the console with this message, and I could not find a way to get rid of it for now):
Finalizing CVPixelBuffer 0x170133e20 while lock count is 1.
Source :
Apple Dev API Reference - CIDetector
Apple Dev API Programming guide - Face detection
None of the answers here were very straightforward when it came to returning the decoded messages. I made a tiny extension that works well for me:
https://gist.github.com/freak4pc/3f7ae2801dd8b7a068daa957463ac645
extension UIImage {
    func parseQR() -> [String] {
        guard let image = CIImage(image: self) else {
            return []
        }
        let detector = CIDetector(ofType: CIDetectorTypeQRCode,
                                  context: nil,
                                  options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        let features = detector?.features(in: image) ?? []
        return features.compactMap { feature in
            return (feature as? CIQRCodeFeature)?.messageString
        }
    }
}
Core Image has the CIDetector class, with the CIDetectorTypeQRCode for detecting QR codes. You can feed a Core Image filter either a still image or a video.
That should meet your needs. See the Xcode docs for more info.
The Github repo iOS8-day-by-day from ShinobiControls includes a project LiveDetection that shows how to use the CIDetectorTypeQRCode both from a video feed and from a still image. It looks like it hasn't been updated for Swift 2.0, and I wasn't able to get it to compile under Xcode 7.2.1, but the function performQRCodeDetection in the project DOES compile. (The compile problems are with code that handles all the horrible type-casting you have to deal with to handle CVPixelBuffers in Swift, which doesn't matter if all you want to do is find QRCodes in static images.)
EDIT:
Here is the key method from that site (in Swift)
func performQRCodeDetection(image: CIImage) -> (outImage: CIImage?, decode: String) {
    var resultImage: CIImage?
    var decode = ""
    if let detector = detector {
        let features = detector.featuresInImage(image)
        for feature in features as! [CIQRCodeFeature] {
            resultImage = drawHighlightOverlayForPoints(image,
                                                        topLeft: feature.topLeft,
                                                        topRight: feature.topRight,
                                                        bottomLeft: feature.bottomLeft,
                                                        bottomRight: feature.bottomRight)
            decode = feature.messageString
        }
    }
    return (resultImage, decode)
}
If you need just the string, you can use code like this:
class QRToString {
    func string(from image: UIImage) -> String {
        var qrAsString = ""
        guard let detector = CIDetector(ofType: CIDetectorTypeQRCode,
                                        context: nil,
                                        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh]),
              let ciImage = CIImage(image: image),
              let features = detector.features(in: ciImage) as? [CIQRCodeFeature] else {
            return qrAsString
        }
        for feature in features {
            guard let indeedMessageString = feature.messageString else {
                continue
            }
            qrAsString += indeedMessageString
        }
        return qrAsString
    }
}
Use the ZBar SDK to read a QRCode from a static image:
ZBar-SDK-iOS
Please check this tutorial regarding integration of the ZBar SDK:
ZBar SDK Integration Tutorial
Then try to scan the static image, using the ZBar image scanner class.
Here is the documentation:
ZBarImageScanner
For example, here is how to use the ZBar scanner class:
ZBarImageScanner *scandoc = [[ZBarImageScanner alloc] init];
NSInteger resultsnumber = [scandoc scanImage:yourUIImage];
if (resultsnumber > 0) {
    ZBarSymbolSet *results = scandoc.results;
    // Here you will get the result.
}
The link below will help you:
scaning-static-uiimage-using-ios-zbar-sdk
Objective-C
- (NSString *)readQR {
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode context:nil options:@{
        CIDetectorAccuracy : CIDetectorAccuracyHigh
    }];
    /// in here you can replace `self` with your image value
    CIImage *ciImage = [[CIImage alloc] initWithImage:self];
    NSArray *features = [detector featuresInImage:ciImage];
    if ([features count] == 0) {
        return nil;
    }
    __block NSString *qrString = @"";
    [features enumerateObjectsUsingBlock:^(CIQRCodeFeature * _Nonnull obj, NSUInteger idx, BOOL * _Nonnull stop) {
        qrString = obj.messageString;
    }];
    return qrString;
}

Save UIImage to personal folder and then load it via UIImage.FromFile

I've made a picture selector via UIImagePickerController. Because of the memory issues this one has, I want to save the selected image to disk and, when needed, load it from the file path. But I can't manage to get it working.
If I bind the original image directly, it is displayed with no problems.
File.Exists in the code returns true, but image in the last line is null when watched in the debugger. Thank you very much for your help!
NSData data = originalImage.AsPNG();
string path = Environment.GetFolderPath(Environment.SpecialFolder.Personal);
string pathTempImage = Path.Combine(path, "tempImage.png");
byte[] tempImage = new byte[data.Length];
File.WriteAllBytes(pathTempImage, tempImage);
if (File.Exists(pathTempImage))
{
    int i = 0;
}
UIImage image = UIImage.FromFile(pathTempImage);
Update
This is the code that works for me:
void HandleFinishedPickingMedia(object sender, UIImagePickerMediaPickedEventArgs e)
{
    _view.DismissModalViewControllerAnimated(true);
    BackgroundWorker bw = new BackgroundWorker();
    bw.DoWork += delegate(object bwsender, DoWorkEventArgs e2) {
        // determine what was selected, video or image
        bool isImage = false;
        switch (e.Info[UIImagePickerController.MediaType].ToString()) {
        case "public.image":
            Console.WriteLine("Image selected");
            isImage = true;
            break;
        case "public.video":
            Console.WriteLine("Video selected");
            break;
        }
        // get common info (shared between images and video)
        NSUrl referenceURL = e.Info[new NSString("UIImagePickerControllerReferenceUrl")] as NSUrl;
        if (referenceURL != null)
            Console.WriteLine("Url:" + referenceURL.ToString());
        // if it was an image, get the other image info
        if (isImage) {
            // get the original image
            originalImage = e.Info[UIImagePickerController.OriginalImage] as UIImage;
            if (originalImage != null) {
                NSData data = originalImage.AsPNG();
                _picture = new byte[data.Length];
                ImageResizer resizer = new ImageResizer(originalImage);
                resizer.RatioResize(200, 200);
                string path = Environment.GetFolderPath(Environment.SpecialFolder.Personal);
                string pathTempImage = Path.Combine(path, "tempImage.png");
                string filePath = Path.Combine(path, "OriginalImage.png");
                NSData dataTempImage = resizer.ModifiedImage.AsPNG();
                byte[] tempImage = new byte[dataTempImage.Length];
                System.Runtime.InteropServices.Marshal.Copy(dataTempImage.Bytes, tempImage, 0, Convert.ToInt32(tempImage.Length));
                // OriginalImage
                File.WriteAllBytes(filePath, _picture);
                // TempImage
                File.WriteAllBytes(pathTempImage, tempImage);
                UIImage image = UIImage.FromFile(pathTempImage);
                _view.InvokeOnMainThread(delegate {
                    templateCell.BindDataToCell(appSelectPicture.Label, image);
                });
                _picture = null;
            }
        } else { // if it's a video
            // get video url
            NSUrl mediaURL = e.Info[UIImagePickerController.MediaURL] as NSUrl;
            if (mediaURL != null) {
                Console.WriteLine(mediaURL.ToString());
            }
        }
        // dismiss the picker
    };
    bw.RunWorkerAsync();
    bw.RunWorkerCompleted += HandleRunWorkerCompleted;
}
byte[] tempImage = new byte[data.Length];
File.WriteAllBytes(pathTempImage, tempImage);
You're not copying the image data into your allocated array before saving it. That results in a large empty file that is not a valid image.
Try using one of the NSData.Save overloads, like:
NSError error;
data.Save (pathTempImage, NSDataWritingOptions.FileProtectionNone, out error);
That will allow you to avoid allocating the byte[] array.
