In an app I'm developing, the customer enters some personal information and signs a box on the screen. I then programmatically screenshot the full view and convert it to base64.
This works fine; however, due to the size of the image it takes approximately 15 seconds to convert it to base64 and send it to the API server. At full quality the image is the same size as the iPad Air's resolution (1536x2048).
Once saved on the server as a PNG, the image weighs in at around 410kb.
I want to resize the captured image down to 768x1024 (half its current size) without losing clarity. I think this will save both time and storage space.
I'm currently taking the "screenshot" using the following function:
func screenCaptureAndGo() {
    // Create the UIImage
    UIGraphicsBeginImageContextWithOptions(view.frame.size, view.opaque, 0.0)
    view.layer.renderInContext(UIGraphicsGetCurrentContext())
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    imageData = UIImagePNGRepresentation(image)
    imageDataString = imageData.base64EncodedStringWithOptions(NSDataBase64EncodingOptions.Encoding64CharacterLineLength)
}
I left out the API server stuff as it's not really relevant.
I ended up just using the scale parameter. It didn't work the first time I tried it, which is why I didn't think to revisit it, but after toying with it I got it working.
I'm using 0.7 as the scale, and the images now clock in at around 100 KB, and it takes 4 seconds to convert and send the base64 string.
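For reference, here is a minimal sketch in current Swift syntax (the original above uses Swift 2 APIs) of where the 0.7 scale goes; captureBase64String(from:) is an illustrative name, not part of the original code:

import UIKit

// Passing 0.7 instead of 0.0 renders the bitmap at 70% of the view's point
// size rather than at the device's native Retina scale.
func captureBase64String(from view: UIView) -> String? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.7)
    defer { UIGraphicsEndImageContext() }

    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    view.layer.render(in: context)

    guard let image = UIGraphicsGetImageFromCurrentImageContext(),
          let pngData = image.pngData() else { return nil }

    // Base64-encode the PNG for the API payload, as in the original flow.
    return pngData.base64EncodedString(options: .lineLength64Characters)
}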
I'm trying to save the camera image from an ARFrame to a file. The image is provided as a CVPixelBuffer. I have the following code, which produces an image with the wrong aspect ratio. I've tried different approaches, including using a CIFilter to scale the image down, but still cannot get the correct picture saved.
The camera resolution is 1920w x 1440h, and I want to create a 1x-scale image from the provided pixel buffer, but instead I'm getting images of 4320 × 5763 or 4320 × 3786.
How do I save capturedImage (CVPixelBuffer) from ARFrame to file?
func prepareImage(arFrame: ARFrame) {
    let orientation = UIInterfaceOrientation.portrait
    let viewportSize = CGSize(width: 428, height: 869)
    let transform = arFrame.displayTransform(for: orientation, viewportSize: viewportSize).inverted()
    let ciImage = CIImage(cvPixelBuffer: arFrame.capturedImage).transformed(by: transform)
    let uiImage = UIImage(ciImage: ciImage)
}
There are a couple of issues I encountered when trying to pull a screenshot from an ARFrame:
The CVPixelBuffer underlying the ARFrame is rotated to the left, which means you will need to rotate the image.
The frame itself is much wider than what you see in the camera view. This is because ARKit needs a wider image so its scene understanding can see beyond what is displayed to you.
For example, this is what an image taken from a raw ARFrame looks like: wider than the viewport and rotated.
The following may help you handle those two steps easily:
https://github.com/Rightpoint/ARKit-CoreML/blob/master/Library/UIImage%2BUtilities.swift
The problem is that this code uses UIGraphicsBeginImageContextWithOptions, which is a heavy utility.
But if performance is not your main concern (say, for an AR experience), it might suffice.
If performance is an issue, you can work out the conversion mathematically, crop the needed region from the original picture, and rotate that; see the sketch after this answer.
Hope this will assist you!
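To illustrate that last point, here is a rough sketch of cropping and rotating the captured pixel buffer directly. This is my own example, not the linked library's code; the portraitSnapshot name, the center-crop strategy, and the .right orientation are assumptions that may need adjusting for your device orientation:

import ARKit
import CoreImage
import UIKit

func portraitSnapshot(from frame: ARFrame, viewportSize: CGSize) -> UIImage? {
    // The sensor image arrives rotated (landscape), so rotate it to portrait first.
    var image = CIImage(cvPixelBuffer: frame.capturedImage).oriented(.right)

    // Center-crop the rotated image to the viewport's aspect ratio, discarding
    // the extra field of view that ARKit captures around the visible area.
    let extent = image.extent
    let viewportAspect = viewportSize.width / viewportSize.height
    var cropRect = extent
    if extent.width / extent.height > viewportAspect {
        // Image is wider than the viewport: trim the sides equally.
        cropRect.size.width = extent.height * viewportAspect
        cropRect.origin.x += (extent.width - cropRect.width) / 2
    } else {
        // Image is taller than the viewport: trim top and bottom equally.
        cropRect.size.height = extent.width / viewportAspect
        cropRect.origin.y += (extent.height - cropRect.height) / 2
    }
    image = image.cropped(to: cropRect)

    // Render to a CGImage so the result can be written to a file.
    let context = CIContext()
    guard let cgImage = context.createCGImage(image, from: image.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}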
As storing pictures is going to be one of the more expensive Firebase features my app uses, I want to make sure I'm doing it efficiently.
The steps I'm taking are the following:
1. Resize the picture the user wants to upload to a width of 500 points (a point represents one pixel on non-Retina screens and two pixels on Retina screens).
2. Upload the data for the image in PNG format to Firebase Storage.
Here's my actual code:
let storageRef = FIRStorage.storage().reference().child("\(name).png")
if let uploadData = UIImagePNGRepresentation(profImage.resizeImage(targetSize: CGSize(width: 500, height: Int(500 * (profImage.size.height / profImage.size.width))))) {
    storageRef.put(uploadData, metadata: nil, completion: nil)
}
The photos are going to be a little less than the width of an iPhone screen when displayed to the user. Is the way I'm storing them efficient or is there a better way to format them?
Edit: After a bit more research I've found that JPGs are more efficient than PNGs, so I'll be switching to JPG since transparency isn't important for me. See my answer below for an example.
I've changed the image format from PNG to JPEG and found it saves a lot of space.
My code went from using UIImagePNGRepresentation to UIImageJPEGRepresentation with a compression factor of 1. Reducing the compression factor should save even more space.
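For illustration, a sketch of that change against the code from the question (the 0.8 compression factor is just an example value, and resizeImage(targetSize:) is the same custom helper used above):

let storageRef = FIRStorage.storage().reference().child("\(name).jpg")

// Same 500-point-wide resize as before, but encoded as JPEG instead of PNG.
let targetSize = CGSize(width: 500,
                        height: 500 * (profImage.size.height / profImage.size.width))
let resized = profImage.resizeImage(targetSize: targetSize)

// A compression factor below 1.0 trades a little quality for a much smaller upload.
if let uploadData = UIImageJPEGRepresentation(resized, 0.8) {
    storageRef.put(uploadData, metadata: nil, completion: nil)
}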
I am making a simple camera app where a user takes an image and then emails it. I have one problem: once the user takes an image (which always works), if it is portrait, the MFMailComposer auto-rotates it to landscape incorrectly, making everything sideways. How can I stop this behavior?
This occurs because PNGs do not store orientation information. Attach the photo to the email as a JPG instead and it will be oriented correctly!
Use this code to attach your image instead:
// 0.9 is the compression value: 0.0 is most compressed/lowest quality, 1.0 is least compressed/highest quality.
let data: NSData = UIImageJPEGRepresentation(image, 0.9)!
mailcomposer.addAttachmentData(data, mimeType: "image/jpeg", fileName: "image.jpg")
Source + more info: https://stackoverflow.com/a/34796890/5700898
What is the most efficient way to iterate over the entire camera roll, open every single photo and resize it?
My naive attempts to iterate over the asset library and get the defaultRepresentation results took about 1 second per 4 images (iPhone 5). Is there a way to do better?
I need the resized images to do some kind of processing.
Resizing full-resolution photos is a rather expensive operation, but you can use images that are already resized to screen resolution:
ALAsset *result = // .. do not forget to initialize it
ALAssetRepresentation *rawImage = [result defaultRepresentation];
UIImage *image = [UIImage imageWithCGImage:rawImage.fullScreenImage];
If you need a different resolution, you can still start from fullScreenImage, since it is much smaller than the original photo. From the documentation:
- (CGImageRef)fullScreenImage
Returns a CGImage of the representation that is appropriate for displaying full screen, or NULL if a CGImage representation could not be generated. The dimensions of the image are dependent on the device your application is running on; the dimensions may not, however, exactly match the dimensions of the screen. In iOS 5 and later, this method returns a fully cropped, rotated, and adjusted image—exactly as a user would see in Photos or in the image picker.
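As a side note (not part of the answer above), the modern Photos framework offers the same idea directly: ask PHImageManager for images already resized to a target size instead of decoding full-resolution photos yourself. A rough sketch, with processCameraRoll as an illustrative name:

import Photos
import UIKit

func processCameraRoll(targetSize: CGSize) {
    // Assumes photo library access has already been authorized.
    let assets = PHAsset.fetchAssets(with: .image, options: nil)
    let options = PHImageRequestOptions()
    options.deliveryMode = .fastFormat   // favor speed over exact quality
    options.isSynchronous = false

    assets.enumerateObjects { asset, _, _ in
        // The image manager hands back an image already scaled to targetSize.
        PHImageManager.default().requestImage(for: asset,
                                              targetSize: targetSize,
                                              contentMode: .aspectFit,
                                              options: options) { image, _ in
            guard let image = image else { return }
            // ... run the per-image processing here ...
            _ = image
        }
    }
}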
In my iOS app, I need the user to be able to send email with a GIF image attachment. I implemented this by using MFMailComposeViewController. If the file size of the GIF image is small, everything works OK. However, if the image size is large, iOS asks to reduce the image size. If the user accepts to reduce image size, the animation of GIF is gone. Actually, this is the same problem as asked here: Preventing MFMailComposeViewController from scaling animated GIFs
My understanding is that there is no way to prevent iOS from asking to reduce the size. Therefore, the solution I'm considering is as follows: I will pre-compress and generate a new GIF with a reduced file size before attaching it, so that it will always be small enough.
So my question is: Is there an image file size that is guaranteed not to trigger iOS's prompt to reduce the image size? For example, is there something like "the mail composer will never ask to reduce the image size if the attached image file is less than X KB", and what is X?
I have an answer to the threshold question, along with a method of reducing an image that reliably avoids Apple's query about scaling the image size down.
Some background:
In my App, I give the user the option of automatically scaling their image down to 1024x768 before E-Mailing it as a way of avoiding Apple's 'do you want to scale your image down?' query.
For a long time, I thought that this amount of descaling was sufficient. But I've discovered that if their image has enough fine detail in it, then even at 1024x768, it can still trigger Apple's query.
So, the code below is how I deal with this problem. Note that if getMinImgSizFlag is TRUE, I've already automatically descaled the image to 1024x768 elsewhere.
//...convert the UIImage into NSData, as the email controller requires, using
//   a default JPG compression value of 90%.
float jpgCompression = 0.9f;

imageAsNSData = UIImageJPEGRepresentation( [self camImage], jpgCompression );

if ( [gDB getMinImgSizFlag] == TRUE )
{
    //...if we are here, the user has opted to preemptively scale their
    //   image down to 1024x768 to avoid Apple's 'scale the image down?'
    //   query.
    //
    //   if so, then we will do a bit more testing, because even with
    //   the image scaled down, if it has a lot of fine detail, it may
    //   still exceed a critical size threshold and trigger the query.
    //
    //   it's been empirically determined that the critical size threshold
    //   falls between 391K and 394K bytes.
    //
    //   if we determine that the compressed image, at the default JPG
    //   compression, is still larger than the critical size threshold,
    //   then we will begin looping and increasing the JPG compression
    //   until the image size drops below 380K.
    //
    //   the approximately 10K between our limit, 380K, and Apple's
    //   critical size threshold allows for the possibility that Apple
    //   may be including the contribution of the E-Mail's text size in
    //   its threshold calculations.

    while ( [imageAsNSData length] > 380000 )
    {
        jpgCompression -= 0.05f;
        imageAsNSData = UIImageJPEGRepresentation( [self camImage], jpgCompression );
    }
}
That's it. I've tested this code and it reliably lets me avoid Apple's "do you want to scale your image down?" query before emailing.
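For anyone working in Swift, here is the same loop as a sketch (the 380 KB limit and the 0.05 step mirror the empirical values in the Objective-C code above; they are not an Apple-documented threshold):

import UIKit

func emailSizedJPEGData(for image: UIImage) -> Data? {
    var compression: CGFloat = 0.9
    guard var data = image.jpegData(compressionQuality: compression) else { return nil }

    // Keep increasing the JPEG compression until the payload drops below 380 KB,
    // staying safely under the empirically observed 391K-394K trigger point.
    while data.count > 380_000, compression > 0.05 {
        compression -= 0.05
        guard let smaller = image.jpegData(compressionQuality: compression) else { break }
        data = smaller
    }
    return data
}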