iOS ARKit: how to save ARFrame .capturedImage to file?

I'm trying to save the camera image from an ARFrame to a file. The image is given as a CVPixelBuffer. I have the following code, which produces an image with the wrong aspect ratio. I have tried different approaches, including a CIFilter to scale the image down, but I still cannot get a correct picture saved.
The camera resolution is 1920w x 1440h, and I want to create a 1x-scale image from the provided pixel buffer, but instead I am getting images of 4320 × 5763 or 4320 × 3786.
How do I save capturedImage (a CVPixelBuffer) from ARFrame to a file?
func prepareImage(arFrame: ARFrame) {
    let orientation = UIInterfaceOrientation.portrait
    let viewportSize = CGSize(width: 428, height: 869)
    // Map viewport coordinates back to the captured image.
    let transform = arFrame.displayTransform(for: orientation, viewportSize: viewportSize).inverted()
    let ciImage = CIImage(cvPixelBuffer: arFrame.capturedImage).transformed(by: transform)
    // This produces an image with the wrong aspect ratio.
    let uiImage = UIImage(ciImage: ciImage)
}

There are a couple of issues I encountered when trying to pull a screenshot from an ARFrame:
The CVPixelBuffer underlying the ARFrame is rotated to the left, which means you will need to rotate the image.
The frame itself is much wider than what you see in the camera. This is because AR needs a wider image to understand the scene beyond what the viewport shows.
E.g., this is an image taken from a raw ARFrame.
The following may help you handle those two steps easily:
https://github.com/Rightpoint/ARKit-CoreML/blob/master/Library/UIImage%2BUtilities.swift
The problem is that this code uses UIGraphicsBeginImageContextWithOptions, which is a heavyweight utility.
But if performance is not your main concern (say, for an AR experience), it may suffice.
If performance is an issue, you can work out the conversion mathematically, crop the needed region from the original picture, and rotate just that.
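For illustration, here is a minimal sketch of saving the raw captured image to disk, assuming portrait orientation (hence the .right rotation) and a JPEG destination; the URL and quality value are placeholders, and no viewport cropping is applied:
import ARKit
import CoreImage
import UIKit

// Sketch: rotate the captured buffer upright and write it to a file at the
// sensor's native resolution (e.g. 1920 x 1440 rotated to 1440 x 1920).
func saveCapturedImage(from frame: ARFrame, to url: URL) throws {
    // .right assumes the device is held in portrait.
    let ciImage = CIImage(cvPixelBuffer: frame.capturedImage).oriented(.right)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
    // 0.9 is an arbitrary compression quality.
    if let data = UIImage(cgImage: cgImage).jpegData(compressionQuality: 0.9) {
        try data.write(to: url)
    }
}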
I hope this assists you!

Related

How to get resized image from PDF vector asset?

I am developing a game with SpriteKit for iPhone devices only.
In the scene I have buttons which need to be positioned so that they fill the height of the screen:
There should be no gaps between the top and bottom edges, or between the buttons themselves. The width can vary. The buttons contain vector ornaments created in Adobe Illustrator.
To achieve this, I thought I could use a PDF asset set to "Single Scale" with "Preserve Vector Data" enabled. I assumed I could then get a slightly resized PNG out of the PDF and reposition the buttons based on the client area. However, it appears SKTexture doesn't take a PDF as its source: I get a blurry image.
The code for loading image:
let texture = SKTexture(imageNamed: "BuyButton")
let node = SKSpriteNode(texture: texture, size: CGSize(width: 81*4, height: 165*4)) // PDF size is 81x165
This works just fine with UIImageView, where you can set any size and get a crisp image. My guess is that there is some magic routine inside UIImageView that forces a new image to be rendered at the given size.
So my question is: is there a way to engage this magic routine for SpriteKit? (See the sketch below for one possible workaround.)
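One possible workaround, shown here as a sketch rather than a confirmed fix (it assumes the asset is named "BuyButton" and has "Preserve Vector Data" enabled), is to rasterize the UIImage at the exact target size first, then build the SKTexture from the resulting bitmap:
import SpriteKit
import UIKit

// Sketch: drawing a vector-backed UIImage at the desired size lets UIKit
// re-rasterize it from the vector data; the crisp bitmap is then wrapped
// in a texture for SpriteKit.
func makeButtonNode(size: CGSize) -> SKSpriteNode? {
    guard let image = UIImage(named: "BuyButton") else { return nil }
    let renderer = UIGraphicsImageRenderer(size: size)
    let resized = renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
    return SKSpriteNode(texture: SKTexture(image: resized), size: size)
}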

Swift images saved to the phone in-app are being rotated either 90 or 180 degrees [duplicate]

This question already has answers here: Swift PNG Image being saved with incorrect orientation (closed as a duplicate).
In a Swift 3.2 app, I allow the user to either take a photo or select one from the phone's gallery.
I show the image in the app and it's right side up. I save the image to the directory by converting the PNG to data using the UIImagePNGRepresentation() method.
Upon retrieving the image, it's upside down.
As far as I can tell, I am not rotating the image anywhere.
I had also faced the same issue. The fix that worked for me was to check whether the orientation of the image is upright; if it's not, redraw the image through a graphics context to get a correctly oriented copy.
You can write an extension to fix the image orientation as follows (source: here):
extension UIImage {
    func fixOrientation() -> UIImage {
        // Nothing to do if the image is already upright.
        if self.imageOrientation == .up {
            return self
        }
        // Redrawing into a context bakes the orientation into the pixel data.
        UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        let normalizedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return normalizedImage
    }
}
Then call the method once you take the picture with the camera:
chosenImage = chosenImage.fixOrientation()
Mobile phones save images captured with the camera in the orientation that corresponds to how the CCD sensor returns the image data, to make their life easier. This dates back to when phones had insufficient memory to rotate the image. Additionally, they save meta information recording which orientation the image was saved in, so it can be properly displayed.
I don't know how you display the image in your app, but evidently you use code that knows about this.
Before using UIImagePNGRepresentation you have to fix the rotation of your image. Rikesh Subedi's code should work, but there's plenty more on Stack Overflow.
Alternatively, you can save it as a JPEG. Unlike PNG, JPEG supports the meta information for storing the image's orientation.
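A quick sketch of the JPEG route, matching the Swift 3-era API used in the question (the file URL is a placeholder and 0.9 is an arbitrary quality):
// Sketch: UIImageJPEGRepresentation keeps the EXIF orientation tag,
// so the saved image displays upright without redrawing it first.
if let jpegData = UIImageJPEGRepresentation(chosenImage, 0.9) {
    try? jpegData.write(to: fileURL)  // fileURL is assumed to exist
}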

Resizing a Screenshot Taken With UIGraphicsBeginImageContextWithOptions

In an app I'm developing, the customer enters some personal information and signs a box on the screen. I then programmatically screenshot the full view and convert it to base64.
This works fine; however, due to the size of the image, it takes approximately 15 seconds to convert it to base64 and send it to an API server. The image at full quality is the same size as the iPad Air's resolution (1536x2048).
Once saved on the server as a PNG, the image weighs in at around 410 KB.
I want to resize the captured image down to 768x1024 (half of what it is) without losing clarity, which should save both time and storage space.
I'm currently taking the "screenshot" using the following function:
func screenCaptureAndGo() {
    // Create the UIImage
    UIGraphicsBeginImageContextWithOptions(view.frame.size, view.opaque, 0.0)
    view.layer.renderInContext(UIGraphicsGetCurrentContext())
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    imageData = UIImagePNGRepresentation(image)
    imageDataString = imageData.base64EncodedStringWithOptions(NSDataBase64EncodingOptions.Encoding64CharacterLineLength)
}
I left out the API server stuff as it's not really relevant.
I ended up just using the scale parameter of UIGraphicsBeginImageContextWithOptions. It didn't work the first time I tried it, which is why I didn't think to use it again, but after toying with it I got it to work.
I'm using 0.7 as the scale amount; the images now clock in at around 100 KB, and it takes 4 seconds to convert and send the base64 string.
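For reference, a sketch of this fix in current Swift (the snapshot flow is the same; only the 0.7 scale value comes from the answer above):
import UIKit

// Sketch: a fractional scale shrinks the rendered bitmap, so the PNG
// and its base64 string are smaller and faster to upload.
func screenCaptureAndGo(view: UIView) -> String? {
    UIGraphicsBeginImageContextWithOptions(view.frame.size, view.isOpaque, 0.7)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    view.layer.render(in: context)
    guard let image = UIGraphicsGetImageFromCurrentImageContext(),
          let data = image.pngData() else { return nil }
    return data.base64EncodedString(options: .lineLength64Characters)
}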

Cropping UIImage from photo library vs iPhone camera

I can crop images from the iPhone's photo library no problem like this:
CGRect topRect = CGRectMake(0, 0, image.size.width, image.size.height / 2);
CGImageRef topCroppedCGImageRef = CGImageCreateWithImageInRect(image.CGImage, topRect);
UIImage *croppedImage = [[UIImage alloc] initWithCGImage:topCroppedCGImageRef];
CGImageRelease(topCroppedCGImageRef);
However, this doesn't work when the image comes from the camera. Specifically, the cropped image is rotated and the cropped portion isn't what I expect. From reading around, this problem seems relatively common, but I've tried the various code fixes and it's not quite working (I still see rotation, unexpected cropping, and even distortion). So I'd like to actually understand why the above cropping doesn't just work for images coming from the camera.
Why doesn't the above cropping method work on images coming from the iPhone's camera?
As pointed out by the well-known post Resize a UIImage the right way, this is because the code above leaves out functionality such as EXIF orientation support, an absolute necessity when dealing with photographs taken by the iPhone's camera.
By default (for a picture taken in portrait), the image has an EXIF orientation flag of 6, which means the stored pixel data is rotated 90 degrees counterclockwise relative to how it should be displayed:
$ identify -format "%[EXIF:orientation]" myimage.jpg
6
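One way to make the crop orientation-aware is sketched below (the function name is illustrative): redraw the photo upright first, so that the crop rect applies in display space rather than raw sensor space, and only then crop.
import UIKit

// Sketch: bake the EXIF orientation into the pixels, then crop the top half.
func cropTopHalf(of image: UIImage) -> UIImage? {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    let upright = renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
    guard let cgImage = upright.cgImage else { return nil }
    // CGImage dimensions are in pixels, so the crop rect is built from them.
    let topRect = CGRect(x: 0, y: 0,
                         width: CGFloat(cgImage.width),
                         height: CGFloat(cgImage.height) / 2)
    guard let cropped = cgImage.cropping(to: topRect) else { return nil }
    return UIImage(cgImage: cropped, scale: upright.scale, orientation: .up)
}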

renderInContext: producing an image with blurry text

I am prerendering a composited image from a couple of different UIImageViews and UILabels to speed up scrolling in a large table view. Unfortunately, the main UILabel looks a little blurry compared to other UILabels on the same view.
The black letters "PLoS ONE" are in a UILabel, and they look much blurrier than the words "Medical" or "Medicine". The logo "PLoS one" is probably being blurred similarly, but it's less noticeable there than on the crisp text.
The entire magazine cover is a single UIImage assigned to a UIButton.
(source: karlbecker.com)
This is the code I'm using to draw the image. The magazineView is a rectangle that's 125 x 151 pixels.
I have tried different scaling qualities, but that has not changed anything, and it shouldn't, since the scaling shouldn't differ at all. The UIButton I'm assigning this image to is exactly the same size as the magazineView.
UIGraphicsBeginImageContextWithOptions(magazineView.bounds.size, NO, 0.0);
[magazineView.layer renderInContext:UIGraphicsGetCurrentContext()];
[coverImage release];
coverImage = UIGraphicsGetImageFromCurrentImageContext();
[coverImage retain];
UIGraphicsEndImageContext();
Any ideas why it's blurry?
When I begin an image context and render into it right away, does the rendering happen on an even pixel boundary, or do I need to manually set where that render occurs?
Make sure that your label coordinates are integer values. If they are not whole numbers, the labels will appear blurry.
I think you need to use CGRectIntegral. For more information, please see: What is the usage of CGRectIntegral? and the CGRectIntegral reference. A small sketch follows.
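For example (the label name is illustrative), snapping a frame to whole-point coordinates before rendering:
// Sketch: CGRect.integral is the Swift counterpart of CGRectIntegral; it
// rounds the frame outward to integer coordinates, avoiding half-pixel blur.
nameLabel.frame = nameLabel.frame.integral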
I came across the same problem today, where my content got pixelated when producing an image from UILabel text.
We use UIGraphicsBeginImageContextWithOptions() to configure the drawing environment for rendering into a bitmap. It accepts three parameters:
size: The size of the new bitmap context. This represents the size of the image returned by the UIGraphicsGetImageFromCurrentImageContext function.
opaque: A Boolean flag indicating whether the bitmap is opaque. If the opaque parameter is YES, the alpha channel is ignored and the bitmap is treated as fully opaque.
scale: The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device's main screen.
So we should use a proper scale factor with respect to the device display (1x, 2x, 3x) to fix this issue.
Swift 5 version:
UIGraphicsBeginImageContextWithOptions(frame.size, true, UIScreen.main.scale)
// Always balance the begin call, even on early exit.
defer { UIGraphicsEndImageContext() }
if let currentContext = UIGraphicsGetCurrentContext() {
    nameLabel.layer.render(in: currentContext)
    let nameImage = UIGraphicsGetImageFromCurrentImageContext()
    return nameImage
}
return nil
