Force PDF page to fill context - iOS

In my project, I need to generate an image of a PDF page, rendered with Core Graphics.
I managed to create a context with the size of the image I want (destinationSize: CGSize), but when I use the CGPDFPageGetDrawingTransform function, it only scales the page down; it won't scale the context up to make the page fill the destination rect.
Here is an extract of the code I have right now in my project:
UIGraphicsBeginImageContextWithOptions(destinationSize, true, 0)
defer {
    UIGraphicsEndImageContext()
}
let ctx = UIGraphicsGetCurrentContext()!
// Invert the y axis (Core Graphics and UIKit axes are different)
CGContextTranslateCTM(ctx, 0, destinationSize.height)
CGContextScaleCTM(ctx, 1, -1)
let transform = CGPDFPageGetDrawingTransform(pageRef, .CropBox, CGRect(origin: CGPointZero, size: destinationSize), 0, true)
CGContextConcatCTM(ctx, transform)
// TODO: force the page to fill the dest rect when it's bigger than the CropBox size
CGContextDrawPDFPage(ctx, pageRef)
I tried to scale my context by a scale factor, replacing the TODO with this:
let contextScale: CGFloat = (pageRect.width < expectedWidth) ? expectedWidth / pageRect.width : 1
CGContextScaleCTM(ctx, contextScale, contextScale)
but it created an incorrect offset in the drawing, and I'm kind of lost with Core Graphics transformations.
What would be the correct way to rescale the context to make sure the PDF page draws to fill the context size?

This is the solution I came up with.
As far as I know, this works for any PDF document page (tested with many rotations, cropbox sizes and origins).
func transformationForPage(_ pageNumber: Int, targetSize: CGSize) -> CGAffineTransform {
    let pageRef = getPage(pageNumber)
    let rotation = getPageRotationInteger(pageNumber)
    let cropbox = cropboxForPage(pageNumber)
    var transform = pageRef!.getDrawingTransform(.cropBox, rect: CGRect(origin: CGPoint.zero, size: targetSize), rotate: 0, preserveAspectRatio: true)
    // Change the scale so the page completely fills the destination size
    if cropbox.width < targetSize.width {
        let contextScale = targetSize.width / cropbox.width
        transform = transform.scaledBy(x: contextScale, y: contextScale)
        transform.tx = -(cropbox.origin.x * transform.a + cropbox.origin.y * transform.b)
        transform.ty = -(cropbox.origin.x * transform.c + cropbox.origin.y * transform.d)
        // Rotation handling
        if rotation == 180 || rotation == 270 {
            transform.tx += targetSize.width
        }
        if rotation == 90 || rotation == 180 {
            transform.ty += targetSize.height
        }
    }
    return transform
}
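For completeness, here is a sketch of how such a transform can be used to rasterize the page into a UIImage (imageForPage is a hypothetical helper name; it assumes getPage returns a CGPDFPage?):
func imageForPage(_ pageNumber: Int, targetSize: CGSize) -> UIImage? {
    guard let page = getPage(pageNumber) else { return nil }
    UIGraphicsBeginImageContextWithOptions(targetSize, true, 0)
    defer { UIGraphicsEndImageContext() }
    guard let ctx = UIGraphicsGetCurrentContext() else { return nil }
    // Flip the y axis (Core Graphics and UIKit axes are different)
    ctx.translateBy(x: 0, y: targetSize.height)
    ctx.scaleBy(x: 1, y: -1)
    // Apply the fill-to-target transform computed above
    ctx.concatenate(transformationForPage(pageNumber, targetSize: targetSize))
    ctx.drawPDFPage(page)
    return UIGraphicsGetImageFromCurrentImageContext()
}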

Related

How to translate X-axis correctly from VNFaceObservation boundingBox (Vision + ARKit)

I'm using both ARKit & Vision, following along with Apple's sample project, "Using Vision in Real Time with ARKit". So I am not setting up my camera, as ARKit handles that for me.
Using Vision's VNDetectFaceRectanglesRequest, I'm able to get back a collection of VNFaceObservation objects.
Following various guides online, I'm able to transform the VNFaceObservation's boundingBox to one that I can use on my ViewController's UIView.
The Y-axis is correct when placed on my UIView in ARKit, but the X-axis is completely off & inaccurate.
// face is an instance of VNFaceObservation
let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -view.frame.height)
let translate = CGAffineTransform.identity.scaledBy(x: view.frame.width, y: view.frame.height)
let rect = face.boundingBox.applying(translate).applying(transform)
What is the correct way to display the boundingBox on the screen (in ARKit/UIKit) so that the X & Y axes match up correctly to the detected face rectangle? I can't use self.cameraLayer.layerRectConverted(fromMetadataOutputRect: transformedRect) since I'm not using AVCaptureSession.
Update: Digging into this further, the camera's image is 1920 x 1440. Most of it is not displayed on ARKit's screen space. The iPhone XS screen is 375 x 812 points.
After I get Vision's observation boundingBox, I transform it to fit the current view (375 x 812). This isn't working, since the actual width seems to be 500 (the left & right sides are off-screen). How do I build a CGAffineTransform for the CGRect bounding box (seemingly 500x812, a total guess) so it maps correctly into the 375x812 view?
The key piece missing here is ARFrame's displayTransform(for:viewportSize:). You can read the documentation for it here.
This function will generate the appropriate transform for a given frame and viewport size (the CGRect of the view you're displaying the image and bounding box in).
func visionTransform(frame: ARFrame, viewport: CGRect) -> CGAffineTransform {
    let orientation = UIApplication.shared.statusBarOrientation
    let transform = frame.displayTransform(for: orientation,
                                           viewportSize: viewport.size)
    let scale = CGAffineTransform(scaleX: viewport.width,
                                  y: viewport.height)
    var t = CGAffineTransform.identity
    if orientation.isPortrait {
        t = CGAffineTransform(scaleX: -1, y: 1)
        t = t.translatedBy(x: -viewport.width, y: 0)
    } else if orientation.isLandscape {
        t = CGAffineTransform(scaleX: 1, y: -1)
        t = t.translatedBy(x: 0, y: -viewport.height)
    }
    return transform.concatenating(scale).concatenating(t)
}
You can then use this like so:
let transform = visionTransform(frame: yourARFrame, viewport: yourViewport)
let rect = face.boundingBox.applying(transform)

Optimize retrieval of all rendered pixels' data in a UIView

I need to perform some statistics and pixel-by-pixel analysis of a UIView containing subviews, sublayers and a mask, in a small iOS/Swift 3 project.
For the moment I came up with the following:
private func computeStatistics() {
    // constants
    let width: Int = Int(self.bounds.size.width)
    let height: Int = Int(self.bounds.size.height)
    // color extractor
    let pixel = UnsafeMutablePointer<CUnsignedChar>.allocate(capacity: 4)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
    for x in 0..<width {
        for y in 0..<height {
            let context = CGContext(data: pixel, width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 4, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
            context!.translateBy(x: -CGFloat(x), y: -CGFloat(y))
            layer.render(in: context!)
            // analyse the pixel here
            // e.g. totalRed += pixel[0]
        }
    }
    pixel.deallocate(capacity: 4)
}
It's working; the problem is that on a full-screen view, even on an iPhone 4, this would mean 150,000 instantiations of the context and as many expensive renders, which besides being very slow must also have a deallocation issue, since it saturates my memory (even in the simulator).
I tried analyzing only a fraction of the pixels:
let definition: Int = width / 10
for x in 0..<width where x % definition == 0 {
    ...
}
But besides still taking up to 10 seconds even on a simulated iPhone 7, it's a very poor solution.
Is it possible to avoid re-rendering and translating the context every time?
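One way to avoid that is to render the layer once into a single full-size bitmap context and then walk its backing buffer directly. A minimal sketch of that approach (computeStatisticsFast is a hypothetical name; it assumes the same RGBA premultipliedLast layout as above):
private func computeStatisticsFast() {
    let width = Int(bounds.size.width)
    let height = Int(bounds.size.height)
    let bytesPerRow = width * 4
    var buffer = [UInt8](repeating: 0, count: bytesPerRow * height)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
    buffer.withUnsafeMutableBytes { raw in
        guard let context = CGContext(data: raw.baseAddress, width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                      space: colorSpace, bitmapInfo: bitmapInfo) else { return }
        // One render for the whole layer instead of one render per pixel
        layer.render(in: context)
    }
    var totalRed = 0
    for y in 0..<height {
        for x in 0..<width {
            let offset = y * bytesPerRow + x * 4
            totalRed += Int(buffer[offset])   // RGBA: offset + 0 is the red byte
        }
    }
    // analyse the buffer here
}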

Applying 3D Transform After Pinching It

I am trying to apply a 3D transform after scaling an image with a pinch gesture recognizer. When the 3D transform is applied, the image gets rescaled to its default size (i.e. the size before the pinch). How can I stop the image view from going back to its previous state (i.e. before the pinch)?
self.transform = CATransform3DIdentity
self.transform.m34 = 1.0 / 500.0
self.transform = CATransform3DRotate(self.transform, CGFloat(145 * M_PI / 180), 0, 1, 0)
viewToDelete.layer.transform = self.transform

func handlePinch(_ nizer: UIPinchGestureRecognizer) {
    nizer.view!.transform = nizer.view!.transform.scaledBy(x: nizer.scale, y: nizer.scale)
    nizer.scale = 1
}
In the image, the leftmost view is the newly created view; the view next to it has been scaled using the pinch; the third one, below, is after the 3D transform.
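One possible fix, as a sketch (assuming viewToDelete is the pinched view): build the 3D transform on top of the layer's current transform, which already contains the accumulated pinch scale, instead of starting from CATransform3DIdentity.
// Start from the current transform so the pinch scale is preserved
var t = viewToDelete.layer.transform
t.m34 = 1.0 / 500.0
t = CATransform3DRotate(t, CGFloat(145 * Double.pi / 180), 0, 1, 0)
viewToDelete.layer.transform = t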

Resizing an SKSpriteNode without losing quality

I have an SKSpriteNode:
private var btnSound = SKSpriteNode(imageNamed: "btnSound")
Now I made this image in Adobe Illustrator with a size of 2048x2048 pixels (overkill, really), so it has good resolution. My problem is that when I set the size of the image, the lines in it go serrated or jagged, not smooth.
This is how I size it:
btnSound.position = CGPoint(x: self.frame.width * 1 / 5 , y: self.frame.height * (5.2 / 8))
btnSound.size.width = self.frame.width * 1 / 7
btnSound.size.height = btnSound.size.width
btnSound.zPosition = 1
self.addChild(btnSound)
This is the image in Illustrator (screenshot), and this is the image in the app (screenshot).
Things I have tried:
Making the image PDF
Making the image PNG
Making the PNG 72 DPI, making it 300 DPI
Running on simulator / device (iPhone 7)
btnSound.setScale(preDetermineScale)
Using the following function, though I am not familiar with the UIGraphicsBeginImageContext method; the image just comes out blurry with this. Here's the code and the resulting image:
func resizeImage(image: UIImage, newWidth: CGFloat) -> UIImage? {
    let scale = newWidth / image.size.width
    let newHeight = image.size.height * scale
    // Note: UIGraphicsBeginImageContext renders at scale 1.0 (not the screen
    // scale), which is one reason the result looks blurry on Retina screens
    UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
    image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
func setup() {
    let btnSoundImg = UIImage(named: "btnSound")
    let resizeBtnSoundImage = resizeImage(image: btnSoundImg!, newWidth: self.frame.width * 1 / 7)
    let btnSoundTexture = SKTexture(image: resizeBtnSoundImage!)
    btnSound.texture = btnSoundTexture
    btnSound.position = CGPoint(x: self.frame.width * 1 / 5, y: self.frame.height * (5.2 / 8))
    btnSound.size.width = self.frame.width * 1 / 7
    btnSound.size.height = btnSound.size.width
    btnSound.zPosition = 1
    self.addChild(btnSound)
}
I am self-taught and haven't done a whole lot of programming, so I'd love to learn how to do this correctly; I'm only finding solutions for resizing UIImageViews.
Another thought I had: maybe it shouldn't be a sprite node, since it's only used for a button?
First up, there are some basic rules to follow to get the best results.
Only scale by factors of 2, i.e. 50%, 25%, 12.5%, 6.25%, etc.
This way, any four pixels in your original image become one pixel in your scaled image, for each step down in scale size.
Make your original image a square whose sides are a power of two: 128x128, 256x256, 512x512, etc. You've covered this already with your 2048x2048 sizing.
Turn on mipmapping. This is off by default in SpriteKit, so you have to switch it on: https://developer.apple.com/reference/spritekit/sktexture/1519960-usesmipmaps
Play with the different filtering modes to get the best reduction of noise and banding in your image: https://developer.apple.com/reference/spritekit/sktexture/1519659-filteringmode Hint: linear will probably be better (see the sketch below).
As has always been the case, judicious use of Photoshop for manual scaling will give you the best results (and the least flexibility).
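Putting the mipmapping and filtering advice into code, a minimal sketch (assuming the 2048x2048 art is named "btnSound" and this runs during scene setup, as in the question):
// Build the texture once with mipmapping enabled,
// then size the node the same way the question does
let texture = SKTexture(imageNamed: "btnSound")
texture.usesMipmaps = true          // off by default in SpriteKit
texture.filteringMode = .linear     // usually smoother than .nearest when scaling down
let btnSound = SKSpriteNode(texture: texture)
btnSound.size = CGSize(width: frame.width / 7, height: frame.width / 7)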

Pixel Gap Between CGRect

I'm drawing bar graphs, and I have several stacked CGRects that are directly on top of each other (i.e. one rect's minY is the previous rect's maxY). However, there are still semi-transparent gaps between the rects. Is there any way to fix this? I've found that this also happens when drawing adjacent, touching arcs.
Here's a screenshot of what I mean:
By zooming in, I've confirmed that this isn't just an optical illusion like one would find between adjacent red and blue rects. I would appreciate any input.
var upToNowSegmentTotal: CGFloat = 0
for k in 0..<data[i].bars[j].segments.count {
    var segmentRect: CGRect = CGRect()
    if barDirection == "vertical" {
        let a: CGFloat = translateY(upToNowSegmentTotal)
        let b: CGFloat = translateY(upToNowSegmentTotal + data[i].bars[j].segments[k].length)
        upToNowSegmentTotal += data[i].bars[j].segments[k].length
        var rectY: CGFloat
        if a > b {
            rectY = b
        } else {
            rectY = a
        }
        segmentRect = CGRect(
            x: barWidthPosition,
            y: rectY,
            width: barWidthAbsolute,
            height: abs(a - b)
        )
    }
}
Ignore the stuff about the width of the bars. Here's the translateY function; basically, it translates coordinates from the graphing window into the x/y coordinates that are drawn. Remember that because the window/graphing area does not change between drawn rects, the same y input will always produce the same result.
private func translateY(y: CGFloat) -> CGFloat {
    if barsAreReversed {
        return graphingArea.minY + graphingArea.height * (y - graphingWindow.startValue) / (graphingWindow.length)
    } else {
        return graphingArea.maxY - graphingArea.height * (y - graphingWindow.startValue) / (graphingWindow.length)
    }
}
EDIT 2:
Here's a simplified version of my code that shows the problem:
override func drawRect(rect: CGRect) {
    let rect1: CGRect = CGRect(
        x: 0,
        y: 0,
        width: 40,
        height: 33.7
    )
    let rect2: CGRect = CGRect(
        x: 0,
        y: rect1.height,
        width: 40,
        height: 33.5
    )
    let context: CGContextRef = UIGraphicsGetCurrentContext()!
    CGContextSetFillColorWithColor(context, UIColor(red: 1 / 255, green: 29 / 255, blue: 29 / 255, alpha: 1).CGColor)
    CGContextAddRect(context, rect1)
    CGContextFillRect(context, rect1)
    CGContextSetFillColorWithColor(context, UIColor(red: 9 / 255, green: 47 / 255, blue: 46 / 255, alpha: 1).CGColor)
    CGContextAddRect(context, rect2)
    CGContextFillRect(context, rect2)
}
It produces this:
I suspect that in this particular case, the rects you are filling are not integral, i.e. they may have origins/heights that are by default rendered with slightly transparent pixels (anti-aliasing). You could avoid this by rounding your Y-axis translation:
private func translateY(y: CGFloat) -> CGFloat {
    if barsAreReversed {
        return round(graphingArea.minY + graphingArea.height * (y - graphingWindow.startValue) / (graphingWindow.length))
    } else {
        return round(graphingArea.maxY - graphingArea.height * (y - graphingWindow.startValue) / (graphingWindow.length))
    }
}
With arcs and other shapes it is not as easy; however, you could try to get rid of it by leaving a bit of overlap between shapes. Of course, as pointed out by matt, you could simply turn anti-aliasing off, in which case these transparent "half-pixels" will all be rendered as if they were fully within the rect.
This is likely happening because the rectangle coordinates you are using to draw shapes are fractional values. As a result, Core Graphics performs antialiasing at the edges of those rectangles when your coordinates land between pixel boundaries.
You could solve this by simply rounding the coordinates of the rectangles before drawing. You can use the CGRectIntegral function, which performs this kind of rounding, for example:
CGContextFillRect(context, CGRectIntegral(rect1))
It's antialiasing. I can prevent this phenomenon by using your exact same code but drawing in a CGContext in which we have first called CGContextSetAllowsAntialiasing(context, false). Here it is without that call:
And here it is with that call:
But, as others have said, we can get the same result by changing your 33.7 and 33.5 to 40, so that we come down on pixel boundaries.
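For reference, a sketch of where that call would sit in the question's drawRect (same Swift 2 style as the question's code):
override func drawRect(rect: CGRect) {
    let context = UIGraphicsGetCurrentContext()!
    // Fractional-height rects now render without semi-transparent edge pixels
    CGContextSetAllowsAntialiasing(context, false)
    // ... set fill colors and fill rect1 / rect2 exactly as above ...
}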
