Rotated image gets distorted and blurry? - iOS

I use an image view:
@IBOutlet weak var imageView: UIImageView!
to paint an image and also another image which has been rotated. It turns out that the rotated image has very bad quality. In the following image the glasses in the yellow box are not rotated. The glasses in the red box are rotated by 4.39 degrees.
Here is the code I use to draw the glasses:
UIGraphicsBeginImageContext(imageView.image!.size)
imageView.image!.drawInRect(CGRectMake(0, 0, imageView.image!.size.width, imageView.image!.size.height))
var drawCtxt = UIGraphicsGetCurrentContext()
var glassImage = UIImage(named: "glasses.png")
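// paint the non-rotated glasses in the yellow square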
let yellowRect = CGRect(...)
CGContextSetStrokeColorWithColor(drawCtxt, UIColor.yellowColor().CGColor)
CGContextStrokeRect(drawCtxt, yellowRect)
CGContextDrawImage(drawCtxt, yellowRect, glassImage!.CGImage)
// paint the rotated glasses in the red square
CGContextSaveGState(drawCtxt)
CGContextTranslateCTM(drawCtxt, centerX, centerY)
CGContextRotateCTM(drawCtxt, 4.398 * CGFloat(M_PI) / 180)
var newRect = yellowRect
newRect.origin.x = -newRect.size.width / 2
newRect.origin.y = -newRect.size.height / 2
CGContextAddRect(drawCtxt, newRect)
CGContextSetStrokeColorWithColor(drawCtxt, UIColor.redColor().CGColor)
CGContextSetLineWidth(drawCtxt, 1)
// draw the red rect
CGContextStrokeRect(drawCtxt, newRect)
// draw the image
CGContextDrawImage(drawCtxt, newRect, glassImage!.CGImage)
CGContextRestoreGState(drawCtxt)
How can I rotate and paint the glasses without losing quality or getting a distorted image?

You should use UIGraphicsBeginImageContextWithOptions(CGSize size, BOOL opaque, CGFloat scale) to create the initial context. Passing in 0.0 as the scale will default to the scale of the current screen (e.g., 2.0 on an iPhone 6 and 3.0 on an iPhone 6 Plus).
See this note on UIGraphicsBeginImageContext():
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
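As a minimal sketch of that change (the rest of the drawing code from the question stays the same):
// Passing 0.0 for the scale makes the context use the screen's scale
// instead of the 1.0 that UIGraphicsBeginImageContext() uses.
UIGraphicsBeginImageContextWithOptions(imageView.image!.size, false, 0.0)
// ... draw the glasses exactly as before ...
let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()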

As others have pointed out, you need to set up your context to allow for retina displays.
Aside from that, you might want to use a source image that is larger than the target display size and scale it down. (2X the pixel dimensions of the target image would be a good place to start.)
Rotating to odd angles is destructive. The graphics engine has to map a grid of source pixels onto a different grid where they don't line up. Perfectly straight lines in the source image are no longer straight in the destination image, etc. The graphics engine has to do some interpolation, and a source pixel might be spread over several pixels, or less than a full pixel, in the destination image.
By providing a larger source image you give the graphics engine more information to work with. It can better slice and dice those source pixels into the destination grid of pixels.
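As a hedged illustration (modern Swift names, a hypothetical "glasses_large" asset at roughly twice the drawn size, and the newRect from the question's code), you would draw the oversized image down into the destination rect and ask Core Graphics for high-quality resampling:
// Sketch: draw a larger source image into the smaller destination rect.
let context = UIGraphicsGetCurrentContext()!
context.interpolationQuality = .high                 // favor quality over speed when resampling
let largeGlasses = UIImage(named: "glasses_large")!   // hypothetical 2x-size asset
largeGlasses.draw(in: newRect)                        // drawn under the current (rotated) CTM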

Related

iOS Vision: Drawing Detected Rectangles on Live Camera Preview Works on iPhone But Not on iPad

I'm using the iOS Vision framework to detect rectangles in real-time with the camera on an iPhone and it works well. The live preview displays a moving yellow rectangle around the detected shape.
However, when the same code is run on an iPad, the yellow rectangle tracks accurately along the X axis, but on the Y it is always slightly offset from the centre and it is not correctly scaled. The included image shows both devices tracking the same test square to better illustrate. In both cases, after I capture the image and plot the rectangle on the full camera frame (1920 x 1080), everything looks fine. It's just the live preview on the iPad that does not track properly.
I believe the issue is caused by the iPad screen's 4:3 aspect ratio. The iPhone's full-screen preview scales its 1920 x 1080 raw frame down to 414 x 718, where both the X and Y dimensions are scaled down by roughly the same factor (about 2.6). However, the iPad scales the 1920 x 1080 frame down to 810 x 964, which warps the image and causes the error along the Y axis.
A rough solution could be to set a preview layer size smaller than the full screen and have it be scaled down uniformly in a 16:9 ratio matching 1920 x 1080, but I would prefer to use the full screen. Has anyone here come across this issue and found a transform that can properly translate and scale the rect observation onto the iPad screen?
Example test images and code snippet are below.
let rect: VNRectangleObservation
//Camera preview (live) image dimensions
let previewWidth = self.previewLayer!.bounds.width
let previewHeight = self.previewLayer!.bounds.height
//Dimensions of raw captured frames from the camera (1920 x 1080)
let frameWidth = self.frame!.width
let frameHeight = self.frame!.height
//Transform to change detected rectangle from Vision framework's coordinate system to SwiftUI
let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -(previewHeight))
let scale = CGAffineTransform.identity.scaledBy(x: previewWidth, y: previewHeight)
//Convert the detected rectangle from normalized [0, 1] coordinates with bottom left origin to SwiftUI top left origin
//and scale the normalized rect to preview window dimensions.
var bounds: CGRect = rect.boundingBox.applying(scale).applying(transform)
//Rest of code draws the bounds CGRect in yellow onto the preview window, as shown in the image.
In case it helps anyone else, based on the info posted in Mr.SwiftOak's comment, I was able to resolve the problem with a combination of two changes. First, I changed the preview layer to scale as .resizeAspect rather than .resizeAspectFill, preserving the ratio of the raw frame in the preview. The preview no longer takes up the full iPad screen, but it is a lot simpler to overlay accurately.
Second, I drew the rectangles as an .overlay on the preview window, so that the drawing coordinates are relative to the origin of the image (top left) rather than to the view itself, whose origin (0, 0) is the top left of the entire screen.
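A minimal sketch of that first change (assuming previewLayer is the question's AVCaptureVideoPreviewLayer):
// Keep the 16:9 frame's aspect ratio in the preview instead of filling the screen.
self.previewLayer?.videoGravity = .resizeAspect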
To clarify how I've been drawing the rects, there are two parts:
Converting the detected rect bounding boxes into paths on CAShapeLayers:
let boxPath = CGPath(rect: bounds, transform: nil)
let boxShapeLayer = CAShapeLayer()
boxShapeLayer.path = boxPath
boxShapeLayer.fillColor = UIColor.clear.cgColor
boxShapeLayer.strokeColor = UIColor.yellow.cgColor
boxLayers.append(boxShapeLayer)
Appending the layers in the updateUIView of the preview UIViewRepresentable:
func updateUIView(_ uiView: VideoPreviewView, context: Context)
{
if let rectangles = self.viewModel.rectangleDrawings {
for rect in rectangles {
uiView.videoPreviewLayer.addSublayer(rect)
}
}
}
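One hedged usage note: because updateUIView runs on every update, you would probably also clear the layers added on the previous pass before appending the new ones, along these lines:
// Sketch: remove the boxes drawn on the previous update (boxLayers is the
// array the shape layers were appended to above).
boxLayers.forEach { $0.removeFromSuperlayer() }
boxLayers.removeAll()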

Swift: get image in aspect fill from original image [duplicate]

This question already has answers here:
How to crop a UIImageView to a new UIImage in 'aspect fill' mode?
The problem I am facing is that the image taken from the camera is larger than the one shown in the live view. I have the camera view set up as Aspect Fill.
The image I get from the camera is about 4000x3000, while the view that shows the live camera feed is 375x800 (full-screen iPhone X size). How do I transform/cut out the part of the captured image that matches what is shown in the live view, so I can further manipulate that image (draw over it)?
As far as I understand, the Aspect Fill property clips the part of the image that cannot be shown in the view. But that clipping does not start at X = 0 and Y = 0; it happens somewhere in the middle of the image. So how do I get that X and Y on the original image, so that I can crop out exactly that part?
I hope I explained it well enough.
EDIT:
To give more context, here are some code snippets to make the issue easier to understand.
Setting up my camera with the .resizeAspectFill gravity.
cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
cameraPreviewLayer?.frame = self.captureView.frame
self.captureView.layer.addSublayer(cameraPreviewLayer!)
which is displayed in the live view (captureView) that has the size of
375x818 (width: 375 and height: 818).
Then I get the image from that camera on button click and the size of that image is:
3024x4032 (width: 3024 and height: 4032)
So what I want to do is crop the camera image to match what is shown in the live view (captureView), which is set to Aspect Fill.
As you already state, the Aspect Fill content mode tries to fill up the live view, and you are also right that it crops a rectangle from the center (cropping top/bottom or left/right depending on the image size and the image view size).
For a generic solution there are two possible cases:
The image needs to be cropped along its height to fit the image view (the proportionally scaled height overflows the view).
The image needs to be cropped along its width to fit the image view (the proportionally scaled width overflows the view).
Given your size notation of 4000x3000 (height = 4000, width = 3000, a portrait image) and a drawing canvas of 375x800 (height = 375, width = 800), your cropping will be height-wise when the content mode is Aspect Fill.
So the crop starts at X = 0, but Y will be some positive value. Let's calculate that Y:
let proportionalHeight = 4000.0 / 3000.0 * 800.0   // ≈ 1066.67, the drawn height under Aspect Fill
let allowedHeight = 375.0                           // visible height of the canvas / live view
let topBottomCroppedHeight = proportionalHeight - allowedHeight   // ≈ 691.67 cropped in total
let croppedYPosition = topBottomCroppedHeight / 2                  // ≈ 345.83 cropped from the top
So there you have your Y value, and the height is simply the height of the canvas / live view where you are rendering. Please replace these values with your own variables.
If you are interested in how all the content modes work, you can dive in here; every contentMode supported by UIImageView is simulated there.
Happy coding.
UPDATE
One thing I forgot to mention: this calculated croppedYPosition applies to the smaller, proportionally scaled image. If you want to use this value on the original 4000x3000 image, you have to scale it back up to the original size as follows:
let originalYPosition = (croppedYPosition / proportionalHeight) * 4000
Use originalYPosition to crop from the original 4000x3000 image.
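Putting it together, a rough Swift sketch of the whole calculation (names are illustrative, originalImage is hypothetical, and it follows the answer's height/width notation with a portrait image cropped along its height):
// Sketch: find the aspect-fill crop of the original image that matches the live view.
let imageSize = CGSize(width: 3000, height: 4000)    // original camera image
let viewSize  = CGSize(width: 800,  height: 375)     // canvas / live view
let scale = viewSize.width / imageSize.width          // Aspect Fill scales by the width here
let proportionalHeight = imageSize.height * scale     // ≈ 1066.67
let croppedY = (proportionalHeight - viewSize.height) / 2   // in view points
let cropRect = CGRect(x: 0,
                      y: croppedY / scale,             // back in image pixels
                      width: imageSize.width,
                      height: viewSize.height / scale)
let croppedImage = originalImage.cgImage?.cropping(to: cropRect)   // originalImage is hypothetical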

Transforming a CAShapelayer to a specific size in swift

I'm trying to make a CAShapeLayer with a bezier path in it expand with a view (which is animated, of course). I am aware that I could change the size of the path with CATransform3DMakeScale(, , ), but this won't allow me to make the path an exact size (in points).
Does anybody know how to do this?
You would do this using good old fashioned math.
Simple solution
To phrase your question differently: you have something of one size (the path/shape layer), and you want to scale it so that it becomes another size.
To know how much you want to scale along X and Y (separately) you divide the size you want to fit to by the current size. You can get the bounding box of the path using
let boundingBox = CGPathGetBoundingBox(path)
I'm assuming that you already have some size that you want to scale it to (here I'm calling mine containingSize).
Using those two, you can calculate the two scale factors by dividing the dimension you are scaling to by the dimension you are scaling from:
let xScaleFactor = containingSize.width / boundingBox.width
let yScaleFactor = containingSize.height / boundingBox.height
Using those, you can create the required scale transform
let scaleTransform = CATransform3DMakeScale(xScaleFactor, yScaleFactor, 1.0)
Scaling the shape layer using those two scale factors will scale it to fill the container size. If the container size has the same aspect ratio as the path, everything will look as expected; if not, the scaled layer will appear stretched.
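Applying it is then a single assignment (a sketch, using the yourShapeLayer name from further down):
// Sketch: apply the fill-scaling transform to the layer itself.
yourShapeLayer.transform = scaleTransform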
Fitting instead of filling
This problem (unless it's what you want) can be solved by calculating a uniform scale factor, the smaller of the two, so that the scaled path fits the container instead of filling it.
We do this by finding out which scale factor is the more constrained one, and then applying it to both X and Y:
let boundingBoxAspectRatio = boundingBox.width/boundingBox.height
let viewAspectRatio = containingSize.width/containingSize.height
let scaleFactor: CGFloat
if (boundingBoxAspectRatio > viewAspectRatio) {
// Width is limiting factor
scaleFactor = containingSize.width/boundingBox.width
} else {
// Height is limiting factor
scaleFactor = containingSize.height/boundingBox.height
}
let scaleTransform = CATransform3DMakeScale(scaleFactor, scaleFactor, 1.0)
This will scale the path without changing its aspect ratio.
Scaling the layer or scaling the path?
You might also have noticed that as the shape layer was scaled, the line width scaled as well, as if it were an image. There is a difference between scaling the layer and scaling the path.
If you only want it to appear as the path of the shape layer is scaled, then you should scale the path instead of the layer. You can do this by creating a new path that is transformed, and using that path with your shape layer. Note that the scale factor is still calculated using the bounding box of the unscaled path.
var affineTransform = CGAffineTransformMakeScale(scaleFactor, scaleFactor)
let transformedPath = CGPathCreateCopyByTransformingPath(path, &affineTransform)
yourShapeLayer.path = transformedPath
This will scale up the path, without affecting the line width, etc of the shape layer.
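For reference, the same path-scaling idea with current Swift naming, as a rough sketch (assuming the path, scaleFactor and yourShapeLayer from above):
// Sketch: scale the path itself so the stroke width is unaffected.
var affineTransform = CGAffineTransform(scaleX: scaleFactor, y: scaleFactor)
if let transformedPath = path.copy(using: &affineTransform) {
    yourShapeLayer.path = transformedPath
}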

SKCropNode making masked image disappear entirely

I have an image I want to partially mask (wallSprite), an image to act as a mask over it (wallMaskBox), and a node to hold both (wallCropNode). When I simply add both images as children of wallCropNode, both images display correctly:
var wallSprite = SKSpriteNode(imageNamed: "wall.png")
var wallCropNode = SKCropNode()
var wallMaskBox = SKSpriteNode(imageNamed: "blacksquaretiny.png")
wallMaskBox.zPosition = 100
wallCropNode.addChild(wallSprite)
wallCropNode.addChild(wallMaskBox)
gameplayContainerNode.addChild(wallCropNode)
But when I set the mask image as a maskNode property of the crop node:
var wallSprite = SKSpriteNode(imageNamed: "wall.png")
var wallCropNode = SKCropNode()
var wallMaskBox = SKSpriteNode(imageNamed: "blacksquaretiny.png")
wallMaskBox.zPosition = 100
wallCropNode.addChild(wallSprite)
wallCropNode.maskNode = wallMaskBox
gameplayContainerNode.addChild(wallCropNode)
the wallSprite image disappears entirely, instead of being partly cropped. Any ideas?
The issue is your black square image is completely opaque. Some (or all) of its pixels should be transparent (i.e., alpha = 0). The pixels that correspond to the mask node's transparent pixels will be masked out (i.e., not rendered) in the cropped node. To demonstrate this, I used your code to create the following.
Here's the original image:
Here's the mask image that I used for the maskNode. Note that the white regions are transparent (i.e., alpha = 0). From Apple's documentation,
When rendering its children, each pixel is verified against the corresponding pixel in the mask. If the pixel in the mask has an alpha value of less than 0.05, the image pixel is masked out. Any pixel not rendered by the mask node is automatically masked out.
and here's the cropped node. I took a screenshot of the scene from the iPhone 6 simulator.

Scaling sprites (not textures) for target viewport size/device in MonoGame

When you have to display a series of visual components (sprites) within a game, each with a literal height and width that needs to be relative to the height and width of the viewport (not necessarily its aspect ratio) of the target device:
Is there a scaling class to help come up with scaling ratio in a dynamic fashion based on current device viewport size?
Will I need to roll my own scaling ratio algorithm?
Any cross platform issues I should be aware of?
This is not a question relating to the loading of assets based on target device nor is it a question of how to perform the scaling of the sprite (which is described here: http://msdn.microsoft.com/en-us/library/bb194913.aspx), rather a question of how to determine the scale of sprites based on view port size.
You can always create your own implementation of scaling.
For example, the default target viewport dimensions are:
const int defaultWidth = 1280, defaultHeight = 720;
And say your current screen dimensions are 800×600. That gives you (let's use a Vector2 instead of two floats):
int currentWidth = GraphicsDevice.Viewport.Width,
currentHeight = GraphicsDevice.Viewport.Height;
Vector2 scale = new Vector2((float)currentWidth / defaultWidth,
(float)currentHeight / defaultHeight); // cast to float, otherwise integer division yields {0; 0}
This gives you a {0.625; 0.83333}. You can now use this in a handy SpriteBatch.Draw() overload that takes a Vector2 scaling variable:
public void Draw (
Texture2D texture,
Vector2 position,
Nullable<Rectangle> sourceRectangle,
Color color,
float rotation,
Vector2 origin,
Vector2 scale,
SpriteEffects effects,
float layerDepth
)
Alternatively, you can draw all your stuff to a RenderTarget2D and copy the resulting image from there to a stretched texture on the main screen, but that will still require the above SpriteBatch.Draw() overload, though it might save you time if you have lots of draw calls.
Another Option to generate the scale would be to leverage:
var scaleMatrix = Matrix.CreateScale(
(float)GraphicsDevice.Viewport.Width / View.Width,
(float)GraphicsDevice.Viewport.Width / View.Width, 1f);
http://msdn.microsoft.com/en-gb/library/bb195692.aspx.
But this did not meet my needs, as I would then have to roll my own transform to map touch input location to the 'transformed' sprites (which respond to user touch input by knowing their own position and size).
In the end I used a percentage based approach.
I basically got the viewport height and width...
GraphicsDevice.Viewport.Width
GraphicsDevice.Viewport.Height
...then calculated the height and width of my sprites (note: as mentioned in the question, they take a literal height and width) based on their size relative to the screen, using percentages.
// I want the button's height and width to be 20% of the viewport width
float x = GraphicsDevice.Viewport.Width * 0.2f; // 20% of screen width
float y = GraphicsDevice.Viewport.Width * 0.2f; // 20% of screen width (square button)
var btnSize = new Vector2(x, y);
var button = new GameButton(btnSize);
Then, once I have the size of the button, I can calculate the position on the screen at which to render it, based on the button's size and the available viewport size, again working with relative positions expressed as percentages.
