I am working on a project where I have to plot points in specific regions on an image that represents the human body. In Interface Builder, I have set up a container UIView, which takes up most of the vertical center of the main view. In that container view, I placed a UIImageView and set the graphic in IB. The graphic is much larger than both the UIImageView and the container UIView; more specifically, it's taller. The contentMode of the UIImageView is set to Aspect Fit because I don't want the image to appear larger than the container.
The code creates several CGRect instances which are regions where user taps mean something. When the user taps on the container view, code is used to determine if the point is within one of the regions and if it is, a dot is drawn in the center of that region.
The problem is that when I run the app on certain simulators, the region rectangles are not in the right place on the image. For example, when I run the app on an iPhone X, the rectangle region that is in place for the head looks fine. When I run the app on an iPhone XR, the rectangle region is off to the left of the head.
I am using coordinates to define the region rectangles that are based on where, for example, the human head is in the image. I feel like this is not the right way to do it, since the Aspect Fit content mode is most likely causing the image to be scaled to maintain its aspect ratio.
Bottom line is that I want each rectangle to be in the right place, at the right size, no matter how the image scales. Not sure if how I am doing it makes sense, so I hope some suggestions come in that offer a better way to do this.
Update 1: The UIImageView is pinned to the surrounding UIView, so its width and height are as big as the container. Since the image is skinnier than the UIImageView, the image appears centered in it. In the attached images, the purple background is the UIImageView showing the topmost UIView's background color.
Update 2: I checked the scale for both width and height and found they are different. The width scale factor is 1.36565656565 and the height scale factor is 2.104. I tried Sweeper's formulas with both scale factors and had no luck.
You just need to do some maths.
On the original image, identify the region the user can tap. Note down its x, y, w, h, relative to the image.
Figure out how much the image shrank in the image view. Since you said the image is taller than the image view, the image underwent a scale factor of imageViewHeight / imageHeight. We'll now refer to this as scaleFactor.
The region's Y coordinate must have been scaled down by the same factor, so multiply regionY by scaleFactor to get newY.
The region's width and height will do the same thing, so multiply them by scaleFactor and get newWidth and newHeight.
The X coordinate of the region, relative to the image view, is a bit tricky. You need to account for the empty space the image view creates on each side of the scaled-down image. This emptySpace is calculated by (imageViewWidth - imageWidth * scaleFactor) / 2. Then, to calculate the region's new X coordinate relative to the image view, you do emptySpace + X * scaleFactor.
Now the rect (newX, newY, newWidth, newHeight) is the region relative to the image view that the user can tap!
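Putting those steps together, here is a minimal Swift sketch of the conversion (the function and parameter names are illustrative, not from the question's project):

import UIKit

// Converts a rect defined in the original image's coordinates into the
// equivalent rect in the image view's coordinates, assuming .scaleAspectFit
// and an image that is proportionally taller than the image view.
func regionInImageView(region: CGRect, imageSize: CGSize, imageViewSize: CGSize) -> CGRect {
    // The image is fitted by height, so the scale factor comes from the heights.
    let scaleFactor = imageViewSize.height / imageSize.height
    // Horizontal letterboxing: the empty space on each side of the scaled image.
    let emptySpace = (imageViewSize.width - imageSize.width * scaleFactor) / 2
    return CGRect(x: emptySpace + region.origin.x * scaleFactor,
                  y: region.origin.y * scaleFactor,
                  width: region.width * scaleFactor,
                  height: region.height * scaleFactor)
}

Hit-testing a tap then just means checking regionInImageView(...).contains(tapPoint).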
I made this code to place a rectangle in exactly the same position on every device.
To check whether it's an X-style phone, I do it in viewDidLoad, set the constraints to what I want, and add the tab bar size.
@IBOutlet weak var tabMenu: NSLayoutConstraint!
@IBOutlet weak var topConstraint: NSLayoutConstraint!
@IBOutlet weak var bottomConstraint: NSLayoutConstraint!
@IBOutlet weak var backgroundImg: UIImageView!
@IBOutlet weak var rectangleImg: UIImageView!

override func viewDidLoad() {
    super.viewDidLoad()
    // Check if it's an iPhone X-style device (one with a bottom safe-area inset)
    if #available(iOS 11, *) {
        let safeArea = UIApplication.shared.delegate?.window??.safeAreaInsets
        // On an X-style phone, pin to the safe area instead of the superview
        guard let safe = safeArea else { return }
        if safe.bottom > 0 {
            topConstraint.constant = safe.top
            bottomConstraint.constant = safe.bottom + tabMenu.constant
        }
    }
}

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    // Here I set the rectangle to 36% of the width and height (change to what you want)
    rectangleImg.frame = CGRect(x: 0, y: 0,
                                width: backgroundImg.frame.width * 0.36,
                                height: backgroundImg.frame.height * 0.36)
    // Last, I put the rectangle in the center of the background image
    rectangleImg.center.x = backgroundImg.center.x
    rectangleImg.center.y = backgroundImg.center.y
}
Hope this code could help!
The root of my problem is that I had the UIImageView's four sides pinned to its container view. When the width of the device changed, it correctly scaled the image, but it caused the graphic to be "stretched". This caused the width of the head, for example, to increase. I have solved this for now by locking the width of the image so the coordinates I come up with from the original image remain intact no matter what the device.
For those reading this, it may not sound like much of a solution, but I have to go with this since I have a pretty aggressive deadline. I have tested it on multiple device simulators and it works. I may have to revisit this in the future, but for now, it is working.
I also used this question's answer to rework the code: create a clickable body diagram
I'm trying to remove the top part of an image by cropping, but the result is unexpected.
The code used:
extension UIImage {
    class func removeStatusbarFromScreenshot(_ screenshot: UIImage) -> UIImage {
        let statusBarHeight: CGFloat = 44.0 // explicitly CGFloat so the CGSize/CGPoint math compiles on older Swift
        let newHeight = screenshot.size.height - statusBarHeight
        let newSize = CGSize(width: screenshot.size.width, height: newHeight)
        let newOrigin = CGPoint(x: 0, y: statusBarHeight)
        let imageRef: CGImage = screenshot.cgImage!.cropping(to: CGRect(origin: newOrigin, size: newSize))!
        let cropped: UIImage = UIImage(cgImage: imageRef)
        return cropped
    }
}
My logic is that I need to make the image smaller in height by 44px and move the origin y down by 44px, but it ends up creating a much smaller image of just the top-left corner.
The only way I get it to work as expected is by multiplying the width by 2 and the height by 2.5 in newSize, but that also doubles the size of the image produced.
That doesn't make much sense anyway. Can someone help me make it work without magic values?
There are two main problems with what you're doing:
A UIImage has a scale (usually tied to resolution of your device's screen), but a CGImage does not.
Different devices have different "status bar" heights. In general, what you want to cut off from the top is not the status bar but the safe area. The top of the safe area is where your content starts.
Because of this:
You are wrong to talk about 44 px. There are no pixels here. Pixels are physical atomic illuminations on your screen. In code, there are points. Points are independent of the scale (and the scale is the multiplier between points and pixels).
You are wrong to talk about the number 44 itself as if it were hard-coded. You should get the top of the safe area instead.
By crossing into the CGImage world without taking scale into account, you lose the scale information, because CGImage knows nothing of scale.
By crossing back into the UIImage world without taking scale into account, you end up with a UIImage with a resolution of 1, which may not be the resolution of the original UIImage.
The simplest solution is not to do any of what you are doing. First, get the height of the safe area; call it h. Then just draw the snapshot image into a graphics image context that is the same scale as your image (which, if you play your cards right, it will be automatically), but is h points shorter than the height of your image — and draw it with its y origin at -h, thus cutting off the safe area. Extract the resulting image and you're all set.
Example! This code comes from a view controller. First, I'll take a screenshot of my own device's current screen (this view controller's view) as my app runs:
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let screenshot = renderer.image { context in
    view.layer.render(in: context.cgContext)
}
Now, I'll cut the safe area off the top of that screenshot:
let h = view.safeAreaInsets.top
let size = screenshot.size
let r = UIGraphicsImageRenderer(
    size: .init(width: size.width, height: size.height - h)
)
let result = r.image { _ in
    screenshot.draw(at: .init(x: 0, y: -h))
}
Experimentation will confirm that this works perfectly on every device, regardless of whether it has a bezel and regardless of its screen resolution: the top of the resulting image, result, is the top of your actual content.
The problem I am facing is that the image taken from the camera is larger than the one shown in the live view. I have the camera view set up as Aspect Fill.
So the image that I get from the camera is about 4000x3000, and the view that shows the live feed from the camera is 375x800 (fullscreen iPhone X size). How do I transform/crop the part of the camera image so it matches what is shown in the live view, so I can further manipulate the image (draw over it)?
As far as I understand, the Aspect Fill property clips the part of the image that cannot be shown in the view. But that clipping does not start at x = 0, y = 0; it happens somewhere toward the middle of the image. So how do I get that X and Y on the original image so that I can crop out exactly that part?
I hope I explained well enough.
EDIT:
To give more context, here are some code snippets to make the issue easier to understand.
Setting up my camera with the .resizeAspectFill gravity.
cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
cameraPreviewLayer?.frame = self.captureView.frame
self.captureView.layer.addSublayer(cameraPreviewLayer!)
which is displayed in the live view (captureView) that has the size of
375x818 (width: 375 and height: 818).
Then I get the image from that camera on button click and the size of that image is:
3024x4032 (width: 3024 and height: 4032)
So what I want to do is crop the camera image to match what is shown in the live view (captureView), which is set to Aspect Fill.
As you already state, the Aspect Fill content mode tries to fill up the live view, and you are also right that it crops a rectangle from the center (cropping top-bottom or left-right depending on the image size and the image view size).
For a generic solution there are two possible cases:
The image needs to be cropped along its height to fit the image view (the image is proportionally taller than the view).
The image needs to be cropped along its width to fit the image view (the image is proportionally wider than the view).
Considering your size notation 4000x3000 to mean height = 4000 and width = 3000 (a portrait image), and your drawing canvas size 375x800 to mean height = 375 and width = 800, your cropping would be height-wise with content mode Aspect Fill.
So cropping starts from X = 0, but the Y would be somewhat positive. Let's calculate that Y:
let proportionalHeight = 4000.0 / 3000.0 * 800.0 // use floating-point math, or integer division will truncate
let allowedHeight = 375.0
let topBottomCroppedHeight = proportionalHeight - allowedHeight
let croppedYPosition = topBottomCroppedHeight / 2
So here you get your Y value, and the height would be the height of the canvas / live view where you are rendering. Please replace these values with your own variables.
If you are interested in how all the content modes work, you can dive in here. Every contentMode supported by UIImageView is simulated there.
Happy coding.
UPDATE
One thing I forgot to mention: the calculated croppedYPosition is in the coordinates of the scaled-down image. If you want to use this value on the original 4000x3000 image, you have to scale it back up to the original coordinate space, as follows:
let originalYPosition = croppedYPosition / proportionalHeight * 4000.0 // fraction of the scaled height, mapped onto the original height
Use originalYPosition to crop from the original image of size 4000x3000.
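If you would rather not reason about the two cases separately, here is a generic sketch (my addition, not part of the original answer) that computes the visible crop rect directly in the original image's coordinates, whichever way the clipping happens:

import UIKit

// The sub-rect of the original image that stays visible under aspect-fill.
func aspectFillCropRect(imageSize: CGSize, viewSize: CGSize) -> CGRect {
    // Aspect fill scales by whichever ratio is larger, so the image covers the view.
    let scale = max(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    // The visible region, expressed in the original image's coordinates.
    let visibleSize = CGSize(width: viewSize.width / scale,
                             height: viewSize.height / scale)
    // The clipped-off part is split evenly on both sides, so offset by half of it.
    return CGRect(x: (imageSize.width - visibleSize.width) / 2,
                  y: (imageSize.height - visibleSize.height) / 2,
                  width: visibleSize.width,
                  height: visibleSize.height)
}

With the question's actual numbers (a 3024x4032 photo shown in a 375x818 view), this returns roughly (588, 0, 1848, 4032), i.e. the clipping there happens left-right rather than top-bottom.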
I have a UISlider with a callout bubble whose center X follows above the center X of the UISlider thumb.
I tested it on an iPhone 7 and it works perfectly, and it also works perfectly on other devices once the thumb is moved.
The issue I have is that on initial load, the bubble and thumb should be aligned as it does when the thumb is moved, but on a device like an iPhone 8+ or iPhone SE, the bubble's center x is off from the thumb's center x (+/- 20-ish pts).
I believe the calculation to get the two objects aligned is correct since it works fine when you actually move the thumb, but something with the initial calculation of the thumb's x is off. With some console logs, the thumb's X would initially read as 180 pts but on touch the first value is actually 201 pts. The bubble then adjusts accordingly.
I am using a custom uislider subclass for gradients and such, and that could potentially be affecting it(?) but it's confusing how it still works on touch.
Any help would be appreciated.
On viewDidLoad (leading constraint is to safe area):
distanceSlider.setValue(Float(sliderValue), animated: false)
distanceCalloutViewLeadingConstraint.constant = distanceSlider.thumbCenterX - (distanceCalloutView.center.x - distanceCalloutView.frame.minX)
On distance change:
distanceCalloutViewLeadingConstraint.constant = distanceSlider.thumbCenterX - (distanceCalloutView.center.x - distanceCalloutView.frame.minX)
Thumb center X extension:
extension UISlider {
    var thumbCenterX: CGFloat {
        let trackRect = self.trackRect(forBounds: frame)
        let thumbRect = self.thumbRect(forBounds: bounds, trackRect: trackRect, value: value)
        return thumbRect.midX
    }
}
Try setting the line below in viewDidLayoutSubviews(). It looks like the view the slider is added to has not finished laying out its size when viewDidLoad runs.
distanceCalloutViewLeadingConstraint.constant = distanceSlider.thumbCenterX - (distanceCalloutView.center.x - distanceCalloutView.frame.minX)
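A minimal sketch of where that line would live, reusing the question's outlet names:

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // By this point Auto Layout has given the slider its final size,
    // so thumbCenterX reflects the actual on-screen position.
    distanceCalloutViewLeadingConstraint.constant =
        distanceSlider.thumbCenterX - (distanceCalloutView.center.x - distanceCalloutView.frame.minX)
}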
Why are you using a custom image and adjusting it according to the slider value? You can directly use:
func setThumbImage(_ image: UIImage?, for state: UIControl.State)
Check the Apple documentation for reference: https://developer.apple.com/documentation/uikit/uislider/1621336-setthumbimage
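For example (the asset name here is hypothetical):

// Use the bubble artwork directly as the slider's thumb image.
distanceSlider.setThumbImage(UIImage(named: "calloutBubble"), for: .normal)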
I have 3 images:
topBg.png
midBg.png
botBg.png
I want to set topBg.png at the top of the scene with height = 200.
midBg.png should stretch or repeat vertically to fill the middle.
botBg.png should be at the bottom with height = 200.
I have the following code:
override func didMove(to view: SKView) {
    self.bgTopSpriteNode = self.childNode(withName: "//bgTopNode") as? SKSpriteNode
    self.bgMiddleSpriteNode = self.childNode(withName: "//bgMiddleNode") as? SKSpriteNode
    self.bgBottomSpriteNode = self.childNode(withName: "//bgBottomNode") as? SKSpriteNode

    if let bgTopSpriteNode = self.bgTopSpriteNode,
       let bgMiddleSpriteNode = self.bgMiddleSpriteNode,
       let bgBottomSpriteNode = self.bgBottomSpriteNode {
        bgTopSpriteNode.size.width = self.frame.width
        bgTopSpriteNode.size.height = 200
        bgTopSpriteNode.position.x = 0

        bgMiddleSpriteNode.size.width = self.frame.width
        bgMiddleSpriteNode.size.height = self.frame.height - 400
        bgMiddleSpriteNode.position.x = 0

        bgBottomSpriteNode.size.width = self.frame.width
        bgBottomSpriteNode.size.height = 200
        bgBottomSpriteNode.position.x = 0
    }
}
But how do I set the Y positions of the images? The coordinates begin from the center of the screen, not from the top left, and I don't know how to convert them.
There are a couple of different ways to achieve what you're looking to do.
First, you can compute the y position of the top and the bottom of the screen using simply size.height / 2 if you have the anchorPoint of your scene at (0.5,0.5). (Don't use frame - use size. That way, you take into account the scaleMode of the scene.)
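As a minimal sketch of this first approach, reusing the question's node names inside its if let block (and assuming the default (0.5, 0.5) anchorPoint for both the scene and the sprites):

let halfHeight = size.height / 2
// The 200 pt top strip: its center sits 100 pt below the scene's top edge.
bgTopSpriteNode.position.y = halfHeight - 100
// The middle strip fills the remaining space, centered on the scene's center.
bgMiddleSpriteNode.position.y = 0
// The 200 pt bottom strip: its center sits 100 pt above the bottom edge.
bgBottomSpriteNode.position.y = -halfHeight + 100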
It sounds like you are frustrated that the origin of the scene is in the center. If you'd like to move it to the corner, you can easily do so by setting the scene's anchorPoint property, say, to (0.0, 0.0) for the lower left corner. Then, your y-values are 0 and size.height. If you are using the .sks editor, this is exposed in the interface - you can just set it there. Otherwise, you can set it programmatically.
Finally, you can set the scaleMode of your scene to something like .aspectFill, set the size of the scene directly (say, to 1024x768 for an iPad), and just place the images wherever they need to go. This approach works particularly well with .sks files, if you are using them; when you load up a scene, you can set the size of the scene based on the aspect ratio of the view it's in to accommodate different aspect ratios. For instance, you could adopt a 320x480 "reference size" for your iPhone scenes. Whenever you load up the scene, you could set the size of the scene to be 320 points wide and however many points tall to match the aspect ratio of the device. Then, all your graphics would be produced at 320pt wide, and you could slide them up or down proportionally across the scene's size for layout. This is a little more complicated, but it's a lot easier than trying to deal with separate layout considerations for multiple devices.
I should also point out a couple of things.
You can use the anchorPoint property of a sprite to dictate where the sprite's coordinates are measured from. This is handy for cases where you want images to be flush up against something. For instance, if you want an image flush against the left side of the screen, set its position to be exactly the left side of the screen, and then set its anchorPoint.x to 0.0; this will put the left edge of the sprite against the left edge of the screen. This also works for scenes, as you encountered - moving the anchorPoint of the scene moves everything in the scene relative to its size.
You don't need three images for what you're describing. You can use a single sprite and just set its centerRect property to tell it to use the top and bottom of an image and stretch the center part vertically. You have to do a little math to set the right xScale and yScale (not width and height, IIRC), but then you can draw all of that with one sprite instead of three. This would be really handy in your case, because you could just leave the sprite at (0,0), set its scale to match the size of the entire scene, and set the centerRect property - you wouldn't have to do any positioning math at all.
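A rough sketch of that centerRect idea, assuming a single hypothetical fullBg image whose top and bottom 200 pt are the fixed caps:

let texture = SKTexture(imageNamed: "fullBg") // hypothetical combined background image
let bg = SKSpriteNode(texture: texture)
let cap = 200 / texture.size().height // cap height as a fraction of the texture height
// Keep the top and bottom caps fixed; stretch only the middle band vertically.
bg.centerRect = CGRect(x: 0, y: cap, width: 1, height: 1 - 2 * cap)
// centerRect works with scale rather than size, so scale the sprite to fill the scene.
bg.xScale = size.width / texture.size().width
bg.yScale = size.height / texture.size().height
addChild(bg)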
I use an image view:
@IBOutlet weak var imageView: UIImageView!
to paint an image and also another image which has been rotated. It turns out that the rotated image has very bad quality. In the following image the glasses in the yellow box are not rotated. The glasses in the red box are rotated by 4.39 degrees.
Here is the code I use to draw the glasses:
UIGraphicsBeginImageContext(imageView.image!.size)
imageView.image!.drawInRect(CGRectMake(0, 0, imageView.image!.size.width, imageView.image!.size.height))
var drawCtxt = UIGraphicsGetCurrentContext()
var glassImage = UIImage(named: "glasses.png")
let yellowRect = CGRect(...)
CGContextSetStrokeColorWithColor(drawCtxt, UIColor.yellowColor().CGColor)
CGContextStrokeRect(drawCtxt, yellowRect)
CGContextDrawImage(drawCtxt, yellowRect, glassImage!.CGImage)
// paint the rotated glasses in the red square
CGContextSaveGState(drawCtxt)
CGContextTranslateCTM(drawCtxt, centerX, centerY)
CGContextRotateCTM(drawCtxt, 4.398 * CGFloat(M_PI) / 180)
var newRect = yellowRect
newRect.origin.x = -newRect.size.width / 2
newRect.origin.y = -newRect.size.height / 2
CGContextAddRect(drawCtxt, newRect)
CGContextSetStrokeColorWithColor(drawCtxt, UIColor.redColor().CGColor)
CGContextSetLineWidth(drawCtxt, 1)
// draw the red rect
CGContextStrokeRect(drawCtxt, newRect)
// draw the image
CGContextDrawImage(drawCtxt, newRect, glassImage!.CGImage)
CGContextRestoreGState(drawCtxt)
How can I rotate and paint the glasses without losing quality or getting a distorted image?
You should use UIGraphicsBeginImageContextWithOptions(CGSize size, BOOL opaque, CGFloat scale) to create the initial context. Passing in 0.0 as the scale will default to the scale of the current screen (e.g., 2.0 on an iPhone 6 and 3.0 on an iPhone 6 Plus).
See this note on UIGraphicsBeginImageContext():
This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.
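Concretely, a minimal sketch of the change, keeping the rest of the question's drawing code the same:

// Passing 0.0 as the scale makes the context match the device screen's scale.
UIGraphicsBeginImageContextWithOptions(imageView.image!.size, false, 0.0)
// ... the same drawing code as in the question ...
let composed = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()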
As others have pointed out, you need to set up your context to allow for retina displays.
Aside from that, you might want to use a source image that is larger than the target display size and scale it down. (2X the pixel dimensions of the target image would be a good place to start.)
Rotating to odd angles is destructive. The graphics engine has to map a grid of source pixels onto a different grid where they don't line up. Perfectly straight lines in the source image are no longer straight in the destination image, etc. The graphics engine has to do some interpolation, and a source pixel might be spread over several pixels, or less than a full pixel, in the destination image.
By providing a larger source image you give the graphics engine more information to work with. It can better slice and dice those source pixels into the destination grid of pixels.
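As a sketch of that supersampling idea (the names and the 2x factor are illustrative), you could render the rotated image at twice the target size and then scale the result down:

import UIKit

func rotatedImage(_ source: UIImage, degrees: CGFloat, targetSize: CGSize) -> UIImage {
    let factor: CGFloat = 2.0
    let bigSize = CGSize(width: targetSize.width * factor,
                         height: targetSize.height * factor)
    // Render the rotated source at twice the target dimensions.
    let big = UIGraphicsImageRenderer(size: bigSize).image { ctx in
        let cg = ctx.cgContext
        cg.translateBy(x: bigSize.width / 2, y: bigSize.height / 2)
        cg.rotate(by: degrees * .pi / 180)
        source.draw(in: CGRect(x: -bigSize.width / 2, y: -bigSize.height / 2,
                               width: bigSize.width, height: bigSize.height))
    }
    // Scale the oversized rendering down; the extra source pixels give the
    // interpolator more information and produce smoother edges.
    return UIGraphicsImageRenderer(size: targetSize).image { _ in
        big.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}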