UISlider Tracking Callout Unaligned to Thumb - iOS

I have a UISlider with a callout bubble whose center X tracks the center X of the slider's thumb.
I tested it on an iPhone 7 and it works perfectly, and it also works perfectly on other devices once the thumb is moved.
The issue is that on initial load the bubble and thumb should be aligned, just as they are when the thumb is moved, but on a device like an iPhone 8+ or iPhone SE the bubble's center X is off from the thumb's center X (+/- 20-ish pts).
I believe the calculation to align the two objects is correct since it works fine when you actually move the thumb, but something about the initial calculation of the thumb's X is off. With some console logs, the thumb's X initially reads as 180 pts, but on the first touch the value is actually 201 pts. The bubble then adjusts accordingly.
I am using a custom UISlider subclass for gradients and such, and that could potentially be affecting it(?), but it's confusing that it still works on touch.
Any help would be appreciated.
On viewDidLoad (leading constraint is to safe area):
distanceSlider.setValue(Float(sliderValue), animated: false)
distanceCalloutViewLeadingConstraint.constant = distanceSlider.thumbCenterX - (distanceCalloutView.center.x - distanceCalloutView.frame.minX)
On distance change:
distanceCalloutViewLeadingConstraint.constant = distanceSlider.thumbCenterX - (distanceCalloutView.center.x - distanceCalloutView.frame.minX)
Thumb center X extension:
extension UISlider {
    var thumbCenterX: CGFloat {
        let trackRect = self.trackRect(forBounds: frame)
        let thumbRect = self.thumbRect(forBounds: bounds, trackRect: trackRect, value: value)
        return thumbRect.midX
    }
}

Try setting the line below in viewDidLayoutSubviews(). It looks like the view the slider is added to has not finished laying out its size yet when viewDidLoad runs.
distanceCalloutViewLeadingConstraint.constant = distanceSlider.thumbCenterX - (distanceCalloutView.center.x - distanceCalloutView.frame.minX)
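A minimal sketch of what that could look like, assuming the outlet names from the question:

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        // By this point the slider's frame reflects the actual device width,
        // so thumbCenterX returns the correct initial value.
        distanceCalloutViewLeadingConstraint.constant =
            distanceSlider.thumbCenterX - (distanceCalloutView.center.x - distanceCalloutView.frame.minX)
    }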
Why are you using a custom image and adjusting it according to the slider value? You can directly use
func setThumbImage(_ image: UIImage?, for state: UIControl.State)
Check the Apple documentation: https://developer.apple.com/documentation/uikit/uislider/1621336-setthumbimage
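For example, a minimal usage sketch (the asset name "thumb" is just a placeholder):

    // Hypothetical asset name; replace with your own thumb image.
    distanceSlider.setThumbImage(UIImage(named: "thumb"), for: .normal)
    distanceSlider.setThumbImage(UIImage(named: "thumb"), for: .highlighted)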

Related

Sizing a SKScene to fit within iOS screen boundaries

I have written a small macOS application that sizes a 64 x 64 grid of SKSpriteNodes based on the smaller of the height or width of the view:
let pixelSize = min(self.size.width / Double(numColumns), self.size.height / Double(numRows))
...where numRows and numColumns are both 64. When the window opens at 1024 x 768, it calculates based on 768 and produces a square drawing area centered in the window, and it follows resizes and such.
I then used the Xcode template to add an iOS target. I was surprised to find that the size of the view remains 1024 x 768, in spite of being in a view in Main.storyboard that is 380 x 844. In the storyboard, Game Controller View Scene is set to Aspect Fit and Resize Subviews is turned on. I tried Clips to Bounds and several other settings, but it seems the Scene simply isn't being resized.
Did I miss a setting somewhere? Or should I calculate my sizes some other way?
From within an SKScene you can access the size via self.view?.frame.size.
Note, however, that if you draw your scene only once in didMove(to view: SKView), you will potentially miss important resize events that can and will affect the dimensions of your scene. For instance, on macOS the storyboard dimensions will often be overridden with cached values after the application launches. For this reason, I suggest adding responders for resize on macOS and rotation on iOS.
// macOS -- respond to window resize
class GameViewController: NSViewController {
    // Called on window resize
    // (note: NSWindow.didResizeNotification also surfaces this event and may be better in some cases)
    override func viewDidLayout() {
        print("view has been resized to \(self.view.frame.size)")
    }
}
// iOS -- respond to device rotation
class GameViewController: UIViewController {
    override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
        coordinator.animate(alongsideTransition: { context in
            // This is called during the animation
        }, completion: { context in
            // This is called after the rotation is finished. Equivalent to the deprecated `didRotate`
            print("view has been resized to \(self.view.frame.size)")
        })
    }
}
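You can also react on the SKScene side. A minimal sketch, assuming the scene's scaleMode lets its size change (e.g. .resizeFill) and reusing the numRows/numColumns values from the question:

    class GameScene: SKScene {
        let numRows = 64
        let numColumns = 64

        override func didChangeSize(_ oldSize: CGSize) {
            super.didChangeSize(oldSize)
            // Recompute the grid cell size whenever the scene is resized
            // (rotation, window resize, or the view's first layout pass).
            let cellSize = min(size.width / Double(numColumns), size.height / Double(numRows))
            print("scene resized to \(size), cell size is now \(cellSize)")
        }
    }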
P.S. Your code says pixels, but my guess is you probably mean points. A point is always 1/72nd of an inch, i.e. 72 dpi; by contrast, pixels vary with the device's resolution (x2 for iPhone 8, x3 for iPhone 12, etc.). I basically only ever care about points, not pixels, when designing an app, although this isn't a hard and fast rule and YMMV.

iOS - Swift - Clickable Regions on a UIImage

I am working on a project where I have to plot points in specific regions on an image that represents the human body. In Interface Builder, I have set up a container UIView, which takes up most of the vertical center of the main view. In that container view, I placed a UIImageView and set the graphic in IB. The graphic is much larger than both the UIImageView and the container UIView, more specifically, it’s taller. The ContentMode of the UIImageView is set to AspectFit because I want the image to not show as bigger than the container.
The code creates several CGRect instances which are regions where user taps mean something. When the user taps on the container view, code is used to determine if the point is within one of the regions and if it is, a dot is drawn in the center of that region.
The problem is that when I run the app on certain simulators, the region rectangles are not in the right place on the image. For example, when I run the app on an iPhone X, the rectangle region that is in place for the head looks fine. When I run the app on an iPhone XR, the rectangle region is off to the left of the head.
I am using coordinates to define the region rectangles that are based on where, for example, the human head is in the image. I feel like this is not the right way to do this since AspectFit for the ContentMode of the image is most likely causing the image to be scaled to maintain aspect.
Bottom line is that I want a rectangle to be in the right place and size no matter how the image scales. No sure if how I am doing it makes sense, so hope that some suggestions come in that offer a better way to do this.
Update 1: The UIImageView is pinned to the surrounding UIView, so its width and height are as big as the container. Since the image is skinnier than the UIImageView, the image appears centered in it. In the attached images, the purple background is the UIImageView showing the topmost UIView's background color.
Update 2: I checked the scale for both width and height and found they are different. The width scale factor is 1.36565656565 and the height scale factor is 2.104. I tried the formulas given by Sweeper with both scale factors and had no luck.
You just need to do some maths.
On the original image, identify the region the user can tap. Note down its x, y, w, h, relative to the image.
Figure out how much the image shrank in the image view. Since you said the image is taller than the image view, the image underwent a scale factor of imageViewHeight / imageHeight. We'll now refer to this as scaleFactor.
The region's Y coordinate has been scaled by the same factor, so you multiply regionY by scaleFactor to get newY.
The region's width and height will do the same thing, so multiply them by scaleFactor and get newWidth and newHeight.
The X coordinate of the region, relative to the image view, is a bit tricky. You need to account for the empty space the image view creates on each side when it scales the image down. This emptySpace is (imageViewWidth - scaledImageWidth) / 2, where scaledImageWidth is imageWidth * scaleFactor. Then the new region's X coordinate relative to the image view is emptySpace + X * scaleFactor.
Now the rect (newX, newY, newWidth, newHeight) is the region relative to the image view that the user can tap!
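A minimal Swift sketch of that mapping, assuming an aspect-fit image that is taller (relative to the view) than it is wide, with hypothetical parameter names:

    func convertRegionToImageViewSpace(_ region: CGRect, image: UIImage, imageView: UIImageView) -> CGRect {
        // Aspect fit with a relatively taller image: the height drives the scale factor.
        let scaleFactor = imageView.bounds.height / image.size.height
        let newY = region.origin.y * scaleFactor
        let newWidth = region.width * scaleFactor
        let newHeight = region.height * scaleFactor
        // Horizontal empty space left over because the scaled image is narrower than the view.
        let emptySpace = (imageView.bounds.width - image.size.width * scaleFactor) / 2
        let newX = emptySpace + region.origin.x * scaleFactor
        return CGRect(x: newX, y: newY, width: newWidth, height: newHeight)
    }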
I made this code so a rectangle ends up in exactly the same position on every device.
I do the check for an X-style phone in viewDidLoad, set the constraints to what I want, and add the tab bar size.
@IBOutlet weak var tabMenu: NSLayoutConstraint!
@IBOutlet weak var topConstraint: NSLayoutConstraint!
@IBOutlet weak var bottomConstraint: NSLayoutConstraint!
@IBOutlet weak var backgroundImg: UIImageView!
@IBOutlet weak var rectangleImg: UIImageView!

override func viewDidLoad() {
    super.viewDidLoad()
    // Check if it's an iPhone X-style device
    if #available(iOS 11, *) {
        let safeArea = UIApplication.shared.delegate?.window??.safeAreaInsets
        // If it's an X-style phone, use the safe area instead of the superview
        guard let safe = safeArea else { return }
        if safe.bottom > 0 {
            topConstraint.constant = safe.top
            bottomConstraint.constant = safe.bottom + tabMenu.constant
        }
    }
}

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    // Here I set the rectangle to 36% of the width and height (change to what you want)
    rectangleImg.frame = CGRect(x: 0, y: 0, width: backgroundImg.frame.width * 0.36,
                                height: backgroundImg.frame.height * 0.36)
    // Last, I put the rectangle in the center of the background image
    rectangleImg.center.x = backgroundImg.center.x
    rectangleImg.center.y = backgroundImg.center.y
}
Hope this code helps!
The root of my problem is that I had the UIImageView's four sides pinned to its container view. When the width of the device changed, it correctly scaled the image, but it caused the graphic to be "stretched". This caused the width of the head, for example, to increase. I have solved this for now by locking the width of the image so the coordinates I come up with from the original image remain intact no matter what the device.
For those reading this, it may not sound like much of a solution, but I have to go with this since I have a pretty aggressive deadline. I have tested it on multiple device simulators and it works. I may have to revisit this in the future, but for now, it is working.
I also used this question's answer to rework the code: create a clickable body diagram

Swift sprite kit vertical background infinite image

I have 3 images:
topBg.png
midBg.png
botBg.png
I want topBg.png at the top of the scene with height = 200.
midBg.png should stretch or repeat vertically to fill the remaining space.
botBg.png should be at the bottom with height = 200.
I have the following code:
override func didMove(to view: SKView) {
    self.bgTopSpriteNode = self.childNode(withName: "//bgTopNode") as? SKSpriteNode
    self.bgMiddleSpriteNode = self.childNode(withName: "//bgMiddleNode") as? SKSpriteNode
    self.bgBottomSpriteNode = self.childNode(withName: "//bgBottomNode") as? SKSpriteNode

    if let bgTopSpriteNode = self.bgTopSpriteNode,
       let bgMiddleSpriteNode = self.bgMiddleSpriteNode,
       let bgBottomSpriteNode = self.bgBottomSpriteNode {
        bgTopSpriteNode.size.width = self.frame.width
        bgTopSpriteNode.size.height = 200
        bgTopSpriteNode.position.x = 0

        bgMiddleSpriteNode.size.width = self.frame.width
        bgMiddleSpriteNode.size.height = self.frame.height - 400
        bgMiddleSpriteNode.position.x = 0

        bgBottomSpriteNode.size.width = self.frame.width
        bgBottomSpriteNode.size.height = 200
        bgBottomSpriteNode.position.x = 0
    }
}
But how do I set the Y positions of the images? The coordinates begin from the center of the screen, not from the top left, and I don't know how to convert them.
There are a couple of different ways to achieve what you're looking to do.
First, you can compute the y positions of the top and bottom of the screen simply as size.height / 2 and -size.height / 2 if the anchorPoint of your scene is at (0.5, 0.5). (Don't use frame - use size. That way, you take into account the scaleMode of the scene.)
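For example, with the scene's anchorPoint at (0.5, 0.5), the Y positions for the nodes in the question could be set roughly like this (a sketch; it assumes each node keeps its default center anchorPoint):

    // Top strip: its center sits 100 pt (half of its 200 pt height) below the top edge.
    bgTopSpriteNode.position.y = size.height / 2 - 100
    // Middle strip: centered vertically.
    bgMiddleSpriteNode.position.y = 0
    // Bottom strip: its center sits 100 pt above the bottom edge.
    bgBottomSpriteNode.position.y = -size.height / 2 + 100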
It sounds like you are frustrated that the origin of the scene is in the center. If you'd like to move it to the corner, you can easily do so by setting the scene's anchorPoint property, say, to (0.0, 0.0) for the lower left corner. Then, your y-values are 0 and size.height. If you are using the .sks editor, this is exposed in the interface - you can just set it there. Otherwise, you can set it programmatically.
Finally, you can set the scaleMode of your scene to something like .aspectFill, set the size of the scene directly (say, to 1024x768 for an iPad), and just place the images wherever they need to go. This approach works particularly well with .sks files, if you are using them; when you load up a scene, you can set the size of the scene based on the aspect ratio of the view it's in to accommodate different aspect ratios. For instance, you could adopt a 320x480 "reference size" for your iPhone scenes. Whenever you load up the scene, you could set the size of the scene to be 320 points wide and however many points tall to match the aspect ratio of the device. Then, all your graphics would be produced at 320pt wide, and you could slide them up or down proportionally across the scene's size for layout. This is a little more complicated, but it's a lot easier than trying to deal with separate layout considerations for multiple devices.
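A sketch of that last approach, assuming the scene is created in code (rather than loaded from the template's .sks file) and that GameScene is the scene class:

    if let skView = self.view as? SKView {
        // 320 pt wide reference size; the height follows the view's aspect ratio.
        let referenceWidth: CGFloat = 320
        let sceneSize = CGSize(width: referenceWidth,
                               height: referenceWidth * skView.bounds.height / skView.bounds.width)
        let scene = GameScene(size: sceneSize)
        scene.scaleMode = .aspectFill
        skView.presentScene(scene)
    }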
I should also point out a couple of things.
You can use the anchorPoint property of a sprite to dictate where the sprite's coordinates are measured from. This is handy for cases where you want images to be flush up against something. For instance, if you want an image flush against the left side of the screen, set its position to be exactly the left side of the screen, and then set its anchorPoint.x to 0.0; this will put the left edge of the sprite against the left edge of the screen. This also works for scenes, as you encountered - moving the anchorPoint of the scene moves everything in the scene relative to its size.
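For instance, a two-line sketch for the flush-left case (sprite stands in for whatever node you are placing):

    sprite.anchorPoint = CGPoint(x: 0.0, y: 0.5)   // measure position from the sprite's left edge
    sprite.position.x = -size.width / 2            // left edge of a (0.5, 0.5)-anchored scene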
You don't need three images for what you're describing. You can use a single sprite and just set its centerRect property to tell it to use the top and bottom of an image and stretch the center part vertically. You have to do a little math to set the right xScale and yScale (not width and height, IIRC), but then you can draw all of that with one sprite instead of three. This would be really handy in your case, because you could just leave the sprite at (0,0), set its scale to match the size of the entire scene, and set the centerRect property - you wouldn't have to do any positioning math at all.
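A minimal sketch of the centerRect approach, assuming a single combined texture (here called "background", a hypothetical name) whose top and bottom 200 pt bands should stay fixed while the middle stretches:

    let texture = SKTexture(imageNamed: "background")
    let bg = SKSpriteNode(texture: texture)
    let capHeight: CGFloat = 200   // height of the fixed top/bottom bands in the texture

    // centerRect is in unit coordinates of the texture: only the middle band stretches vertically.
    bg.centerRect = CGRect(x: 0,
                           y: capHeight / texture.size().height,
                           width: 1,
                           height: 1 - (2 * capHeight) / texture.size().height)

    // Scale the sprite (rather than setting its size) so the caps keep their height.
    bg.xScale = size.width / texture.size().width
    bg.yScale = size.height / texture.size().height
    bg.position = .zero   // scene center with the default (0.5, 0.5) anchorPoint
    addChild(bg)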

UIInterpolatingMotionEffect Parallax Effect Swift 2 iOS

I am trying to create a parallax effect like the iPhone home screen, where the background moves as you tilt your phone. I have achieved this so far, but I still have one problem: after I tilt my phone and the background moves, it very slowly moves back into its centered position.
It's not the constraints: I removed the center x/y constraint and it still slid back slowly, as if it was re-calibrating to the new position. The only other constraint is the ratio.
Any ideas?
The code is simple:
let leftRightMin = CGFloat(-50.0)
let leftRightMax = CGFloat(50.0)
let upDownMin = CGFloat(-35.0)
let upDownMax = CGFloat(35.0)

let leftRight = UIInterpolatingMotionEffect(keyPath: "center.x",
                                            type: UIInterpolatingMotionEffectType.TiltAlongHorizontalAxis)
leftRight.minimumRelativeValue = leftRightMin
leftRight.maximumRelativeValue = leftRightMax

let upDown = UIInterpolatingMotionEffect(keyPath: "center.y",
                                         type: UIInterpolatingMotionEffectType.TiltAlongVerticalAxis)
upDown.minimumRelativeValue = upDownMin
upDown.maximumRelativeValue = upDownMax

let fxGroup = UIMotionEffectGroup()
fxGroup.motionEffects = [leftRight, upDown]
backgroundImage.addMotionEffect(fxGroup)
Any ideas why it slowly centers the image back after tilting and how to fix it?
This is the behavior of UIInterpolatingMotionEffect. You'll notice it also happens on the home screen, and everywhere else in the system that the effect is used.
It does this because the user may tilt the device so that the content interpolates out to its maximum offset and then holds there. The system then treats that orientation as the new resting position, so the content eases back toward its original, centered position; that way, the full range of the effect is available again if the device moves further.
I did not find any mention of this behavior in the UIInterpolatingMotionEffect documentation, but it can be observed everywhere the system uses the effect.

Set dimensions for UIImagePickerController "move and scale" cropbox

How does the "move and scale screen" determine dimensions for its cropbox?
Basically I would like to set a fixed width and height for the "CropRect" and let the user move and scale his image to fit in to that box as desired.
Does anyone know how to do this? (Or if it is even possible with the UIImagePickerController)
Thanks!
Not possible with UIImagePickerController, unfortunately. The solution I recommend is to disable editing for the image picker and handle it yourself. For instance, I put the image in a scrollable, zoomable image view. On top of the image view is a fixed-position "crop guide view" that draws the crop indicator the user sees. Assuming the guide view has properties for the visible rect (the part to keep) and edge widths (the part to discard), you can get the cropping rectangle like so. You can use the UIImage+Resize category to do the actual cropping.
CGRect cropGuide = self.cropGuideView.visibleRect;
UIEdgeInsets edges = self.cropGuideView.edgeWidths;
CGPoint cropGuideOffset = self.cropScrollView.contentOffset;
CGPoint origin = CGPointMake( cropGuideOffset.x + edges.left, cropGuideOffset.y + edges.top );
CGSize size = cropGuide.size;
CGRect crop = { origin, size };
crop.origin.x = crop.origin.x / self.cropScrollView.zoomScale;
crop.origin.y = crop.origin.y / self.cropScrollView.zoomScale;
crop.size.width = crop.size.width / self.cropScrollView.zoomScale;
crop.size.height = crop.size.height / self.cropScrollView.zoomScale;
photo = [photo croppedImage:crop];
Kinda late to the game but I think this may be what you are looking for: https://github.com/gekitz/GKImagePicker
Here is a solution for manual cropping by Ming Yang.
https://github.com/myang-git/iOS-Image-Crop-View
It offers a rectangular frame, which the user can slide or drag to fit the required portion of the image inside the rectangle. Please note that this solution does the reverse of what was asked (it lets the rectangle's size vary), but it eventually brings the desired result.
It is coded in Objective-C. You may have to either code it in Swift or simply build a bridging header to connect the Objective-C code with Swift code.
It's now later than late, but it may be useful for someone. This is the library I've used for Swift (many thanks to Tim Oliver):
TOCropViewController
As described in the README file at the GitHub link above, with this library you can get cropped images in a user-defined rectangle and also in a circular mode, e.g. for updating a profile image.
Below is sample code from GitHub:
func presentCropViewController() {
    let image: UIImage = ... // Load an image
    let cropViewController = CropViewController(image: image)
    cropViewController.delegate = self
    present(cropViewController, animated: true, completion: nil)
}

func cropViewController(_ cropViewController: CropViewController, didCropToImage image: UIImage, withRect cropRect: CGRect, angle: Int) {
    // 'image' is the newly cropped version of the original image
}
