MGLUserLocationAnnotationView subclass does not take the pitch perspective of the camera - iOS

I have styled the user location icon according to these docs:
https://docs.mapbox.com/ios/maps/examples/user-location-annotation/
It works, but even though I set up the camera with a pitch, the icon is drawn flat, in two dimensions. How can I make it match the camera's perspective so the pitch effect applies to it as well?
I added MGLMapCamera with this code:
    func mapViewDidFinishLoadingMap(_ mapView: MGLMapView) {
        // Wait for the map to load before initiating the first camera movement.
        mapView.camera = MGLMapCamera(lookingAtCenter: mapView.centerCoordinate, altitude: 200, pitch: 50, heading: 0)
    }
Isn't there a camera mode option, like the GPS camera mode on Android?
https://docs.mapbox.com/android/maps/examples/location-component-camera-options/

If you want the annotation view to follow the tilt of the map view, you can use MGLAnnotationView's scalesWithViewingDistance property. It determines whether the annotation view grows and shrinks as the distance between the viewpoint and the annotation view changes on a tilted map.
When the value of this property is YES and the map is tilted, the annotation view appears smaller if it is towards the top of the view (closer to the horizon) and larger if it is towards the bottom of the view (closer to the viewpoint).

I found a solution.
As far as I can see, there is no way to set this with an option. So I solved it manually:
First, I create an image:
    private func setupLayers() {
        if arrow == nil {
            arrow = CALayer()
            let myImage = UIImage(named: "arrow")?.cgImage
            arrow.bounds = CGRect(x: 0, y: 0, width: size, height: size)
            arrow.contents = myImage
            layer.addSublayer(arrow)
        }
    }
Then I use MGLRadiansFromDegrees to convert the camera's pitch from degrees to radians and apply the transform on every update:
    override func update() {
        // Convert the optional pitch to a Double, defaulting to 0.
        let cameraPitch = Double(mapView?.camera.pitch ?? 0)
        let pitch: CGFloat = MGLRadiansFromDegrees(cameraPitch)
        layer.transform = CATransform3DMakeRotation(pitch, 1, 0, 0)
    }
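For reference, the conversion MGLRadiansFromDegrees performs is plain degree-to-radian math. The helper below is a hypothetical stand-in for illustration, not the Mapbox function itself:

```swift
import Foundation

// Degrees to radians: multiply by pi/180. Hypothetical stand-in for
// MGLRadiansFromDegrees.
func radiansFromDegrees(_ degrees: Double) -> Double {
    return degrees * .pi / 180.0
}

// A camera pitch of 50 degrees becomes roughly 0.873 radians, the
// angle handed to CATransform3DMakeRotation in update() above.
let pitch = radiansFromDegrees(50)
```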

Related

How to crop an image with a given angle with swift

Does anyone know how to crop an image at a given angle in Swift?
I put the demo image below.
I googled for a while, and almost all the solutions I found handle images with no rotation or a 90-degree rotation.
I want to rotate the image and then crop it, just like the Photos app does on iPhone.
Thanks for any hint!
One option is to use CGContext and CGAffineTransform to rotate the image by your angle.
Make two rects, one for the rotated image and one for the crop, and use cropping(to rect: CGRect) -> CGImage?.
Finally, produce one image or two according to your logic; that is entirely up to your approach.
Here is a good reference for you:
https://www.raywenderlich.com/2305-core-image-tutorial-getting-started
Hope it helps.
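Before writing any Core Graphics code, it helps to check the geometry involved. Here is a sketch of the bounding-box math behind "rotate, then crop", done with plain trigonometry rather than CGAffineTransform:

```swift
import Foundation

// Rotating a w x h image by angle theta produces an axis-aligned bounding
// box of (w*|cos theta| + h*|sin theta|) x (w*|sin theta| + h*|cos theta|);
// the crop rect is then taken inside that rotated bounding box.
func rotatedBoundingSize(width: Double, height: Double, angle: Double) -> (width: Double, height: Double) {
    let c = abs(cos(angle))
    let s = abs(sin(angle))
    return (width * c + height * s, width * s + height * c)
}

// Rotating a 200 x 100 image by 90 degrees swaps the sides.
let size = rotatedBoundingSize(width: 200, height: 100, angle: .pi / 2)
```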
Design the storyboard and create outlets and properties in the ViewController class.
    let picker = UIImagePickerController()
    var circlePath = UIBezierPath()

    @IBOutlet weak var crop: CropView!
    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var scroll: UIScrollView!
The crop property uses a custom UIView class. Add the function below to it.
    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        return false
    }
Create extensions for UIImage and UIImageView (see the linked reference). For zooming the image, use the viewForZooming delegate function: add UIScrollViewDelegate to the class as a subtype and return imageView.
Pick an image from the gallery:
Create an IBAction to pick an image from the album, and set the picker source type to the photo library with the code below.
    picker.sourceType = .photoLibrary
    present(picker, animated: true, completion: nil)
Also add UIImagePickerControllerDelegate to the class as a subtype and, in viewDidLoad, set:
    picker.delegate = self
Use didFinishPickingMediaWithInfo, a UIImagePickerControllerDelegate function, to set the image on the view after picking it from the album.
    let chosenImage = info[UIImagePickerControllerOriginalImage] as! UIImage
    imageView.image = chosenImage.resizeImage()
    dismiss(animated: true, completion: nil)
To dismiss the photo album on cancel, use the imagePickerControllerDidCancel delegate.
Shoot a picture with the camera:
Create an IBAction to shoot an image with the camera. First check whether the camera source type is available on the device. If it is, set the source type to camera and the camera capture mode to photo; otherwise, handle the missing camera.
    if UIImagePickerController.isSourceTypeAvailable(.camera) {
        picker.sourceType = UIImagePickerControllerSourceType.camera
        picker.cameraCaptureMode = UIImagePickerControllerCameraCaptureMode.photo
        picker.modalPresentationStyle = .custom
        present(picker, animated: true, completion: nil)
    } else {
        // Action performed if there is no camera in the device.
    }
Cropping:
Add a sublayer to the picked image; this layer provides the area that frames the crop.
    let path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: self.view.bounds.size.width, height: self.view.bounds.size.height), cornerRadius: 0)
Assign a path to the circlePath property (of type UIBezierPath). Using a Bezier path, you can change the crop frame into different shapes.
    circlePath = UIBezierPath(roundedRect: CGRect(x: (self.view.frame.size.width / 2) - (size / 2), y: (self.view.frame.size.height / 2) - (size / 2), width: size, height: size), cornerRadius: 0)
    path.append(circlePath)
A CAShapeLayer then draws the path in its coordinate space.
    let fillLayer = CAShapeLayer()
    fillLayer.path = path.cgPath
Finally, add the layer to the view.
    view.layer.addSublayer(fillLayer)
Add the crop area:
To create the crop area we need a factor; dividing the image's width by the view frame's width gives the scale between image pixels and view points.
    let factor = imageView.image!.size.width / view.frame.width
For zooming, derive the scale from the scroll view's zoomScale.
    let scale = 1 / scroll.zoomScale
Then set the crop area frame (x, y, width, height).
    let x = (scroll.contentOffset.x + circlePath.bounds.origin.x - imageFrame.origin.x) * scale * factor
    let y = (scroll.contentOffset.y + circlePath.bounds.origin.y - imageFrame.origin.y) * scale * factor
    let width = circlePath.bounds.width * scale * factor
    let height = circlePath.bounds.height * scale * factor
Finally, create an IBAction to crop the image.
    let croppedCGImage = imageView.image?.cgImage?.cropping(to: cropArea)
    let croppedImage = UIImage(cgImage: croppedCGImage!)
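As a sanity check, the crop-rect arithmetic above can be reduced to a pure function. The sketch below uses hypothetical values for the scroll offset, crop-frame bounds, image frame, zoom scale, and image-to-view factor:

```swift
import Foundation

// Pure version of the crop-area computation: map the crop frame from
// view coordinates into image-pixel coordinates by undoing the zoom
// (scale) and applying the image/view width ratio (factor).
func cropArea(contentOffset: CGPoint, cropBounds: CGRect, imageFrame: CGRect,
              zoomScale: CGFloat, factor: CGFloat) -> CGRect {
    let scale = 1 / zoomScale
    let x = (contentOffset.x + cropBounds.origin.x - imageFrame.origin.x) * scale * factor
    let y = (contentOffset.y + cropBounds.origin.y - imageFrame.origin.y) * scale * factor
    return CGRect(x: x, y: y,
                  width: cropBounds.size.width * scale * factor,
                  height: cropBounds.size.height * scale * factor)
}

// With no zoom and an image twice as wide as the view, a 100-pt crop
// frame at (50, 50) maps to a 200-px region at (100, 100) in the image.
let area = cropArea(contentOffset: .zero,
                    cropBounds: CGRect(x: 50, y: 50, width: 100, height: 100),
                    imageFrame: .zero, zoomScale: 1, factor: 2)
```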

Scale and move an UIView using UIPanGestureRecognizer

I have a UIView A. I put an icon on view A and I am trying to use a pan gesture to scale and move it.
I have tried many solutions, but I cannot make it work.
Any suggestions?
More detail:
I have a view A added as a subview of self.view. When I drag on view A with a pan gesture recognizer, I want view A to follow the drag.
While moving, view A should also scale: smaller as it moves toward the top/left/bottom/right edges of the screen, and larger as it moves toward the centre of the screen.
Let's say you have a view controller vc, and your UIView A is a subview of vc. You can add a UIPanGestureRecognizer to A, then drag the recognizer into vc as an action:
    @IBAction func panGestureAction(_ sender: UIPanGestureRecognizer) {
        // your code here
    }
From the sender you can check the view, the location, and the state of the gesture. The state might matter in some cases, depending on what you are trying to achieve.
Then modify the action to this:
    @IBAction func panGestureAction(_ sender: UIPanGestureRecognizer) {
        UIView.animateKeyframes(withDuration: 0.1, delay: 0, options: UIViewKeyframeAnimationOptions.calculationModeLinear, animations: {
            let location = sender.location(in: sender.view?.superview)
            sender.view?.center = location
        })
    }
Here sender.view?.superview equals vc.view. This snippet detects the pan gesture and moves A so that A.center matches the gesture's location. The 0.1-second duration gives the movement a smooth animated feel.
This will give you "move" functionality with pan gesture.
EDIT for scaling:
Logic: you have a coordinate system (CS) with a center and x and y axes. When the user pans, they generate a sequence of points in the CS. Our task is to measure the distance between the center of the CS and the user's points; knowing the furthest possible distance, we can calculate a scale factor for our scaling view.
    var center: CGPoint! // center of the CS
    let maxSize: CGSize = CGSize.init(width: 100, height: 100) // maximum size of our scaling view
    var maxLengthToCenter: CGFloat! // maximum distance from the center of the CS to the furthest point in the CS

    private func prepareForScaling() {
        self.center = self.view.center // we set the center of our CS to equal the center of the VC's view
        let frame = self.view.frame
        // the furthest distance in the CS is the diagonal, calculated with the Pythagorean theorem
        self.maxLengthToCenter = (frame.width*frame.width + frame.height*frame.height).squareRoot()
    }
Then we call this setup function so the data is ready for the scaling functionality; viewDidLoad is a good place:
    override func viewDidLoad() {
        super.viewDidLoad()
        self.prepareForScaling()
    }
Next we need a helper function that calculates the scaled size of our view for the current position of the user's pan gesture on the screen:
    private func scaledSize(for location: CGPoint) -> CGSize {
        // calculate the x and y differences of the location from the center
        let xDifference = location.x - self.center.x
        let yDifference = location.y - self.center.y
        // calculate the scale factor - note that this factor will be between 0.0 (center) and 0.5 (corner - furthest point)
        // This is because we measure from the center to the view's edge. Consider multiplying this factor by a custom constant.
        let scaleFactor = (xDifference*xDifference + yDifference*yDifference).squareRoot()/maxLengthToCenter
        // create the scaled size from maxSize and the current scale factor
        let scaledSize = CGSize.init(width: maxSize.width*(1-scaleFactor), height: maxSize.height*(1-scaleFactor))
        return scaledSize
    }
And finally, we need to modify our pan gesture action to change the size of A:
    @IBAction func panGestureAction(_ sender: UIPanGestureRecognizer) {
        UIView.animateKeyframes(withDuration: 0.1, delay: 0, options: UIViewKeyframeAnimationOptions.calculationModeLinear, animations: {
            let location = sender.location(in: sender.view?.superview)
            sender.view?.frame = CGRect.init(origin: CGPoint.init(x: 0, y: 0), size: self.scaledSize(for: location))
            sender.view?.center = location
        })
    }
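The distance-to-scale math above can be checked in isolation. Below is a self-contained sketch of the same computation, with the screen size and maximum view size as hypothetical stand-ins for the view controller's values:

```swift
import Foundation

// Pure version of scaledSize(for:): the scale factor is the distance from
// the center divided by the full diagonal, so it runs from 0.0 at the
// center to 0.5 at a corner, and the view shrinks accordingly.
func scaledSize(for location: CGPoint, center: CGPoint,
                maxSize: CGSize, maxLengthToCenter: CGFloat) -> CGSize {
    let xDifference = location.x - center.x
    let yDifference = location.y - center.y
    let scaleFactor = (xDifference * xDifference + yDifference * yDifference).squareRoot() / maxLengthToCenter
    return CGSize(width: maxSize.width * (1 - scaleFactor),
                  height: maxSize.height * (1 - scaleFactor))
}

// On a 300 x 400 screen (diagonal 500), a 100 x 100 view keeps its full
// size at the center and shrinks to 50 x 50 at a corner.
let center = CGPoint(x: 150, y: 200)
let maxSize = CGSize(width: 100, height: 100)
let full = scaledSize(for: center, center: center, maxSize: maxSize, maxLengthToCenter: 500)
let corner = scaledSize(for: CGPoint(x: 300, y: 400), center: center, maxSize: maxSize, maxLengthToCenter: 500)
```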


only detect in a section of camera preview layer, iOS, Swift

I am trying to create a detection zone in the live preview on my camera preview layer.
Is this possible? Say there is a live feed with face detection on: as you look around, it should only put a box around a face inside a certain area, for example a rectangle in the centre of the screen. All other faces in the preview, outside of that rectangle, should not get detected.
I'm using Vision, iOS, and Swift.
I figured this out by adding a guard before adding the CALayer.
Before viewDidLoad:
    @IBOutlet weak var scanAreaImage: UIImageView!
    var regionOfInterest: CGRect!
In viewDidLoad:
scanAreaImage is an image view that I placed via the storyboard; its frame represents the only area I want detection in.
    let someRect: CGRect = scanAreaImage.frame
    regionOfInterest = someRect
Then, in the Vision text detection section:
    func highlightLetters(box: VNRectangleObservation) {
        let xCord = box.topLeft.x * (cameraPreviewlayer?.frame.size.width)!
        let yCord = (1 - box.topLeft.y) * (cameraPreviewlayer?.frame.size.height)!
        let width = (box.topRight.x - box.bottomLeft.x) * (cameraPreviewlayer?.frame.size.width)!
        let height = (box.topLeft.y - box.bottomLeft.y) * (cameraPreviewlayer?.frame.size.height)!

        // This is the section I added for the region-of-interest detection zone.
        let wordRect = CGRect(x: xCord, y: yCord, width: width, height: height)
        // Only draw a box if the origin of the word box is within
        // regionOfInterest, the CGRect created earlier.
        guard regionOfInterest.contains(wordRect.origin) else { return }

        let outline = CALayer()
        outline.frame = CGRect(x: xCord, y: yCord, width: width, height: height)
        outline.borderWidth = 1.0
        if textColour == 1 {
            outline.borderColor = UIColor.blue.cgColor
        } else {
            outline.borderColor = UIColor.clear.cgColor
        }
        cameraPreviewlayer?.addSublayer(outline)
    }
This will only show outlines of the things inside the rectangle you created in the storyboard (mine being scanAreaImage).
I hope this helps someone.
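The coordinate conversion and containment guard in this answer boil down to a little pure geometry. Here is a sketch, with a hypothetical 400 x 800 preview layer and a 400 x 400 region of interest:

```swift
import Foundation

// Vision reports normalized (0...1) corner points with a bottom-left
// origin; the preview layer uses points with a top-left origin, so the
// y coordinate is flipped with (1 - y).
func wordRect(topLeft: CGPoint, topRight: CGPoint, bottomLeft: CGPoint,
              layerSize: CGSize) -> CGRect {
    let x = topLeft.x * layerSize.width
    let y = (1 - topLeft.y) * layerSize.height
    let width = (topRight.x - bottomLeft.x) * layerSize.width
    let height = (topLeft.y - bottomLeft.y) * layerSize.height
    return CGRect(x: x, y: y, width: width, height: height)
}

// A detection spanning the top-left quarter of a 400 x 800 layer.
let rect = wordRect(topLeft: CGPoint(x: 0, y: 1),
                    topRight: CGPoint(x: 0.5, y: 1),
                    bottomLeft: CGPoint(x: 0, y: 0.5),
                    layerSize: CGSize(width: 400, height: 800))

// The guard keeps the box only if its origin falls inside the region
// of interest (the containment test is written out by hand here).
let region = CGRect(x: 0, y: 0, width: 400, height: 400)
let keep = rect.origin.x >= region.origin.x
    && rect.origin.x <= region.origin.x + region.size.width
    && rect.origin.y >= region.origin.y
    && rect.origin.y <= region.origin.y + region.size.height
```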

Coordinate system in Swift iOS is wrong

I am trying to position a sprite at a point:
    class GameScene: SKScene {
        let player = SKSpriteNode(imageNamed: "Koopa_walk_1.png")
        override func didMove(to view: SKView) {
            player.position = CGPoint(x: 0, y: 0)
            self.addChild(player)
            print(player.position)
        }
    }
But for some reason my sprite appears more or less in the middle of the screen.
Edit:
This is the original image (260px by 320px).
I expected the image to appear in the top left, because its coordinates are (0, 0).
SpriteKit's coordinate system is Cartesian. When you use the starting template in Apple's Xcode, the origin defaults to the middle of the screen: the template sets the scene's anchorPoint to (0.5, 0.5).
To get a top-left origin, set the anchorPoint to (0, 1) and then invert your y values (use negative values).
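To make the anchor-point explanation concrete, here is a sketch of how a top-left (UIKit-style) point maps into scene coordinates for a given anchorPoint; the scene size is a hypothetical 400 x 300:

```swift
import Foundation

// Map a point given in top-left (UIKit-style) coordinates into
// SpriteKit scene coordinates: shift by the anchor's offset and flip y,
// since SpriteKit's y axis points up while UIKit's points down.
func scenePoint(fromTopLeft point: CGPoint, sceneSize: CGSize,
                anchorPoint: CGPoint) -> CGPoint {
    let x = point.x - anchorPoint.x * sceneSize.width
    let y = (sceneSize.height - point.y) - anchorPoint.y * sceneSize.height
    return CGPoint(x: x, y: y)
}

// With the template's (0.5, 0.5) anchor, the top-left corner of a
// 400 x 300 scene is (-200, 150) in scene coordinates, which is why a
// sprite placed at (0, 0) ends up in the middle of the screen.
let topLeft = scenePoint(fromTopLeft: CGPoint(x: 0, y: 0),
                         sceneSize: CGSize(width: 400, height: 300),
                         anchorPoint: CGPoint(x: 0.5, y: 0.5))
```

With an anchorPoint of (0, 1), the same corner maps to (0, 0), matching the top-left origin described above.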
