Absolute depth from iPhone X back camera using disparity from AVDepthData? - ios

I'm trying to estimate absolute depth (in meters) from an AVDepthData object based on this equation: depth = baseline x focal_length / (disparity + d_offset). I have all the parameters from cameraCalibrationData, but does this still apply to an image taken in Portrait mode on an iPhone X, since the two cameras are offset vertically? Also, according to WWDC 2017 Session 507 the disparity map is relative, yet the AVDepthData documentation states that the disparity values are in 1/m. So can I apply the equation to the values in the depth data map as-is, or do I need to do some additional processing first?
var depthData: AVDepthData
do {
    depthData = try AVDepthData(fromDictionaryRepresentation: auxDataInfo)
} catch {
    return nil
}

// Working with disparity
if depthData.depthDataType != kCVPixelFormatType_DisparityFloat32 {
    depthData = depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
}

CVPixelBufferLockBaseAddress(depthData.depthDataMap, CVPixelBufferLockFlags(rawValue: 0))

// Scale the intrinsic matrix to be in depth-image pixel space
guard let calibrationData = depthData.cameraCalibrationData else { return nil }
var intrinsicMatrix = calibrationData.intrinsicMatrix
let referenceDimensions = calibrationData.intrinsicMatrixReferenceDimensions
let depthWidth = CVPixelBufferGetWidth(depthData.depthDataMap)
let depthHeight = CVPixelBufferGetHeight(depthData.depthDataMap)
let depthSize = CGSize(width: depthWidth, height: depthHeight)
let ratio: Float = Float(referenceDimensions.width) / Float(depthWidth)
intrinsicMatrix[0][0] /= ratio
intrinsicMatrix[1][1] /= ratio
intrinsicMatrix[2][0] /= ratio
intrinsicMatrix[2][1] /= ratio

// For converting disparity to depth
let baseline: Float = 1.45 / 100.0 // measured baseline in m

// Prepare for lens distortion correction
guard let lut = calibrationData.lensDistortionLookupTable else { return nil }
let center = calibrationData.lensDistortionCenter
let correctedCenter = CGPoint(x: center.x / CGFloat(ratio), y: center.y / CGFloat(ratio))

// Build point cloud
var pointCloud = [[Float]]()
for dataY in 0 ..< depthHeight {
    let rowData = CVPixelBufferGetBaseAddress(depthData.depthDataMap)! + dataY * CVPixelBufferGetBytesPerRow(depthData.depthDataMap)
    let data = UnsafeBufferPointer(start: rowData.assumingMemoryBound(to: Float32.self), count: depthWidth)
    for dataX in 0 ..< depthWidth {
        let disparity = data[dataX]
        let pointZ = baseline * intrinsicMatrix[0][0] / disparity
        let currPoint = CGPoint(x: dataX, y: dataY)
        let correctedPoint = lensDistortionPoint(for: currPoint, lookupTable: lut, distortionOpticalCenter: correctedCenter, imageSize: depthSize)
        let pointX = (Float(correctedPoint.x) - intrinsicMatrix[2][0]) * pointZ / intrinsicMatrix[0][0]
        let pointY = (Float(correctedPoint.y) - intrinsicMatrix[2][1]) * pointZ / intrinsicMatrix[1][1]
        pointCloud.append([pointX, pointY, pointZ])
    }
}
CVPixelBufferUnlockBaseAddress(depthData.depthDataMap, CVPixelBufferLockFlags(rawValue: 0))
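As a sanity check on the manual conversion (a sketch, assuming the same depthData as above and that the embedded calibration metadata is valid), AVFoundation can also convert the disparity map to depth in meters itself, which can then be compared against the baseline * focal_length / disparity result:

// Sketch: let AVFoundation convert disparity to depth (values in meters) and sample one pixel.
let depthConverted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
let depthMap = depthConverted.depthDataMap
CVPixelBufferLockBaseAddress(depthMap, .readOnly)
let sampleRow = CVPixelBufferGetBaseAddress(depthMap)! + 100 * CVPixelBufferGetBytesPerRow(depthMap) // hypothetical row 100
let metersAtSample = sampleRow.assumingMemoryBound(to: Float32.self)[100]                             // hypothetical column 100
CVPixelBufferUnlockBaseAddress(depthMap, .readOnly)
print("AVFoundation depth at (100, 100): \(metersAtSample) m")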

Related

How do I get the distance of a specific coordinate from the screen size depthMap of ARDepthData?

I am trying to get the distance for a specific coordinate from a depthMap resized to the screen size, but it is not working.
I have tried to implement the following steps:
1. Convert the depthMap to a CIImage, then resize the image to the orientation and size of the screen using an affine transform.
2. Convert the transformed image into a screen-sized CVPixelBuffer.
3. Read the distance in meters stored in the CVPixelBuffer from a one-dimensional array at index width * y + x for the coordinate (x, y).
I have implemented the above procedure, but I cannot get the appropriate index from the one-dimensional array. What should I do?
The code for the procedure is shown below.
1.
let depthMap = depthData.depthMap
// convert the depthMap to CIImage
let image = CIImage(cvPixelBuffer: depthMap)
let imageSize = CGSize(width: CVPixelBufferGetWidth(depthMap), height: CVPixelBufferGetHeight(depthMap))
// 1) Normalize the captured image to 0.0-1.0 coordinates
let normalizeTransform = CGAffineTransform(scaleX: 1.0 / imageSize.width, y: 1.0 / imageSize.height)
// 2) "Flip the Y axis (for some mysterious reason this is only necessary in portrait mode)",
//    so apply the flip in portrait. Not only the Y axis but also the X axis needs to be flipped.
let interfaceOrientation = self.arView.window!.windowScene!.interfaceOrientation
let flipTransform = (interfaceOrientation.isPortrait) ? CGAffineTransform(scaleX: -1, y: -1).translatedBy(x: -1, y: -1) : .identity
// 3) Map to the screen's orientation and position within the captured image
let displayTransform = frame.displayTransform(for: interfaceOrientation, viewportSize: arView.bounds.size)
// 4) Convert from the 0.0-1.0 coordinate system to screen coordinates
let toViewPortTransform = CGAffineTransform(scaleX: arView.bounds.size.width, y: arView.bounds.size.height)
// 5) Apply transforms 1-4 and crop the transformed image to the screen size
let transformedImage = image
    .transformed(by: normalizeTransform.concatenating(flipTransform).concatenating(displayTransform).concatenating(toViewPortTransform))
    .cropped(to: arView.bounds)
// convert the converted image to a screen-sized CVPixelBuffer
if let convertDepthMap = transformedImage.pixelBuffer(cgSize: arView.bounds.size) {
    previewImage.image = transformedImage.toUIImage()
    DispatchQueue.main.async {
        self.processDepthData(convertDepthMap)
    }
}
// The process of acquiring a CVPixelBuffer is implemented in an extension
extension CIImage {
    func toUIImage() -> UIImage {
        UIImage(ciImage: self)
    }

    func pixelBuffer(cgSize size: CGSize) -> CVPixelBuffer? {
        var pixelBuffer: CVPixelBuffer?
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        let width = Int(size.width)
        let height = Int(size.height)
        CVPixelBufferCreate(kCFAllocatorDefault,
                            width,
                            height,
                            kCVPixelFormatType_DepthFloat32,
                            attrs,
                            &pixelBuffer)
        // put bytes into the pixelBuffer
        let context = CIContext()
        context.render(self, to: pixelBuffer!)
        return pixelBuffer
    }
}
private func processDepthData(_ depthMap: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    if let baseAddress = CVPixelBufferGetBaseAddress(depthMap) {
        let mutablePointer = baseAddress.bindMemory(to: Float32.self, capacity: width * height)
        let bufferPointer = UnsafeBufferPointer(start: mutablePointer, count: width * height)
        let depthArray = Array(bufferPointer)
        CVPixelBufferUnlockBaseAddress(depthMap, .readOnly)
        // index = width * y + x: trying to read the distance in meters for the coordinate (300, 100),
        // but it returns the distance for another coordinate
        print(depthArray[width * 100 + 300])
    }
}
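One likely culprit (a sketch of the usual fix, not verified against this exact pipeline): CVPixelBuffer rows are often padded, so the float for (x, y) lives at byte offset y * bytesPerRow + x * 4 rather than at flat index width * y + x. Reading row by row with CVPixelBufferGetBytesPerRow avoids the padding issue:

// Sketch: index into the depth buffer row by row, honoring bytesPerRow padding.
func depthValue(in depthMap: CVPixelBuffer, x: Int, y: Int) -> Float32? {
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
    guard x >= 0, y >= 0,
          x < CVPixelBufferGetWidth(depthMap),
          y < CVPixelBufferGetHeight(depthMap),
          let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }
    let rowStart = base + y * CVPixelBufferGetBytesPerRow(depthMap) // bytesPerRow may be larger than width * 4
    return rowStart.assumingMemoryBound(to: Float32.self)[x]
}

// Usage with the coordinate from the question:
// let meters = depthValue(in: convertDepthMap, x: 300, y: 100)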

Depth map normalization

I am applying the CIDepthBlurEffect filter to a PHAsset as follows:
PHImageManager.default().requestImageDataAndOrientation(for: self.asset!, options: imageOptions) { (data, responseString, imageOrientation, info) in
    if data != nil {
        DispatchQueue.main.async {
            if let depthImage = CIImage(data: data!, options: [CIImageOption.auxiliaryDisparity: true])?.oriented(imageOrientation) {
                if let newMainImage = CIImage(data: data!)?.oriented(imageOrientation) {
                    let filter = CIFilter(name: "CIDepthBlurEffect",
                                          parameters: [kCIInputImageKey: newMainImage,
                                                       kCIInputDisparityImageKey: depthImage,
                                                       "inputAperture": withRadius])
                    self.imageView.image = UIImage(ciImage: filter!.outputImage!)
                }
            }
        }
    }
}
This works great most of the time. However, sometimes the filter's blur planes seem to be off, almost as if the Z plane is wrong or shifted.
Does this method require the depth data to be normalized?
Previously, when CVPixelBuffer was used, I was using this extension:
final func normalize() -> CVPixelBuffer {
    let width = CVPixelBufferGetWidth(self)
    let height = CVPixelBufferGetHeight(self)

    CVPixelBufferLockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0))
    let floatBuffer = unsafeBitCast(CVPixelBufferGetBaseAddress(self), to: UnsafeMutablePointer<Float>.self)

    var minPixel: Float = 1.0
    var maxPixel: Float = 0.0

    for y in 0 ..< height {
        for x in 0 ..< width {
            let pixel = floatBuffer[y * width + x]
            minPixel = min(pixel, minPixel)
            maxPixel = max(pixel, maxPixel)
        }
    }

    let range = maxPixel - minPixel

    for y in 0 ..< height {
        for x in 0 ..< width {
            let pixel = floatBuffer[y * width + x]
            floatBuffer[y * width + x] = (pixel - minPixel) / range
        }
    }

    CVPixelBufferUnlockBaseAddress(self, CVPixelBufferLockFlags(rawValue: 0))
    return self
}
Is there a way to do this with a CIImage?
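One workaround (a sketch, assuming the normalize() extension above is declared on CVPixelBuffer and the CIImage holds Float32 disparity values) is to round-trip through a pixel buffer: render the CIImage into a DepthFloat32 buffer, normalize it with the existing extension, and wrap the result back in a CIImage:

// Sketch: normalize a CIImage by rendering it into a Float32 CVPixelBuffer
// and reusing the CVPixelBuffer.normalize() extension above.
func normalized(_ image: CIImage, context: CIContext = CIContext()) -> CIImage? {
    var buffer: CVPixelBuffer?
    let width = Int(image.extent.width)
    let height = Int(image.extent.height)
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_DepthFloat32, nil, &buffer)
    guard let pixelBuffer = buffer else { return nil }
    context.render(image, to: pixelBuffer) // write the disparity values into the buffer
    return CIImage(cvPixelBuffer: pixelBuffer.normalize())
}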

Find the Bounding Box of Submesh in iOS

I am drawing a 3D model, loaded as below from an .obj file. I need to find the bounding box for each submesh of that .obj file.
let assetURL = Bundle.main.url(forResource: "train", withExtension: "obj")!
let allocator = MTKMeshBufferAllocator(device: device)
let asset = MDLAsset(url: assetURL, vertexDescriptor: vertexDescriptor, bufferAllocator: allocator)
let mdlMesh = asset.object(at: 0) as! MDLMesh
mesh = try! MTKMesh(mesh: mdlMesh, device: device)

for submesh in mesh.submeshes {
    // I need to find the bounding box for each submesh
}
How can I achieve that in iOS?
Here's a function that takes an MTKMesh and produces an array of MDLAxisAlignedBoundingBoxes in model space:
enum BoundingBoxError: Error {
    case invalidIndexType(String)
}

func boundingBoxesForSubmeshes(of mtkMesh: MTKMesh,
                               positionAttributeData: MDLVertexAttributeData) throws -> [MDLAxisAlignedBoundingBox]
{
    struct VertexPosition {
        var x, y, z: Float
    }

    var boundingBoxes = [MDLAxisAlignedBoundingBox]()
    let positionsPtr = positionAttributeData.dataStart

    for submesh in mtkMesh.submeshes {
        // Reset the extents for each submesh so the boxes don't accumulate across submeshes
        var minX = Float.greatestFiniteMagnitude
        var minY = Float.greatestFiniteMagnitude
        var minZ = Float.greatestFiniteMagnitude
        var maxX = -Float.greatestFiniteMagnitude
        var maxY = -Float.greatestFiniteMagnitude
        var maxZ = -Float.greatestFiniteMagnitude

        let indexBuffer = submesh.indexBuffer
        let mtlIndexBuffer = indexBuffer.buffer
        let submeshIndicesRaw = mtlIndexBuffer.contents().advanced(by: indexBuffer.offset)

        if submesh.indexType != .uint32 {
            throw BoundingBoxError.invalidIndexType("Expected 32-bit indices")
        }

        let submeshIndicesPtr = submeshIndicesRaw.bindMemory(to: UInt32.self, capacity: submesh.indexCount)
        let submeshIndices = UnsafeMutableBufferPointer<UInt32>(start: submeshIndicesPtr, count: submesh.indexCount)

        for index in submeshIndices {
            let positionPtr = positionsPtr.advanced(by: Int(index) * positionAttributeData.stride)
            let position = positionPtr.assumingMemoryBound(to: VertexPosition.self).pointee
            if position.x < minX { minX = position.x }
            if position.y < minY { minY = position.y }
            if position.z < minZ { minZ = position.z }
            if position.x > maxX { maxX = position.x }
            if position.y > maxY { maxY = position.y }
            if position.z > maxZ { maxZ = position.z }
        }

        let min = SIMD3<Float>(x: minX, y: minY, z: minZ)
        let max = SIMD3<Float>(x: maxX, y: maxY, z: maxZ)
        let box = MDLAxisAlignedBoundingBox(maxBounds: max, minBounds: min)
        boundingBoxes.append(box)
    }

    return boundingBoxes
}
Note that this function expects the submesh to have 32-bit indices, though it could be adapted to support 16-bit indices as well.
To drive it, you'll also need to supply an MDLVertexAttributeData object that points to the vertex data in the containing mesh. Here's how to do that:
let mesh = try! MTKMesh(mesh: mdlMesh, device: device)
let positionAttrData = mdlMesh.vertexAttributeData(forAttributeNamed: MDLVertexAttributePosition, as: .float3)!
let boundingBoxes = try? boundingBoxesForSubmeshes(of: mesh, positionAttributeData: positionAttrData)
I haven't tested this code thoroughly, but it seems to produce sane results on my limited set of test cases. Visualizing the boxes with a debug mesh should immediately show whether or not they're correct.
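For meshes with 16-bit indices (the adaptation mentioned above), one approach (a sketch, not tested) is to branch on submesh.indexType and widen the indices to Int before walking the vertex data:

import MetalKit

// Sketch: read submesh indices whether they are .uint16 or .uint32.
func indices(for submesh: MTKSubmesh) -> [Int] {
    let raw = submesh.indexBuffer.buffer.contents().advanced(by: submesh.indexBuffer.offset)
    switch submesh.indexType {
    case .uint16:
        let ptr = raw.bindMemory(to: UInt16.self, capacity: submesh.indexCount)
        return UnsafeBufferPointer(start: ptr, count: submesh.indexCount).map(Int.init)
    case .uint32:
        let ptr = raw.bindMemory(to: UInt32.self, capacity: submesh.indexCount)
        return UnsafeBufferPointer(start: ptr, count: submesh.indexCount).map(Int.init)
    @unknown default:
        return []
    }
}

The bounding-box loop above could then iterate over indices(for: submesh) instead of binding the index buffer to UInt32 directly.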

How to convert camera coordinates to coordinate-space of scene?

I am trying to find the coordinates of the camera in the scene I have made, but I end up with coordinates in a different system: values such as 0.0134094329550862, which is about 1 cm in the scene coordinate system, even though I moved the device more than that.
I do this to get coordinates:
let cameraCoordinates = self.sceneView.pointOfView?.worldPosition
self.POSx = Double((cameraCoordinates?.x)!)
self.POSy = Double((cameraCoordinates?.y)!)
self.POSz = Double((cameraCoordinates?.z)!)
This is how I found the coordinates of the camera and its orientation.
func getUserVector() -> (SCNVector3, SCNVector3) {
    if let frame = self.sceneView.session.currentFrame {
        let mat = SCNMatrix4(frame.camera.transform)
        let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33) // camera's forward direction in world space
        let pos = SCNVector3(mat.m41, mat.m42, mat.m43)                // camera's position in world space
        return (dir, pos)
    }
    return (SCNVector3(0, 0, -1), SCNVector3(0, 0, -0.2))
}
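To go the other way, i.e. convert a point expressed in the camera's coordinate space into the scene's (world) coordinate space, one option (a sketch, assuming the same sceneView) is SCNNode.convertPosition(_:to:) on the point-of-view node; passing nil as the destination converts to world space:

// Sketch: convert a point 0.5 m in front of the camera (camera space) into scene/world space.
if let cameraNode = sceneView.pointOfView {
    let pointInCameraSpace = SCNVector3(0, 0, -0.5) // 0.5 m along the camera's -Z axis
    let pointInWorldSpace = cameraNode.convertPosition(pointInCameraSpace, to: nil) // nil = the scene's world space
    print("world position:", pointInWorldSpace)
}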

How to draw an arc on Google Maps in iOS?

How can I draw an arc between two coordinate points in Google Maps, like in this image (similar to a Facebook post), in iOS?
Before using the below function, don't forget to import GoogleMaps
Credits: xomena
func drawArcPolyline(startLocation: CLLocationCoordinate2D?, endLocation: CLLocationCoordinate2D?) {
    if let startLocation = startLocation, let endLocation = endLocation {
        // Swap startLocation & endLocation if you want to reverse the direction of the arc.
        let mapView = GMSMapView() // presumably your view controller's existing map view
        let path = GMSMutablePath()
        path.add(startLocation)
        path.add(endLocation)

        // Curve line
        let k: Double = 0.2 // try values between 0.2 and 0.5 for a curvature that suits you
        let d = GMSGeometryDistance(startLocation, endLocation)
        let h = GMSGeometryHeading(startLocation, endLocation)

        // Midpoint position
        let p = GMSGeometryOffset(startLocation, d * 0.5, h)

        // Calculate the position of the circle center
        let x = (1 - k * k) * d * 0.5 / (2 * k)
        let r = (1 + k * k) * d * 0.5 / (2 * k)
        let c = GMSGeometryOffset(p, x, h + 90.0)

        // Calculate the headings from the circle center to the two points
        let h1 = GMSGeometryHeading(c, startLocation)
        let h2 = GMSGeometryHeading(c, endLocation)

        // Calculate points on the circle border and add them to the path
        let numpoints = 100.0
        let step = (h2 - h1) / numpoints
        for i in stride(from: 0.0, to: numpoints, by: 1) {
            let pi = GMSGeometryOffset(c, r, h1 + i * step)
            path.add(pi)
        }

        // Draw the polyline
        let polyline = GMSPolyline(path: path)
        polyline.map = mapView // assign the GMSMapView as the map
        polyline.strokeWidth = 3.0
        let styles = [GMSStrokeStyle.solidColor(UIColor.black), GMSStrokeStyle.solidColor(UIColor.clear)]
        let lengths = [20, 20] // adjust these for the dashed-line pattern
        polyline.spans = GMSStyleSpans(polyline.path!, styles, lengths as [NSNumber], .rhumb)

        let bounds = GMSCoordinateBounds(coordinate: startLocation, coordinate: endLocation)
        let insets = UIEdgeInsets(top: 20, left: 20, bottom: 20, right: 20)
        let camera = mapView.camera(for: bounds, insets: insets)!
        mapView.animate(to: camera)
    }
}
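The "mathematics" in the middle is ordinary circle geometry: treating k as the ratio of the arc's bulge (sagitta) to half the chord, the radius works out to r = (1 + k*k) * d / (4 * k) and the circle center sits at a perpendicular distance x = (1 - k*k) * d / (4 * k) from the chord midpoint, which is exactly what the two lines above compute. A minimal usage sketch (with hypothetical coordinates):

let sydney = CLLocationCoordinate2D(latitude: -33.8688, longitude: 151.2093)
let melbourne = CLLocationCoordinate2D(latitude: -37.8136, longitude: 144.9631)
drawArcPolyline(startLocation: sydney, endLocation: melbourne)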
I used a quadratic Bézier equation to draw the curved line. You can have a look at the implementation. Here is the sample code.
func bezierPath(from startLocation: CLLocationCoordinate2D, to endLocation: CLLocationCoordinate2D) -> GMSMutablePath {
    let distance = GMSGeometryDistance(startLocation, endLocation)
    let midPoint = GMSGeometryInterpolate(startLocation, endLocation, 0.5)
    let midToStartLocHeading = GMSGeometryHeading(midPoint, startLocation)
    let controlPointAngle = 360.0 - (90.0 - midToStartLocHeading)
    let controlPoint = GMSGeometryOffset(midPoint, distance / 2.0, controlPointAngle)

    let path = GMSMutablePath()
    let stepper = 0.05
    let range = stride(from: 0.0, through: 1.0, by: stepper) // t = [0, 1]

    // Quadratic Bezier: B(t) = (1-t)^2 * P0 + 2(1-t)t * C + t^2 * P1
    func calculatePoint(when t: Double) -> CLLocationCoordinate2D {
        let t1 = (1.0 - t)
        let latitude = t1 * t1 * startLocation.latitude + 2 * t1 * t * controlPoint.latitude + t * t * endLocation.latitude
        let longitude = t1 * t1 * startLocation.longitude + 2 * t1 * t * controlPoint.longitude + t * t * endLocation.longitude
        return CLLocationCoordinate2D(latitude: latitude, longitude: longitude)
    }

    range.map { calculatePoint(when: $0) }.forEach { path.add($0) }
    return path
}
None of the answers mentioned is a foolproof solution; for some locations they draw a circle instead of an arc.
To resolve this, we calculate the bearing (degrees clockwise from true north) and, if it is less than zero, swap the start and end locations.
func createArc(startLocation: CLLocationCoordinate2D,
               endLocation: CLLocationCoordinate2D) -> GMSPolyline {
    var start = startLocation
    var end = endLocation
    if start.bearing(to: end) < 0.0 {
        start = endLocation
        end = startLocation
    }

    let angle = start.bearing(to: end) * Double.pi / 180.0
    let k = abs(0.3 * sin(angle))

    let path = GMSMutablePath()
    let d = GMSGeometryDistance(start, end)
    let h = GMSGeometryHeading(start, end)
    let p = GMSGeometryOffset(start, d * 0.5, h)
    let x = (1 - k * k) * d * 0.5 / (2 * k)
    let r = (1 + k * k) * d * 0.5 / (2 * k)
    let c = GMSGeometryOffset(p, x, h + 90.0)

    var h1 = GMSGeometryHeading(c, start)
    var h2 = GMSGeometryHeading(c, end)
    if h1 > 180 {
        h1 = h1 - 360
    }
    if h2 > 180 {
        h2 = h2 - 360
    }

    let numpoints = 100.0
    let step = (h2 - h1) / numpoints
    for i in stride(from: 0.0, to: numpoints, by: 1) {
        let pi = GMSGeometryOffset(c, r, h1 + i * step)
        path.add(pi)
    }
    path.add(end)

    let polyline = GMSPolyline(path: path)
    polyline.strokeWidth = 3.0
    polyline.spans = GMSStyleSpans(
        polyline.path!,
        [GMSStrokeStyle.solidColor(UIColor(hex: "#2962ff"))],
        [20, 20], .rhumb
    )
    return polyline
}
The bearing is the direction in which a vertical line on the map points, measured in degrees clockwise from north.
extension CLLocationCoordinate2D {
    func bearing(to point: CLLocationCoordinate2D) -> Double {
        func degreesToRadians(_ degrees: Double) -> Double { return degrees * Double.pi / 180.0 }
        func radiansToDegrees(_ radians: Double) -> Double { return radians * 180.0 / Double.pi }

        let lat1 = degreesToRadians(latitude)
        let lon1 = degreesToRadians(longitude)
        let lat2 = degreesToRadians(point.latitude)
        let lon2 = degreesToRadians(point.longitude)
        let dLon = lon2 - lon1

        let y = sin(dLon) * cos(lat2)
        let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
        let radiansBearing = atan2(y, x)
        return radiansToDegrees(radiansBearing)
    }
}
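For example (a sketch with hypothetical test coordinates), bearing(to:) returns a signed value in the range (-180, 180], which is why createArc swaps the endpoints when it comes back negative:

let london = CLLocationCoordinate2D(latitude: 51.5074, longitude: -0.1278)
let paris = CLLocationCoordinate2D(latitude: 48.8566, longitude: 2.3522)
print(london.bearing(to: paris)) // positive (roughly south-east), endpoints kept as-is
print(paris.bearing(to: london)) // negative (roughly north-west), so createArc swaps start and end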
The answers above do not handle all the corner cases; here is one that draws the arcs nicely:
func drawArcPolyline(startLocation: CLLocationCoordinate2D?, endLocation: CLLocationCoordinate2D?) {
    if let startLocation = startLocation, let endLocation = endLocation {
        // Swap startLocation & endLocation if you want to reverse the direction of the arc.
        var start = startLocation
        var end = endLocation
        var gradientColors = GMSStrokeStyle.gradient(
            from: UIColor(red: 11.0/255, green: 211.0/255, blue: 200.0/255, alpha: 1),
            to: UIColor(red: 0/255, green: 44.0/255, blue: 66.0/255, alpha: 1))
        if startLocation.heading(to: endLocation) < 0.0 {
            // Reverse the start and end, and reverse the gradient colors to match
            start = endLocation
            end = startLocation
            gradientColors = GMSStrokeStyle.gradient(
                from: UIColor(red: 0/255, green: 44.0/255, blue: 66.0/255, alpha: 1),
                to: UIColor(red: 11.0/255, green: 211.0/255, blue: 200.0/255, alpha: 1))
        }

        let path = GMSMutablePath()

        // Curve line
        let k = abs(0.3 * sin((start.heading(to: end)).degreesToRadians)) // was 0.3
        let d = GMSGeometryDistance(start, end)
        let h = GMSGeometryHeading(start, end)

        // Midpoint position
        let p = GMSGeometryOffset(start, d * 0.5, h)

        // Calculate the position of the circle center
        let x = (1 - k * k) * d * 0.5 / (2 * k)
        let r = (1 + k * k) * d * 0.5 / (2 * k)
        let c = GMSGeometryOffset(p, x, h + 90.0)

        // Calculate the headings from the circle center to the two points
        var h1 = GMSGeometryHeading(c, start)
        var h2 = GMSGeometryHeading(c, end)
        if h1 > 180 { h1 -= 360 }
        if h2 > 180 { h2 -= 360 }

        // Calculate points on the circle border and add them to the path
        let numpoints = 100.0
        let step = (h2 - h1) / numpoints
        for i in stride(from: 0.0, to: numpoints, by: 1) {
            let pi = GMSGeometryOffset(c, r, h1 + i * step)
            path.add(pi)
        }
        path.add(end)

        // Draw the polyline
        let polyline = GMSPolyline(path: path)
        polyline.map = mapView // assign the GMSMapView as the map
        polyline.strokeWidth = 5.0
        polyline.spans = [GMSStyleSpan(style: gradientColors)]
    }
}
Objective-C version of @Rouny's answer:
- (void)DrawCurvedPolylineOnMapFrom:(CLLocationCoordinate2D)startLocation To:(CLLocationCoordinate2D)endLocation
{
    GMSMutablePath *path = [[GMSMutablePath alloc] init];
    [path addCoordinate:startLocation];
    [path addCoordinate:endLocation];

    // Curve line
    double k = 0.2; // try values between 0.2 and 0.5 for a curvature that suits you
    CLLocationDistance d = GMSGeometryDistance(startLocation, endLocation);
    float h = GMSGeometryHeading(startLocation, endLocation);

    // Midpoint position
    CLLocationCoordinate2D p = GMSGeometryOffset(startLocation, d * 0.5, h);

    // Calculate the position of the circle center
    float x = (1 - k * k) * d * 0.5 / (2 * k);
    float r = (1 + k * k) * d * 0.5 / (2 * k);
    CLLocationCoordinate2D c = GMSGeometryOffset(p, x, h + -90.0);

    // Calculate the headings from the circle center to the two points
    float h1 = GMSGeometryHeading(c, startLocation);
    float h2 = GMSGeometryHeading(c, endLocation);

    // Calculate points on the circle border and add them to the path
    float numpoints = 100;
    float step = ((h2 - h1) / numpoints);
    for (int i = 0; i < numpoints; i++) {
        CLLocationCoordinate2D pi = GMSGeometryOffset(c, r, h1 + i * step);
        [path addCoordinate:pi];
    }

    // Draw the polyline
    GMSPolyline *polyline = [GMSPolyline polylineWithPath:path];
    polyline.map = mapView;
    polyline.strokeWidth = 3.0;
    NSArray *styles = @[[GMSStrokeStyle solidColor:kBaseColor],
                        [GMSStrokeStyle solidColor:[UIColor clearColor]]];
    NSArray *lengths = @[@5, @5];
    polyline.spans = GMSStyleSpans(polyline.path, styles, lengths, kGMSLengthRhumb);

    GMSCoordinateBounds *bounds = [[GMSCoordinateBounds alloc] initWithCoordinate:startLocation coordinate:endLocation];
    UIEdgeInsets insets = UIEdgeInsetsMake(20, 20, 20, 20);
    GMSCameraPosition *camera = [mapView cameraForBounds:bounds insets:insets];
    [mapView animateToCameraPosition:camera];
}
Swift 5+
A very easy and smooth way:
//MARK: - Usage
let path = self.bezierPath(from: CLLocationCoordinate2D(latitude: kLatitude, longitude: kLongtitude), to: CLLocationCoordinate2D(latitude: self.restaurantLat, longitude: self.restaurantLong))
let polyline = GMSPolyline(path: path)
polyline.strokeWidth = 5.0
polyline.strokeColor = appClr
polyline.map = self.googleMapView // Google MapView
Simple Function
func drawArcPolyline(from startLocation: CLLocationCoordinate2D, to endLocation: CLLocationCoordinate2D) -> GMSMutablePath {
    let distance = GMSGeometryDistance(startLocation, endLocation)
    let midPoint = GMSGeometryInterpolate(startLocation, endLocation, 0.5)
    let midToStartLocHeading = GMSGeometryHeading(midPoint, startLocation)
    let controlPointAngle = 360.0 - (90.0 - midToStartLocHeading)
    let controlPoint = GMSGeometryOffset(midPoint, distance / 2.0, controlPointAngle)

    let path = GMSMutablePath()
    let stepper = 0.05
    let range = stride(from: 0.0, through: 1.0, by: stepper) // t = [0, 1]

    func calculatePoint(when t: Double) -> CLLocationCoordinate2D {
        let t1 = (1.0 - t)
        let latitude = t1 * t1 * startLocation.latitude + 2 * t1 * t * controlPoint.latitude + t * t * endLocation.latitude
        let longitude = t1 * t1 * startLocation.longitude + 2 * t1 * t * controlPoint.longitude + t * t * endLocation.longitude
        return CLLocationCoordinate2D(latitude: latitude, longitude: longitude)
    }

    range.map { calculatePoint(when: $0) }.forEach { path.add($0) }
    return path
}
