How do you add MKPolylines to MKMapSnapshotter in Swift 3?

Is there a way to take a screenshot of mapView and include the polyline? I believe I need to draw CGPoints on the image that the MKMapSnapshotter returns, but I am unsure how to do so.
Current code
func takeSnapshot(mapView: MKMapView, withCallback: (UIImage?, NSError?) -> ()) {
    let options = MKMapSnapshotOptions()
    options.region = mapView.region
    options.size = mapView.frame.size
    options.scale = UIScreen.main().scale
    let snapshotter = MKMapSnapshotter(options: options)
    snapshotter.start() { snapshot, error in
        guard snapshot != nil else {
            withCallback(nil, error)
            return
        }
        if let image = snapshot?.image {
            withCallback(image, nil)
            for coordinate in self.area {
                image.draw(at: snapshot!.point(for: coordinate))
            }
        }
    }
}

I had the same problem today. After several hours of research, here is how I solved it.
The following code is in Swift 3.
1. Initialize your polyline coordinates array
// initialize this array with your polyline coordinates
var yourCoordinates = [CLLocationCoordinate2D]()
yourCoordinates.append( coordinate 1 )
yourCoordinates.append( coordinate 2 )
...
// you can use any data structure you like
2. Take the snapshot as usual, but set the region based on your coordinates:
func takeSnapShot() {
    let mapSnapshotOptions = MKMapSnapshotOptions()
    // Set the region of the map that is rendered. (by polyline)
    let polyLine = MKPolyline(coordinates: &yourCoordinates, count: yourCoordinates.count)
    let region = MKCoordinateRegionForMapRect(polyLine.boundingMapRect)
    mapSnapshotOptions.region = region
    // Set the scale of the image. We'll just use the scale of the current device, which is 2x scale on Retina screens.
    mapSnapshotOptions.scale = UIScreen.main.scale
    // Set the size of the image output.
    mapSnapshotOptions.size = CGSize(width: IMAGE_VIEW_WIDTH, height: IMAGE_VIEW_HEIGHT)
    // Show buildings and Points of Interest on the snapshot
    mapSnapshotOptions.showsBuildings = true
    mapSnapshotOptions.showsPointsOfInterest = true
    let snapShotter = MKMapSnapshotter(options: mapSnapshotOptions)
    snapShotter.start() { snapshot, error in
        guard let snapshot = snapshot else {
            return
        }
        // Don't just pass snapshot.image, pass snapshot itself!
        self.imageView.image = self.drawLineOnImage(snapshot: snapshot)
    }
}
3. Use snapshot.point(for:) to draw the polyline on the snapshot image
func drawLineOnImage(snapshot: MKMapSnapshot) -> UIImage {
    let image = snapshot.image
    // for Retina screens
    UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, true, 0)
    // draw the original image into the context
    image.draw(at: CGPoint.zero)
    // get the context for CoreGraphics
    let context = UIGraphicsGetCurrentContext()
    // set the stroke width and color of the context
    context!.setLineWidth(2.0)
    context!.setStrokeColor(UIColor.orange.cgColor)
    // Here is the trick:
    // We use move(to:) and addLine(to:) to draw the line, which is easy to understand.
    // The difficult part is that they both take CGPoint parameters, and it would be far too complex to calculate those ourselves.
    // Thus we use snapshot.point(for:) to save the pain.
    context!.move(to: snapshot.point(for: yourCoordinates[0]))
    for coordinate in yourCoordinates.dropFirst() {
        context!.addLine(to: snapshot.point(for: coordinate))
    }
    // apply the stroke to the context
    context!.strokePath()
    // get the image from the graphics context
    let resultImage = UIGraphicsGetImageFromCurrentImageContext()
    // end the graphics context
    UIGraphicsEndImageContext()
    return resultImage!
}
That's it, hope this helps someone.
References
How do I draw on an image in Swift?
MKTileOverlay, MKMapSnapshotter & MKDirections
Creating an MKMapSnapshotter with an MKPolylineRenderer
Render a Map as an Image using MapKit
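For reference, here is a rough sketch (mine, not part of the original answer) of how the question's takeSnapshot(mapView:withCallback:) could apply the same snapshot.point(for:) technique, assuming self.area is the [CLLocationCoordinate2D] to draw:
func takeSnapshot(mapView: MKMapView, withCallback: @escaping (UIImage?, NSError?) -> ()) {
    let options = MKMapSnapshotOptions()
    options.region = mapView.region
    options.size = mapView.frame.size
    options.scale = UIScreen.main.scale
    let snapshotter = MKMapSnapshotter(options: options)
    snapshotter.start { snapshot, error in
        // Bail out if the snapshot failed or there is nothing to draw.
        guard let snapshot = snapshot, self.area.count > 1 else {
            withCallback(nil, error as NSError?)
            return
        }
        // Draw the snapshot image first, then stroke the coordinates on top of it.
        UIGraphicsBeginImageContextWithOptions(snapshot.image.size, true, snapshot.image.scale)
        snapshot.image.draw(at: .zero)
        let path = UIBezierPath()
        path.move(to: snapshot.point(for: self.area[0]))
        for coordinate in self.area.dropFirst() {
            path.addLine(to: snapshot.point(for: coordinate))
        }
        path.lineWidth = 2
        UIColor.orange.setStroke()
        path.stroke()
        let composite = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        withCallback(composite, nil)
    }
}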

What is wrong with:
snapshotter.start(completionHandler: { snapshot, error in
    guard snapshot != nil else {
        withCallback(nil, error)
        return
    }
    if let image = snapshot?.image {
        withCallback(image, nil)
        for coordinate in self.area {
            image.draw(at: snapshot!.point(for: coordinate))
        }
    }
})

If you just want a copy of the image the user sees in the MKMapView, remember that it's a UIView subclass, and so you could do this...
public extension UIView {
    public var snapshot: UIImage? {
        get {
            UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, UIScreen.main.scale)
            self.drawHierarchy(in: self.bounds, afterScreenUpdates: true)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return image
        }
    }
}
// ...
if let img = self.mapView.snapshot {
    // Do something
}

Related

Why is my programmatic screenshot capturing out of date view?

I have a map view that allows users to draw perimeter lines, and I need to capture a screenshot when they are done to save for recording purposes. For some reason, my code is capturing the view state before the new overlay is added, even though I add the overlay before attempting the screenshot.
I use a separate view to capture gestures, then convert the points to an overlay and add it to the map here, where points is the array of points gathered from the gesture tracking:
func convertFragments() {
    var coordinates: [CLLocationCoordinate2D] = []
    for point in points {
        let coordinate = mapView.convert(point, toCoordinateFrom: drawingView)
        coordinates.append(coordinate)
    }
    let polyline = MKPolyline(coordinates: coordinates, count: coordinates.count)
    polyline.title = selectedTool.name
    removeLines(for: selectedTool)
    points = []
    mapView.addOverlay(polyline)
    incidentManager.update(for: selectedTool, value: polyline)
}
Then incidentManager.update(for:value:) makes a network call to save the information and calls a second method, log(eventType:), to capture the screenshot and update a log:
func update(for tool: MapTool, value: Any) {
    func saveLine(_ line: MKPolyline, for key: String) {
        incidentReference.setData([
            key: IncidentManager.convertCLPoints(line.coordinates)
        ], merge: true)
    }
    switch tool {
    case .hotZone, .innerPerimeter, .outerPerimeter:
        let line = value as! MKPolyline
        saveLine(line, for: tool.rawValue)
    case .commandPost, .stagingArea:
        let point = value as! CLLocationCoordinate2D
        let geoPoint = GeoPoint(latitude: point.latitude, longitude: point.longitude)
        incidentReference.setData([
            tool.rawValue: geoPoint
        ], merge: true)
    case .poi:
        let annotation = value as! PerimeterMapAnnotation
        let point = annotation.coordinate
        let geoPoint = GeoPoint(latitude: point.latitude, longitude: point.longitude)
        incidentReference.setData([
            "pointsOfInterest": [annotation.title: geoPoint]
        ], merge: true)
    default:
        return
    }
    updateAddress(defaultValue: value)
    let eventType = tool.eventType(didSet: true)
    log(eventType: eventType)
}
I have an extension on UIView that captures the view, but for some reason it's not the most up-to-date version of the map view.
private func log(eventType: LogEventType) {
    guard let mapImage = mapView.asImage() else { return }
    let storageRef = Storage.storage().reference()
    let imageRef = storageRef.child("\(incident.id)/\(UUID().uuidString).jpg")
    let eventTime = Date()
    let eventTimeString = eventTime.apiDateString
    incidentReference.collection("eventLog").document(eventTimeString).setData([
        "logEventType": eventType.rawValue,
        "imageReference": imageRef.fullPath,
        "date": Timestamp(date: eventTime)
    ])
    upload(image: mapImage, at: imageRef)
    if shouldUpdateCover {
        shouldUpdateCover = false
        incidentReference.setData([
            "coverPhotoRef": imageRef.fullPath
        ], merge: true)
    }
}
func asImage() -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        self.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
I have tried many variations of this "screenshot" method that I have found online, with no luck. When I debug, I can inspect the map and its overlays and see that they are updated beforehand.
Any ideas why the image captured here does not capture the changes made?

Filtering Depth Data on iOS 12 appears to be rotated

I am having an issue where the Depth Data for the .builtInDualCamera appears to be rotated 90 degrees when isFilteringEnabled = true
Here is my code:
fileprivate let session = AVCaptureSession()
fileprivate let meta = AVCaptureMetadataOutput()
fileprivate let video = AVCaptureVideoDataOutput()
fileprivate let depth = AVCaptureDepthDataOutput()
fileprivate let camera: AVCaptureDevice
fileprivate let input: AVCaptureDeviceInput
fileprivate let synchronizer: AVCaptureDataOutputSynchronizer

init(delegate: CaptureSessionDelegate?) throws {
    self.delegate = delegate
    session.sessionPreset = .vga640x480
    // Setup Camera Input
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInDualCamera], mediaType: .video, position: .unspecified)
    if let device = discovery.devices.first {
        camera = device
    } else {
        throw SessionError.CameraNotAvailable("Unable to load camera")
    }
    input = try AVCaptureDeviceInput(device: camera)
    session.addInput(input)
    // Setup Metadata Output (Face)
    session.addOutput(meta)
    if meta.availableMetadataObjectTypes.contains(AVMetadataObject.ObjectType.face) {
        meta.metadataObjectTypes = [ AVMetadataObject.ObjectType.face ]
    } else {
        print("Can't Setup Metadata: \(meta.availableMetadataObjectTypes)")
    }
    // Setup Video Output
    video.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    session.addOutput(video)
    video.connection(with: .video)?.videoOrientation = .portrait
    // ****** THE ISSUE IS WITH THIS BLOCK HERE ******
    // Setup Depth Output
    depth.isFilteringEnabled = true
    session.addOutput(depth)
    depth.connection(with: .depthData)?.videoOrientation = .portrait
    // Setup Synchronizer
    synchronizer = AVCaptureDataOutputSynchronizer(dataOutputs: [depth, video, meta])
    let outputRect = CGRect(x: 0, y: 0, width: 1, height: 1)
    let videoRect = video.outputRectConverted(fromMetadataOutputRect: outputRect)
    let depthRect = depth.outputRectConverted(fromMetadataOutputRect: outputRect)
    // Ratio of the Depth to Video
    scale = max(videoRect.width, videoRect.height) / max(depthRect.width, depthRect.height)
    // Set Camera to the framerate of the Depth Data Collection
    try camera.lockForConfiguration()
    if let fps = camera.activeDepthDataFormat?.videoSupportedFrameRateRanges.first?.minFrameDuration {
        camera.activeVideoMinFrameDuration = fps
    }
    camera.unlockForConfiguration()
    super.init()
    synchronizer.setDelegate(self, queue: syncQueue)
}
func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer, didOutput data: AVCaptureSynchronizedDataCollection) {
    guard let delegate = self.delegate else {
        return
    }
    // Check to see if all the data is actually here
    guard
        let videoSync = data.synchronizedData(for: video) as? AVCaptureSynchronizedSampleBufferData,
        !videoSync.sampleBufferWasDropped,
        let depthSync = data.synchronizedData(for: depth) as? AVCaptureSynchronizedDepthData,
        !depthSync.depthDataWasDropped
    else {
        return
    }
    // It's OK if the face isn't found.
    let face: AVMetadataFaceObject?
    if let metaSync = data.synchronizedData(for: meta) as? AVCaptureSynchronizedMetadataObjectData {
        face = (metaSync.metadataObjects.first { $0 is AVMetadataFaceObject }) as? AVMetadataFaceObject
    } else {
        face = nil
    }
    // Convert Buffers to CIImage
    let videoImage = convertVideoImage(fromBuffer: videoSync.sampleBuffer)
    let depthImage = convertDepthImage(fromData: depthSync.depthData, andFace: face)
    // Call Delegate
    delegate.captureImages(video: videoImage, depth: depthImage, face: face)
}
fileprivate func convertVideoImage(fromBuffer sampleBuffer: CMSampleBuffer) -> CIImage {
    // Convert from "CoreMovie?" to CIImage - fairly straight-forward
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let image = CIImage(cvPixelBuffer: pixelBuffer!)
    return image
}
fileprivate func convertDepthImage(fromData depthData: AVDepthData, andFace face: AVMetadataFaceObject?) -> CIImage {
    var convertedDepth: AVDepthData
    // Convert 16-bit floats up to 32-bit
    if depthData.depthDataType != kCVPixelFormatType_DisparityFloat32 {
        convertedDepth = depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
    } else {
        convertedDepth = depthData
    }
    // Pixel buffer comes straight from depthData
    let pixelBuffer = convertedDepth.depthDataMap
    let image = CIImage(cvPixelBuffer: pixelBuffer)
    return image
}
The original video looks like this (for reference):
When the values are:
// Setup Depth Output
depth.isFilteringEnabled = false
depth.connection(with: .depthData)?.videoOrientation = .portrait
The Image looks like this: (you can see the closer jacket is white, the farther jacket is grey, and the distance is dark grey - as expected)
When the values are:
// Setup Depth Output
depth.isFilteringEnabled = true
depth.connection(with: .depthData)?.videoOrientation = .portrait
The image looks like this: (You can see the color values appear to be in the right places, but the shapes in the smoothing filter appear to be rotated)
When the values are:
// Setup Depth Output
depth.isFilteringEnabled = true
depth.connection(with: .depthData)?.videoOrientation = .landscapeRight
The image looks like this: (Both the colors and the shapes appear to be horizontal)
Am I doing something wrong to get these incorrect values?
I have tried re-ordering the code
// Setup Depth Output
depth.connection(with: .depthData)?.videoOrientation = .portrait
depth.isFilteringEnabled = true
But that does nothing.
I think this is an issue related to iOS 12, because I remember this working just fine under iOS 11 (although I don't have any images saved to prove it)
Any Help is appreciated, thanks!
Unlike the suggestion in other answers to rotate the image after creation (which I found did not work), the AVDepthData documentation provides a method that does the orientation correction for you.
The method is depthDataByApplyingExifOrientation:, which returns an instance of AVDepthData with the orientation applied, i.e. you can create your image in whatever orientation you desire by passing in the parameter of your choice.
This is my helper method that returns a UIImage with the orientation fix.
- (UIImage *)createDepthMapImageFromCapturePhoto:(AVCapturePhoto *)photo {
    // AVCapturePhoto which has depthData - in swift you should confirm this exists
    AVDepthData *frontDepthData = [photo depthData];
    // Overwrite the instance with the correct orientation applied.
    frontDepthData = [frontDepthData depthDataByApplyingExifOrientation:kCGImagePropertyOrientationRight];
    // Create the CIImage from the depth data using the available method.
    CIImage *ciDepthImage = [CIImage imageWithDepthData:frontDepthData];
    // Create CIContext which enables converting CIImage to CGImage
    CIContext *context = [[CIContext alloc] init];
    // Create the CGImage
    CGImageRef img = [context createCGImage:ciDepthImage fromRect:[ciDepthImage extent]];
    // Create the final image.
    UIImage *depthImage = [UIImage imageWithCGImage:img];
    // Return the depth image.
    return depthImage;
}
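For completeness, here is a rough Swift equivalent of the helper above (my own sketch, not the answerer's code, so double-check the optionals on your target iOS version); it applies the EXIF orientation to the AVDepthData before building the image:
import AVFoundation
import CoreImage
import ImageIO
import UIKit

func depthMapImage(from photo: AVCapturePhoto) -> UIImage? {
    // In Swift, depthData is optional, so confirm it exists first.
    guard var depthData = photo.depthData else { return nil }
    // Overwrite the instance with the correct orientation applied.
    depthData = depthData.applyingExifOrientation(.right)
    // Create the CIImage from the corrected depth data.
    guard let ciDepthImage = CIImage(depthData: depthData) else { return nil }
    // Render to a CGImage so it can be wrapped in a UIImage.
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciDepthImage, from: ciDepthImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}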

How to draw a CLLocationCoordinate2Ds on MKMapSnapshotter (drawing on mapView printed image)

I have a mapView with an array of CLLocationCoordinate2D. I use these locations to draw lines on my mapView using MKPolyline. Now I want to store it as a UIImage. I found that there's a class MKMapSnapshotter, but unfortunately I can't draw overlays on it: "Snapshotter objects do not capture the visual representations of any overlays or annotations that your app creates." So I get only a blank map image. Is there any way to get the image with my overlays?
private func generateImageFromMap() {
    let mapSnapshotterOptions = MKMapSnapshotter.Options()
    guard let region = mapRegion() else { return }
    mapSnapshotterOptions.region = region
    mapSnapshotterOptions.size = CGSize(width: 200, height: 200)
    mapSnapshotterOptions.showsBuildings = false
    mapSnapshotterOptions.showsPointsOfInterest = false
    let snapShotter = MKMapSnapshotter(options: mapSnapshotterOptions)
    snapShotter.start() { snapshot, error in
        guard let snapshot = snapshot else { return }
        //do something with image ....
        let mapImage = snapshot...
    }
}
How can I put overlays on this image? Or maybe there's another way to solve this problem.
Unfortunately, you have to draw them yourself. Fortunately, the snapshot (MKMapSnapshotter.Snapshot) has a convenient point(for:) method to convert a CLLocationCoordinate2D into a CGPoint within the snapshot.
For example, assume you had an array of CLLocationCoordinate2D:
private var coordinates: [CLLocationCoordinate2D]?

private func generateImageFromMap() {
    guard let region = mapRegion() else { return }
    let options = MKMapSnapshotter.Options()
    options.region = region
    options.size = CGSize(width: 200, height: 200)
    options.showsBuildings = false
    options.showsPointsOfInterest = false
    MKMapSnapshotter(options: options).start() { snapshot, error in
        guard let snapshot = snapshot else { return }
        let mapImage = snapshot.image
        let finalImage = UIGraphicsImageRenderer(size: mapImage.size).image { _ in
            // draw the map image
            mapImage.draw(at: .zero)
            // only bother with the following if we have a path with two or more coordinates
            guard let coordinates = self.coordinates, coordinates.count > 1 else { return }
            // convert the `[CLLocationCoordinate2D]` into a `[CGPoint]`
            let points = coordinates.map { coordinate in
                snapshot.point(for: coordinate)
            }
            // build a bezier path using that `[CGPoint]`
            let path = UIBezierPath()
            path.move(to: points[0])
            for point in points.dropFirst() {
                path.addLine(to: point)
            }
            // stroke it
            path.lineWidth = 1
            UIColor.blue.setStroke()
            path.stroke()
        }
        // do something with finalImage
    }
}
Then, for a map view with the coordinates rendered as an MKPolyline by mapView(_:rendererFor:) as usual, the above code will create the corresponding finalImage.

How to clean node material diffuse content memory in SceneKit?

In one of my apps I am facing an app crash because I am unable to free the memory used by a node material's diffuse contents. When I load the node, memory usage keeps climbing, so I want to clear that memory whenever I remove the node from its parent. Please suggest an appropriate solution.
Here is my code:
let recomondationView = viewRecomodation as! THARRecomondationsView
planeGeoMetryP1.firstMaterial?.diffuse.contents = UIImage.imageWithView(view: recomondationView)
oldAnnotationNode.name = name
oldAnnotationNode.geometry = planeGeoMetryP1
let billboardConstraint = SCNBillboardConstraint()
billboardConstraint.freeAxes = SCNBillboardAxis.Y
self.constraints = [billboardConstraint]
self.addChildNode(oldAnnotationNode)
Here is the method that converts a UIView to a UIImage:
extension UIImage {
    class func imageWithView(view: UIView) -> UIImage {
        var image = UIImage()
        UIGraphicsBeginImageContextWithOptions(view.frame.size, true, 1.0)
        let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
        image = renderer.image { ctx in
            view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
        }
        UIGraphicsEndImageContext()
        return image
    }
}
Here is the code that I am using to remove the node from its parent:
if let index = self.sceneNode?.childNodes.index(of: locationNode) {
    self.sceneNode?.childNodes[index].geometry = nil
    self.sceneNode?.childNodes[index].removeFromParentNode()
}
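One direction worth trying (a sketch of mine, not a confirmed fix): explicitly clear the material's diffuse contents and the geometry on the node and its children before removing it, so the UIImage created by imageWithView(view:) is no longer retained by the material. The removeAnnotationNode name is hypothetical:
func removeAnnotationNode(_ locationNode: SCNNode) {
    // Walk the node and all of its descendants.
    locationNode.enumerateHierarchy { node, _ in
        node.geometry?.materials.forEach { material in
            material.diffuse.contents = nil   // drop the rendered UIImage texture
        }
        node.geometry = nil
    }
    locationNode.removeFromParentNode()
}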

How to draw detected rectangle path on live camera feed using CAShapeLayer and UIBezeirPath

I am developing an application to detect rectangles in a live camera feed and highlight the detected rectangle. I set up the camera using AVFoundation and used the methods below to detect and highlight the detected rectangle.
var detector: CIDetector?;

override func viewDidLoad() {
    super.viewDidLoad();
    detector = self.prepareRectangleDetector();
}

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) { // re check this method
    // Need to shimmy this through type-hell
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    // Force the type change - pass through opaque buffer
    let opaqueBuffer = Unmanaged<CVImageBuffer>.passUnretained(imageBuffer!).toOpaque()
    let pixelBuffer = Unmanaged<CVPixelBuffer>.fromOpaque(opaqueBuffer).takeUnretainedValue()
    let sourceImage = CIImage(CVPixelBuffer: pixelBuffer, options: nil)
    // Do some detection on the image
    self.performRectangleDetection(sourceImage);
    var outputImage = sourceImage
    // Do some clipping
    var drawFrame = outputImage.extent
    let imageAR = drawFrame.width / drawFrame.height
    let viewAR = videoDisplayViewBounds.width / videoDisplayViewBounds.height
    if imageAR > viewAR {
        drawFrame.origin.x += (drawFrame.width - drawFrame.height * viewAR) / 2.0
        drawFrame.size.width = drawFrame.height / viewAR
    } else {
        drawFrame.origin.y += (drawFrame.height - drawFrame.width / viewAR) / 2.0
        drawFrame.size.height = drawFrame.width / viewAR
    }
    // videoDisplayView is a GLKView which is used to display camera feed
    videoDisplayView.bindDrawable()
    if videoDisplayView.context != EAGLContext.currentContext() {
        EAGLContext.setCurrentContext(videoDisplayView.context)
    }
    // clear eagl view to grey
    glClearColor(0.5, 0.5, 0.5, 1.0);
    glClear(0x00004000)
    // set the blend mode to "source over" so that CI will use that
    glEnable(0x0BE2);
    glBlendFunc(1, 0x0303);
    renderContext.drawImage(outputImage, inRect: videoDisplayViewBounds, fromRect: drawFrame);
    videoDisplayView.display();
}

func prepareRectangleDetector() -> CIDetector {
    let options: [String: AnyObject] = [CIDetectorAccuracy: CIDetectorAccuracyHigh];
    return CIDetector(ofType: CIDetectorTypeRectangle, context: nil, options: options);
}

func performRectangleDetection(image: CIImage) {
    let resultImage: CIImage? = nil;
    if let detector = detector {
        // Get the detections
        let features = detector.featuresInImage(image, options: [CIDetectorAspectRatio: NSNumber(float: 1.43)]);
        if features.count != 0 { // feature found
            for feature in features as! [CIRectangleFeature] {
                self.previewImageView.layer.sublayers = nil;
                let line: CAShapeLayer = CAShapeLayer();
                line.frame = self.videoDisplayView.bounds;
                let linePath: UIBezierPath = UIBezierPath();
                linePath.moveToPoint(feature.topLeft);
                linePath.addLineToPoint(feature.topRight);
                linePath.addLineToPoint(feature.bottomRight);
                linePath.addLineToPoint(feature.bottomLeft);
                linePath.addLineToPoint(feature.topLeft);
                linePath.closePath();
                line.lineWidth = 5.0;
                line.path = linePath.CGPath;
                line.fillColor = UIColor.clearColor().CGColor;
                line.strokeColor = UIColor(netHex: 0x3399CC, alpha: 1.0).CGColor;
                // videoDisplayParentView is the parent of videoDisplayView and they both have same bounds
                self.videoDisplayParentView.layer.addSublayer(line);
            }
        }
    }
}
I used CAShapeLayer and UIBezierPath to draw the rectangle. This is very, very slow; the path only becomes visible after minutes.
Can someone please help me figure out why it is slow, or let me know if I am doing something wrong here? Any help would be highly appreciated.
Or, if there is an easier way than this, I would like to know it too.
If you get into the business of adding a sublayer to a GLKView, it will be slow. The GLKView here refreshes multiple times every second (this is happening inside the captureOutput:didOutputSampleBuffer: method), and creating and adding a sublayer on every frame cannot keep up with that.
A better way is to draw the path using Core Image and composite it over resultImage.
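A minimal sketch of that idea (my own illustration in current Swift, not the answerer's code): warp a translucent colour image onto the detected corners with CIPerspectiveTransformWithExtent and source-over composite it onto the frame, so the highlighted image can be drawn by the existing renderContext.drawImage call instead of adding layers:
import CoreImage

func highlightRectangle(_ feature: CIRectangleFeature, on sourceImage: CIImage) -> CIImage {
    // A solid translucent colour covering the whole frame.
    var overlay = CIImage(color: CIColor(red: 0.2, green: 0.6, blue: 0.8, alpha: 0.5))
    overlay = overlay.cropped(to: sourceImage.extent)
    // Warp the overlay so its corners line up with the detected rectangle.
    overlay = overlay.applyingFilter("CIPerspectiveTransformWithExtent", parameters: [
        "inputExtent": CIVector(cgRect: sourceImage.extent),
        "inputTopLeft": CIVector(cgPoint: feature.topLeft),
        "inputTopRight": CIVector(cgPoint: feature.topRight),
        "inputBottomRight": CIVector(cgPoint: feature.bottomRight),
        "inputBottomLeft": CIVector(cgPoint: feature.bottomLeft)
    ])
    // Composite the warped highlight over the camera frame.
    return overlay.composited(over: sourceImage)
}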
