How to clean up a node's material diffuse contents memory in SceneKit? - ios

In one of my apps I am facing a crash because I am unable to free the memory used by a node's material diffuse contents. Every time I load a node the memory usage keeps climbing, so I want to clear that memory whenever I remove the node from its parent. Please suggest an appropriate solution.
Here is my code:
let recomondationView = viewRecomodation as! THARRecomondationsView
// render the view into an image and use it as the plane's texture
planeGeoMetryP1.firstMaterial?.diffuse.contents = UIImage.imageWithView(view: recomondationView)
oldAnnotationNode.name = name
oldAnnotationNode.geometry = planeGeoMetryP1
// keep the plane facing the camera around the Y axis
let billboardConstraint = SCNBillboardConstraint()
billboardConstraint.freeAxes = SCNBillboardAxis.Y
self.constraints = [billboardConstraint]
self.addChildNode(oldAnnotationNode)
Here is the method that converts a UIView to a UIImage:
extension UIImage {
    class func imageWithView(view: UIView) -> UIImage {
        // UIGraphicsImageRenderer manages its own context, so the
        // UIGraphicsBeginImageContextWithOptions / UIGraphicsEndImageContext
        // calls that originally wrapped this were redundant and have been removed
        let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
        return renderer.image { _ in
            view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
        }
    }
}
Here is the code that I am using to remove the node from its parent:
if let index = self.sceneNode?.childNodes.index(of: locationNode) {
    self.sceneNode?.childNodes[index].geometry = nil
    self.sceneNode?.childNodes[index].removeFromParentNode()
}
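Not part of the original question, but a minimal sketch of one way to do the cleanup: explicitly nil out the diffuse contents (the UIImage texture) on the node and its children before removing it, so SceneKit drops its reference and the memory can be reclaimed. The removeAnnotationNode helper name is hypothetical.
// Hypothetical helper: release material textures before detaching the node.
func removeAnnotationNode(_ locationNode: SCNNode) {
    locationNode.enumerateHierarchy { node, _ in
        for material in node.geometry?.materials ?? [] {
            // drop the UIImage texture so it can be deallocated
            material.diffuse.contents = nil
        }
        node.geometry = nil
    }
    locationNode.removeFromParentNode()
}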

Related

How to combine a Gif Image into UIImageView with overlaying UIImageView in swift?

A GIF is loaded into a UIImageView (using this extension) and another UIImageView is overlaid on it. Everything works fine, but when I combine the two with the code below, the result is a still image (.jpg). I want the combined result to remain an animated image (.gif).
let bottomImage = gifPlayer.image
let topImage = topImageView.image   // was "let topImage = UIImage" -- presumably the overlay image
let size = CGSize(width: (bottomImage?.size.width)!, height: (bottomImage?.size.height)!)
UIGraphicsBeginImageContext(size)
let areaSize = CGRect(x: 0, y: 0, width: size.width, height: size.height)
bottomImage!.draw(in: areaSize)
topImage!.draw(in: areaSize, blendMode: .normal, alpha: 0.8)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
When using an animated GIF in a UIImageView, it becomes an array of UIImage.
We can set that array with (for example):
imageView.animationImages = arrayOfImages
imageView.animationDuration = 1.0
or, we can set the .image property to an animatedImage -- that's how the GIF-Swift code you are using works:
if let img = UIImage.gifImageWithName("funny") {
    bottomImageView.image = img
}
in that case, the image also contains the duration:
img.images?.duration
So, to generate a new animated GIF with the border/overlay image, you need to get that array of images and generate each "frame" with the border added to it.
Here's a quick example...
This assumes:
you are using GIF-Swift
you have added bottomImageView and topImageView in Storyboard
you have a GIF in the bundle named "funny.gif" (edit the code if yours is different)
you have a "border.png" in assets (again, edit the code as needed)
and you have a button to connect to the #IBAction:
import UIKit
import ImageIO
import UniformTypeIdentifiers

class animImageViewController: UIViewController {

    @IBOutlet var bottomImageView: UIImageView!
    @IBOutlet var topImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        if let img = UIImage.gifImageWithName("funny") {
            bottomImageView.image = img
        }
        if let img = UIImage(named: "border") {
            topImageView.image = img
        }
    }

    @IBAction func saveButtonTapped(_ sender: Any) {
        generateNewGif(from: bottomImageView, with: topImageView)
    }

    func generateNewGif(from animatedImageView: UIImageView, with overlayImageView: UIImageView) {

        var images: [UIImage]!
        var delayTime: Double!

        guard let overlayImage = overlayImageView.image else {
            print("Could not get top / overlay image!")
            return
        }

        if let imgs = animatedImageView.image?.images {
            // the image view is using .image = animatedImage
            // unwrap the duration
            if let dur = animatedImageView.image?.duration {
                images = imgs
                delayTime = dur / Double(images.count)
            } else {
                print("Image view is using an animatedImage, but could not get the duration!")
                return
            }
        } else if let imgs = animatedImageView.animationImages {
            // the image view is using .animationImages
            images = imgs
            delayTime = animatedImageView.animationDuration / Double(images.count)
        } else {
            print("Could not get images array!")
            return
        }

        // we now have a valid [UIImage] array, and
        //  a valid inter-frame duration, and
        //  a valid "overlay" UIImage

        // generate unique file name
        let destinationFilename = String(NSUUID().uuidString + ".gif")

        // create empty file in temp folder to hold gif
        let destinationURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent(destinationFilename)

        // metadata for gif file to describe it as an animated gif
        let fileDictionary = [kCGImagePropertyGIFDictionary : [kCGImagePropertyGIFLoopCount : 0]]

        // create the file and set the file properties
        guard let animatedGifFile = CGImageDestinationCreateWithURL(destinationURL as CFURL, UTType.gif.identifier as CFString, images.count, nil) else {
            print("error creating file")
            return
        }
        CGImageDestinationSetProperties(animatedGifFile, fileDictionary as CFDictionary)

        let frameDictionary = [kCGImagePropertyGIFDictionary : [kCGImagePropertyGIFDelayTime: delayTime]]

        // use original size of gif
        let sz: CGSize = images[0].size
        let renderer: UIGraphicsImageRenderer = UIGraphicsImageRenderer(size: sz)

        // loop through the images
        //  drawing the top/border image on top of each "frame" image with 80% alpha
        //  then writing the combined image to the gif file
        images.forEach { img in
            let combinedImage = renderer.image { ctx in
                img.draw(at: .zero)
                overlayImage.draw(in: CGRect(origin: .zero, size: sz), blendMode: .normal, alpha: 0.8)
            }
            guard let cgFrame = combinedImage.cgImage else {
                print("error creating cgImage")
                return
            }
            // add the combined image to the new animated gif
            CGImageDestinationAddImage(animatedGifFile, cgFrame, frameDictionary as CFDictionary)
        }

        // done writing
        CGImageDestinationFinalize(animatedGifFile)

        print("New GIF created at:")
        print(destinationURL)
        print()

        // do something with the newly created file...
        //  maybe move it to documents folder, or
        //  upload it somewhere, or
        //  save to photos library, etc
    }
}
Notes:
the code is based on this article: How to Make an Animated GIF Using Swift
this should be considered Example Code Only!!! -- a starting-point for you, not a "production ready" solution.
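As a follow-up to the "do something with the newly created file" comment, here is a minimal sketch (not from the original answer) of one of those options: saving the file to the Photos library. It assumes the destinationURL from generateNewGif() above and the NSPhotoLibraryAddUsageDescription key in Info.plist.
import Photos

PHPhotoLibrary.shared().performChanges({
    // create a Photos asset directly from the GIF file on disk
    let request = PHAssetCreationRequest.forAsset()
    request.addResource(with: .photo, fileURL: destinationURL, options: nil)
}, completionHandler: { success, error in
    print(success ? "GIF saved to Photos" : "Error saving GIF: \(String(describing: error))")
})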

How do you apply Core Image filters to an onscreen image using Swift/MacOS or iOS and Core Image

Photos' editing adjustments provide a realtime view of the adjustments as they are applied. I wasn't able to find any samples of how to do this. All the examples seem to apply the filters through a pipeline of sorts, then take the resulting image and update the screen with the result. See the code below.
Photos seems to show the adjustment applied to the onscreen image. How do they achieve this?
func editImage(inputImage: CGImage) {
    DispatchQueue.global().async {
        let beginImage = CIImage(cgImage: inputImage)
        guard let exposureOutput = self.exposureFilter(beginImage, ev: self.brightness) else {
            return
        }
        guard let vibranceOutput = self.vibranceFilter(exposureOutput, amount: self.vibranceAmount) else {
            return
        }
        guard let unsharpMaskOutput = self.unsharpMaskFilter(vibranceOutput, intensity: self.unsharpMaskIntensity, radius: self.unsharpMaskRadius) else {
            return
        }
        guard let sharpnessOutput = self.sharpenFilter(unsharpMaskOutput, sharpness: self.unsharpMaskIntensity) else {
            return
        }
        if let cgimg = self.context.createCGImage(sharpnessOutput, from: vibranceOutput.extent) {
            DispatchQueue.main.async {
                self.cgImage = cgimg
            }
        }
    }
}
OK, I just found the answer: use MTKView. It's working fine, except for getting the image to fill the view correctly!
For the benefit of others, here are the basics... I have yet to figure out how to position the image correctly in the view (see the sketch after the code below), but I can see the filter applied in realtime!
class ViewController: NSViewController, MTKViewDelegate {
    // ...
    @objc dynamic var cgImage: CGImage? {
        didSet {
            if let cgimg = cgImage {
                ciImage = CIImage(cgImage: cgimg)
            }
        }
    }
    var ciImage: CIImage?

    // Metal resources
    var device: MTLDevice!
    var commandQueue: MTLCommandQueue!
    var sourceTexture: MTLTexture!
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var context: CIContext!
    var textureLoader: MTKTextureLoader!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do view setup here.
        let metalView = MTKView()
        metalView.translatesAutoresizingMaskIntoConstraints = false
        self.imageView.addSubview(metalView)
        NSLayoutConstraint.activate([
            metalView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
            metalView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            metalView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            metalView.topAnchor.constraint(equalTo: view.topAnchor)
        ])
        device = MTLCreateSystemDefaultDevice()
        commandQueue = device.makeCommandQueue()
        metalView.delegate = self
        metalView.device = device
        metalView.framebufferOnly = false
        context = CIContext()
        textureLoader = MTKTextureLoader(device: device)
    }

    public func draw(in view: MTKView) {
        if let ciImage = self.ciImage {
            if let currentDrawable = view.currentDrawable {
                let commandBuffer = commandQueue.makeCommandBuffer()
                let inputImage = ciImage
                exposureFilter.setValue(inputImage, forKey: kCIInputImageKey)
                exposureFilter.setValue(ev, forKey: kCIInputEVKey)
                context.render(exposureFilter.outputImage!,
                               to: currentDrawable.texture,
                               commandBuffer: commandBuffer,
                               bounds: CGRect(origin: .zero, size: view.drawableSize),
                               colorSpace: colorSpace)
                commandBuffer?.present(currentDrawable)
                commandBuffer?.commit()
            }
        }
    }
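On the fill issue mentioned above: one hedged way to handle it is to aspect-fit the CIImage into the drawable before the context.render call. The fitted(_:to:) helper below is hypothetical, not part of the original answer.
// Hypothetical helper: aspect-fit the CIImage into the drawable size
// so the filtered image fills the MTKView correctly.
func fitted(_ image: CIImage, to drawableSize: CGSize) -> CIImage {
    let scale = min(drawableSize.width / image.extent.width,
                    drawableSize.height / image.extent.height)
    let scaled = image.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    // center the scaled image in the drawable
    let dx = (drawableSize.width - scaled.extent.width) / 2 - scaled.extent.minX
    let dy = (drawableSize.height - scaled.extent.height) / 2 - scaled.extent.minY
    return scaled.transformed(by: CGAffineTransform(translationX: dx, y: dy))
}
With that in place, draw(in:) would render fitted(exposureFilter.outputImage!, to: view.drawableSize) instead of the raw filter output.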

Simple CIFilter Passthru with CGImage conversion returns black pixels

The following code:
let skView = SKView()
let scene = SKScene()

override func viewDidLoad() {
    super.viewDidLoad()
    self.scene.scaleMode = .resizeFill
    self.skView.presentScene(self.scene)
    self.scene.backgroundColor = UIColor.black
    self.view.addSubview(skView)
    self.scene.shouldEnableEffects = true

    let sprite = SKSpriteNode(imageNamed: "NAME_THAT_PIC")
    sprite.position = CGPoint(x: 300, y: 400)
    let effectNode = SKEffectNode()
    effectNode.filter = MyFilter()
    effectNode.addChild(sprite)
will call this custom filter, which does nothing but create a CGImage from a CIImage, correctly invoking context.createCGImage() as reported by many people (CIImages are not pixel buffers).
MyFilter is reduced to a simple repro test:
class MyFilter: CIFilter {
    var inputImage: CIImage?
    var inputImageRect: CGRect? {
        guard let image = self.inputImage else {
            return nil
        }
        return image.extent
    }

    public override init() {
        super.init()
    }

    required public init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override open var outputImage: CIImage? {
        guard let inputImage = self.inputImage else {
            return nil
        }
        let context = CIContext(options: nil)
        let cgImage = context.createCGImage(inputImage, from: inputImageRect!)
        // ... DO SOMETHING WITH CGIMAGE DATA ...
        return CIImage(cgImage: cgImage!)
    }
}
If I replace MyFilter() with a built-in filter, it works and shows the altered image, so the view controller code works. If instead I return inputImage directly from outputImage, it also works and the image passed in is displayed.
When I dump the CGImage, the dimensions are correct but every pixel is black.
I tried creating a UIImage using UIImage(cgImage: cgImage!), but the same thing happens.
What is causing the pixels not to be loaded into the cgImage I generate from the inputImage?
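Not an answer from the original thread, but one detail worth flagging: creating a fresh CIContext on every outputImage call is expensive, because a CIContext is designed to be created once and reused. A minimal sketch of that change (whether it also affects the black-pixel symptom is untested here):
class MyFilter: CIFilter {
    // create the context once and reuse it across outputImage calls
    private static let sharedContext = CIContext(options: nil)

    var inputImage: CIImage?

    override var outputImage: CIImage? {
        guard let inputImage = self.inputImage,
              let cgImage = MyFilter.sharedContext.createCGImage(inputImage, from: inputImage.extent) else {
            return nil
        }
        // ... DO SOMETHING WITH CGIMAGE DATA ...
        return CIImage(cgImage: cgImage)
    }
}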

How do you add MKPolylines to MKSnapShotter in swift 3?

Is there a way to take a screenshot of a mapView and include the polyline? I believe I need to draw CGPoints on the image that the MKMapSnapshotter returns, but I am unsure how to do so.
Current code
func takeSnapshot(mapView: MKMapView, withCallback: (UIImage?, NSError?) -> ()) {
    let options = MKMapSnapshotOptions()
    options.region = mapView.region
    options.size = mapView.frame.size
    options.scale = UIScreen.main().scale

    let snapshotter = MKMapSnapshotter(options: options)
    snapshotter.start() { snapshot, error in
        guard snapshot != nil else {
            withCallback(nil, error)
            return
        }
        if let image = snapshot?.image {
            withCallback(image, nil)
            for coordinate in self.area {
                image.draw(at: snapshot!.point(for: coordinate))
            }
        }
    }
}
I had the same problem today. After several hours of research, here is how I solved it.
The following code is in Swift 3.
1. Initialize your polyline coordinates array
// initialize this array with your polyline coordinates
var yourCoordinates = [CLLocationCoordinate2D]()
yourCoordinates.append( coordinate 1 )
yourCoordinates.append( coordinate 2 )
...
// you can use any data structure you like
2. Take the snapshot as usual, but set the region based on your coordinates:
func takeSnapShot() {
    let mapSnapshotOptions = MKMapSnapshotOptions()

    // Set the region of the map that is rendered. (by polyline)
    let polyLine = MKPolyline(coordinates: &yourCoordinates, count: yourCoordinates.count)
    let region = MKCoordinateRegionForMapRect(polyLine.boundingMapRect)
    mapSnapshotOptions.region = region

    // Set the scale of the image. We'll just use the scale of the current device, which is 2x scale on Retina screens.
    mapSnapshotOptions.scale = UIScreen.main.scale

    // Set the size of the image output.
    mapSnapshotOptions.size = CGSize(width: IMAGE_VIEW_WIDTH, height: IMAGE_VIEW_HEIGHT)

    // Show buildings and Points of Interest on the snapshot
    mapSnapshotOptions.showsBuildings = true
    mapSnapshotOptions.showsPointsOfInterest = true

    let snapShotter = MKMapSnapshotter(options: mapSnapshotOptions)

    snapShotter.start() { snapshot, error in
        guard let snapshot = snapshot else {
            return
        }
        // Don't just pass snapshot.image, pass snapshot itself!
        self.imageView.image = self.drawLineOnImage(snapshot: snapshot)
    }
}
3. Use snapshot.point() to draw Polylines on Snapshot Image
func drawLineOnImage(snapshot: MKMapSnapshot) -> UIImage {
    let image = snapshot.image

    // for Retina screen
    UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, true, 0)

    // draw original image into the context
    image.draw(at: CGPoint.zero)

    // get the context for CoreGraphics
    let context = UIGraphicsGetCurrentContext()

    // set stroking width and color of the context
    context!.setLineWidth(2.0)
    context!.setStrokeColor(UIColor.orange.cgColor)

    // Here is the trick:
    // We use move() and addLine() to draw the line, which should be easy to understand.
    // The difficult part is that they both take CGPoint parameters, and it would be
    // far too complex to calculate those ourselves.
    // Thus we use snapshot.point() to save the pain.
    context!.move(to: snapshot.point(for: yourCoordinates[0]))
    for i in 1..<yourCoordinates.count {
        context!.addLine(to: snapshot.point(for: yourCoordinates[i]))
    }

    // apply the stroke to the context
    context!.strokePath()

    // get the image from the graphics context
    let resultImage = UIGraphicsGetImageFromCurrentImageContext()

    // end the graphics context
    UIGraphicsEndImageContext()

    return resultImage!
}
That's it, hope this helps someone.
References
How do I draw on an image in Swift?
MKTileOverlay, MKMapSnapshotter & MKDirections
Creating an MKMapSnapshotter with an MKPolylineRenderer
Render a Map as an Image using MapKit
What is wrong with:
snapshotter.start( completionHandler: { snapshot, error in
    guard snapshot != nil else {
        withCallback(nil, error)
        return
    }
    if let image = snapshot?.image {
        withCallback(image, nil)
        for coordinate in self.area {
            image.draw(at: snapshot!.point(for: coordinate))
        }
    }
})
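For what it's worth (this note is not part of the original thread): image.draw(at:) only has an effect inside a current graphics context, and in both versions above it runs after withCallback has already handed back the unmodified image. A minimal sketch of the same callback with the coordinates actually composited onto the snapshot, assuming the self.area array from the question's code:
snapshotter.start() { snapshot, error in
    guard let snapshot = snapshot else {
        withCallback(nil, error)
        return
    }
    // open a bitmap context, draw the base map image, then mark each coordinate
    UIGraphicsBeginImageContextWithOptions(snapshot.image.size, true, snapshot.image.scale)
    snapshot.image.draw(at: .zero)
    let context = UIGraphicsGetCurrentContext()
    context?.setFillColor(UIColor.red.cgColor)
    for coordinate in self.area {
        let point = snapshot.point(for: coordinate)
        context?.fillEllipse(in: CGRect(x: point.x - 3, y: point.y - 3, width: 6, height: 6))
    }
    let composited = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // hand back the image with the points drawn on it
    withCallback(composited, nil)
}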
If you just want a copy of the image the user sees in the MKMapView, remember that it's a UIView subclass, and so you could do this...
public extension UIView {
    public var snapshot: UIImage? {
        get {
            UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, UIScreen.main.scale)
            self.drawHierarchy(in: self.bounds, afterScreenUpdates: true)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return image
        }
    }
}

// ...

if let img = self.mapView.snapshot {
    // Do something
}

Custom CIFilter subclass returns image with scale out of whack

I'm new to writing CIFilters, and I'm stuck on this problem. My source image is displayed in a UIImageView with contentMode set to Aspect Fit; the image returned from my CIFilter object, displayed in the same UIImageView, comes out at the wrong scale.
I've tried copying the original scale and orientation from my source image over to the UIImage constructed from the CIImage the filter returns, with no luck.
What might be causing this? I think I'm doing something wrong in my CIFilter class, and I'm starting to suspect my outputImage method:
func outputImage() -> CIImage? {
    if let inputImage = inputImage {
        if let kernel = kernel {
            let args = [inputImage as AnyObject]
            // domain of definition: inset by -1 to account for the kernel's sampling
            // (an earlier, unused "let dod = inputImage.extent()" shadowed this one and was removed)
            let dod = inputImage.extent().rectByInsetting(dx: -1, dy: -1)
            return kernel.applyWithExtent(dod, roiCallback: {
                (index, rect) in
                return rect.rectByInsetting(dx: -1, dy: -1)
            }, arguments: args)
        }
    }
    return nil
}
