CoreML Memory Leak in iOS 14.5

In my application, I used VNImageRequestHandler with a custom MLModel for object detection.
The app works fine with iOS versions before 14.5.
When iOS 14.5 came, it broke everything.
Whenever try handler.perform([visionRequest]) throws an error (Error Domain=com.apple.vis Code=11 "encountered unknown exception" UserInfo={NSLocalizedDescription=encountered unknown exception}), the pixelBuffer memory is held and never released. This fills up the buffers of AVCaptureOutput, and no new frames come in.
I had to change the code as below: by copying the pixelBuffer to another variable I solved the problem of new frames not arriving, but the memory leak still happens.
Because of the memory leak, the app crashes after a while.
Note that before iOS 14.5, detection worked perfectly and try handler.perform([visionRequest]) never threw any error.
Here is my code:
private func predictWithPixelBuffer(sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    // Get additional info from the camera.
    var options: [VNImageOption: Any] = [:]
    if let cameraIntrinsicMatrix = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, nil) {
        options[.cameraIntrinsics] = cameraIntrinsicMatrix
    }
    autoreleasepool {
        // iOS 14.5 bug: when the Vision request fails, the pixel buffer's memory is leaked,
        // so the AVCaptureOutput buffers fill up and no new frames are delivered.
        // Temporary workaround: copy the pixel buffer to a new buffer. This currently
        // increases memory use a lot as well; need to find a better way.
        var clonePixelBuffer: CVPixelBuffer? = pixelBuffer.copy()
        let handler = VNImageRequestHandler(cvPixelBuffer: clonePixelBuffer!, orientation: orientation, options: options)
        print("[DEBUG] detecting...")
        do {
            try handler.perform([visionRequest])
        } catch {
            delegate?.detector(didOutputBoundingBox: [])
            failedCount += 1
            print("[DEBUG] detect failed \(failedCount)")
            print("Failed to perform Vision request: \(error)")
        }
        clonePixelBuffer = nil
    }
}
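(For context: copy() is not a built-in CVPixelBuffer method; it comes from a helper extension such as the one in CoreMLHelpers. A minimal sketch of such a deep copy, assuming a single-plane buffer, might look like this.)
import CoreVideo

extension CVPixelBuffer {
    // Minimal sketch of a deep copy, assuming a single-plane buffer.
    // A production version should also handle planar formats and attachments.
    func copy() -> CVPixelBuffer? {
        var copyOut: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault,
                            CVPixelBufferGetWidth(self),
                            CVPixelBufferGetHeight(self),
                            CVPixelBufferGetPixelFormatType(self),
                            CVBufferGetAttachments(self, .shouldPropagate),
                            &copyOut)
        guard let dst = copyOut else { return nil }

        CVPixelBufferLockBaseAddress(self, .readOnly)
        CVPixelBufferLockBaseAddress(dst, [])
        defer {
            CVPixelBufferUnlockBaseAddress(dst, [])
            CVPixelBufferUnlockBaseAddress(self, .readOnly)
        }

        if let srcBase = CVPixelBufferGetBaseAddress(self),
           let dstBase = CVPixelBufferGetBaseAddress(dst) {
            // Copy row by row because the two buffers may use different row strides.
            let srcBytesPerRow = CVPixelBufferGetBytesPerRow(self)
            let dstBytesPerRow = CVPixelBufferGetBytesPerRow(dst)
            for row in 0..<CVPixelBufferGetHeight(self) {
                memcpy(dstBase + row * dstBytesPerRow,
                       srcBase + row * srcBytesPerRow,
                       min(srcBytesPerRow, dstBytesPerRow))
            }
        }
        return dst
    }
}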
Has anyone experienced the same problem? If so, how did you fix it?

iOS 14.7 Beta available on the developer portal seems to have fixed this issue.

I have a partial fix for this using @Matthijs Hollemans' CoreMLHelpers library.
The model I use has 300 classes and 2363 anchors. I used a lot of the code Matthijs provided here to convert the model to MLModel.
In the last step a pipeline is built using the 3 sub models: raw_ssd_output, decoder, and nms. For this workaround you need to remove the nms model from the pipeline, and output raw_confidence and raw_coordinates.
In your app you need to add the code from CoreMLHelpers.
Then add this function to decode the output from your MLModel:
func decodeResults(results: [VNCoreMLFeatureValueObservation]) -> [BoundingBox] {
    let raw_confidence: MLMultiArray = results[0].featureValue.multiArrayValue!
    let raw_coordinates: MLMultiArray = results[1].featureValue.multiArrayValue!
    print(raw_confidence.shape, raw_coordinates.shape)
    var boxes = [BoundingBox]()
    let startDecoding = Date()
    for anchor in 0..<raw_confidence.shape[0].int32Value {
        var maxInd: Int = 0
        var maxConf: Float = 0
        for score in 0..<raw_confidence.shape[1].int32Value {
            let key = [anchor, score] as [NSNumber]
            let prob = raw_confidence[key].floatValue
            if prob > maxConf {
                maxInd = Int(score)
                maxConf = prob
            }
        }
        let y0 = raw_coordinates[[anchor, 0] as [NSNumber]].doubleValue
        let x0 = raw_coordinates[[anchor, 1] as [NSNumber]].doubleValue
        let y1 = raw_coordinates[[anchor, 2] as [NSNumber]].doubleValue
        let x1 = raw_coordinates[[anchor, 3] as [NSNumber]].doubleValue
        let width = x1 - x0
        let height = y1 - y0
        let x = x0 + width / 2
        let y = y0 + height / 2
        let rect = CGRect(x: x, y: y, width: width, height: height)
        let box = BoundingBox(classIndex: maxInd, score: maxConf, rect: rect)
        boxes.append(box)
    }
    let finishDecoding = Date()
    let keepIndices = nonMaxSuppressionMultiClass(numClasses: raw_confidence.shape[1].intValue, boundingBoxes: boxes, scoreThreshold: 0.5, iouThreshold: 0.6, maxPerClass: 5, maxTotal: 10)
    let finishNMS = Date()
    var keepBoxes = [BoundingBox]()
    for index in keepIndices {
        keepBoxes.append(boxes[index])
    }
    print("Time Decoding", finishDecoding.timeIntervalSince(startDecoding))
    print("Time Performing NMS", finishNMS.timeIntervalSince(finishDecoding))
    return keepBoxes
}
Then when you receive the results from Vision, you call the function like this:
if let rawResults = vnRequest.results as? [VNCoreMLFeatureValueObservation] {
    let boxes = self.decodeResults(results: rawResults)
    print(boxes)
}
This solution is slow because of the way I move the data around and formulate my list of BoundingBox types. It would be much more efficient to process the MLMultiArray data using underlying pointers, and maybe use Accelerate to find the maximum score and best class for each anchor box.
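As a rough illustration of that idea (a hedged sketch, not the code above): read one anchor's confidence row directly from the MLMultiArray's backing memory and let vDSP find the best class, assuming a Float32 array of shape [anchors, classes] whose class dimension is contiguous.
import Accelerate
import CoreML

// Hedged sketch: assumes raw_confidence is Float32 with shape [anchors, classes]
// and a contiguous class dimension (stride 1).
func bestClass(forAnchor anchor: Int, in rawConfidence: MLMultiArray) -> (classIndex: Int, score: Float) {
    let classCount = rawConfidence.shape[1].intValue
    let anchorStride = rawConfidence.strides[0].intValue
    // Pointer to the first confidence value of this anchor's row.
    let row = rawConfidence.dataPointer
        .assumingMemoryBound(to: Float.self) + anchor * anchorStride
    var maxValue: Float = 0
    var maxIndex: vDSP_Length = 0
    // Find the maximum score and its index in a single vectorized pass.
    vDSP_maxvi(row, 1, &maxValue, &maxIndex, vDSP_Length(classCount))
    return (Int(maxIndex), maxValue)
}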

In my case it helped to disable the Neural Engine by forcing Core ML to run on the CPU and GPU only. This is often slower, but it doesn't throw the exception (at least in our case). In the end we implemented a policy that forces some of our models off the Neural Engine on certain iOS devices.
See MLModelConfiguration.computeUnits to constrain the hardware a Core ML model can use.
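For reference, a minimal sketch of constraining the compute units when loading the model (the generated model class name is a placeholder):
import CoreML
import Vision

let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU   // keep this model off the Neural Engine

do {
    // "YourDetectionModel" stands in for the Xcode-generated model class.
    let coreMLModel = try YourDetectionModel(configuration: config).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)
    // Assumes visionRequest is a stored property, as in the question's code.
    visionRequest = VNCoreMLRequest(model: visionModel)
} catch {
    print("Failed to load model: \(error)")
}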

Related

MTKView frequently displaying scrambled MTLTextures

I am working on an MTKView-backed paint program which can replay painting history via an array of MTLTextures that store keyframes. I am having an issue in which sometimes the content of these MTLTextures is scrambled.
As an example, say I want to store a section of the drawing below as a keyframe:
During playback, sometimes the drawing will display exactly as intended, but sometimes, it will display like this:
Note the distorted portion of the picture. (The undistorted portion constitutes a static background image that's not part of the keyframe in question)
I describe the way I create individual MTLTextures from the MTKView's currentDrawable below. Because of color depth issues I won't go into, the process may seem a little round-about.
I first get a CGImage of the subsection of the screen that constitutes a keyframe.
I use that CGImage to create an MTLTexture tied to the MTKView's device.
I store that MTLTexture in an MTLTextureStructure that holds the MTLTexture and the keyframe's bounding box (which I'll need later).
Lastly, I store it in an array of MTLTextureStructures (keyframeMetalArray). During playback, when I hit a keyframe, I fetch it from this keyframeMetalArray.
The associated code is outlined below.
let keyframeCGImage = weakSelf!.canvasMetalViewPainting.mtlTextureToCGImage(bbox: keyframeBbox, copyMode: copyTextureMode.textureKeyframe) // convert from MetalTexture to CGImage
let keyframeMTLTexture = weakSelf!.canvasMetalViewPainting.CGImageToMTLTexture(cgImage: keyframeCGImage)
let keyframeMTLTextureStruc = mtlTextureStructure(texture: keyframeMTLTexture, bbox: keyframeBbox, strokeType: brushTypeMode.brush)
weakSelf!.keyframeMetalArray.append(keyframeMTLTextureStruc)
Without providing specifics about how each conversion is happening, I wonder if, from an architecture design point, I'm overlooking something that is corrupting my data stored in the keyframeMetalArray. It may be unwise to try to store these MTLTextures in volatile arrays, but I don't know that for a fact. I just figured using MTLTextures would be the quickest way to update content.
By the way, when I swap out arrays of keyframes to arrays of UIImage.pngData, I have no display issues, but it's a lot slower. On the plus side, it tells me that the initial capture from currentDrawable to keyframeCGImage is working just fine.
Any thoughts would be appreciated.
p.s. adding a bit of detail based on the feedback:
mtlTextureToCGImage:
func mtlTextureToCGImage(bbox: CGRect, copyMode: copyTextureMode) -> CGImage {
    let kciOptions = [convertFromCIContextOption(CIContextOption.outputPremultiplied): true,
                      convertFromCIContextOption(CIContextOption.useSoftwareRenderer): false] as [String: Any]
    let bboxStrokeScaledFlippedY = CGRect(x: (bbox.origin.x * self.viewContentScaleFactor),
                                          y: ((self.viewBounds.height - bbox.origin.y - bbox.height) * self.viewContentScaleFactor),
                                          width: (bbox.width * self.viewContentScaleFactor),
                                          height: (bbox.height * self.viewContentScaleFactor))
    let strokeCIImage = CIImage(mtlTexture: metalDrawableTextureKeyframe,
                                options: convertToOptionalCIImageOptionDictionary(kciOptions))!.oriented(CGImagePropertyOrientation.downMirrored)
    let imageCropCG = cicontext.createCGImage(strokeCIImage, from: bboxStrokeScaledFlippedY, format: CIFormat.RGBA8, colorSpace: colorSpaceGenericRGBLinear)
    cicontext.clearCaches()
    return imageCropCG!
} // end of func mtlTextureToCGImage(bbox: CGRect)
CGImageToMTLTexture:
func CGImageToMTLTexture(cgImage: CGImage) -> MTLTexture {
    // Note that we forego the more direct method of creating stampTexture:
    // let stampTexture = try! MTKTextureLoader(device: self.device!).newTexture(cgImage: strokeUIImage.cgImage!, options: nil)
    // because MTKTextureLoader seems to be doing additional processing which messes with the resulting texture/colorspace
    let width = Int(cgImage.width)
    let height = Int(cgImage.height)
    let bytesPerPixel = 4
    let rowBytes = width * bytesPerPixel
    //
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                 width: width,
                                                                 height: height,
                                                                 mipmapped: false)
    texDescriptor.usage = MTLTextureUsage(rawValue: MTLTextureUsage.shaderRead.rawValue)
    texDescriptor.storageMode = .shared
    guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else {
        return brushTextureSquare // return SOMETHING
    }
    let dstData: CFData = (cgImage.dataProvider!.data)!
    let pixelData = CFDataGetBytePtr(dstData)
    let region = MTLRegionMake2D(0, 0, width, height)
    print("[MetalViewPainting]: w= \(width) | h= \(height) region = \(region.size)")
    stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: Int(rowBytes))
    return stampTexture
} // end of func CGImageToMTLTexture (cgImage: CGImage)
The type of distortion looks like a bytes-per-row alignment issue between CGImage and MTLTexture. You're probably only seeing this issue when your image is a certain size that falls outside of the bytes-per-row alignment requirement of your MTLDevice. If you really need to store the texture as a CGImage, ensure that you are using the bytesPerRow value of the CGImage when copying back to the texture.
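Concretely, that might look like the following tweak inside CGImageToMTLTexture above (a sketch reusing the names from that code):
// Use the CGImage's own row stride, which may include padding beyond width * 4,
// instead of assuming a tightly packed width * bytesPerPixel.
let rowBytes = cgImage.bytesPerRow
stampTexture.replace(region: region,
                     mipmapLevel: 0,
                     withBytes: pixelData!,
                     bytesPerRow: rowBytes)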

Very slow scrolling/zooming experience with GMUClusterRenderer (Google Maps Clustering) on iOS

I will try to explain my issue, and what I have done so far.
Introduction:
I am using the iOS Utils Library from Google Maps in order to display around 300 markers on the map.
The algorithm used for the Clustering is the GMUNonHierarchicalDistanceBasedAlgorithm.
Basically, our users can send us the weather they observe through their window, so that we can display the real time weather around the world.
It enables us to improve and/or adjust the weather forecasts.
But my scrolling/zooming experience isn't smooth at all. By the way I am testing it with an iPhone X ...
Let's get to the heart of the matter:
Here is how I configure the ClusterManager
private func configureCluster(array: [Observation]) -> Void {
    let iconGenerator = GMUDefaultClusterIconGenerator()
    let algorithm = GMUNonHierarchicalDistanceBasedAlgorithm()
    let renderer = GMUDefaultClusterRenderer(mapView: mapView,
                                             clusterIconGenerator: iconGenerator)
    renderer.delegate = self
    clusterManager = GMUClusterManager(map: mapView, algorithm: algorithm,
                                       renderer: renderer)
    clusterManager.add(array)
    clusterManager.cluster()
    clusterManager.setDelegate(self, mapDelegate: self)
}
Here is my Observation class; I tried to keep it simple:
class Observation: NSObject, GMUClusterItem {
    static var ICON_SIZE = 30
    let timestamp: Double
    let idObs: String
    let position: CLLocationCoordinate2D
    let idPicto: [Int]
    let token: String
    let comment: String
    let altitude: Double

    init(timestamp: Double, idObs: String, coordinate: CLLocationCoordinate2D, idPicto: [Int], token: String, comment: String, altitude: Double) {
        self.timestamp = timestamp
        self.idObs = idObs
        self.position = coordinate
        self.idPicto = idPicto
        self.token = token
        self.comment = comment
        self.altitude = altitude
    }
}
And finally, the delegate method for the rendering:
func renderer(_ renderer: GMUClusterRenderer, willRenderMarker marker: GMSMarker) {
    if let cluster = marker.userData as? GMUCluster {
        if let listObs = cluster.items as? [Observation] {
            if listObs.count > 1 {
                let sortedObs = listObs.sorted(by: { $0.timestamp > $1.timestamp })
                if let mostRecentObs = sortedObs.first {
                    DispatchQueue.main.async {
                        self.setIconViewForMarker(marker: marker, obs: mostRecentObs)
                    }
                }
            } else {
                if let obs = listObs.last {
                    DispatchQueue.main.async {
                        self.setIconViewForMarker(marker: marker, obs: obs)
                    }
                }
            }
        }
    }
}
Users can only send one observation, but this observation can be composed of various weather phenomena (like Clouds + Rain + Wind), or only Rain if they want.
To differentiate them: if there is only 1 phenomenon, the marker.iconView property is set directly.
On the other hand, if it's an observation with multiple phenomena, I create a view containing all the images representing those phenomena.
func setIconViewForMarker(marker: GMSMarker, obs: Observation) -> Void {
    let isYourObs = Observation.isOwnObservation(id: obs.idObs) ? true : false
    if isYourObs {
        marker.iconView = Observation.viewForPhenomenomArray(ids: obs.idPicto, isYourObs: isYourObs)
    } else {
        // Observation with more than 1 phenomenom
        if obs.idPicto.count > 1 {
            marker.iconView = Observation.viewForPhenomenomArray(ids: obs.idPicto, isYourObs: isYourObs)
        // Observation with only 1 phenomenom
        } else if obs.idPicto.count == 1 {
            if let id = obs.idPicto.last {
                marker.iconView = Observation.setImageForPhenomenom(id: id)
            }
        }
    }
}
And the last piece of code, to show you how I build this custom view (I think my issue is probably here)
class func viewForPhenomenomArray(ids: [Int], isYourObs: Bool) -> UIView {
    let popupView = UIView()
    popupView.frame = CGRect.init(x: 0, y: 0, width: (ICON_SIZE * ids.count) + ((ids.count + 1) * 5), height: ICON_SIZE)
    if (isYourObs) {
        popupView.backgroundColor = UIColor(red: 0.25, green: 0.61, blue: 0.20, alpha: 1)
    } else {
        popupView.backgroundColor = UIColor(red: 0.00, green: 0.31, blue: 0.57, alpha: 1)
    }
    popupView.layer.cornerRadius = 12
    for (index, element) in ids.enumerated() {
        let imageView = UIImageView(image: Observation.getPictoFromID(id: element))
        imageView.frame = CGRect(x: ((index + 1) * 5) + index * ICON_SIZE, y: 0, width: ICON_SIZE, height: ICON_SIZE)
        popupView.addSubview(imageView)
    }
    return popupView
}
I also tried with very small images to check whether the issue comes from rendering a lot of PNGs on the map, but seriously, it's an iPhone X; it should be able to render some simple weather icons on a map.
Do you think I am doing something wrong? Or is it a known issue in the Google Maps SDK? (I have read that it is fixed at 30 fps.)
Do you think rendering a lot of images (as marker.iconView) on a map takes that much GPU? To the point where the experience isn't acceptable at all?
If you have any advice, I'll take them all.
I was facing the same issue. After a lot of debugging and even checking Google's code, I came to the conclusion that the issue was with GMUDefaultClusterIconGenerator. This class creates images at runtime for each cluster size that you display. So when you zoom the map in or out, the cluster sizes change, and the class creates a new image for each new number (it does cache images, so a repeated number reuses its image).
So, the solution I found is to use buckets. Don't be surprised by this new term; let me explain the bucket concept with a simple example.
Suppose you keep the bucket sizes 10, 20, 50, 100, 200, 500, 1000.
Now, if your cluster size is 3, it will show 3.
If cluster size = 8, show = 8.
If cluster size = 16, show = 10+.
If cluster size = 22, show = 20+.
If cluster size = 48, show = 20+.
If cluster size = 91, show = 50+.
If cluster size = 177, show = 100+.
If cluster size = 502, show = 500+.
If cluster size = 1200004, show = 1000+.
Now, for any cluster size, the only marker images ever rendered come from the set 1, 2, 3, 4, 5, 6, 7, 8, 9, 10+, 20+, 50+, 100+, 200+, 500+, 1000+. Since the generator caches its images, these get reused, so the time and CPU previously spent creating new images drops sharply (only a few images ever need to be created).
You get the idea about buckets now: if a cluster is very small, its exact size matters, but as it grows, the bucket label is enough to convey the cluster size.
Now, the question is how to achieve this.
Actually, the GMUDefaultClusterIconGenerator class already has this functionality implemented; you just need to change its initialisation to this:
let iconGenerator = GMUDefaultClusterIconGenerator(buckets: [ 10, 20, 50, 100, 200, 500, 1000])
The GMUDefaultClusterIconGenerator class has other init methods that let you give different background colors or different background images to the buckets, and more.
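For example, something along these lines (a sketch; verify the initializer signature against your version of the Google Maps iOS Utils library):
// Hedged example: one background color per bucket; the exact initializer name
// should be checked against your version of Google-Maps-iOS-Utils.
let buckets: [NSNumber] = [10, 20, 50, 100, 200, 500, 1000]
let colors: [UIColor] = [.systemGreen, .systemTeal, .systemBlue, .systemIndigo,
                         .systemOrange, .systemRed, .systemPurple]
let iconGenerator = GMUDefaultClusterIconGenerator(buckets: buckets,
                                                   backgroundColors: colors)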
Let me know if any further help is required.

How do I use MetalKit texture loader with Metal heaps?

I have a set of Metal textures that are stored in an Xcode Assets Catalog as Texture Sets. I'm loading these using MTKTextureLoader.newTexture(name:scaleFactor:bundle:options).
I then use a MTLArgumentEncoder to encode all of the textures into a Metal 2 argument buffer.
This works great. However, the Introducing Metal 2 WWDC 2017 session recommends combining argument buffers with resource heaps for even better performance, and I'm quite keen to try this. According to the Argument Buffer documentation, instead of having to call MTLRenderCommandEncoder.useResource on each texture in the argument buffer, you just call useHeap on the heap that the textures were allocated from.
However, I haven't found a straightforward way to use MTKTextureLoader together with MTLHeap. It doesn't seem to have a loading option to allocate the texture from a heap.
I'm guessing that the approach would be:
load the textures with MTKTextureLoader
reverse-engineer a set of MTLTextureDescriptor objects for each texture
use the texture descriptors to create an appropriately sized MTLHeap
assign a new set of textures from the MTLHeap
use some method to copy the textures across, perhaps replaceBytes or maybe even a MTLBlitCommandEncoder
deallocate the original textures loaded with the MTKTextureLoader
It seems like a fairly long-winded approach, and I've not seen any examples of this, so I thought I'd ask here first in case I'm missing something obvious.
Should I abandon MTKTextureLoader, and search out some pre-MetalKit art on loading textures from asset catalogs?
I'm using Swift, but happy to accept Objective-C answers.
Well, the method I outlined above seems to work. As predicted, it's pretty long-winded. I'd be very interested to know if anyone has anything more elegant.
import Metal
import MetalKit

enum MetalError: Error {
    case anErrorOccured
}

extension MTLTexture {
    var descriptor: MTLTextureDescriptor {
        let descriptor = MTLTextureDescriptor()
        descriptor.width = width
        descriptor.height = height
        descriptor.depth = depth
        descriptor.textureType = textureType
        descriptor.cpuCacheMode = cpuCacheMode
        descriptor.storageMode = storageMode
        descriptor.pixelFormat = pixelFormat
        descriptor.arrayLength = arrayLength
        descriptor.mipmapLevelCount = mipmapLevelCount
        descriptor.sampleCount = sampleCount
        descriptor.usage = usage
        return descriptor
    }

    var size: MTLSize {
        return MTLSize(width: width, height: height, depth: depth)
    }
}

extension MTKTextureLoader {
    func newHeap(withTexturesNamed names: [String], queue: MTLCommandQueue, scaleFactor: CGFloat, bundle: Bundle?, options: [MTKTextureLoader.Option : Any]?, onCompletion: (([MTLTexture]) -> Void)?) throws -> MTLHeap {
        let device = queue.device
        // Load the source textures with MTKTextureLoader.
        let sourceTextures = try names.map { name in
            return try newTexture(name: name, scaleFactor: scaleFactor, bundle: bundle, options: options)
        }
        // Build matching descriptors with private storage for the heap copies.
        let storageMode: MTLStorageMode = .private
        let descriptors: [MTLTextureDescriptor] = sourceTextures.map { source in
            let desc = source.descriptor
            desc.storageMode = storageMode
            return desc
        }
        let sizeAligns = descriptors.map { device.heapTextureSizeAndAlign(descriptor: $0) }
        let heapDescriptor = MTLHeapDescriptor()
        heapDescriptor.size = sizeAligns.reduce(0) { $0 + $1.size }
        heapDescriptor.cpuCacheMode = descriptors[0].cpuCacheMode
        heapDescriptor.storageMode = storageMode
        guard let heap = device.makeHeap(descriptor: heapDescriptor),
            let buffer = queue.makeCommandBuffer(),
            let blit = buffer.makeBlitCommandEncoder()
            else {
                throw MetalError.anErrorOccured
        }
        let destTextures = descriptors.map { descriptor in
            return heap.makeTexture(descriptor: descriptor)
        }
        // Blit each loaded texture into its heap-allocated counterpart.
        let origin = MTLOrigin()
        zip(sourceTextures, destTextures).forEach { (source, dest) in
            blit.copy(from: source, sourceSlice: 0, sourceLevel: 0, sourceOrigin: origin, sourceSize: source.size,
                      to: dest, destinationSlice: 0, destinationLevel: 0, destinationOrigin: origin)
            blit.generateMipmaps(for: dest)
        }
        blit.endEncoding()
        buffer.addCompletedHandler { _ in
            onCompletion?(destTextures)
        }
        buffer.commit()
        return heap
    }
}
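For what it's worth, a hypothetical call site for the extension above might look like this (device, commandQueue, and the texture-set names are placeholders):
// Hypothetical usage of the newHeap(withTexturesNamed:...) extension above.
let loader = MTKTextureLoader(device: device)
do {
    let heap = try loader.newHeap(withTexturesNamed: ["Albedo", "Normal", "Roughness"],
                                  queue: commandQueue,
                                  scaleFactor: UIScreen.main.scale,
                                  bundle: nil,
                                  options: nil) { heapTextures in
        // Re-encode the heap-backed heapTextures into the argument buffer here.
    }
    // In the render pass, a single useHeap call replaces per-texture useResource calls:
    // renderEncoder.useHeap(heap)
} catch {
    print("Failed to build texture heap: \(error)")
}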

Swift 3 loop to center pixel of image

Language:
Swift 3
What I am using:
Hardware: an iPhone 6 and a FLIR One thermal camera
Software: Xcode 8.1 and the locked-down FLIR One SDK (written and documented for Obj-C, with very little documentation at all)
What I am trying to do:
Get the temperature of the center pixel from the returned camera stream
The problem:
I have everything in the app I am building set up and working as expected, except for getting this thermal data. I have been able to get a temperature from the returned camera stream, but I can only get it for the 0,0 (top left) pixel. I have crosshairs in the center of the image view that the stream feeds into, and I would like to get the pixel at this center point (the exact center of the image). I have been working on this for two days and I can't seem to figure out how to do it. The SDK does not allow you to specify which pixel to read from, and from what I have read online, you have to loop through each pixel and stop at the center one.
The code below gets the temperature from pixel 0,0 and displays it correctly; I want temperature to read the center pixel instead. temperature MUST be UInt16 to provide a correct Kelvin readout (from what I understand at least), but I receive the radiometric data from Data! as UInt8. NOTE: NSData! does not work with these delegates; attempting to use it (even in Swift 2.3) causes the delegate to never fire. I have to use Data! for the delegate to even run. I found it very odd that I couldn't use Swift 2.3, because it only has NSData. If this is because I screwed something up, please let me know.
func flirOneSDKDelegateManager(_ delegateManager: FLIROneSDKDelegateManager!, didReceiveRadiometricData radiometricData: Data!, imageSize size: CGSize) {
    let byteArray = radiometricData?.withUnsafeBytes {
        [UInt8](UnsafeBufferPointer(start: $0, count: (radiometricData?.count)!))
    }
    let temperature = UnsafePointer(byteArray!).withMemoryRebound(to: UInt16.self, capacity: 1) {
        $0.pointee
    }
    debugPrint(temperature)
    DispatchQueue.main.async {
        self.tempLabel.text = NSString(format: "%.1f", self.convertToFarenheit(kelvin: temperature)) as String
    }
}
I am new to Swift, so I am not sure if this is the best way to do this; if you have a better way, please advise me.
Current solution:
This gets the center pixel, but not dynamically (dynamic is what I want):
let byteArray = radiometricData?.withUnsafeBytes {
    [UInt8](UnsafeBufferPointer(start: $0, count: (radiometricData?.count)!))
}
let bytes: [UInt8] = [(byteArray?[74170])!, (byteArray?[74171])!]
let temperature = UnsafePointer(bytes).withMemoryRebound(to: UInt16.self, capacity: 1) {
    $0.pointee
}
Assuming you are trying to read some UInt16 values from Swift's Data:
import Foundation
var data = Data([1,0,2,0,255,255,1]) // 7 bytes
var arr: [UInt16] = []
// in my data only 6 bytes can be represented as UInt16 values, so I have to ignore the last one ...
for i in stride(from: 0, to: 2 * (data.count / 2), by: MemoryLayout<UInt16>.stride) {
    arr.append(data.subdata(in: i..<(i + MemoryLayout<UInt16>.stride)).withUnsafeBytes { $0.pointee })
}
print(arr) // [1, 2, 65535]
or you could use something like
let arr0: [UInt16] = data.withUnsafeBytes { (p: UnsafePointer<UInt8>) -> [UInt16] in
    let capacity = data.count / MemoryLayout<UInt16>.stride
    return p.withMemoryRebound(to: UInt16.self, capacity: capacity) {
        var arr = [UInt16]()
        for i in 0..<capacity {
            arr.append(($0 + i).pointee)
        }
        return arr
    }
}
print(arr0) // [1, 2, 65535]
or
extension Data {
    func scan<T>(offset: Int, bytes: Int) -> T {
        return self.subdata(in: offset..<(offset + bytes)).withUnsafeBytes { $0.pointee }
    }
}
let k: UInt16 = data.scan(offset: 2, bytes: MemoryLayout<UInt16>.stride)
print(k) // 2
or even better
extension Data {
    func scan<T>(from: Int) -> T {
        return self.withUnsafeBytes { (p: UnsafePointer<UInt8>) -> T in
            p.advanced(by: from).withMemoryRebound(to: T.self, capacity: 1) {
                $0.pointee
            }
        }
    }
}
let u0: UInt16 = data.scan(from: 2)
print(u0) // 2
or
extension Data {
    func scan2<T>(from: Int) -> T {
        return self.withUnsafeBytes {
            // Precondition: The underlying pointer plus offset is properly aligned for accessing T.
            // Precondition: The memory is initialized to a value of some type, U, such that T is layout compatible with U.
            UnsafeRawPointer($0).load(fromByteOffset: from, as: T.self)
        }
    }
}
let u2: UInt16 = data.scan2(from: 2)
print(u2) // 2
What is the right offset in your Data to read the value? That is hard to say from the information you provided in your question.
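That said, if the radiometric buffer is tightly packed in row-major order with one UInt16 per pixel (an assumption worth verifying against the FLIR documentation), the center-pixel offset can be computed from the imageSize the delegate hands you, for example using the scan(from:) extension above:
// Hedged sketch: assumes a tightly packed, row-major buffer of one UInt16 per pixel.
func centerPixelValue(radiometricData: Data, imageSize: CGSize) -> UInt16 {
    let width = Int(imageSize.width)
    let height = Int(imageSize.height)
    let centerPixelIndex = (height / 2) * width + (width / 2)
    let byteOffset = centerPixelIndex * MemoryLayout<UInt16>.stride   // 2 bytes per pixel
    return radiometricData.scan(from: byteOffset)
}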

Strange bug/artefacts on SCNNode rendering

On some iOS devices (iPhone 6s Plus) there is a partial and arbitrary disappearance of object parts.
How can I avoid this?
All sticks must be identical; they are clones of one SCNNode.
There are 16 compound SCNNodes, each built from 3 SCNNodes (box, ball and stick) and flattened into a single geometry-bearing node via node.flattenedClone().
It should look like this:
Code fragment:
func initBox()
{
    var min: SCNVector3 = SCNVector3()
    var max: SCNVector3 = SCNVector3()

    let geom1 = SCNBox(width: boxW, height: boxH, length: boxL, chamferRadius: boxR)
    geom1.firstMaterial?.reflective.contents = UIImage(data: BoxData)
    geom1.firstMaterial?.reflective.intensity = 1.2
    geom1.firstMaterial?.fresnelExponent = 0.25
    geom1.firstMaterial?.locksAmbientWithDiffuse = true
    geom1.firstMaterial?.diffuse.wrapS = SCNWrapMode.Repeat

    let geom2 = SCNSphere(radius: 0.5 * boxH)
    geom2.firstMaterial?.reflective.contents = UIImage(data: BalData)
    geom2.firstMaterial?.reflective.intensity = 1.2
    geom2.firstMaterial?.fresnelExponent = 0.25
    geom2.firstMaterial?.locksAmbientWithDiffuse = true
    geom2.firstMaterial?.diffuse.wrapS = SCNWrapMode.Repeat

    let geom3 = SCNCapsule(capRadius: stickR, height: stickH)
    geom3.firstMaterial?.reflective.contents = UIImage(data: StickData)
    geom3.firstMaterial?.reflective.intensity = 1.2
    geom3.firstMaterial?.fresnelExponent = 0.25
    geom3.firstMaterial?.locksAmbientWithDiffuse = true
    geom3.firstMaterial?.diffuse.wrapS = SCNWrapMode.Repeat

    let box = SCNNode()
    box.castsShadow = false
    box.position = SCNVector3Zero
    box.geometry = geom1
    Material.setFirstMaterial(box, materialName: Materials[boxMatId])

    let bal = SCNNode()
    bal.castsShadow = false
    bal.position = SCNVector3(0, 0.15 * boxH, 0)
    bal.geometry = geom2
    Material.setFirstMaterial(bal, materialName: Materials[balMatId])

    let stick = SCNNode()
    stick.castsShadow = false
    stick.position = SCNVector3Zero
    stick.geometry = geom3
    stick.getBoundingBoxMin(&min, max: &max)
    stick.pivot = SCNMatrix4MakeTranslation(0, min.y, 0)
    Material.setFirstMaterial(stick, materialName: Materials[stickMatId])

    box.addChildNode(bal)
    box.addChildNode(stick)
    boxmain = box.flattenedClone()
    boxmain.name = "box"
}
Add nodes to the scene:
func Boxesset()
{
    let Boxes = SCNNode()
    Boxes.name = "Boxes"
    var z: Float = -4.5 * radius
    for _ in 0..<4
    {
        var x: Float = -4.5 * radius
        for _ in 0..<4
        {
            let B: SCNNode = boxmain.clone()
            B.position = SCNVector3(x: x, y: radius, z: z)
            Boxes.addChildNode(B)
            x += 3 * Float(radius)
        }
        z += 3 * Float(radius)
    }
    self.rootNode.addChildNode(Boxes)
}
This is tested and works great in the simulator (all devices) and on physical devices such as the iPad Retina and iPhone 5.
The glitch is observed only on the very modern iPhone 6s Plus (128 GB).
The problem is clearly visible on the video ->
The problem with graphics can be solved by changing the Default rendering API to OpenGL ES...
...but you may then have unexpected problems in pure compute modules that are not associated with graphics on the iPhone 6s Plus (the iPhone 6 has no such problems).
What's wrong?
TL;DR
Add scnView.prepareObject(boxmain, shouldAbortBlock: nil) to the end of your initBox.
I had a quick look at your code running on my 6s Plus and saw similar results. One of the corner nodes was missing, and was consistently missing each run. But we're not running the same code, mine's missing the materials data...
SceneKit is lazy; often things are not done until an object is added to a scene. I first came across this when extracting geometry from a SceneKit primitive (SCNSphere etc.); you're hitting it when you clone a clone of something via the following lines.
let B: SCNNode = boxmain.clone()
...
boxmain = box.flattenedClone()
I'd say SceneKit is simply not completing the clone before the second clone occurs consistently. I have no way of knowing this for sure.
Removing the first clone fixes this issue for me. For example replace boxmain = box.flattenedClone() with boxmain = box. But I'd say what you've done is best practice, flattening these nodes will reduce the number of draw calls and improve performance (probably not an issue on the 6s).
SceneKit also provides a method - prepareObject:shouldAbortBlock: that will perform the operations required before an object is added to a scene (in this case the .flattenedClone()).
Adding the following line to the end of your initBox function also fixes the problem and is a better solution.
scnView.prepareObject(boxmain, shouldAbortBlock: nil)
To be clear, I don't know the right answer to my question, but I found an acceptable solution for myself.
It turned out it's all about the "diffuse" property of the SCNMaterial.
For whatever reason, Metal does not much like it when diffuse = UIColor(...).
But if at least one element in a compound SCNNode (as in my case) has diffuse.contents = UIImage(...), then everything starts to work perfectly.
it works
diffuse=<SCNMaterialProperty: 0x7a6d50a0 | contents=<UIImage: 0x7a6d5b40> size {128, 128} orientation 0 scale 1.000000>
it doesn't work
diffuse=<SCNMaterialProperty: 0x7e611a50 | contents=UIDeviceRGBColorSpace 0.25 0.25 0.25 0.99>
The solution I found for the problem is simple:
I just added one small, inconspicuous element with diffuse.contents = UIImage(...) to the existing 3 elements with diffuse.contents = UIColor(...), and it worked great.
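In code, the workaround amounts to something like this (a sketch: the tiny geometry and image name are illustrative placeholders):
// Hedged sketch of the workaround: give at least one child of the compound node
// an image-backed diffuse before flattening. The tiny image asset name is hypothetical.
let dummyGeometry = SCNSphere(radius: 0.001)
dummyGeometry.firstMaterial?.diffuse.contents = UIImage(named: "one_pixel_white")
let dummyNode = SCNNode(geometry: dummyGeometry)
box.addChildNode(dummyNode)   // do this before boxmain = box.flattenedClone()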
So, my recommendations:
be careful when working with Metal (I have problems on 5s devices and above)
thoroughly test SceneKit applications on real devices; don't trust only the simulator
I hope these are temporary bugs and they will be fixed in future releases of Xcode.
Have nice apps!
P.S. By the way, the finished app is completely free in the AppStore now
Qubic: tic-tac-toe 4x4x4
