My team is in the process of updating to Mapbox v10. We have several tile map layers that are all crashing immediately upon loading.
One weird thing I've noticed is that the crash only happens when the Xcode debugger is attached. If I launch the app directly from the device, everything works as expected.
It looks like the crash is Metal-related, but Mapbox is supposed to abstract away all of the Metal code, so I'm not sure whether this is a bug on my end or a Mapbox bug.
The error:
[MTLDebugBlitCommandEncoder generateMipmapsForTexture:]:1109: failed assertion `Generate Mipmaps For Texture Validation [tex mipmapLevelCount](1) must be > 1.'
The code:
private func prepareFrames() {
    // tileFrameProvider.frames is an array of String
    for frame in tileFrameProvider.frames {
        let currentTimeString = "\(Date().timeIntervalSince1970)"
        let layerID = frame + mapOverlay.name + currentTimeString
        let sourceID = frame + mapOverlay.name + "source" + currentTimeString
        let url = urlTemplate.replacingOccurrences(of: "{t}", with: frame)

        var source = RasterSource()
        source.tiles = [url]
        source.tileSize = Double(tileSize)
        sources.append(source)

        var rasterLayer = RasterLayer(id: layerID)
        rasterLayer.source = sourceID
        rasterLayer.rasterOpacity = .constant(Double(initialAlpha))
        rasterLayers.append(rasterLayer)
    }

    guard !tileFrameProvider.frames.isEmpty else { return }

    for (index, source) in self.sources.enumerated() {
        if let layer = self.rasterLayers[safeIndex: index] {
            try? self.mapView.mapboxMap.style.addSource(source, id: layer.source ?? "")
            try? self.mapView.mapboxMap.style.addLayer(layer, layerPosition: .above("building-line"))
        }
    }
}
Looks like it is a Mapbox bug. My app doesn't use Metal directly, and disabling Metal API Validation (in the scheme's Run > Diagnostics options) stops the crash from happening.
I need to identify whether the device has been rebooted.
Currently I save the boot time in the database and periodically check the interval from the last boot, using the following code as suggested on the Apple forums:
func bootTime() -> Date? {
    // Read the kernel boot time (a timeval) via sysctl
    var tv = timeval()
    var tvSize = MemoryLayout<timeval>.size
    let err = sysctlbyname("kern.boottime", &tv, &tvSize, nil, 0)
    guard err == 0, tvSize == MemoryLayout<timeval>.size else {
        return nil
    }
    // Convert seconds + microseconds since the epoch into a Date
    return Date(timeIntervalSince1970: Double(tv.tv_sec) + Double(tv.tv_usec) / 1_000_000.0)
}
But the problem with this is that even without a reboot, the tv.tv_sec value drifts between reads (anywhere from 0 to about 30 seconds).
Does anybody have any idea about this variation, or know a more reliable way to identify a device reboot, with or without sysctl?
https://developer.apple.com/forums/thread/101874?answerId=309633022#309633022
Any pointer is highly appreciated.
I searched over SO, and all the answers point to the solution mentioned here, which has the issue I described above. Please don't mark this as a duplicate.
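For what it's worth, one workaround is to treat two boot-time readings as the same boot when they differ by less than some tolerance, since iOS appears to recompute kern.boottime as the wall clock is adjusted. A minimal sketch (the 60-second default tolerance is an assumption, not from the linked thread):

import Foundation

// Treat two boot-time readings as the same boot if they differ by less than
// a tolerance, so small clock adjustments are not misread as a reboot.
// The 60 s default is an assumed value; tune it to the drift you observe.
func isSameBoot(previous: Date, current: Date, tolerance: TimeInterval = 60) -> Bool {
    return abs(previous.timeIntervalSince(current)) < tolerance
}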
I'm working with the Photos framework; specifically, I'd like to keep track of the current camera roll status, updating it every time assets are added, deleted, or modified (mainly when a picture is edited by the user, e.g. a filter is added or the image is cropped).
My first implementation would look something like the following:
private var lastAssetFetchResult: PHFetchResult<PHAsset>?

func photoLibraryDidChange(_ changeInstance: PHChange) {
    guard let fetchResult = lastAssetFetchResult,
          let details = changeInstance.changeDetails(for: fetchResult) else { return }
    let modified = details.changedObjects
    let removed = details.removedObjects
    let added = details.insertedObjects
    // update fetch result
    lastAssetFetchResult = details.fetchResultAfterChanges
    // do stuff with modified, removed, added
}
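(For context, photoLibraryDidChange(_:) only gets called once the object is registered as a change observer; a minimal sketch of that wiring, with the class name assumed:)

import Photos

final class LibraryObserver: NSObject, PHPhotoLibraryChangeObserver {
    override init() {
        super.init()
        PHPhotoLibrary.shared().register(self) // start receiving change callbacks
    }
    deinit {
        PHPhotoLibrary.shared().unregisterChangeObserver(self)
    }
    func photoLibraryDidChange(_ changeInstance: PHChange) {
        // handle changes as in the snippet above
    }
}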
However, I soon found out that details.changedObjects would contain more than just the assets modified by the user, so I moved to the following implementation:
let modified = modifiedAssets(changeInstance: changeInstance)
with:
func modifiedAssets(changeInstance: PHChange) -> [PHAsset] {
    var modified: [PHAsset] = []
    lastAssetFetchResult?.enumerateObjects { obj, _, _ in
        if let detail = changeInstance.changeDetails(for: obj),
           detail.assetContentChanged,
           let updatedObj = detail.objectAfterChanges {
            modified.append(updatedObj)
        }
    }
    return modified
}
So I'm relying on the PHObjectChangeDetails.assetContentChanged property, which, as the documentation states, indicates whether the asset's photo or video content has changed.
This brought the results closer to the ones I was expecting, but I'm still not entirely sure I understand its behavior.
On some devices (e.g. iPad Mini 3) I get the expected result (assetContentChanged = true) in all the cases that I tested, whereas on others (e.g. iPhone 6s Plus, iPhone 7) it's hardly ever matching my expectation (assetContentChanged is false even for assets that I cropped or added filters to).
All the devices run the latest iOS version, 11.2.
Am I getting anything wrong?
Do you think I could achieve my goal some other way?
Thank you in advance.
I don't understand why this code doesn't work: the detector is always nil with the CIDetectorTypeQRCode constant, while everything works with CIDetectorTypeFace.
I suspect a bug in Apple's API. This is the official doc: Apple documentation
@IBAction func analyseTag(sender: AnyObject) {
    var detector: CIDetector = CIDetector(ofType: CIDetectorTypeQRCode, context: nil,
                                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    var decode = ""
    var ciImage: CIImage = CIImage(image: ImgChoosed.image)
    var message: String = ""
    let features = detector.featuresInImage(ciImage)
    for feature in features as [CIQRCodeFeature] {
        message += feature.messageString
    }
    if message == "" {
        println("nothing")
    } else {
        println("\(message)")
    }
}
Do you have a solution?
Thanks in advance, guys.
The code you provided can't have a nil detector: it's not an optional, and the compiler would complain about several places in your code if it were.
If features is empty then you know it didn't find a QR code in your image. Try providing a better image or turning down the CIDetectorAccuracy.
If features isn't empty then your cast is failing.
Edit:
You can't pass a nil context in the constructor.
This happened to us as well. iPhone 4s doesn't return a CIDetector of type QRCode. The other types (rectangle, face) work though…
The same code works as expected on the iPhone 6. Haven't tested on a 5 or 5s yet.
But two weeks ago it was still working on the 4s, I believe. It was still on iOS 8 back then, I guess.
Make sure your ImgChoosed.image is not nil.
Change another input image for testing.
Try for feature in features as! [CIQRCodeFeature].
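Putting those suggestions together, a modern-Swift version of the question's code might look roughly like this (a sketch; TagViewController is a hypothetical name, and ImgChoosed is the image view from the question):

import UIKit
import CoreImage

class TagViewController: UIViewController {
    @IBOutlet weak var ImgChoosed: UIImageView! // outlet name from the question

    @IBAction func analyseTag(_ sender: Any) {
        guard let uiImage = ImgChoosed.image,
              let ciImage = CIImage(image: uiImage),
              let detector = CIDetector(ofType: CIDetectorTypeQRCode,
                                        context: nil,
                                        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh]) else {
            print("nothing")
            return
        }
        // features(in:) returns [CIFeature]; keep only the QR code results
        let message = detector.features(in: ciImage)
            .compactMap { ($0 as? CIQRCodeFeature)?.messageString }
            .joined()
        print(message.isEmpty ? "nothing" : message)
    }
}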
I found that running on a real device resolved this issue. The simulator always returned nil for me.
I'm accessing the camera in iOS and using session presets as so:
captureSession.sessionPreset = AVCaptureSessionPresetMedium;
Pretty standard stuff. However, I'd like to know ahead of time the resolution of the video I'll be getting with this preset (especially because it differs between devices). I know there are tables online where you can look this up (such as here: http://cmgresearch.blogspot.com/2010/10/augmented-reality-on-iphone-with-ios40.html ), but I'd like to get it programmatically so that I'm not relying on magic numbers.
So, something like this (theoretically):
[captureSession resolutionForPreset:AVCaptureSessionPresetMedium];
which might return a CGSize of { width: 360, height: 480}. I have not been able to find any such API, so far I've had to resort to waiting to get my first captured image and querying it then (which for other reasons in my program flow is not good).
I am no AVFoundation pro, but I think the way to go is:
captureSession.sessionPreset = AVCaptureSessionPresetMedium;
AVCaptureInput *input = [captureSession.inputs objectAtIndex:0]; // or search the inputs array for the video input
AVCaptureInputPort *port = [input.ports objectAtIndex:0];
CMFormatDescriptionRef formatDescription = port.formatDescription;
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription);
I'm not sure about the last step and I didn't try it myself. Just found that in the documentation and think it should work.
Searching for CMVideoDimensions in Xcode you'll find the RosyWriter example project. Have a look at that code (I don't have time to do that now).
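In Swift, that same lookup might read roughly like this (a sketch; it assumes the first input and first port are the video ones):

import AVFoundation

captureSession.sessionPreset = .medium
if let port = captureSession.inputs.first?.ports.first,
   let formatDescription = port.formatDescription {
    let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
    print("Preset resolution: \(dimensions.width)x\(dimensions.height)")
}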
You can programmatically get the resolution from activeFormat before capture begins, though not before adding inputs and outputs: https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureDevice_Class/index.html#//apple_ref/occ/instp/AVCaptureDevice/activeFormat
private func getCaptureResolution() -> CGSize {
    // Define default resolution
    var resolution = CGSize(width: 0, height: 0)

    // Get current video device
    let curVideoDevice = useBackCamera ? backCameraDevice : frontCameraDevice

    // Check if video is in portrait orientation
    let portraitOrientation = orientation == .Portrait || orientation == .PortraitUpsideDown

    // Get video dimensions from the device's active format
    if let formatDescription = curVideoDevice?.activeFormat.formatDescription {
        let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
        resolution = CGSize(width: CGFloat(dimensions.width), height: CGFloat(dimensions.height))
        if portraitOrientation {
            // Dimensions are reported in landscape; swap for portrait
            resolution = CGSize(width: resolution.height, height: resolution.width)
        }
    }

    // Return resolution
    return resolution
}
FYI, I attach here an official reply from Apple.
This is a follow-up to Bug ID# 13201137.
Engineering has determined that this issue behaves as intended based on the following information:
There are several problems with the included code:
1) The AVCaptureSession has no inputs.
2) The AVCaptureSession has no outputs.
Without at least one input (added to the session using [AVCaptureSession addInput:]) and a compatible output (added using [AVCaptureSession addOutput:]), there will be no active connections, therefore, the session won't actually run in the input device. It doesn't need to -- there are no outputs to which to deliver any camera data.
3) The JAViewController class assumes that the video port's -formatDescription property will be non nil as soon as [AVCaptureSession startRunning] returns.
There is no guarantee that the format description will be updated with the new camera format as soon as startRunning returns. -startRunning starts up the camera and returns when it is completely up and running, but doesn't wait for video frames to be actively flowing through the capture pipeline, which is when the format description would be updated.
You're just querying too fast. If you waited a few milliseconds more, it would be there. But the right way to do this is to listen for the AVCaptureInputPortFormatDescriptionDidChangeNotification.
4) Your JAViewController class creates a PVCameraInfo object in retrieveCameraInfo: and asks it a question, then lets it fall out of scope, where it is released and dealloc'ed.
Therefore, the session doesn't have long enough to run to satisfy your dimensions request. You stop the camera too quickly.
We consider this issue closed. If you have any questions or concern regarding this issue, please update your report directly (http://bugreport.apple.com).
Thank you for taking the time to notify us of this issue.
Best Regards,
Developer Bug Reporting Team
Apple Worldwide Developer Relations
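In Swift, listening for the notification Apple recommends in point 3 might look roughly like this (a sketch, not part of Apple's reply):

import AVFoundation

// Observe format-description changes instead of querying right after startRunning()
let token = NotificationCenter.default.addObserver(
    forName: .AVCaptureInputPortFormatDescriptionDidChange,
    object: nil,
    queue: .main
) { notification in
    guard let port = notification.object as? AVCaptureInput.Port,
          let description = port.formatDescription else { return }
    let dimensions = CMVideoFormatDescriptionGetDimensions(description)
    print("Video dimensions are now \(dimensions.width)x\(dimensions.height)")
}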
According to Apple, there's no API for that. It stinks; I've had the same problem.
Maybe you can provide a list of all possible preset resolutions for every iPhone model and check which device model the app is running on, using something like this...
[[UIDevice currentDevice] platformType] // ex: UIDevice4GiPhone
[[UIDevice currentDevice] platformString] // ex: @"iPhone 4G"
However, you'd have to update the list for each new device model. Hope this helps :)
If the preset is .photo, the returned size is the still photo size, not the preview video size.
If the preset is not .photo, the returned size is the video size, not the captured photo size.
if self.session.sessionPreset != .photo {
    // Returns the video size, not the captured photo size
    let format = videoDevice.activeFormat
    let formatDescription = format.formatDescription
    let dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
} else {
    // Need another way to get the video size
}
Christian Beer's answer is a good way to do this for a specified preset.
My way works for the active preset.
The best way to do what you want (get a known video or image format) is to set the format of the capture device.
First find the capture device you want to use:
if #available(iOS 10.0, *) {
    captureDevice = defaultCamera()
} else {
    let devices = AVCaptureDevice.devices()
    // Loop through all the capture devices on this phone
    for device in devices {
        // Make sure this particular device supports video
        if (device as AnyObject).hasMediaType(AVMediaType.video) {
            // Finally check the position and confirm we've got the back camera
            if (device as AnyObject).position == AVCaptureDevice.Position.back {
                captureDevice = device as AVCaptureDevice
            }
        }
    }
}
self.autoLevelWindowCenter = ALCWindow.frame // app-specific, unrelated to the camera setup
if captureDevice != nil && currentUser != nil {
    beginSession()
}
func defaultCamera() -> AVCaptureDevice? {
    if #available(iOS 10.0, *) {
        // Only use the wide-angle camera, never the dual camera
        return AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera,
                                       for: AVMediaType.video,
                                       position: .back)
    } else {
        return nil
    }
}
Then find the formats that that device can use:
let options = captureDevice!.formats
var supportable = options.first as! AVCaptureDevice.Format
for format in options {
    let description = format.description
    if description.contains("60 fps") && description.contains("1280x 720") {
        supportable = format
    }
}
You can do more complex parsing of the formats, but you might not care.
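Instead of string-matching the format's description, you could also compare the dimensions and frame-rate ranges directly, which is less brittle (a sketch, not from the answer above):

// Match 1280x720 at >= 60 fps by inspecting each format directly
let supportable = captureDevice!.formats.first { format in
    let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
    let supports60fps = format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
    return dims.width == 1280 && dims.height == 720 && supports60fps
}

Note that this version returns an optional, so you'd unwrap it before assigning it to activeFormat.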
Then just set the device to that format:
do {
    try captureDevice!.lockForConfiguration()
    captureDevice!.activeFormat = supportable
    // setup other capture device stuff like autofocus, frame rate, ISO, shutter speed, etc.
    captureDevice!.unlockForConfiguration()
    // add the device to an active CaptureSession
    try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice!))
} catch {
    print("Could not configure the capture device: \(error)")
}
You may want to look at the AVFoundation docs and tutorial on AVCaptureSession as there are lots of things you can do with the output as well. For example, you can convert the result to .mp4 using AVAssetExportSession so that you can post it on YouTube, etc.
Hope this helps
Apple uses a 4:3 ratio for the iPhone camera.
You can use this ratio to get the frame size of the captured video by fixing either the width or the height constraint of the AVCaptureVideoPreviewLayer and setting the aspect-ratio constraint to 4:3.
In the left image, the width was fixed to 300px and the height was derived from the 4:3 ratio, giving 400px.
In the right image, the height was fixed to 300px and the width was derived from the 3:4 ratio, giving 225px.
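In code, that aspect-ratio constraint might look like this (a sketch; previewContainer is an assumed view hosting the AVCaptureVideoPreviewLayer):

import UIKit

// Fix the width and derive the height from the 4:3 (height:width) ratio
previewContainer.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
    previewContainer.widthAnchor.constraint(equalToConstant: 300),
    previewContainer.heightAnchor.constraint(equalTo: previewContainer.widthAnchor,
                                             multiplier: 4.0 / 3.0) // 300 -> 400
])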