How to get USD / USDZ model metersPerUnit value in iOS? - ios

I've been trying for a few years to find out how to get the USD / USDZ metersPerUnit value in an iOS / macCatalyst app but so far have not discovered any solution. The issue has become more important as our users utilize more of their own USDZ models to create multi-model 3D and AR scenes in our app.
From Apple's SceneKit documentation I would expect that SCNSceneSource.property(forKey: SCNSceneSourceAssetUnitKey) would provide the value, but I have never seen that API return anything other than nil for any type of model file, including .obj, .scn, .dae, .usdc, and .usdz.
According to Pixar's USD spec, the default value for metersPerUnit when unspecified is 0.01 (i.e., centimeters), and some USDZ sources like Sketchfab seem to nearly always set it to 0.01. But Apple's tools like Reality Converter and usdzconvert let the user set the value directly, so we're seeing lots of models with other values.
I'd like to do something like the sample code below, but I cannot get it to work. Since SCNSceneSource seems to use Model I/O under the hood, I would have thought Model I/O would expose an API for this, but I have not discovered one. Is there an API anyone can suggest for getting the metersPerUnit value?
do {
    var options: [SCNSceneSource.LoadingOption: Any] = [
        .animationImportPolicy: SCNSceneSource.AnimationImportPolicy.doNotPlay
    ]
    if let modelSource = SCNSceneSource(url: url) {
        if let units = modelSource.property(forKey: SCNSceneSourceAssetUnitKey) as? [String: Any] {
            if let metersPerUnit = units[SCNSceneSourceAssetUnitMeterKey] as? Float {
                options[.convertUnitsToMeters] = NSNumber(value: metersPerUnit)
            }
        }
        let scene = try modelSource.scene(options: options)
    }
} catch {
    throw NSError(domain: "OurApp", code: 0, userInfo: [NSLocalizedDescriptionKey: "Model \(url.lastPathComponent) cannot be loaded"])
}
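For completeness, a text-scanning fallback is conceivable when the layer happens to be a plain-text .usda file. This is only a sketch, not an official API, and it does not handle binary .usdc crates or zipped .usdz archives; the helper name is made up for illustration:

import Foundation

// Workaround sketch, not an official API: scan a plain-text .usda layer
// for its metersPerUnit metadata. Binary .usdc crate files and zipped
// .usdz archives are NOT handled here.
func metersPerUnitOfASCIIUSD(at url: URL) -> Double? {
    guard let text = try? String(contentsOf: url, encoding: .utf8) else { return nil }
    // Matches e.g. `metersPerUnit = 0.01` in the layer metadata block.
    let pattern = "metersPerUnit\\s*=\\s*([0-9eE.+-]+)"
    guard let regex = try? NSRegularExpression(pattern: pattern, options: []),
          let match = regex.firstMatch(in: text, options: [], range: NSRange(text.startIndex..., in: text)),
          let valueRange = Range(match.range(at: 1), in: text) else {
        return nil // Not specified; USD's documented default is 0.01 (centimeters).
    }
    return Double(String(text[valueRange]))
}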

Related

Up-to-date list of built-in Core Image filters?

Apple's Core Image Filter Reference, which describes all of the built-in CIFilters, is marked as "no longer being updated".
Looks like it was last updated in 2016. Since then, WWDC videos for 2017 and 2018 have announced additional filters (which, indeed, don't appear on this page).
Does anybody know of a more up-to-date list of built-in Core Image filters?
(Question has also been asked, but so far not answered, on the Apple Dev Forum.)
I've created a website called CIFilter.io which lists all the built-in CIFilters, and a companion app which you can use to try the filters out if you like. This should have all the up-to-date CIFilter information; I've updated it for iOS 13 and intend to keep it updated.
More info about the project is available in this blog post.
I created a small project to query an iOS device and (1) list all available filters and (2) list everything about each input attribute. This project can be found here.
The relevant code:
var ciFilterList = CIFilter.filterNames(inCategories: nil)
This line creates a [String] of all available filters. If you only want the filters in a particular category, such as "CICategoryBlur", replace nil with that category (a one-line example follows the listing below).
print("=======")
print("List of available filters")
print("-------")
for ciFilterName in ciFilterList {
print(ciFilterName)
}
print("-------")
print("Total: " + String(ciFilterList.count))
Pretty self-explanatory. When I ran this on an iPad mini running iOS 12.0.1, 207 filters were listed. NOTE: I have never tried this on macOS, but since it really doesn't use UIKit I believe it will work.
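As mentioned above, restricting the list to a single category is a one-liner; this is just an illustration using the blur category:

// Only the filters in the blur category (the kCICategoryBlur constant works too).
let blurFilterList = CIFilter.filterNames(inCategories: ["CICategoryBlur"])
print(blurFilterList)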
let filterName = "CIZoomBlur"
let filter = CIFilter(name: filterName)
print("=======")
print("Filter Name: " + filterName)
let inputKeys = filter?.inputKeys
if inputKeys?.count == 0 {
    print("-------")
    print("No input attributes.")
} else {
    for inputKey in inputKeys! {
        print("-------")
        print("Input Key: " + inputKey)
        if let attribute = filter?.attributes[inputKey] as? [String: AnyObject],
           let attributeClass = attribute[kCIAttributeClass] as? String,
           let attributeDisplayName = attribute["CIAttributeDisplayName"] as? String,
           let attributeDescription = attribute[kCIAttributeDescription] as? String {
            print("Display name: " + attributeDisplayName)
            print("Description: " + attributeDescription)
            print("Attribute type: " + attributeClass)
            switch attributeClass {
            case "NSNumber":
                let minimumValue = (attribute[kCIAttributeSliderMin] as! NSNumber).floatValue
                let maximumValue = (attribute[kCIAttributeSliderMax] as! NSNumber).floatValue
                let defaultValue = (attribute[kCIAttributeDefault] as! NSNumber).floatValue
                print("Default value: " + String(defaultValue))
                print("Minimum value: " + String(minimumValue))
                print("Maximum value: " + String(maximumValue))
            case "CIColor":
                let defaultValue = attribute[kCIAttributeDefault] as! CIColor
                print(defaultValue)
            case "CIVector":
                let defaultValue = attribute[kCIAttributeDefault] as! CIVector
                print(defaultValue)
            default:
                // If you wish, just dump the attribute variable to look at everything!
                print("No code to parse an attribute of type: " + attributeClass)
            }
        }
    }
}
print("=======")
Again, fairly self-explanatory. The app I'm writing only works with filters that take a single CIImage and with attributes restricted to NSNumber, CIColor, and CIVector, so anything else falls through to the default case of the switch statement. However, it should get you started! If you wish to see the "raw" version, just look at the attribute variable.
Finally, I'd recommend something developed by Simon Gladman called Filterpedia. It's an iPad app (restricted to landscape) that lets you experiment with pretty much every available filter, along with each attribute's default/min/max values. Be aware of two things though: (1) it's written in Swift 2, but there is a Swift 4 fork here, and (2) there are also numerous custom filters using custom CIKernels.

Why is the Vision framework unable to align two images?

I'm trying to take two images using the camera, and align them using the iOS Vision framework:
func align(firstImage: CIImage, secondImage: CIImage) {
    let request = VNTranslationalImageRegistrationRequest(
        targetedCIImage: firstImage) { request, error in
            if error != nil {
                fatalError()
            }
            let observation = request.results!.first
                as! VNImageTranslationAlignmentObservation
            let alignedSecondImage = secondImage.transformed(
                by: observation.alignmentTransform)
            let compositedImage = firstImage.applyingFilter(
                "CIAdditionCompositing",
                parameters: ["inputBackgroundImage": alignedSecondImage])
            // Save the compositedImage to the photo library.
        }
    try! visionHandler.perform([request], on: secondImage)
}
let visionHandler = VNSequenceRequestHandler()
But this produces grossly mis-aligned images:
You can see that I've tried three different types of scenes — a close-up subject, an indoor scene, and an outdoor scene. I tried more outdoor scenes, and the result is the same in almost every one of them.
I was expecting a slight misalignment at worst, but not such a complete misalignment. What is going wrong?
I'm not passing the orientation of the images into the Vision framework, but that shouldn't be a problem for aligning images. It's a problem only for things like face detection, where a rotated face isn't detected as a face. In any case, the output images have the correct orientation, so orientation is not the problem.
My compositing code is working correctly; it's only the Vision framework that's a problem. If I remove the calls to the Vision framework and put the phone on a tripod, the composited image is aligned perfectly, with no misalignment at all. So the problem is the Vision framework.
This is on iPhone X.
How do I get Vision framework to work correctly? Can I tell it to use gyroscope, accelerometer and compass data to improve the alignment?
You should set secondImage as the targeted image and perform the handler on firstImage. I used your compositing approach.
Check out this example from MLBoy:
let request = VNTranslationalImageRegistrationRequest(targetedCIImage: image2, options: [:])
let handler = VNImageRequestHandler(ciImage: image1, options: [:])
do {
    try handler.perform([request])
} catch let error {
    print(error)
}
guard let observation = request.results?.first as? VNImageTranslationAlignmentObservation else { return }
let alignmentTransform = observation.alignmentTransform
image2 = image2.transformed(by: alignmentTransform)
let compositedImage = image1.applyingFilter("CIAdditionCompositing", parameters: ["inputBackgroundImage": image2])

Apple Watch input values via scribble only

I'm working on a WatchKit app. In this app there are some fields the user should fill in.
I searched for how to handle input fields on Apple Watch, and I found the following code:
presentTextInputController(withSuggestions: ["1"], allowedInputMode: WKTextInputMode.plain) { (arr: [Any]?) in
    if let answers = arr as? [String] {
        if let answer = answers[0] as? String {
            self.speechLabel.setText(answer)
        }
    }
}
and this code gives me two choices: dictation and scribble.
In my app, I want to support only scribble, not both of them.
I tried passing nil for the withSuggestions parameter, but then the app takes me straight to dictation, not scribble.
Is there a way to let the user only use scribble?

Best way to show realtime data in iOS with Charts API

I know there are some posts about this, but I can't find anything useful, so I'm opening a new post. I have a device that sends some values over Bluetooth, and a function that receives a value roughly every second. I need to show this data in a line chart in real time (two lines). I know the Android API has a function for realtime updates, but I can't find the equivalent in the iOS API.
This is how I'm doing it now:
if (uuid == kUUIDECGSensor) {
    let dataDict: [AnyHashable: Any] = LBValueConverter.manageValueElectrocardiography(dataValue)
    // print("kUUIDECGSensor dict: \(dataDict)")
    let allValues = Array(dataDict.values)
    print(allValues)
    data_array1.append(allValues[0] as! Double)
    data_array2.append(allValues[1] as! Double)
    //print(data_array1)
    //print(data_array2)
    temp_time += 5
    time.append("\(temp_time)")
    setChart(dataPoints: time, values: data_array1)
}
It works, but it doesn't feel like realtime updating. What am I missing?
Thank You!
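A rough sketch of the incremental approach the Charts library supports is below. The chartView outlet, the assumption that its data already holds two LineChartDataSets (one per line), and the 50-sample visible window are illustrative choices, not code from the question:

import Charts

// Appends one sample to each of two existing data sets instead of rebuilding
// the whole chart on every Bluetooth callback.
func appendSample(time: Double, value1: Double, value2: Double, to chartView: LineChartView) {
    guard let data = chartView.data else { return }
    data.addEntry(ChartDataEntry(x: time, y: value1), dataSetIndex: 0)
    data.addEntry(ChartDataEntry(x: time, y: value2), dataSetIndex: 1)
    data.notifyDataChanged()
    chartView.notifyDataSetChanged()
    // Keep a moving window of the most recent 50 samples on screen.
    chartView.setVisibleXRangeMaximum(50)
    chartView.moveViewToX(time)
}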

Swift 2 - Type casting and optional chaining

I am relatively new to Swift and programming. I'm developing an app which heavily relies on information downloaded from the server. So in a lot of ViewControllers, I use NSURLSession and NSJSONSerialization to download the JSON into my app.
Every time I wanted to subscript the dictionary, for example timetableDict?["timetable"]["class"]["day"]["lesson"][0]["name"], something like Cannot subscript a value of type [String : AnyObject] with an index type of String shows up as an error.
I understand that I should avoid using AnyObject in my code, but the dictionary from the server is heavily nested with structures like this one:
"timetable": ["class": ({
day = ({
lesson = ({
name = (MATHEMATICS, ENGLISH),
classOrder = 0,
teacher = (Someone)
}),
({
name = FRENCH,
classOrder = 1,
teacher = (Someone)
)}
)}
)}]
The problem with this structure is that it is heavily nested and has different types by the time it gets to "name", "classOrder", and "teacher", so it is very hard for me not to use AnyObject. However, this error has been annoying me for a very long time. I would greatly appreciate it if someone could help me out with this. Thanks in advance!
I suggest taking a look at SwiftyJSON: https://github.com/SwiftyJSON/SwiftyJSON
It's a framework/library designed to handle JSON in a much more elegant way than what's built into Swift (especially for heavily nested structures like yours). It's easy to use and has an excellent tutorial.
EDIT: (Added sample code)
Example from the SwiftyJSON tutorial:
let JSONObject: AnyObject? = NSJSONSerialization.JSONObjectWithData(data, options: nil, error: nil)
if let statusesArray = JSONObject as? [AnyObject],
   let status = statusesArray[0] as? [String: AnyObject],
   let user = status["user"] as? [String: AnyObject],
   let username = user["name"] as? String {
    // Finally we got the username
}
Even with optional chaining, it's quite messy:
let JSONObject: AnyObject? = NSJSONSerialization.JSONObjectWithData(data, options: nil, error: nil)
if let username = (((JSONObject as? [AnyObject])?[0] as? [String: AnyObject])?["user"] as? [String: AnyObject])?["name"] as? String {
    // What a disaster
}
With SwiftyJSON:
let json = JSON(data: dataFromNetworking)
if let userName = json[0]["user"]["name"].string {
    // Now you got your value
}
SwiftyJSON, which @Glenn mentions, is not a bad system (though I find it over-reliant on string lookups, which is fragile). The deeper point, however, is that you want to validate and parse your JSON in one place, and turn it into a non-JSON Swift data structure for use by the rest of your program. SwiftyJSON can be a decent tool for doing that, but I would use it to unload the data into an array of structs.
Working with JSON throughout your system is extremely error-prone and cumbersome, even when it's wrapped up in SwiftyJSON. If you unload the data once, then you can check for errors in one place, and everywhere else you know the data is correct. If you pass around JSON structures, then you must check every single time and deal with possibly missing or incorrect data. Think through the case where the server sends you JSON in a format you didn't expect. How many times do you want to test for that?
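As a rough sketch of what "unload into structs" could look like here (the Lesson field types and the exact dictionary path are guesses based on the structure shown in the question, not a definitive implementation):

// Sketch only: field names and types are assumptions taken from the question.
struct Lesson {
    let name: String
    let classOrder: Int
    let teacher: String

    // Failable init: returns nil if the JSON fragment is missing or mistyped.
    init?(json: [String: AnyObject]) {
        guard let name = json["name"] as? String,
              let classOrder = json["classOrder"] as? Int,
              let teacher = json["teacher"] as? String else { return nil }
        self.name = name
        self.classOrder = classOrder
        self.teacher = teacher
    }
}

// Validate the payload once; the rest of the app only ever sees [Lesson].
func lessons(fromTimetable dict: [String: AnyObject]) -> [Lesson] {
    // The exact nesting ("timetable" -> "class" -> "day" -> "lesson") is assumed here.
    let classDict = (dict["timetable"] as? [String: AnyObject])?["class"] as? [String: AnyObject]
    let dayDict = classDict?["day"] as? [String: AnyObject]
    let rawLessons = dayDict?["lesson"] as? [[String: AnyObject]] ?? []
    return rawLessons.compactMap { Lesson(json: $0) }
}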
