Mixed topology (quad/tri) with ModelIO

I'm importing some simple OBJ assets using ModelIO like so:
let mdlAsset = MDLAsset(url: url, vertexDescriptor: nil, bufferAllocator: nil, preserveTopology: true, error: nil)
... and then adding them to a SceneKit SCN file. But whenever a mesh contains both quads and tris (often the case, for example with eyeball meshes), the resulting mesh is jumbled:
Incorrect mesh topology
Re-topologizing isn't a good option since I sometimes have low-poly meshes with very specific topology, so I can't just set preserveTopology to false... I need a result with variable topology (i.e. MDLGeometryType.variableTopology).
How do I import these files correctly preserving their original topology?

I reported this as a bug at Apple Bug Reporter on 25th of November, bug id: 35687088
Summary: SCNSceneSourceLoadingOptionPreserveOriginalTopology does not actually preserve the original topology. Instead, it converts the geometry to all quads, messing up the 3D model badly. Based on its name it should behave exactly like preserveTopology of Model IO asset loading.
Steps to Reproduce: Load an OBJ file that has both triangles and polygons using SCNSceneSourceLoadingOptionPreserveOriginalTopology and load the same file into an MDLMesh using preserveTopology of ModelIO. Notice how it only works properly for the latter. Even when you create a new SCNGeometry based on the MDLMesh, it will "quadify" the mesh again to contain only quads (while it should support 3-gons and up).
On December 13th I received a reply with a request for sample code and assets, which I supplied two days later. I have not received a reply since (hopefully just because they are busy catching up after the holiday season...).
As I mentioned in my bug report's summary, loading the asset with Model I/O does work properly, but then when you create a SCNNode based on that MDLMesh it ends up messing up the geometry again.
In my case the OBJ files I load have a known format, as they are always files also exported by my app (no normals, colors, or UVs). So what I do is manually load the information of the MDLMesh (vertex buffers, face topology, etc.) into arrays, from which I then create an SCNGeometry by hand. I don't have a complete, separate piece of code to show you, as it is long and mixed with a lot of code specific to my app, and it's in Objective-C. But to illustrate:
NSError *scnsrcError;
MDLAsset *asset = [[MDLAsset alloc] initWithURL:objURL vertexDescriptor:nil bufferAllocator:nil preserveTopology:YES error:&scnsrcError];
NSLog(@"%@", scnsrcError.localizedDescription);
MDLMesh *newMesh = (MDLMesh *)[asset objectAtIndex:0];
for (MDLSubmesh *faces in newMesh.submeshes) {
    //MDLSubmesh *faces = newMesh.submeshes.firstObject;
    MDLMeshBufferData *topo = faces.topology.faceTopology;
    MDLMeshBufferData *vertIx = faces.indexBuffer;
    MDLMeshBufferData *verts = newMesh.vertexBuffers.firstObject;
    int faceCount = (int)faces.topology.faceCount;
    int8_t *faceIndexValues = malloc(faceCount * sizeof(int8_t));
    memcpy(faceIndexValues, topo.data.bytes, faceCount * sizeof(int8_t));
    int32_t *vertIndexValues = malloc(faces.indexCount * sizeof(int32_t));
    memcpy(vertIndexValues, vertIx.data.bytes, faces.indexCount * sizeof(int32_t));
    SCNVector3 *vertValues = malloc(newMesh.vertexCount * sizeof(SCNVector3));
    memcpy(vertValues, verts.data.bytes, newMesh.vertexCount * sizeof(SCNVector3));
    ....
    ....
}
In short, the preserveTopology option in SceneKit isn't working properly. To get from the working version in Model I/O to SceneKit I basically had to write my own converter.
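For what it's worth, SceneKit can represent variable topology directly: SCNGeometryElement supports a .polygon primitive type, where the element data starts with one vertex count per polygon, followed by all of the indices. A minimal Swift sketch of building such a geometry (the vertex and index data here are made up for illustration; in practice they would come from the MDLSubmesh buffers as above):

```swift
import SceneKit

// Hypothetical data for one quad followed by one triangle;
// in practice these arrays come from the MDLSubmesh buffers.
let vertices: [SCNVector3] = [SCNVector3(0, 0, 0), SCNVector3(1, 0, 0),
                              SCNVector3(1, 1, 0), SCNVector3(0, 1, 0),
                              SCNVector3(2, 0, 0)]
let faceCounts: [Int32] = [4, 3]
let faceIndices: [Int32] = [0, 1, 2, 3,  // the quad
                            1, 4, 2]     // the triangle

// For .polygon, the element data is the per-face vertex counts
// followed by all the face indices, using the same bytesPerIndex.
var elementData = Data()
faceCounts.withUnsafeBufferPointer { elementData.append(Data(buffer: $0)) }
faceIndices.withUnsafeBufferPointer { elementData.append(Data(buffer: $0)) }

let element = SCNGeometryElement(data: elementData,
                                 primitiveType: .polygon,
                                 primitiveCount: faceCounts.count,
                                 bytesPerIndex: MemoryLayout<Int32>.size)
let geometry = SCNGeometry(sources: [SCNGeometrySource(vertices: vertices)],
                           elements: [element])
```

This is essentially what the manual converter has to produce at the end: one element per submesh, with mixed face sizes preserved.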


AKTableView issue 100% CPU usage

I'm trying to assign a table view to an already existing UIView object on my storyboard, but every time I create it with what should be the correct frame, the CPU usage spikes to 100%.
let sampleTable = AKTableView(AKTable(file: sampleReferences[indexPath.row].sampleFile!), frame: samplePlot.bounds)
samplePlot.addSubview(sampleTable)
This is the code I'm using at the moment. I've also used samplePlot.frame.
Basically, sampleReferences[indexPath.row].sampleFile! is an AKAudioFile recorded earlier and kept as a reference from a UITableView element. It's quite a large program split across multiple files, so it's hard to show everything.
The only extra issue I can think of is I haven't stopped any of the current AudioKit processes. I'm not sure if the table can only be drawn at startup.
The following works with a .sine specifier, AKTable(.sine), so it's not an issue with my UI code. It also fails when trying to load even a basic .mp3 file from .resources.
Update:
I've compressed the test .mp3 file significantly and shortened its overall length to 2 seconds, and I'm still getting the same issue. I think it might be an issue with the actual function used to draw the file, although I could be wrong.
Another update:
So everything works fine using the following code:
let file = EZAudioFile(url: sampleAudio!.url)
guard let data = file?.getWaveformData() else {
    print("Failed to get waveform data")
    return
}
let waveform = EZAudioPlot()
waveform.frame = samplePlot.bounds
waveform.plotType = EZPlotType.buffer
waveform.shouldFill = true
waveform.shouldMirror = true
waveform.color = UIColor.black
waveform.updateBuffer(data.buffers[0], withBufferSize: data.bufferSize)
samplePlot.addSubview(waveform)
but obviously that uses EZAudioPlot instead. I grabbed this from a previous Stack Overflow answer about the same problem and edited it a bit.

ARKit, metal shader for ARSCNView

I'm trying to figure out how to apply shaders to my ARSCNView.
Previously, when using a standard SCNView, I was able to apply a distortion shader as follows:
if let path = Bundle.main.path(forResource: "art.scnassets/distortion", ofType: "plist") {
    if let dict = NSDictionary(contentsOfFile: path) {
        let technique = SCNTechnique(dictionary: dict as! [String: AnyObject])
        scnView.technique = technique
    }
}
Replacing SCNView with ARSCNView gives me the following error(s):
"Error: Metal renderer does not support nil vertex function name"
"Error: _executeProgram - no pipeline state"
I was thinking it's because ARSCNView uses a different renderer than SCNView. But logging ARSCNView.renderingAPI tells me nothing about the renderer, and I can't seem to choose one when I construct my ARSCNView instance. I must be missing something obvious, because I can't find a single resource when scouring for references online.
My initial idea was to instead use an SCNProgram to apply the shaders. But I can't find any resources on how to apply it to an ARSCNView, or whether it's even a correct/possible solution; SCNProgram seems to be reserved for materials.
Is anyone able to give me any useful pointers on how to get vertex+fragment shaders working for an ARSCNView?
SCNTechnique for ARSCNView does not work with GLSL shaders; instead, Metal functions need to be provided in the technique's plist file under the keys metalVertexShader and metalFragmentShader.
On the contrary, the documentation says any combination of shaders should work:
You must specify both fragment and vertex shaders, and you must
specify either a GLSL shader program, a pair of Metal functions, or
both. If both are specified, SceneKit uses whichever shader is
appropriate for the current renderer.
So it might be a mistake, but I suspect the documentation is simply outdated: since every device that runs ARKit also runs Metal, GLSL support was apparently never added to ARSCNView. With iOS 12 deprecating OpenGL, this looks intentional.
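To illustrate the keys mentioned above, a minimal pass in the technique's plist might look like this (the pass name and the Metal function names distortVertex/distortFragment are made up for the example):

```xml
<dict>
    <key>passes</key>
    <dict>
        <key>distort</key>
        <dict>
            <key>draw</key>
            <string>DRAW_QUAD</string>
            <key>metalVertexShader</key>
            <string>distortVertex</string>
            <key>metalFragmentShader</key>
            <string>distortFragment</string>
            <key>inputs</key>
            <dict>
                <key>colorSampler</key>
                <string>COLOR</string>
            </dict>
            <key>outputs</key>
            <dict>
                <key>color</key>
                <string>COLOR</string>
            </dict>
        </dict>
    </dict>
    <key>sequence</key>
    <array>
        <string>distort</string>
    </array>
</dict>
```

The Metal functions themselves go in a .metal file in the app target; SceneKit looks them up by name from the default Metal library.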
I had this issue in ARKit on iOS 11.4 and 12, and it came down to a series of misspelled shaders. I hope this might help someone.

Dynamic naming of objects in AudioKit (SpriteKit)

I am trying to create an app similar to the Reactable.
The user will be able to drag "modules" like an oscillator or filter from a menu into the "play area" and the module will be activated.
I am thinking to initialize the modules as they intersect with the "play area" background object. However, this requires me to name the modules automatically, i.e.:
let osci = AKOscillator()
where osci will automatically count up to be:
let osci1 = AKOscillator()
let osci2 = AKOscillator()
...
etc.
How will I be able to do this?
Thanks
Edit: I am trying to use an array, which I create as
var osciArray = [AKOscillator]()
and in my function to add an oscillator, this is my code:
let oscis = AKOscillator()
osciArray.append(oscis)
osciArray[oscCounter].frequency = freqValue
osciArray[oscCounter].amplitude = 0.5
osciArray[oscCounter].start()
selectedNode.userData = ["counter": oscCounter]
oscCounter += 1
currentOutput = osciArray[oscCounter]
AudioKit.output = currentOutput
AudioKit.start()
My app builds fine, but once the app starts running on the Simulator I get error : fatal error: Index out of range
I haven't used AudioKit, but I read about it a while ago and I have quite a big interest in it. From what I understand from the documentation, it's structured pretty much like SpriteKit: nodes connected together.
I guess then that most classes in the library derive from a base class, just like everything in SpriteKit derives from the SKNode class.
Since you are linking the audio kit nodes with visual representations via SpriteKit nodes, why don't you simply subclass from an SKSpriteNode and add an optional audioNode property with the base class from AudioKit?
That way you can just use SpriteKit to interact directly with the stored audio node property.
I think there's a lot of AudioKit-related code in your question, but to answer it you only have to look at oscCounter. You don't show its initial value, but I am guessing it is zero. You then increment it by 1 and try to access osciArray[oscCounter], which has only one element, so it should be accessed as osciArray[0]. Move the increment below the last access and you'll be better off. Furthermore, your oscillators look like local variables, so they'll be lost once the scope ends. They should be declared as instance variables in your class or whatever this is part of.
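A sketch of that reordering, using the names from the question (assuming oscCounter starts at 0 and osciArray is an instance property):

```swift
// Append first, then use oscCounter while it is still a valid index,
// and only increment it after the last access.
let osc = AKOscillator()
osciArray.append(osc)
osciArray[oscCounter].frequency = freqValue
osciArray[oscCounter].amplitude = 0.5
osciArray[oscCounter].start()
selectedNode.userData = ["counter": oscCounter]
currentOutput = osciArray[oscCounter]  // still in range here
oscCounter += 1                        // increment last
AudioKit.output = currentOutput
AudioKit.start()
```

With the original ordering, oscCounter is incremented to 1 before `osciArray[oscCounter]` is read, which is exactly the out-of-range access the crash reports.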

GKObstacleGraph<GKGraphNode2D> does not handle archiving

I'm using GameplayKit's GKObstacleGraph to add path finding support to my iOS 10.2 SpriteKit game.
The game is a top-down 2D game, it features impassable obstacles which my 'soldiers' can't walk through.
Creating a GKObstacleGraph for my game level, with about 500 obstacles, takes ~50 seconds. This is too long for my users to wait.
Since the game map layout never changes dynamically after being loaded, I decided to create the graph once:
let graph = GKObstacleGraph(obstacles: obstacles, bufferRadius: 5, nodeClass: GKGraphNode2D.self)
Archive it to file:
let directories = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
if let documents = directories.first {
    let filePath = documents + "/obstacle.graph"
    NSKeyedArchiver.archiveRootObject(graph, toFile: filePath)
}
Copy the file from device to my laptop and add the file to my bundle. Then I just unarchive the graph object when I load my game level:
if let filePath = Bundle.main.path(forResource: "obstacle", ofType: "graph") {
    let graph = NSKeyedUnarchiver.unarchiveObject(withFile: filePath) as! GKObstacleGraph<GKGraphNode2D>
    return graph
}
In theory, this should be much quicker, since I don't have to calculate the graph, only read (unarchive) it from file.
However, depending on the number of obstacles and their relative placement one of three things happens:
1. NSKeyedArchiver.archiveRootObject crashes.
2. Archive works, but NSKeyedUnarchiver.unarchiveObject crashes.
3. Both archive and unarchive work, but I cannot find a path around obstacles with GKObstacleGraph.findPath.
I can get all of this working if I skip the (un)archive steps:
Successful path finding
Also, on simulator (iPhone 7), (un)archive never crashes, but path finding always fails afterwards.
What is going on here? I've reported it as a bug to Apple, but I'm still hoping that I've missed something.
I tried alternative solutions where I write/read the nodes and obstacles to file using my own format. But the obstacles property of GKObstacleGraph is get-only, which leaves me with the constructor, and then I'm back to waiting 50 seconds again.
I've created a public test project git repo:
https://bitbucket.org/oixx/gkobstaclegraphcrashproof.git
which shows the three different scenarios that fails when running on device. For simplicity it reads/writes the file to device, not bundle.
I don't know anything about the crashes, but #3 in your list is probably caused by connectNodeToLowestCostNode:bidirectional: not actually connecting your soldier to the graph because the nodes array is immutable after unarchiving.
The solution is to archive the nodes instead of the graph, then create a new graph from those nodes:
NSArray *nodes = [NSKeyedUnarchiver unarchiveObjectWithData:self.nodesArray];
GKGraph *graph = [[GKGraph alloc] initWithNodes:nodes];
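In Swift, the same workaround might look like this (a sketch; it assumes the graph has already been built once, that its nodes are GKGraphNode2D, and that fileURL points somewhere writable):

```swift
import GameplayKit

// Archive only the nodes; they carry their connections with them.
// `graph` is assumed to be the fully built GKObstacleGraph<GKGraphNode2D>.
if let nodes = graph.nodes {
    let data = NSKeyedArchiver.archivedData(withRootObject: nodes)
    try? data.write(to: fileURL)
}

// Later: rebuild a plain GKGraph from the unarchived nodes,
// instead of unarchiving the GKObstacleGraph itself.
if let data = try? Data(contentsOf: fileURL),
   let nodes = NSKeyedUnarchiver.unarchiveObject(with: data) as? [GKGraphNode2D] {
    let graph = GKGraph(nodes: nodes)
    // use graph.findPath(from:to:) as usual
}
```

Since the node connections already encode the obstacle boundaries, path finding against the rebuilt GKGraph avoids the 50-second GKObstacleGraph construction entirely.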

CVPixelBufferRef as a GPU Texture

I have one (or possibly two) CVPixelBufferRef objects I am processing on the CPU, and then placing the results onto a final CVPixelBufferRef. I would like to do this processing on the GPU using GLSL instead, because the CPU can barely keep up (these are frames of live video). I know this is possible "directly" (i.e. writing my own OpenGL code), but from the (absolutely impenetrable) sample code I've looked at it's an insane amount of work.
Two options seem to be:
1) GPUImage: This is an awesome library, but I'm a little unclear if I can do what I want easily. First thing I tried was requesting OpenGLES compatible pixel buffers using this code:
@{ (NSString *)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
   (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : [NSNumber numberWithBool:YES] };
Then transferring data from the CVPixelBufferRef to GPUImageRawDataInput as follows:
// setup:
_foreground = [[GPUImageRawDataInput alloc] initWithBytes:nil size:CGSizeMake(0,0)pixelFormat:GPUPixelFormatBGRA type:GPUPixelTypeUByte];
// call for each frame:
[_foreground updateDataFromBytes:CVPixelBufferGetBaseAddress(foregroundPixelBuffer)
                            size:CGSizeMake(CVPixelBufferGetWidth(foregroundPixelBuffer), CVPixelBufferGetHeight(foregroundPixelBuffer))];
However, my CPU usage goes from 7% to 27% on an iPhone 5S just with that line (no processing or anything). This suggests there's some copying going on on the CPU, or something else is wrong. Am I missing something?
2) OpenFrameworks: OF is commonly used for this type of thing, and OF projects can easily be set up to use GLSL. However, two questions remain about this solution: 1. Can I use OpenFrameworks as a library, or do I have to rejigger my whole app just to use its OpenGL features? I don't see any tutorials or docs showing how I might do this without starting from scratch and creating an OF app. 2. Is it possible to use a CVPixelBufferRef as a texture?
I am targeting iOS 7+.
I was able to get this to work using the GPUImageMovie class. If you look inside this class, you'll see that there's a private method called:
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime
This method takes a CVPixelBufferRef as input.
To access this method, declare a class extension that exposes it inside your class:
@interface GPUImageMovie ()
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime;
@end
Then initialize the class, set up the filter, and pass it your video frame:
GPUImageMovie *gpuMovie = [[GPUImageMovie alloc] initWithAsset:nil]; // <- call initWithAsset even though there's no asset
// to initialize internal data structures
// connect filters...
// Call the method we exposed
[gpuMovie processMovieFrame:myCVPixelBufferRef withSampleTime:kCMTimeZero];
One thing: you need to request your pixel buffers with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange in order to match what the library expects.
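For instance, the attributes for creating buffers in that format might look like this in Swift (a sketch; the 1280x720 dimensions are placeholders):

```swift
import CoreVideo

// Request biplanar YCbCr buffers that are also OpenGL ES compatible,
// matching what GPUImageMovie expects.
let attributes: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
    kCVPixelBufferOpenGLESCompatibilityKey as String: true
]

var pixelBuffer: CVPixelBuffer?
CVPixelBufferCreate(kCFAllocatorDefault, 1280, 720,
                    kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                    attributes as CFDictionary, &pixelBuffer)
```

If the buffers come from AVFoundation instead, the same format constant goes in the capture output's videoSettings dictionary.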
