Dynamic naming of objects in AudioKit (SpriteKit) - ios

I am trying to create an app similar to the Reactable.
The user will be able to drag "modules" like an oscillator or filter from a menu into the "play area" and the module will be activated.
I am thinking of initializing the modules as they intersect with the "play area" background object. However, this requires me to name the modules automatically, i.e.:
let osci = AKOscillator()
where osci will automatically count up to be:
let osci1 = AKOscillator()
let osci2 = AKOscillator()
...
etc.
How will I be able to do this?
Thanks
edit: I am trying to use an array, created as
var osciArray = [AKOscillator]()
and in my function that adds an oscillator, this is my code:
let oscis = AKOscillator()
osciArray.append(oscis)
osciArray[oscCounter].frequency = freqValue
osciArray[oscCounter].amplitude = 0.5
osciArray[oscCounter].start()
selectedNode.userData = ["counter": oscCounter]
oscCounter += 1
currentOutput = osciArray[oscCounter]
AudioKit.output = currentOutput
AudioKit.start()
My app builds fine, but once it starts running in the Simulator I get the error: fatal error: Index out of range

I haven't used AudioKit, but I read about it a while ago and I have quite a big interest in it. From what I understand from the documentation, it's structured pretty much like SpriteKit: nodes connected together.
I guess then that most classes in the library derive from a base class, just like everything in SpriteKit derives from the SKNode class.
Since you are linking the AudioKit nodes with visual representations via SpriteKit nodes, why don't you simply subclass SKSpriteNode and add an optional audioNode property typed as AudioKit's base node class?
That way you can just use SpriteKit to interact directly with the stored audio node property.
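For instance, a minimal sketch of that idea (the class and function names are just illustrative; AKNode is assumed here as AudioKit 4's common node base class, and wiring the oscillator into AudioKit.output is left out):
import SpriteKit
import AudioKit

// A draggable module sprite that owns an optional AudioKit node.
class ModuleSprite: SKSpriteNode {
    var audioNode: AKNode?
}

// When a module sprite is dropped onto the play area, attach and start
// the matching AudioKit node.
func activate(_ sprite: ModuleSprite) {
    let osc = AKOscillator()
    osc.frequency = 440
    osc.amplitude = 0.5
    sprite.audioNode = osc
    osc.start()
}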

There's a lot of AudioKit-related code in your question, but to answer it you only have to look at oscCounter. You don't show its initial value, but I am guessing it was zero. You then increment it by 1 and try to access osciArray[oscCounter], which has only one element, so it should be accessed as osciArray[0]. Move the counter increment lower and you'll be better off. Furthermore, your oscillators look like local variables, so they will be lost once the scope ends. They should be declared as instance variables in your class, or whatever this code is part of.
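A sketch of that fix, mirroring the code from the question (the class name is illustrative; the selectedNode bookkeeping is left out):
import AudioKit

class SynthController {
    // Instance variables, so the oscillators survive after the function returns.
    var osciArray = [AKOscillator]()
    var oscCounter = 0
    var currentOutput: AKNode?

    func addOscillator(freqValue: Double) {
        let osc = AKOscillator()
        osciArray.append(osc)
        osciArray[oscCounter].frequency = freqValue
        osciArray[oscCounter].amplitude = 0.5
        osciArray[oscCounter].start()

        currentOutput = osciArray[oscCounter]   // oscCounter still points at the new element
        AudioKit.output = currentOutput
        AudioKit.start()

        oscCounter += 1                         // increment only after the index has been used
    }
}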

Related

AudioKit 5 - player started when in a disconnected state

Trying to use AudioKit 5 to dynamically create a player with a mixer, and attach it to a main mixer. I'd like the resulting chain to look like:
AudioPlayer -> Mixer(for player) -> Mixer(for output) -> AudioEngine.output
My example repo is here: https://github.com/hoopes/AK5Test1
You can see in the main file here (https://github.com/hoopes/AK5Test1/blob/main/AK5Test1/AK5Test1App.swift) that there are three functions.
The first works, where an mp3 is played on a Mixer that is created when the controller class is created.
The second works, where a newly created AudioPlayer is hooked directly to the outputMixer.
However, the third, where I try to set up the chain above, does not, and crashes with the "player started when in a disconnected state" error. I've copied the function here:
/** Try to add a mixer with a player to the main mixer */
func doesNotWork() {
    let p2 = AudioPlayer()
    let localMixer = Mixer()
    localMixer.addInput(p2)
    outputMixer.addInput(localMixer)
    playMp3(p: p2)
}
Where playMp3 just plays an example mp3 on the AudioPlayer.
I'm not sure how I'm misusing the Mixer. In my actual application, I have a longer chain of mixers/boosters/etc. and am getting the same error, which led me to create the simple test app.
If anyone can offer advice, I'd love to hear it. Thanks so much!
In your case, you can swap the order of outputMixer.addInput(localMixer) and localMixer.addInput(p2) and then it works.
Once you have started the engine, work backwards from the output when making your audio chain connections. Your problem was that you attached a player to a mixer that was still disconnected from the output: you need to first attach the local mixer to the output mixer, and only then attach the player to that mixer.
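Applied to the function from the question, the fix is just that reversed connection order (nowWorks is an illustrative name; outputMixer and playMp3 come from the sample repo):
/** Same as doesNotWork(), but with the connections made output-first */
func nowWorks() {
    let p2 = AudioPlayer()
    let localMixer = Mixer()
    outputMixer.addInput(localMixer)  // attach the local mixer to the output chain first
    localMixer.addInput(p2)           // then attach the player to the already-connected mixer
    playMp3(p: p2)
}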
The advice I wound up getting from an AudioKit contributor was to do everything possible to create all audio objects that you need up front, and dynamically change their volume to "connect" and "disconnect", so to speak.
Imagine you have a piano app (a contrived example, but hopefully gets the point across) - instead of creating a player when a key is pressed, connecting it, and disconnecting/disposing when the note is complete, create a player for every key on startup, and deal with them dynamically - this prevents any weirdness with "disconnected state", etc.
This has been working pretty flawlessly for me since.
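For illustration, a rough sketch of that pre-allocation idea in AudioKit 5 could look like the following (the class, property names, and per-note mixer layout are my own assumptions, not code from the linked repo):
import AudioKit

// One player and one per-note mixer per key, all wired up before the
// engine starts; the per-note mixer's volume acts as the
// "connect"/"disconnect" switch instead of re-wiring nodes at runtime.
final class PianoConductor {
    let engine = AudioEngine()
    let outputMixer = Mixer()
    var players: [AudioPlayer] = []
    var noteMixers: [Mixer] = []

    init(noteFiles: [URL]) {
        for url in noteFiles {
            let player = AudioPlayer(url: url) ?? AudioPlayer()
            let noteMixer = Mixer(player)
            noteMixer.volume = 0              // silent until its key is pressed
            outputMixer.addInput(noteMixer)   // connected once, up front
            players.append(player)
            noteMixers.append(noteMixer)
        }
        engine.output = outputMixer
        try? engine.start()
    }

    func keyPressed(_ index: Int) {
        noteMixers[index].volume = 1
        players[index].play()
    }
}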

Why doesn't the Augmented Image Sceneform SDK sample work with only run-time constructed AR objects?

I'm tinkering with the Sceneform SDK's Augmented Image sample code after completing the accompanying code lab. The completed sample adds two types of objects to the AR scene: one is modeled with CAD software and loaded from an sfb binary (that's the green maze), and the other one is a red ball which is constructed at run time using the MaterialFactory and ShapeFactory.
A simple experiment is to remove the green maze and keep only the red ball (and of course remove the physics engine as well). In that case, however, the red ball does not appear in the AR scene.
The interesting thing is that the green maze does not have to appear in the scene - by that I mean I don't have to create the Node, assign the renderable, etc. https://github.com/CsabaConsulting/sceneform-android-sdk/blob/master/samples/augmentedimage/app/src/main/java/com/google/ar/sceneform/samples/augmentedimage/AugmentedImageNode.java#L139:
mazeNode = new Node();
mazeNode.setParent(this);
mazeNode.setRenderable(mazeRenderable.getNow(null));
But if I take out the loading code https://github.com/CsabaConsulting/sceneform-android-sdk/blob/master/samples/augmentedimage/app/src/main/java/com/google/ar/sceneform/samples/augmentedimage/AugmentedImageNode.java#L89
mazeRenderable =
    ModelRenderable.builder()
        .setSource(context, Uri.parse("GreenMaze.sfb"))
        .build();
and, most importantly, the code in setImage which waits until the model is fully loaded and built: https://github.com/CsabaConsulting/sceneform-android-sdk/blob/master/samples/augmentedimage/app/src/main/java/com/google/ar/sceneform/samples/augmentedimage/AugmentedImageNode.java#L125
if (!mazeRenderable.isDone()) {
    CompletableFuture.allOf(mazeRenderable)
        .thenAccept((Void aVoid) -> setImage(image))
        .exceptionally(
            throwable -> {
                Log.e(TAG, "Exception loading", throwable);
                return null;
            });
    return;
}
The ball won't appear. In fact, the ball (and any other run-time constructed object I add) stops appearing as soon as I take out this .isDone() section. I haven't found any indicator in the AR session or anywhere else which would show that something is not yet ready. In my application I may use only run-time built 3D objects; do I need an sfb just for the sake of those appearing?
This happens because the Factory-based scene building implicitly involves a CompletableFuture as well! More specifically, the material building is a function which returns a CompletableFuture.
Not realizing this, I didn't quote the important code section in the question. You can see it just below the maze model loader instructions:
https://github.com/CsabaConsulting/sceneform-android-sdk/blob/master/samples/augmentedimage/app/src/main/java/com/google/ar/sceneform/samples/augmentedimage/AugmentedImageNode.java#L94
MaterialFactory.makeOpaqueWithColor(context, new Color(android.graphics.Color.RED))
    .thenAccept(
        material -> {
            ballRenderable =
                ShapeFactory.makeSphere(0.01f, new Vector3(0, 0, 0), material);
        });
Here we can see the tell-tale sign of .thenAccept(, which reveals that makeOpaqueWithColor returns a future. While the model loading was still in the code, we also had this check later:
if (!mazeRenderable.isDone()) {
    CompletableFuture.allOf(mazeRenderable)
That code unfortunately doesn't pay attention to the material, which is also built asynchronously. But the wait for the 3D model load gives enough time for the material building to finish as well by the time it is accessed. However, as soon as I removed the maze together with the future-waiting code section, there was no safeguard left that waits for the material building to finish. This caused the whole scene hierarchy to be constructed before the material was actually ready, which results in an invisible scene.
https://github.com/googlecodelabs/arcore-augmentedimage-intro/issues/7

How to get values for primary light intensity, etc., from ARDirectionalLightEstimate

So I'm trying to use the front camera of an iPhone XR to get the approximate location of light sources. I decided to use ARDirectionalLightEstimate but I can't figure out how to access it. I can easily access the lightEstimate property.
The docs say that the lightEstimate property of each frame holds an instance of ARDirectionalLightEstimate, but I can't access it using the dot operator. I even tried to type cast it to ARDirectionalLightEstimate (like I saw someone doing; I can't find the link right now but I will update), but that didn't work either. I am inexperienced in Swift, so it's possible I messed up somewhere.
ARDirectionalLightEstimate is a subclass of ARLightEstimate, so to access its properties you need to cast lightEstimate:
let lightEstimate = sceneView?.session.currentFrame?.lightEstimate
if let directionalLightEstimate = lightEstimate as? ARDirectionalLightEstimate {
    // add logic here
    let primaryLightIntensity = directionalLightEstimate.primaryLightIntensity
    // ...
}
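For context, a minimal sketch of where that cast usually lives (the session setup below is an assumption based on the question's front-camera use case, since directional light estimates are, as far as I know, only provided by face-tracking sessions):
import ARKit

class LightEstimator: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let directional = frame.lightEstimate as? ARDirectionalLightEstimate else { return }
        let intensity = directional.primaryLightIntensity   // estimated intensity of the strongest light, in lumens
        let direction = directional.primaryLightDirection   // unit vector pointing along the primary light
        print(intensity, direction)
    }
}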

Mixed topology (quad/tri) with ModelIO

I'm importing some simple OBJ assets using ModelIO like so:
let mdlAsset = MDLAsset(url: url, vertexDescriptor: nil, bufferAllocator: nil, preserveTopology: true, error: nil)
... and then adding them to a SceneKit SCN file. But, whenever I have meshes that have both quads/tris (often the case, for example eyeball meshes), the resulting mesh is jumbled:
Incorrect mesh topology
Re-topologizing isn't a good option since I sometimes have low-poly meshes with very specific topology, so I can't just set preserveTopology to false... I need a result with variable topology (i.e. MDLGeometryType.variableTopology).
How do I import these files correctly preserving their original topology?
I reported this as a bug at Apple Bug Reporter on 25th of November, bug id: 35687088
Summary: SCNSceneSourceLoadingOptionPreserveOriginalTopology does not actually preserve the original topology. Instead, it converts the geometry to all quads, messing up the 3D model badly. Based on its name it should behave exactly like preserveTopology of Model IO asset loading.
Steps to Reproduce: Load an OBJ file that has both triangles and polygons using SCNSceneSourceLoadingOptionPreserveOriginalTopology and load the same file into an MDLMesh using preserveTopology of ModelIO. Notice how it only works properly for the latter. Even when you create a new SCNGeometry based on the MDLMesh, it will "quadify" the mesh again to contain only quads (while it should support 3-gons and up).
On December 13th I received a reply with a request for sample code and assets, which I supplied 2 days later. I have not received a reply since (hopefully just because they are busy catching up after the holiday season...).
As I mentioned in my bug report's summary, loading the asset with Model I/O does work properly, but then when you create a SCNNode based on that MDLMesh it ends up messing up the geometry again.
In my case the OBJ files I load have a known format, as they are always files that were also exported by my app (no normals, colors, or UVs). So what I do is load the information from the MDLMesh (buffers, face topology, etc.) manually into arrays, from which I then create an SCNGeometry manually. I don't have a complete, separate piece of code of that for you, as it is a lot and mixed with code specific to my app, and it's in Objective-C. But to illustrate:
NSError *scnsrcError;
MDLAsset *asset = [[MDLAsset alloc] initWithURL:objURL vertexDescriptor:nil bufferAllocator:nil preserveTopology:YES error:&scnsrcError];
NSLog(@"%@", scnsrcError.localizedDescription);
MDLMesh *newMesh = (MDLMesh *)[asset objectAtIndex:0];
for (MDLSubmesh *faces in newMesh.submeshes) {
    //MDLSubmesh *faces = newMesh.submeshes.firstObject;
    MDLMeshBufferData *topo = faces.topology.faceTopology;
    MDLMeshBufferData *vertIx = faces.indexBuffer;
    MDLMeshBufferData *verts = newMesh.vertexBuffers.firstObject;
    int faceCount = (int)faces.topology.faceCount;

    int8_t *faceIndexValues = malloc(faceCount * sizeof(int8_t));
    memcpy(faceIndexValues, topo.data.bytes, faceCount * sizeof(int8_t));

    int32_t *vertIndexValues = malloc(faces.indexCount * sizeof(int32_t));
    memcpy(vertIndexValues, vertIx.data.bytes, faces.indexCount * sizeof(int32_t));

    SCNVector3 *vertValues = malloc(newMesh.vertexCount * sizeof(SCNVector3));
    memcpy(vertValues, verts.data.bytes, newMesh.vertexCount * sizeof(SCNVector3));

    ....
    ....
}
In short, the preserveTopology option in SceneKit isn't working properly. To get from the working version in Model I/O to SceneKit I basically had to write my own converter.
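To give an idea of the final assembly step, here is a hedged Swift sketch of building a variable-topology SCNGeometry from arrays like the ones extracted above (the function and parameter names are illustrative; as I understand the SceneKit docs, the .polygon primitive type expects the per-polygon vertex counts first, followed by all the indices):
import SceneKit

// positions: one entry per vertex; faceVertexCounts: number of vertices
// in each polygon; vertexIndices: the polygons' indices, flattened.
func makeVariableTopologyGeometry(positions: [SCNVector3],
                                  faceVertexCounts: [Int32],
                                  vertexIndices: [Int32]) -> SCNGeometry {
    let source = SCNGeometrySource(vertices: positions)

    // For .polygon, the element data is: all per-polygon vertex counts,
    // then all vertex indices, using the same index width throughout.
    var elementValues = faceVertexCounts + vertexIndices
    let elementData = Data(bytes: &elementValues,
                           count: elementValues.count * MemoryLayout<Int32>.stride)

    let element = SCNGeometryElement(data: elementData,
                                     primitiveType: .polygon,
                                     primitiveCount: faceVertexCounts.count,
                                     bytesPerIndex: MemoryLayout<Int32>.stride)

    return SCNGeometry(sources: [source], elements: [element])
}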

GKObstacleGraph<GKGraphNode2D> does not handle archiving

I'm using GameplayKit's GKObstacleGraph to add path finding support to my iOS 10.2 SpriteKit game.
The game is a top-down 2D game, it features impassable obstacles which my 'soldiers' can't walk through.
Creating a GKObstacleGraph for my game level, with about 500 obstacles, takes ~50 seconds. This is too long for my users to wait.
Since the game map layout never changes dynamically after being loaded, I decided to create the graph once:
let graph = GKObstacleGraph(obstacles: obstacles, bufferRadius: 5, nodeClass: GKGraphNode2D.self)
Archive it to file:
let directories = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
if let documents = directories.first {
    let filePath = documents + "/obstacle.graph"
    NSKeyedArchiver.archiveRootObject(graph, toFile: filePath)
}
Copy the file from device to my laptop and add the file to my bundle. Then I just unarchive the graph object when I load my game level:
if let filePath = Bundle.main.path(forResource: "obstacle", ofType: "graph") {
    let graph = NSKeyedUnarchiver.unarchiveObject(withFile: filePath) as! GKObstacleGraph<GKGraphNode2D>
    return graph
}
In theory, this should be much quicker since I don't have to calculate the graph, only read (unarchive) it from a file.
However, depending on the number of obstacles and their relative placement, one of three things happens:
1. NSKeyedArchiver.archiveRootObject crashes.
2. Archiving works but NSKeyedUnarchiver.unarchiveObject crashes.
3. Both archiving and unarchiving work, but I cannot find a path around obstacles with GKObstacleGraph.findPath.
I can get all of this working if I skip the (un)archive steps:
Successful path finding
Also, in the Simulator (iPhone 7), (un)archiving never crashes, but path finding always fails afterwards.
What is going on here? I've reported it as a bug to Apple, but I'm still hoping that I've missed something.
I tried alternative solutions where I write/read the nodes and obstacles to a file using my own format. But the obstacles property of GKObstacleGraph is get-only, which leaves me with the constructor, and then I'm back to waiting 50 seconds again.
I've created a public test project git repo:
https://bitbucket.org/oixx/gkobstaclegraphcrashproof.git
which shows the three different scenarios that fails when running on device. For simplicity it reads/writes the file to device, not bundle.
I don't know anything about the crashes, but #3 in your list is probably caused by connectNodeToLowestCostNode:bidirectional: not actually connecting your soldier to the graph because the nodes array is immutable after unarchiving.
The solution is to archive the nodes instead of the graph, and then just create a graph from the nodes:
NSArray *nodes = [NSKeyedUnarchiver unarchiveObjectWithData:self.nodesArray];
GKGraph *graph = [[GKGraph alloc] initWithNodes:nodes];
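A rough Swift equivalent of that suggestion could look like this (the function names are illustrative; note that the rebuilt graph is a plain GKGraph, so path finding relies on the connections stored in the archived nodes rather than on GKObstacleGraph's obstacle handling):
import GameplayKit

func saveNodes(of graph: GKObstacleGraph<GKGraphNode2D>, to url: URL) throws {
    // Archive only the node array; the connections between nodes are
    // encoded along with the nodes themselves.
    let nodes = graph.nodes ?? []
    let data = NSKeyedArchiver.archivedData(withRootObject: nodes)
    try data.write(to: url)
}

func loadGraph(from url: URL) throws -> GKGraph? {
    let data = try Data(contentsOf: url)
    guard let nodes = NSKeyedUnarchiver.unarchiveObject(with: data) as? [GKGraphNode2D] else {
        return nil
    }
    return GKGraph(nodes: nodes)
}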
