Can't use multiple images for SCNPlane array - iOS

I have an array of SCNPlane objects, and I want to add different images to each one. My problem is that whenever I initialize the SCNPlanes in the for loop, each plane will end up with the exact same image. Here is what my loop basically looks like:
var layer = [CALayer](count: 8, repeatedValue: CALayer())
var tmpPhoto = [CGImage]()
for var i = 0; i < 8; ++i {
    tmpPhoto.append("image data")
    layer[i].contents = tmpPhoto[i]
    layer[i].frame = CGRectMake(0, 0, "image width", "image height")
    // initialize the SCNPlane and add it to an array of SCNNodes
    plane[i].geometry?.firstMaterial?.locksAmbientWithDiffuse = true
    plane[i].geometry?.firstMaterial?.diffuse.contents = layer[i]
    // add SCNPlane constraints
}
What I've noticed is that the image displayed will be the last image that was added/altered. I know this because after this loop, I tried modifying the first entry in the plane array only. At run time, the image for the first array entry would be displayed on all the other SCNPlanes instead! Keep in mind that I am not using displayLayer() or setNeedsDisplay() at all.
Here is what I have tried:
using a similar layer variable instead of an array, and just modifying it at the start of each loop
manipulating layer variables outside of a loop
directly adding on a UIImage without converting to CGImage first (I know that each image is being loaded into the array)
trying to modify the SCNPlane's layer directly
using an SCNMaterial variable (followed this, but with one layer added to the materials variable for each SCNPlane)
adding the layers to existing view structures using either addSublayer() (doesn't work) or layoutSublayersOfLayer() (crashes the app due to an uncaught exception)
Could I be missing something important?
EDIT: Forgot a line.

Create a different SCNPlane for each SCNNode. If you create a single SCNPlane and assign it to multiple nodes, those nodes will share the geometry: setting a material on the geometry changes every node that uses it.

Related

How to change the color of the 3D Text Object provided by SceneKit via Interface Builder [duplicate]

How do I set the texture of a SCNText object? This is what I have and nothing changes in the appearance:
// myNode is a working SCNText element
let mat = SCNMaterial()
mat.diffuse.contents = UIImage(contentsOfFile: "texture.png")
myNode.geometry?.firstMaterial = mat
A text geometry may contain one, three, or five geometry elements:
If its extrusionDepth property is 0.0, the text geometry has one element corresponding to its one visible side.
If its extrusion depth is greater than zero and its chamferRadius property is 0.0, the text geometry has three elements, corresponding to its front, back, and extruded sides.
If both extrusion depth and chamfer radius are greater than zero, the text geometry has five elements, corresponding to its front, back, extruded sides, front chamfer, and back chamfer.
Scene Kit can render each element using a different material. For details, see the description of the materials property in SCNGeometry Class Reference.
Just like for any other geometry, simply set its materials property.

Creating a dynamically sized SKSpriteNode platform for endless runner game

I am creating my first endless runner game for iOS and I want it to be as dynamic as possible. I would like to create one large "Platform" image and then use that one image to create platforms of various sizes.
The idea is to randomly select a number for the width of the platform and then generate the sprite and body to match the chosen size. After this is done, fill in the sprite using only part of the image.
At the minute I am using the following, but this creates the node based upon the size of the UIImage instead.
SKSpriteNode *spritePlatform = [[Platform alloc] initWithImageNamed:@"Platform"];
[spritePlatform setPosition:CGPointMake(self.frame.size.width + (spritePlatform.frame.size.width / 2), 200)];
spritePlatform.name = @"Platform";
spritePlatform.physicsBody = [SKPhysicsBody bodyWithTexture:spritePlatform.texture size:CGSizeMake(300, 40)];
spritePlatform.physicsBody.affectedByGravity = NO;
spritePlatform.physicsBody.dynamic = NO;
// 1
spritePlatform.physicsBody.usesPreciseCollisionDetection = YES;
// 2
spritePlatform.physicsBody.categoryBitMask = CollisionCategoryPlatform;
spritePlatform.physicsBody.contactTestBitMask = CollisionCategoryPlayer;
[self addChild:spritePlatform];
[self movePlatform:spritePlatform];
So ideally I would like to
Create a sprite based upon a random width and fixed height.
Use part of a larger image to fill in the sprite.
Any ideas how I can go about doing this?
Thanks
Create a sprite based upon a random width and fixed height.
Picking a random number for the width is simple. You can use arc4random_uniform; just be sure to pick the number from a reasonable range (less than the width of your platform image).
Use part of a larger image to fill in the sprite.
This can be done by using textureWithRect:inTexture:. The first parameter is a rectangle in the unit coordinate space that specifies the portion of the texture to use. And the second parameter is the whole platform texture to create the new texture from.
Here are some hints for setting the size/portion of each platform:
(0, 0) is the lower-left corner of the whole platform texture's coordinate space.
The x/y coordinates range from 0 to 1; they are fractions of the platform image, not real pixel dimensions.
Given a texture created by the platform image platformAllTexture and a random width in the first step, the platform texture may be:
// Fixed height is set as half of the platform image
SKTexture *platformTexture = [SKTexture textureWithRect:CGRectMake(0, 0, width / platformAllTexture.size.width, 1/2.0)
                                              inTexture:platformAllTexture];
In this way, you get the texture platformTexture for your dynamically sized platforms.
In the example above, if the rectangle is defined as CGRectMake(0, 0, 1/3.0, 1/2.0), you will get a result similar to this:
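The unit-coordinate arithmetic above can be sanity-checked with a tiny standalone helper (plain C++ rather than SpriteKit; the struct, function name, and sizes are illustrative, not from the question):

```cpp
// Convert a pixel-space sub-rectangle of a platform image into the unit
// coordinate space that textureWithRect:inTexture: expects: (0, 0) is the
// lower-left corner, and every value is a fraction of the full image size.
struct UnitRect { double x, y, w, h; };

UnitRect unitRect(double pixelW, double pixelH, double imageW, double imageH) {
    return { 0.0, 0.0, pixelW / imageW, pixelH / imageH };
}
```

For example, on a 300x80 platform image, a random width of 100 and a fixed height of 40 give the rect (0, 0, 1/3.0, 1/2.0) used in the example above.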

SceneKit - Textures not properly displayed

I have a cube (rounded) and want to display a texture on one of its sides. I can access the material on that side with:
var tex1: SCNMaterial! = cube.geometry?.materialWithName("_1")!
I then set it's image contents:
tex1.diffuse.contents = "cube1"
This then looks like this:
This shows me that it does work, but the white part is not in the center as
it should be. (The image I am using has the white part in the center.)
I tried using offset to move the image around on the surface, and I would like to scale it as well. I tried it like this:
tex1.diffuse.contents.offset = SCNVector3Make(20, 0, 0)
That gives me errors: it says it cannot assign the result of that expression. (I also tried contentMode; same error. I think that's because these are UIKit properties, not SceneKit ones.)
Questions
Does anyone know what I can do?
Maybe offset is not the way to go?
How can I scale the image?
The type of a material property's contents is AnyObject, which means the compiler will let you call any method (defined on any object type) on it. That doesn't mean every such method or property accessor is actually implemented by the class of your particular contents.
Material properties do have a contentsTransform option, though. Have you looked at that?
Here is my solution:
Create the offset:
let offsetVal = SCNMatrix4MakeTranslation(0, -0.05, 0)
Create the scale:
let scaleVal = SCNMatrix4MakeScale(1.5, 1.5, 1.5)
If you want to set only the offset:
material.diffuse.contentsTransform = offsetVal
If you want to set only the scale:
material.diffuse.contentsTransform = scaleVal
If you want to combine them:
material.diffuse.contentsTransform = SCNMatrix4Mult(scaleVal, offsetVal)
Hope this helps!
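A side note on why combining the two needs a single matrix product: composing a scale with a translation is order-sensitive. Below is a plain-math sketch (not SceneKit API; the struct, function, and numbers are illustrative) that models each transform as a 2D affine map on a texture coordinate:

```cpp
// 2D affine map on a texture coordinate: p' = (u * sx + tx, v * sy + ty).
struct Affine2 { double sx, sy, tx, ty; };

// Apply `first`, then `second`: p'' = second(first(p)).
Affine2 compose(Affine2 first, Affine2 second) {
    return { first.sx * second.sx,
             first.sy * second.sy,
             first.tx * second.sx + second.tx,
             first.ty * second.sy + second.ty };
}
```

With the answer's values (scale 1.5, offset -0.05), scale-then-offset yields a net translation of -0.05 while offset-then-scale yields -0.075, so the order in which the two matrices are multiplied changes the result.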

Custom video file: how would I extract the images?

I know a lot of people would recommend just going with Bink, DirectShow, or even ffmpeg to play a video. However, what are movies anyway? Just images put together with sound.
I've already created a program where I take a bunch of images and place them into a custom video file. The cool thing about this is that I can easily place it on a quad. The issue I'm having is that I can only extract one image from the custom video file. When I have more than one, I have problems, which I fully understand.
The file starts with an index lookup table of all the image sizes, followed by the raw images. The calculation I was following was:
offset = (NumberOfImages + 1) * sizeof(long)
So, with one image, finding the offset of the first image is quite easy. The for loop always starts at 0 and runs up to the number of images, which is 1. So it would translate like this:
offset = (1 + 1) * 4 = 8
So now I know the offset for one image, which is great. However, a video is a bunch of images all together. So I've been thinking to myself: if only there were a way to read up to a certain point and then stuff the read data inside a vector.
currentPosition = 0; //-- Set the current position to zero before looping through the images in the file.
for (UINT i = 0; i < elements; i++) {
    long tblSz = (elements + 1) * sizeof(long); //-- (elements + 1) * 4: size of the index table.
    long off = tblSz + currentPosition; //-- Calculate the offset position inside the file, past the index table.
    // in.seekg(off, std::ios_base::end); //-- Not used.
    long videoSz = sicVideoIndexTable[i]; //-- Retrieve the image size from the index table stored at the front of the file.
    // in.seekg(0, std::ios_base::beg); //-- Not used.
    dataBuf.resize(videoSz); //-- Resize the data buffer vector to fit the image.
    in.seekg(off, std::ios_base::beg); //-- Go to the calculated offset position to retrieve the image data.
    std::streamsize currpos = in.gcount(); //-- Not used.
    in.read(&dataBuf[0], videoSz); //-- Read in the data according to the image size.
    sVideoDesc.dataPtr = (void*)&dataBuf[0]; //-- Store what we've read in the temporary structure before pushing it into the vector that holds the collection of images.
    sVideoDesc.fileSize = videoSz;
    sicVideoArray.push_back(sVideoDesc);
    dataBuf.empty(); //-- Empty the data vector so it can be reused.
    currentPosition = videoSz; //-- Set the current position to the video size so the offset can be recalculated for the next image.
}
I believe the problem lies within the seekg and in.read calls, but that's just my gut telling me that. As you can see, the current position always changes.
Bottom line: if I can load one image, why can't I load multiple images from the custom video file? I'm not sure whether I should be using seekg, or whether I should just read every character up to a certain point and dump the contents into a data buffer vector. I thought reading the block of data would be the answer, but I'm becoming very unsure.
I think I finally understand what your code does. You really should use more descriptive variable names. Or at least add an explanation of what each variable means. Anyway...
I believe your problem is in this line:
currentPosition = videoSz;
When it should be
currentPosition += videoSz;
You basically don't advance through your file.
Also, if you just read the images in sequentially, you might want to change your file format so that instead of a table of image sizes at the beginning, you store each image size directly followed by the image data. That way you don't need to do any of the offset calculations or seeking.

Optimal way to render multiple scenes with large numbers of objects in Three.js

Imagine that you want to draw two scenes, each with hundreds of spheres, and provide the ability to switch between these scenes. What is an optimal way to do this?
Currently a switch takes about 4 to 5 seconds because I am deleting, creating, and drawing all of the spheres on each switch. Below is an example of the code that runs on a scene switch.
clearObjects();
resetCamera();
for (var i = 0; i < 500; i++) {
    var geometry = new THREE.SphereGeometry(radius, 50, 50);
    var material = new THREE.MeshLambertMaterial({color: 0xFFCC33});
    var sphere = new THREE.Mesh(geometry, material);
    sphere.position.set(randX, randY, randZ);
    scene.add(sphere);
    objects.push(sphere);
}
Once again: why don't you just use one scene, split it into two parts, set your camera's FOV (field of view) so that you can see only one part at a time, and then just move the camera position? Doesn't that sound more efficient?
If there is no particular reason for using two scenes, you can always implement your code with just one. So try the method described above, or explain your reasons for using two scenes.
Edit: You can also use two THREE.Object3D containers to represent your two scenes. Store each scene's objects in its own container, then just show/hide one container at a time. That way you can manipulate all of a container's contents through yourContainer.children[n].
So generally, that is what you want to do:
var scene1Container = new THREE.Object3D();
var scene2Container = new THREE.Object3D();
scene1Container.add(firstObjectFromScene1);
//.....
scene1Container.add(nObjectFromScene1);
scene2Container.add(firstObjectFromScene2);
//.....
scene2Container.add(nObjectFromScene2);
Now you can just show/hide one of the containers at a time using scene1Container.visible = true/false; (and use scene1Container.traverse to apply the visibility change to all of the container's children).
