UILabel text doesn't appear when using ARKit - iOS

I'm programmatically generating a set of UILabels, attaching them to SCNNodes and then placing them in a scene.
The problem is that the text on some of the labels doesn't appear. This occurs (seemingly) randomly.
Here's the code:
var labels = [SCNNode]()
let N = 3
for i in 0 ..< N {
    for k in 0 ..< N {
        let node = label()
        labels.append(node)
        let index = labels.count - 1
        // Lay the labels out on a 0.5 m grid in front of the camera
        let x = Float(i) * 0.5 - 0.5
        let y = Float(k) * 0.5 - 0.5
        sceneView.scene.rootNode.addChildNode(labels[index])
        labels[index].position = SCNVector3Make(x, y, -1)
    }
}
and the method to create the label node:
func label() -> SCNNode {
    let node = SCNNode()
    let label = UILabel(frame: CGRect(x: 0, y: 0, width: 100, height: 50))
    let plane = SCNPlane(width: 0.2, height: 0.1)
    label.text = "test"
    label.adjustsFontSizeToFitWidth = true
    // The UILabel itself is used as the plane's texture
    plane.firstMaterial?.diffuse.contents = label
    node.geometry = plane
    return node
}
The labels themselves always appear correctly, it's just that some of them are blank, with no text.
I've tried playing around with the size of the label, the size of the plane it is attached to, the font size etc - nothing seems to work.
I've also tried enclosing the label creation in DispatchQueue.main.async { ... }, which didn't help either.
I'm moderately new to Swift and iOS, so could easily have missed something very obvious.
Thanks in advance!
EDIT:
(1) Setting label.backgroundColor = UIColor.magenta makes it clear that in fact the label is not being created, but the node / plane is.
Some of the labels are left white (i.e. only the SCNNode is being rendered); however, after a short delay they sometimes become magenta and the text appears. Some of the labels remain missing, though.
(2) It further appears that it's related to the position and orientation of the node (label) relative to the camera. I created a large (10x10) grid of labels, then tested placing the camera at different initial positions in the grid. The likelihood that a node appeared seemed directly related to the distance of the node from the initial camera position. Those nodes directly in front of the camera were always rendered, and those far away almost never were.
(3) workaround / hack is to convert the labels to images, and use them instead - code is at https://github.com/Jordan-Campbell/uiimage-arkit-testing if anyone is interested.

If you are labeling things in AR, 99% of the time it is better to do so in "Screen Space" rather than in "Perspective".
Benefits of labels in Screen Space:
ALWAYS readable, regardless of user's distance from the label
You can use regular UILabels, no need to draw them to an image and then map the image to an SCNPlane.
Your app will have a first party feel to it because Apple uses Screen Space for their labels in all of their AR apps (see Measure).
You will be able to use standard animations on your UILabel; animations are much more complex to set up when working with content in Perspective.
If you are sold on Screen Space, let me know and I'll be happy to put up some code showing you the basics.
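For reference, a minimal sketch of the idea, assuming an SCNSceneRendererDelegate with a sceneView outlet, a tracked node called anchorNode, and a screenSpaceLabel already added as a subview of the view (projectPoint is standard SceneKit API; the names are illustrative):
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // Project the node's world position into the view's 2D coordinates
    let projected = sceneView.projectPoint(anchorNode.worldPosition)
    DispatchQueue.main.async {
        // Rough visibility check: hide the label when the node falls
        // outside the depth range the camera can see
        self.screenSpaceLabel.isHidden = projected.z > 1 || projected.z < 0
        self.screenSpaceLabel.center = CGPoint(x: CGFloat(projected.x),
                                               y: CGFloat(projected.y))
    }
}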

Use the main thread for creating the labels and adding them to the scene. This makes things faster and avoids coupling the scene additions to plane detection, which really slows down the rendering. This works at times.
DispatchQueue.main.async {
    // create the label node and add it to the scene here
}
Use a UIView superview as the parent of your label; this makes things smoother.
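For the superview suggestion, a minimal sketch (wrapping the question's label in a plain UIView before handing it to the material; sizes copied from the question):
let containerView = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 50))
let label = UILabel(frame: containerView.bounds)
label.text = "test"
label.adjustsFontSizeToFitWidth = true
containerView.addSubview(label)
plane.firstMaterial?.diffuse.contents = containerView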

Related

iOS - How to resize elements on a screen depending on the amount of the elements

So I am developing a game using SpriteKit that uses a pyramid of sprites (let's say circles, for a simple example). The user can choose the number of rows of sprites they would like to have in the game. The sprites are to form a pyramid, so if you have 1 row, you have 1 sprite node. It increases by 2 the farther down you go (the more rows you choose), creating the pyramid shape. So if a user picked 3 rows, the game board would look like this:
O
O O O
O O O O O
However, when it gets to 5 rows, it loses its pyramid shape because the screen is only so wide and it has to fit all the elements onto the screen (elements are more smushed together in rows further down).
My question is, to fix this issue, what would I have to do to make the pyramid resize and change its spacing between elements depending on how many rows are chosen? Would I have to multiply the spacing by a certain factor? I have also heard of people adding layers onto the screen - maybe drawing the sprites in some sort of container so that it always resizes the pyramid to fit the screen without skewing the pyramid shape?
Your idea is correct! Make an SKNode container, then scale it with setScale.
(not at Xcode right now, pardon if not 100%)
// Say that our scene's size is 400x400:
let bkg = SKShapeNode(rectOf: self.size)
self.addChild(bkg)
bkg.addChild(firstSprite)
bkg.addChild(secondSprite) // And so on...
// Find the farthest point in bkg:
var farthestX = CGFloat(0)
for node in bkg.children {
    if node.position.x + node.frame.size.width / 2 > farthestX {
        farthestX = node.position.x + node.frame.size.width / 2
    }
}
// Scale bkg down so everything fits the scene's width:
if self.size.width < farthestX {
    let scaler = self.size.width / farthestX
    bkg.setScale(scaler)
}
This should work, or at least the general idea should work... You would want to check Y values and heights as well.
You can easily compute a symmetrical size proportional to the number of rows and resize your sprites accordingly. Here is the idea in code:
// The bottom row of an n-row pyramid holds 2*(n-1) + 1 sprites,
// so size each sprite to fit that many across the screen.
let computedSize = deviceWidth / CGFloat(2 * (rows - 1) + 1)
for sprite in sprites {
    sprite.size = CGSize(width: computedSize, height: computedSize)
}

Performance optimization of waveform drawing

I'm building an app that draw the waveform of input audio data.
Here is a visual representation of how it looks:
It behaves similarly to Apple's native Voice Memos app, but it lacks the performance. The waveform itself is a UIScrollView subclass in which I draw CALayer instances to represent the purple 'bars' and add them as sublayers. At the beginning the waveform is empty; when sound input starts, I update it with this function:
class ScrollingWaveformPlot: UIScrollView {
    var offset: CGFloat = 0
    var normalColor: UIColor?
    var waveforms: [CALayer] = []
    var lastBarRect: CGRect?
    var kBarWidth: Int = 5

    func updateGraph(with value: Float) {
        // Create the next bar, anchored to the vertical centre line
        self.lastBarRect = CGRect(x: self.offset,
                                  y: self.frame.height / 2,
                                  width: CGFloat(self.kBarWidth),
                                  height: -CGFloat(value * 2))
        let barLayer = CALayer()
        barLayer.bounds = self.lastBarRect!
        barLayer.position = CGPoint(x: self.offset + CGFloat(self.kBarWidth) / 2,
                                    y: self.frame.height / 2)
        barLayer.backgroundColor = self.normalColor?.cgColor
        self.layer.addSublayer(barLayer)
        self.waveforms.append(barLayer)
        self.offset += 7 // bar width plus 2-point gap
    }
    ...
}
When the last bar reaches the middle of the screen, I begin to increase the contentOffset.x of the waveform to keep it scrolling, like Apple's Voice Memos app.
The problem: when the bar count reaches ~500...1000, noticeable lag begins to happen during setContentOffset.
self.inputAudioPlot.setContentOffset(CGPoint(x: CGFloat(self.offset) - CGFloat(self.view.frame.width / 2 - 7),y: 0), animated: false)
What can be optimized here? Any help appreciated.
Simply remove the bars that get scrolled off-screen from their superlayer. If you want to get fancy you could put them in a queue to reuse when adding new samples, but this might not be worth the effort.
If you don’t let the user scroll inside that view it might even be worth it to not use a scroll view at all and instead move the visible bars to the left when you add a new one.
Of course if you do need to let the user scroll you have some more work to do. Then you first have to store all values you are displaying somewhere so you can get them back. With this you can override layoutSubviews to add all missing bars during scrolling.
This is basically how UITableView and UICollectionView work, so you might be able to implement this using a collection view and a custom layout. Doing it manually, though, might be easier and perform a little better as well.
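A rough sketch of the removal idea, assuming the ScrollingWaveformPlot class from the question (removeOffscreenBars is an illustrative name; call it whenever the content offset advances):
func removeOffscreenBars() {
    let visibleMinX = contentOffset.x
    waveforms.removeAll { bar in
        // The view only scrolls forward, so a bar whose right edge has
        // passed the left edge of the visible area is gone for good.
        let barMaxX = bar.position.x + bar.bounds.width / 2
        guard barMaxX < visibleMinX else { return false }
        bar.removeFromSuperlayer()
        return true
    }
}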
I suspect that this will probably be a larger refactoring than you can afford, but SpriteKit would probably give you better performance and control over the rendering of the waveform.
You can use SKSpriteNodes (which are much faster than shape nodes) for the wave bars. You can implement scrolling by merely moving a parent node that contains all these sprites, as sketched below.
SpriteKit is quite good at skipping non-visible nodes when rendering, and you can even dynamically remove/add nodes from the scene if the number of nodes becomes a problem.
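A minimal sketch of that approach, under the assumption of an SKScene with one container node holding every bar (the names are illustrative):
// Add waveformNode to the scene once: scene.addChild(waveformNode)
let waveformNode = SKNode()

func addBar(value: CGFloat, atOffset offset: CGFloat) {
    let bar = SKSpriteNode(color: .purple,
                           size: CGSize(width: 5, height: value * 2))
    bar.position = CGPoint(x: offset, y: 0)
    waveformNode.addChild(bar)
    waveformNode.position.x -= 7   // scroll the whole waveform left one slot
}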

How to position a SCNNode to cover the whole SCNView?

I am very new to SceneKit and your help will be really appreciated!
I have a 200x200 sized SCNView in my UIView, which is at the centre of super view.
I want to put an SCNCylinder inside, such that the SCNCylinder covers the full SCNView. I read that SceneKit dimensions are defined in meters, so how do I form a relationship between the dimensions of my screen and the SCNCylinder?
I tried:
let coinGeometry = SCNCylinder(radius: 100, height: 2)
let coinNode = SCNNode(geometry: coinGeometry)
coinNode.position = SCNVector3Make(0, 0, 0)
coinScene.rootNode.addChildNode(coinNode)
// Lay the cylinder on its side so it faces the camera like a coin
let rotate90AboutZ = SCNAction.rotateByX(-CGFloat(M_PI_2), y: 0.0, z: CGFloat(M_PI_2), duration: 0.0)
coinNode.runAction(rotate90AboutZ)
ibOutletScene.scene = coinScene
But this leaves a margin between my coinScene and the ibOutletScene. How do I remove this space?
I also tried adding Camera:
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3Make(0, 0, 100)
coinScene.rootNode.addChildNode(cameraNode)
But I see random behaviour with this and the coinNode gets hidden! How should I position my camera? Or is there any other way to remove extra space from my ibOutletScene?
Edit:
This is how it looks if I don't add a camera. There is a margin between the red scene and the green coin. I tried multiple sizes for the coin, but I am unable to remove this margin unless I add a camera. But if I add a camera, I get another problem, mentioned below this screenshot.
If I don't add the camera, the rotation animation on the coin works perfectly, but if I add the camera, the rotation enlarges the coin and then shrinks it back again during the animation. How can I rotate it on its axis without increasing the size?
I am using the following code to rotate the coin. The same code works fine without the camera, but enlarges the coin after adding the camera (check out the snapshot):
let rotate = SCNAction.rotateByX(0, y: -2 * CGFloat(M_PI_2), z: 0, duration: 2)
coinNode.runAction(rotate)
The random behavior might be caused by the last line in your first code snippet. You're starting an animation, and then adding the scene to the view.
Instead, build your scene, attach it to the view, and then start your animation. Setting a non-zero duration for the action will give you a more pleasing transition.
As for the extra space, it would help us understand if you post a screenshot. But you're going to have to do a bit of trigonometry.
It looks like you have a scene that you want to be blocked by a coin, that then rotates out of the way? Simulate that yourself with real objects. Put your eye down at the edge of your desk. Put a coin out a ways from your eye. How far does that coin have to be in order to block particular objects farther away on your desk?
In SceneKit, you can query the field of view of the SCNCamera. You know the size of your coin and the size of the view. Calculate the distance from the camera needed for the projected diameter of your coin to equal the width of your view. Put the coin there.
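A hedged sketch of that calculation for the 200x200 view in the question (fieldOfView is the SCNCamera property on iOS 11 and later, in degrees; older systems expose yFov instead):
let coinRadius: CGFloat = 100                 // from SCNCylinder(radius: 100, ...)
let fov = cameraNode.camera!.fieldOfView      // vertical field of view, in degrees
let halfAngle = fov * .pi / 360               // half the FOV, converted to radians
let distance = coinRadius / tan(halfAngle)    // visible half-height equals the radius here
cameraNode.position = SCNVector3Make(0, 0, Float(distance))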

Setting UIView.transform to arbitrary translate CGAffineTransform does nothing

I have a UIView called container that I want to move (offset) using an affine transform. This view contains a UIImageView and is a subview of a UICollectionViewCell.
So it should be simple:
container.transform = CGAffineTransformMakeTranslation(100, 200) //render container 100 points right and 200 points down
Instead it is very hard, because that code does not do anything. The view is rendered in exactly the same place as if I had deleted that line. So I added a print to verify what affine transform was set:
container.transform = CGAffineTransformMakeTranslation(100, 200)
print(container.transform) //prints: CGAffineTransform(a: 1.0, b: 0.0, c: 0.0, d: 1.0, tx: 100.0, ty: 200.0)
That seems all right. So I tried rotating the container view instead with CGAffineTransformMakeRotation, and it rotates the view, just not around its center as it should according to the documentation. I tried different combinations of translate, rotation and scale transforms, only to find that the affine transformation matrices are set correctly, but the tx and ty attributes seem to be ignored, and a, b, c and d seem to use a different anchor point than the centre of the view (I cannot say what that point is).
Any ideas on what can be causing this and how to fix it?
There must be something like auto layout messing things up for you. In the absence of outside influence, setting a view's affine transform to CGAffineTransformMakeTranslation(100, 200) will shift it right 100 points and down 200. I verified this by making a new Single View Project in Xcode and changing the viewDidLoad method in the ViewController.swift class to:
override func viewDidLoad() {
    super.viewDidLoad()
    view.backgroundColor = UIColor.blueColor()
    let container = UIView(frame: CGRectMake(0, 0, 100, 100))
    container.backgroundColor = UIColor.greenColor()
    container.transform = CGAffineTransformMakeTranslation(100, 200)
    view.addSubview(container)
}
As expected this makes the green container view appear 100 points to the right and 200 points down, even though its frame is (0,0,100,100).
So please check for auto layout and other such things that might influence the placement of this view, and if you can't find anything please post more code. Also, if your container view doesn't have a background color, please give it one so that you can see its position directly, instead of deducing its position by looking at the image view.
n.b. Setting a view's transform doesn't actually move the view itself, it just changes how/where it draws its content.

Random movements / turbulences - Swift

I'm developing a game for iPhone and iPad, like Space Invaders.
Balloons to destroy fall from the top of the screen in a straight line.
Here is my code to add them:
func addBalloonv() {
    let balloonv = SKSpriteNode(imageNamed: "ballonvert.png")
    balloonv.physicsBody = SKPhysicsBody(circleOfRadius: balloonv.size.width / 2)
    balloonv.physicsBody.dynamic = true
    balloonv.physicsBody.categoryBitMask = balloonCategory | greenCategory
    balloonv.physicsBody.contactTestBitMask = flechetteCategory
    balloonv.physicsBody.collisionBitMask = balloonCategory
    balloonv.physicsBody.mass = 1
    balloonv.physicsBody.restitution = 1
    balloonv.physicsBody.allowsRotation = true
    // Spawn at a random x position just above the top of the screen
    let minX = balloonv.size.width / 2
    let maxX = self.frame.size.width - balloonv.size.width / 2
    let rangeX = maxX - minX
    let position = CGFloat(arc4random()) % rangeX + minX
    balloonv.position = CGPointMake(position, self.frame.size.height + balloonv.size.height)
    self.addChild(balloonv)
}
I have one func per balloon color.
For the moment they fall in a straight line; I'm looking for random movement (with turbulence, like a balloon in the air) from the top and from both sides.
How can I do that?
Thank you very much !!
This is exactly what the new Physics Fields feature in SpriteKit (as of iOS 8 / OS X Yosemite) is for. These let you apply different kinds of forces to all physics bodies in region, like gravity, drag, and turbulence. See the SKFieldNode class docs for details.
Fields are a kind of node, so to get what you're after, you'd add one noise (or turbulence) field to your scene, covering the area that the balloons fall through, and it'll perturb the path of each balloon that passes. The simplest way to do it goes something like this:
let field = SKFieldNode.noiseFieldWithSmoothness(0.5, animationSpeed: 0.1)
scene.addChild(field)
You'll want to tweak the smoothness, animation speed, and field.strength till you get just the level of noise you want. You might also look into whether you want just a noise field, which applies random forces in random directions, or a turbulence field, which does the same thing, but with forces that get stronger when bodies are moving faster.
The above code gets you a field whose region of effect is infinite. You might want to limit it to a specific area (for example, so it doesn't keep knocking your balloons around after they land). I did this to make a field that covers only the top half of a 300x200 scene:
field.region = SKRegion(size: CGSize(width: 300, height: 100))
field.position = CGPoint(x: 150, y: 150)
