How to create a hole in a box in SceneKit? (iOS)

I'm using SceneKit to create a 3D Room for a Swift iOS app.
I'm using multiple boxes and placing them together to create the different walls of the room. I also want to add doors and windows, for which I need to cut holes into the walls. This seems like a very common scenario, yet I couldn't find any relevant answers out there.
I know there are multiple ways of doing it:

1. Don't cut the box; place another box with a door or window texture over the wall. Simplest, but I do want to keep a light source outside of the room and have it flow into the room through these doors and windows, so a painted-on opening won't do.
2. Create multiple boxes for a single wall and put them together to make up the geometry. My last resort, maybe.
3. Create custom geometry. Feels too complicated, since it requires me to draw each triangle myself. Not sure?

But what I was actually expecting:

- Subtracting geometries from geometries?
- A library that already handles these complexities?

Any pointers would be very helpful.
Thanks.

SceneKit offers some awesome potential, but it's not a substitute for a 3D modeling program. If you want something much beyond assembling primitives and extruding in a plane, you should think about constructing your model in a dedicated 3D package and exporting it to SceneKit as a .dae file. You might take a look at Blender: it's free and readily available on the net. I suspect it can easily do what you want, and the learning curve will be compensated by the higher-level functions of a graphics program versus coding.
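For the SceneKit side of that workflow, loading the exported file is straightforward. A minimal sketch, assuming the model is bundled as room.dae with a node named "wall" (both placeholder names):

```swift
import SceneKit

// Load the Blender-exported Collada scene from the app bundle.
if let scene = SCNScene(named: "room.dae") {
    let sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 800, height: 600))
    sceneView.scene = scene

    // Fish out a specific node (e.g. a wall with the door already cut in Blender).
    if let wall = scene.rootNode.childNode(withName: "wall", recursively: true) {
        wall.castsShadow = true   // light stops at the wall but passes through the hole
    }
}
```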

I think @bpedit described the best approach.
A weak second choice would be to use SCNShape to build your geometry. That still leaves you the problem of constructing a Bezier path that matches your wall layout/topology. It might be a helpful hack in the short term, to save you from an immediate learning curve in modeling software, but I predict you'll still eventually move to a tool like Blender, SketchUp, Cheetah 3D, or Maya.
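To illustrate, here's a rough sketch of the SCNShape route: an extruded wall with a rectangular door opening, cut by appending a reversed subpath to the outline. All dimensions are arbitrary, and the reversed-winding trick is my assumption of the cleanest way to punch the hole.

```swift
import SceneKit
import UIKit

// Outer rectangle = the wall; inner reversed rectangle = the door hole.
let wallPath = UIBezierPath(rect: CGRect(x: 0, y: 0, width: 10, height: 4))
let doorPath = UIBezierPath(rect: CGRect(x: 4, y: 0, width: 1.5, height: 2.5))
wallPath.append(doorPath.reversing())       // reversed winding punches the hole
wallPath.usesEvenOddFillRule = true

// Extrude the 2D outline into a wall-thick slab.
let wall = SCNShape(path: wallPath, extrusionDepth: 0.2)
wall.firstMaterial?.diffuse.contents = UIColor.lightGray
let wallNode = SCNNode(geometry: wall)
```

Since the opening is real geometry, a light placed outside the room will flow through it.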

Related

How to create a soft body in SceneKit

So.
After many years of iOS development, I decided it's time to try making a little game for myself. I chose Apple's SceneKit since it looks like it provides everything I need.
My problem is that I've stumbled upon a huge problem (for me), and searching on Google doesn't yield any results.
Any idea how I'd go about having an object (a sphere, for that matter) that deforms itself, say, because of a gravitational force? So basically it should squash on impact with the ground.
Or, how do I go about deforming it when it collides with other spheres, like a soft beach ball would?
Any starting point along those lines would be helpful.
I can post my code here, but I'm afraid it has nothing to do with my problem since I really don't know where to start.
Thanks!
Update
After doing a bit more reading, I think what I want could be doable with vertex shaders. Is that the right path to follow?
For complicated animations, you'll generally be better off using a 3D modeling tool like Blender, Maya, or Cheetah3D to build the body and construct the animation. Those tools let you think at a higher level of abstraction. You can then export the model to Collada (DAE) format and import it into SceneKit.
https://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Basic_Animation/Bounce has a tutorial on building a deforming, bouncing ball using Blender.
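Once the deformation is baked in Blender and exported, replaying it from SceneKit might look like this sketch. The file name "ball.dae", node identifier "Ball", and animation identifier "squash" are placeholders for whatever your exporter writes:

```swift
import SceneKit

// Open the exported Collada file as a scene source so individual
// entries (nodes, animations) can be pulled out by identifier.
let url = Bundle.main.url(forResource: "ball", withExtension: "dae")!
let source = SCNSceneSource(url: url, options: nil)

if let ball = source?.entryWithIdentifier("Ball", withClass: SCNNode.self),
   let squash = source?.entryWithIdentifier("squash", withClass: CAAnimation.self) {
    ball.addAnimation(squash, forKey: "squash")   // replay the baked deformation
}
```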
SceneKit only does physics with rigid bodies. If you want something to deform, you have to do it yourself.
That's probably because SceneKit has no way of knowing how an object should deform. Should it just compress, should it compress in one direction and expand in all the others to preserve its volume, or should only part of the model compress while the rest stays rigid (like the tires on a car)?
What you could try is to wait for a collision to occur and then do the following (a rough Swift sketch follows the list):
- calculate and store the velocity after the bounce
- disable collision checking on the object
- run an animation for the "squash"
- enable collision checking on the object
- apply the calculated velocity
It will be entirely up to you how real or cartoony you want to make the bounce look.
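Here's one way that sequence could look in code. It's a sketch built on assumptions: the handler is already set as the physics world's contact delegate, nodeA is the bouncing ball (real code should check category bit masks), and 0.8 is an arbitrary restitution.

```swift
import SceneKit

final class BounceHandler: NSObject, SCNPhysicsContactDelegate {
    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
        // Assumption: nodeA is the ball. Check categoryBitMask in real code.
        let ball = contact.nodeA
        guard let body = ball.physicsBody else { return }

        // 1. Calculate and store the post-bounce velocity.
        var rebound = body.velocity
        rebound.y = abs(rebound.y) * 0.8          // 0.8 = assumed restitution

        // 2. Take the ball out of the simulation while the squash plays.
        body.type = .kinematic

        // 3. Run the squash animation: dip the y-scale, bulge x/z to fake
        //    volume preservation, then recover.
        let duration: CGFloat = 0.2
        let squash = SCNAction.customAction(duration: TimeInterval(duration)) { node, elapsed in
            let t = Float(elapsed / duration)
            let s = 1.0 - 0.4 * sin(t * Float.pi)  // scale: 1.0 -> 0.6 -> 1.0
            node.scale = SCNVector3(2.0 - s, s, 2.0 - s)
        }
        ball.runAction(squash) {
            // 4. + 5. Re-enable physics and apply the stored velocity.
            body.type = .dynamic
            body.velocity = rebound
        }
    }
}
```

You'd assign an instance of this as scene.physicsWorld.contactDelegate and keep a strong reference to it yourself.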

PARABOLIC (not panoramic) video stitching?

I want to do something like this, but in reverse, so that the cameras are outside and pointing inward. Let's start with the abstract and get specific:
1) Are there any TOOLS that will do this for me? How close can I get using existing software?
2) Say the nearest tool is a graphics library like OpenCV. I've taken linear algebra and have an undergraduate degree in CS but without any special training in graphics. Where should I go from there?
3) If I really am undergoing a decade-long spiritual quest of a self-teaching-plus-programming exercise to make this happen, are there any papers or other resources that you are aware of that might aid me?
I think the demo you linked uses a 360° camera (see the black circle on the bottom) and does not involve stitching in any way.
About your question, are you aware of this work? They don't do stitching either, just blending between different views.
If you use inward views, then the objects you will observe will probably be quite close to the cameras, while standard stitching assumes that objects are far away. Close 3D objects mean high distortion when you change the viewpoint (i.e. parallax & occlusions), which makes it difficult to interpolate between two views. Hence, if you want stitching, then your main problem is to correctly handle parallax effects & occlusions between the views.
In my opinion, the most promising approach would be to do live stereo matching (i.e. dense 3D reconstruction) between the two camera images closest to your current viewpoint, and then interpolate the estimated disparities to generate the expected image. However, unlike the demo you linked, it's not likely to run in real time, and the result could be quite ugly...
EDIT
You can also have a look at this paper, which uses a different but interesting approach; it may not be directly useful in your case, though, since it requires the new viewpoint to be visible in the available images.

Is there a way to create a CGPath matching outline of a SKSpriteNode?

My goal is to create a CGPath that matches the outline of a SKSpriteNode.
This would be useful in creating glows/outlines of SKSpriteNodes as well as a path for physics.
One thought I've had is to work from the texture with CIImage, but I haven't really worked with CIImage at all, so I don't know if there is a way to access/modify images on a pixel level.
Then maybe I would be able to port something like this to Objective-C :
http://www.sakri.net/blog/2009/05/28/detecting-edge-pixels-with-marching-squares-algorithm/
Also very open to other approaches that make this process automated as opposed to me creating shape paths for every sprite I make for physics or outline/glow effects.
What you're looking for is called a contour tracing algorithm. Moore neighbor tracing is popular and works well for images and tilemaps. But do check out the alternatives because they may better fit your purposes.
AFAIK marching squares and contour tracing are closely related, if not the same (class of) algorithms.
An implementation for tilemaps (to create physics shapes from tiles) is included in Kobold Kit. The body of the algorithm is in the traceContours method of KKTilemapLayerContourTracer.m.
It looks more complex than it really is; on the other hand, it takes a while to wrap your head around it because it is a "walking" algorithm, meaning the results of prior steps are used in the current step to make decisions.
The KK implementation also includes a few minor fixes specifically for tilemaps (i.e. two or more horizontally or vertically connected tiles become a single line instead of dividing the line into tile-sized segments). It was also created with a custom point array structure, and when I ported it to SK I decided it would be easier to continue with that and only convert the point arrays to CGPath objects at the end.
You can make certain optimizations if you can safely assume that the shape you're trying to trace won't touch the borders and that no tiles are connected only diagonally. All of this becomes clearer once you're actually implementing the algorithm for your own purposes.
But as far as a ready-made, fits-all-purposes solution goes: there ain't none.
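To make the walking idea concrete, here's a compact sketch of Moore-neighbor tracing over a binary alpha mask. It assumes you've already read the sprite's texture into a row-major Bool grid (true = opaque); that extraction step, and the more robust Jacob's stopping criterion, are left out for brevity.

```swift
import CoreGraphics

/// Traces the outer contour of the first solid region in `mask`
/// (row-major, mask[y][x] == true means opaque) and returns it as a CGPath.
func traceOutline(mask: [[Bool]]) -> CGPath? {
    let rows = mask.count
    guard rows > 0 else { return nil }
    let cols = mask[0].count
    func solid(_ x: Int, _ y: Int) -> Bool {
        x >= 0 && y >= 0 && x < cols && y < rows && mask[y][x]
    }

    // Scan top-to-bottom, left-to-right for a starting boundary pixel.
    var start: (x: Int, y: Int)?
    scan: for y in 0..<rows {
        for x in 0..<cols where solid(x, y) { start = (x, y); break scan }
    }
    guard let s = start else { return nil }

    // The 8 Moore neighbors in clockwise order (screen coordinates, y down).
    let nbr = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
    var boundary = [s]
    var current = s
    var dir = 5                 // west neighbor is background, so start just past it
    for _ in 0..<(rows * cols * 4) {
        var moved = false
        for i in 0..<8 {
            let f = (dir + i) % 8
            let next = (x: current.x + nbr[f].0, y: current.y + nbr[f].1)
            if solid(next.x, next.y) {
                current = next
                dir = (f + 6) % 8   // resume the search two steps back (clockwise)
                moved = true
                break
            }
        }
        if !moved { break }             // isolated single pixel
        if current == s { break }       // back at the start: contour closed
        boundary.append(current)
    }

    let path = CGMutablePath()
    path.move(to: CGPoint(x: boundary[0].x, y: boundary[0].y))
    for p in boundary.dropFirst() { path.addLine(to: CGPoint(x: p.x, y: p.y)) }
    path.closeSubpath()
    return path
}
```

From there you could feed the traced (ideally simplified) points to SKPhysicsBody(polygonFrom:) or an SKShapeNode for a glow outline, after scaling and recentering them into the sprite's coordinate space.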

How can I create a corner pin effect in XNA 4.0?

I am trying to write a strategy game using XNA 4.0 with a dynamically generated map, and it's really difficult to create all the ground textures, since I have to distort them individually in Photoshop.
So what I want to do is create a flat image and then apply the distortion programmatically to simulate perspective, by moving the corners of the image.
Here is an example done in Photoshop:
How can I do that in XNA?
My answer isn't XNA-specific, as I've never actually used the library; however, the concept should still apply.
In general, the best way to get a good perspective effect is to actually supply 3D coordinates and transformations and let DirectX/OpenGL handle the rest. This has great benefits over attempting to do it yourself: specifically, ease of use, performance (much of the work is offloaded to your graphics card), and perspective-correct texturing. And nothing's stopping you from mixing 3D and 2D in the same scene, if that's a concern. There are numerous tutorials online for getting set up in the third dimension with XNA; I'd suggest heading over to MSDN.
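Not XNA, but since the rest of this thread is Swift, here's the same concept expressed with SceneKit: texture a flat quad, tilt it in 3D, and the renderer produces the perspective-correct "corner pin" for free. The asset name is a placeholder.

```swift
import SceneKit

// A flat, textured quad standing in for one ground tile.
let tile = SCNPlane(width: 10, height: 10)
tile.firstMaterial?.diffuse.contents = "groundTile.png"   // placeholder asset

// Tilt it away from the camera; the GPU handles the perspective distortion.
let tileNode = SCNNode(geometry: tile)
tileNode.eulerAngles.x = -.pi / 3
```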

Creating a 3D effect from a 2D image

I have a random 2D image that I would like to present in 3D. This doesn't have to be very detailed; even the image arbitrarily broken into layers, like a pop-up cutout from a children's book, would do.
The goal is that a given image would look normal when viewed directly, but if the viewer were to move/tilt left, right, up, or down, there would be a 3D effect.
This is similar but not exactly the same as this question here:
How to create 3D streoscopic images using MATLAB with image tool?
This is complete overkill:
http://make3d.cs.cornell.edu/
And this is probably on the right track:
http://www.imagemagick.org/Usage/distorts/#perspective
My ideal implementation would be an automated PHP script using ImageMagick that is fed an image and spits out as a result either (in order of preference):
- Images representing each layer, from nearest to deepest (closer to the child's pop-up book analogy)
- 5 images representing the said views (direct, left, right, top, bottom)
Has this been done (either of the above ideal implementations), or does anyone know how to do all, or part, of this?
As far as the first part of your question is concerned, it sounds like your ideal implementation is http://make3d.cs.cornell.edu/, except that:
you want it simpler (return images from a fixed set of angles as opposed to a walkthrough)
you want it done with ImageMagick and PHP
I think that last restriction is unrealistic, because there's a fair amount of maths and computer vision behind this kind of problem. ImageMagick will help you with lower-level image processing tasks like affine transforms, but it doesn't really provide the required higher-level computer vision functionality like 3D image reconstruction.
So my advice would be to try and work around that restriction somehow. If you implement the approach using more suitable tools (like C++ and OpenCV, for example, or Matlab, as the Make3D guys did), then you can wrap that in a CGI application so your PHP scripts can access it. Cornell (the authors of Make3D) had a similar thing going a while back, but it looks like they're not doing it any more.
For the second part of your question, the theory behind what you want to do has been fairly well-researched. See here for a list of depth estimation papers. Here is what things look like in source.
