I am trying to make a terrain map for a 3D game in SceneKit. I have tried a few options, starting with using MapKit as the floor, but it didn't work out very well. I am not sure what the proper way of doing this is. I have seen a few search results that talk about procedural generation, but none with SceneKit.
I am reaching out to you guys to point me in the right direction. I need to know how to start a project that renders a randomly generated terrain. What kind of components should I have in my scene to do this? I am not after fully functional code, just some established ideas on how this is normally done. I would also like to know what kind of resources I need to generate a map like that. For 2D I can use tile maps, which put tile images together randomly, but how is this done in SceneKit?
I am currently playing a bit with ARKit. My goal is to detect a shelf and draw stuff onto it.
I did already find ARReferenceImage, and that basically works for a very, very simple prototype, but it seems the image needs to be quite complex? Xcode always complains if I try to use something much simpler (like a QR-code-style image). With that marker I would know the position of an edge, and from that the physical size of my shelf and how to place stuff onto it. So that would be OK, but I think small and simple markers will not work, right?
But ideally I would not need a marker at all.
I know that I can detect planes, for example, but I want to detect the shelf itself. As my shelf is open, it's not really a plane. Are there other ways to find an object using ARKit?
I know that my question is very vague, but maybe somebody could point me in the right direction, or tell me whether this is even possible with ARKit or whether I need other tools, like Unity?
There are several different possibilities for positioning content in augmented reality. They are called content anchors, and they are all subclasses of the ARAnchor class.
Image anchor
Using an image anchor, you would stick your reference image on a pre-determined spot on the shelf and position your 3D content relative to it.
"it seems the image needs to be quite complex? Xcode always complains if I try to use something much simpler (like a QR-code-style image)"
That's correct. The image needs enough visual detail for ARKit to track it. Something like a simple black-and-white checkerboard pattern doesn't work very well; a complex image does.
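As a rough sketch of how this looks in code (assuming an ARSCNView-based app; the "ShelfMarkers" asset catalog group name and the placed box are just placeholders):

    import UIKit
    import ARKit
    import SceneKit

    class ShelfViewController: UIViewController, ARSCNViewDelegate {
        @IBOutlet var sceneView: ARSCNView!

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            sceneView.delegate = self

            let configuration = ARWorldTrackingConfiguration()
            // Load the reference images bundled in the asset catalog.
            guard let markers = ARReferenceImage.referenceImages(
                inGroupNamed: "ShelfMarkers", bundle: nil) else { return }
            configuration.detectionImages = markers
            sceneView.session.run(configuration)
        }

        // ARKit calls this when it adds a node for a newly detected anchor.
        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            guard anchor is ARImageAnchor else { return }
            // Position your 3D content relative to the marker's node.
            let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                               length: 0.1, chamferRadius: 0))
            box.position = SCNVector3(0, 0.05, 0) // 5 cm out of the image plane
            node.addChildNode(box)
        }
    }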
Object anchor
Using object anchors, you scan the shape of a 3D object ahead of time and bundle the resulting data file with your app. When a user runs the app, ARKit tries to recognise this object, and if it does, you can position your 3D content relative to it. Apple has some sample code for this if you want to try it out quickly.
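A minimal configuration could look like this (the "ScannedObjects" group name is a placeholder for wherever you put the scanned .arobject files):

    import ARKit

    func runObjectDetection(on sceneView: ARSCNView) {
        let configuration = ARWorldTrackingConfiguration()
        // Reference objects produced by Apple's object scanning sample app.
        configuration.detectionObjects =
            ARReferenceObject.referenceObjects(inGroupNamed: "ScannedObjects", bundle: nil) ?? []
        sceneView.session.run(configuration)
    }

In the delegate you then check for ARObjectAnchor instead of ARImageAnchor and attach your content to its node, exactly as in the image anchor example above.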
Manually creating an anchor
Another option would be to enable ARKit plane detection and have the user tap a point on the horizontal shelf. You then perform a raycast to get the 3D coordinate of that point.
You can create an ARAnchor object using this coordinate, and add it to the ARSession.
Then you can again position your content relative to the anchor.
You could also implement a drag gesture to let the user fine-tune the position along the shelf's plane.
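A minimal sketch of that tap-to-place flow (assuming sceneView is your ARSCNView, plane detection is enabled, and a tap gesture recogniser is wired to this action):

    import ARKit

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)

        // Raycast from the tapped screen point onto detected horizontal planes.
        guard let query = sceneView.raycastQuery(from: point,
                                                 allowing: .existingPlaneGeometry,
                                                 alignment: .horizontal),
              let result = sceneView.session.raycast(query).first else { return }

        // Create an anchor at the hit location and register it with the session.
        let anchor = ARAnchor(transform: result.worldTransform)
        sceneView.session.add(anchor: anchor)
        // Position your content in renderer(_:didAdd:for:), as with the other anchors.
    }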
Conclusion
Which one of these placement options is best for you depends on the use case of your app. I hope this answer was useful :)
References
There are a lot of informative WWDC videos about ARKit. You could start off by watching this one: https://developer.apple.com/videos/play/wwdc2018/610
It is absolutely possible. Whether you do this in Swift or Unity depends entirely on what you are comfortable working in.
ARKit calls them object anchors (https://developer.apple.com/documentation/arkit/arobjectanchor). In other implementations they are often called mesh or model targets.
This YouTube video shows what you want to do in Swift.
But objects like a shelf might be hard to recognize, since their contents often change.
I am working on a project that will display objects below the ground using AR Quick Look. However, the AR mode seems to bring everything above the ground based on the bounding box of the objects in the scene.
I have tried using the USDZ directly and composing a simple scene in Reality Composer, with the object or with a simple cube, with exactly the same result. AR preview mode in Reality Composer shows the object below the ground, or below an image anchor, correctly. However, if I export the scene as a .reality file and open it using AR Quick Look, it brings the object above the ground as well.
Is there a way to show an object below the detected horizontal plane or (horizontal) image using AR Quick Look?
This is still an issue a year later. I have submitted feedback to Apple, and I suggest you do too. I have suggested adding a checkbox to keep the Y axis persistent. My assumption is that it behaves this way to prevent the object from colliding with the ground, but I don't think that's necessary. It's just a limitation right now.
I'm trying to find the best strategy to align an SCNScene to a physical table, just like the ARKit app WWWFreeRivers.
Currently I'm just testing by mapping a simple plane model with the same dimensions as the table. If I draw out the plane that ARKit detects, I can see that it is not very accurate at the edges; it always extends past them (image below).
So I can't really rely on that plane to simply place the model at its center. The model is not rotated correctly either (image below).
I had another idea: use the ARReferenceImage technique, take a picture of the table-top texture, and let ARKit find and match this "image" of the table. But even with the wood-grain texture, there wasn't enough data for ARKit to recognize it. And ARKit just fails in that case; it doesn't even attempt a bad match.
How can I go about doing this?
Ideas I've had so far:
Take an image of the table and use the ARReferenceImage feature to match it. This didn't work. Maybe it would if I added some more distinctive feature points to the table, like QR codes in the corners.
Detect the plane, then have the user tap the four corners of the table to map out a square, and use that.
Do as the WWW app does: just place the object randomly on the plane, then let the user scale, move, and rotate the model into the correct placement.
Any more ideas? What do you think will be the best approach to this?
Two options I can think of that you could use:
You could create an ARWorldMap (iOS 12+ only) and use it instead of the ARReferenceImage: walk around the area while creating a map that subsequent ARKit sessions will remember. You can experiment a bit with how to fit your models within the four corners of the table (this is slightly tedious without much help from the SceneKit editor). However, when you load the saved ARWorldMap and localize against it (just like an ARReferenceImage), your model should fit within the four corners of the table every time. A minimal sketch of the save/restore flow follows at the end of this answer.
If you use something like Unity (and its ARKit plugin), you get much more powerful editor tools (a 3D viewer/designer). There are tools that can help you save a map just like ARWorldMap and then bring the details of that map into the editor, so you can line things up easily. Placenote's Spatial Capture toolkit can help here. Placenote (iOS 11+) creates its own "world map", but it exposes the visual details in the Unity editor, making it easier to line things up and then localize against (Example). The map is also stored on a managed cloud from the get-go, which makes sharing across phones much easier.
P.S.: Both these options require you to keep the environment generally static (no large lighting changes, etc.), though this is a similar constraint to using an ARReferenceImage.
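For the first option, here is a minimal sketch of saving and restoring a world map (persisting the map to disk is elided; the function names are placeholders):

    import ARKit

    func saveWorldMap(from session: ARSession, completion: @escaping (ARWorldMap?) -> Void) {
        session.getCurrentWorldMap { worldMap, _ in
            // worldMap is nil if the session hasn't mapped enough of the area yet.
            completion(worldMap)
        }
    }

    func restore(_ worldMap: ARWorldMap, in session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.initialWorldMap = worldMap
        session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
        // Once ARKit relocalises against the map, any anchors it contains
        // (e.g. ones you placed at the table corners) reappear in place.
    }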
I've just cloned the three.js project from github. I'm interested in creating a circle on a 2d plane that I can drag with my mouse. I have no experience in graphics programming (WebGL or OpenGL).
Can someone please point me in the right direction? I've tried google, but the examples seem too complicated.
Many thanks in advance,
I think this is something you need to have a go at yourself. It's not nearly as complicated as you might think. If the maths is scaring you off, don't worry: three.js handles it all for you. You just need to add a camera and some shapes, and ask it to render them.
Please do take the time to go through Aerotwist's three.js tutorials, as these will give you a good grounding in how to set up a basic scene.
You will quickly realise that once you have a scene, you can change the objects in it quite easily.
As for dragging things around, I'm sure that will come; just try to walk before you run.
I am a newbie with Cocos2D, Chipmunk, and Box2D.
I have gone through the basic docs and started developing games.
Currently I am working with Chipmunk.
I am stuck at a few points, which are as follows.
In my application, there is a player who kicks a soccer ball, and the ball moves a distance according to the force applied by the player's kick.
I am confused about:
How do I make the player's whole body static while keeping one leg movable, so that it can kick the ball?
How do I calculate force, vectors, distance, etc.?
How do I move to the next screen if the ball goes out of the current screen?
Please also let me know of a URL where I can easily find all kinds of examples for Chipmunk applications.
First off, you should learn Chipmunk itself and then try to solve your problem. I see a lot of people just wanting their problem to go away without actually making an effort to solve it. Here are some Google results on Chipmunk tutorials:
https://www.google.co.cr/webhp?sourceid=chrome-instant&ix=sea&ie=UTF-8&ion=1#sclient=psy-ab&hl=en&site=webhp&source=hp&q=chipmunk%20tutorials&oq=&aq=&aqi=&aql=&gs_l=&pbx=1&fp=37838802d5e34660&ix=sea&ion=1&bav=on.2,or.r_gc.r_pw.,cf.osb&biw=1680&bih=882
About the 3 questions:
If you learn Chipmunk or Box2D, you can easily test different settings, from static bodies to joints to density. Depending on what you want to do, the solution differs, so I suggest you look into that.
You can use several functions on each body you register. For Box2D, you'd use body->GetAngle(), body->GetLinearVelocity(), and body->GetPosition(). With these three functions you can calculate force, direction, and distance for every object. I'm pretty sure Chipmunk has equivalents.
Really? Something like: if (!CGRectContainsPoint(screenBounds, ball.position)) nextLevel(); where ball.position is the ball sprite's position and screenBounds is the visible screen rect.