I have a game object.
I am increasing the size of the game object from game logic by setting its scale, but I am not able to change the size of the collision object's box to match. Is there an API reference document or a better way to achieve this?
Support for physics scaling was added today in Defold 1.2.170. Read more in the release notes: https://forum.defold.com/t/defold-1-2-170-has-been-released/65631
You need to check the Allow Dynamic Transforms option in game.project to enable this feature.
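With that option enabled, scaling the game object is enough; the physics engine picks up the new scale. A minimal Lua sketch, assuming a script attached to the game object (the 1.5 factor is just an example):

```lua
-- Requires "Allow Dynamic Transforms" to be checked under Physics in
-- game.project (Defold 1.2.170+). With it enabled, the collision shapes
-- follow the game object's scale automatically.
function grow(self)
    local s = go.get_scale()
    go.set_scale(s * 1.5)  -- sprite and collision box both scale up
end
```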
I have a kinematic body in my game that I switch to a dynamic body when it needs to jump, so the physics simulation can handle all the complexities of gravity. I do this by replacing the entire physics body of the node. However, doing this also resets all the customisation you can apply to physics bodies. Is there any way to change the physics body type of a node without creating a new one? Or at least a way that somehow “copies” all the values like restitution, angular damping, etc.?
What I’ve tried so far:
Changing the type property on a physics body, but the documentation says it's supposed to be a constant (even though it's a get-set property?). I don't want to go against the documentation, so I don't think this is the right way.
Experimenting with pointers and reflection to copy the values somehow. I started reading about Mirror yesterday, so I'm not sure whether functionality like this is even possible, but I am considering it.
Using the copy() function; however, I'd still have to manually copy all the small settings.
Please let me know if there’s a way to switch the physics body type while preserving its properties such as bitmasks, physics settings, etc.
Thanks
Since an SCNPhysicsBody consists of its own "constructed" geometry, you will need to recreate it each time you want to change it. But you could, for example, predefine all the physics bodies you need (dynamic, static, kinematic, including all properties like bitmasks, etc.) in a set of variables or constants, and then assign the one you want at the moment you need it.
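Alternatively, you can write a small helper that builds the new body and carries the old one's tweakable values across. A sketch, assuming the property list below covers what you customise (extend it as needed; the function name is made up):

```swift
import SceneKit

// Build a fresh body of a new type while copying over the tweakable
// properties of the existing one. SCNPhysicsBody's `type` is documented as
// constant, so recreating the body is the supported route.
func remade(_ old: SCNPhysicsBody, as type: SCNPhysicsBodyType,
            shape: SCNPhysicsShape?) -> SCNPhysicsBody {
    let new = SCNPhysicsBody(type: type, shape: shape)
    new.mass = old.mass
    new.restitution = old.restitution
    new.friction = old.friction
    new.rollingFriction = old.rollingFriction
    new.damping = old.damping
    new.angularDamping = old.angularDamping
    new.categoryBitMask = old.categoryBitMask
    new.collisionBitMask = old.collisionBitMask
    new.contactTestBitMask = old.contactTestBitMask
    return new
}

// usage: switch a node from kinematic to dynamic before the jump
// node.physicsBody = remade(node.physicsBody!, as: .dynamic,
//                           shape: node.physicsBody!.physicsShape)
```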
I am currently playing a bit with ARKit. My goal is to detect a shelf and draw stuff onto it.
I already found ARReferenceImage, and that basically works for a very, very simple prototype, but it seems the image needs to be quite complex? Xcode always complains if I try to use something much simpler (like a QR-code-style image). With that marker I would know the position of an edge, and then I'd know the physical size of my shelf and how to place stuff into it. So that would be OK, but I think small and simple markers will not work, right?
But ideally I would not need a marker at all.
I know that I can detect e.g. planes, but I want to detect the shelf itself. However, since my shelf is open, it's not really a plane. Are there other possibilities for finding an object using ARKit?
I know that my question is very vague, but maybe somebody could point me in the right direction. Or tell me if that's even possible with ARKit or if I need other tools? Like Unity?
There are several different possibilities for positioning content in augmented reality. They are called content anchors, and they are all subclasses of the ARAnchor class.
Image anchor
Using an image anchor, you would stick your reference image on a pre-determined spot on the shelf and position your 3D content relative to it.
the image needs to be quite complex it seems? Xcode always complains if I try to use something a lot simpler (like a QR-Code like image)
That's correct. The image needs to have enough visual detail for ARKit to track it. Something like a simple black and white checkerboard pattern doesn't work very well. A complex image does.
Object anchor
Using object anchors, you scan the shape of a 3D object ahead of time and bundle this data file with your app. When a user uses the app, ARKit will try to recognise this object and if it does, you can position your 3D content relative to it. Apple has some sample code for this if you want to try it out quickly.
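Configuring object detection mirrors the image case. A sketch, assuming the scanned .arobject files live in an AR Resource Group named "ShelfObjects" (the name is an assumption):

```swift
import ARKit

// World tracking with detection of pre-scanned 3D objects.
let configuration = ARWorldTrackingConfiguration()
configuration.detectionObjects = ARReferenceObject.referenceObjects(
    inGroupNamed: "ShelfObjects", bundle: nil) ?? []
sceneView.session.run(configuration)
// When ARKit recognises the scanned object it delivers an ARObjectAnchor,
// and you attach your content to that anchor's node, as with images.
```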
Manually creating an anchor
Another option would be to enable ARKit plane detection, and have the user tap a point on the horizontal shelf. Then you perform a raycast to get the 3D coordinate of this point.
You can create an ARAnchor object using this coordinate, and add it to the ARSession.
Then you can again position your content relative to the anchor.
You could also implement a drag gesture to let the user fine-tune the position along the shelf's plane.
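The tap-to-place flow above can be sketched as follows, assuming an ARSCNView named `sceneView` with plane detection already enabled (the gesture handler and anchor name are placeholders):

```swift
import ARKit

// Raycast from the tap location against detected horizontal planes and
// drop an ARAnchor at the hit point.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: sceneView)
    guard let query = sceneView.raycastQuery(from: point,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .horizontal),
          let result = sceneView.session.raycast(query).first else { return }
    let anchor = ARAnchor(name: "shelfPoint", transform: result.worldTransform)
    sceneView.session.add(anchor: anchor)
    // then position content relative to this anchor in renderer(_:didAdd:for:)
}
```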
Conclusion
Which one of these placement options is best for you depends on the use case of your app. I hope this answer was useful :)
References
There are a lot of informative WWDC videos about ARKit. You could start off by watching this one: https://developer.apple.com/videos/play/wwdc2018/610
It is absolutely possible. Whether you do this in Swift or in Unity depends entirely on what you are comfortable working in.
ARKit calls them object anchors: https://developer.apple.com/documentation/arkit/arobjectanchor. In other implementations they are often called mesh or model targets.
This YouTube video shows what you want to do in Swift.
But objects like a shelf might be hard to recognize, since their contents often change.
I want to place an AR object (a chair) onto another AR object (a platform).
Let's say the chair is 3x3 and the platform is 6x6 on the horizontal plane. Is this possible? If yes, in which of the following: 1. ARCore, 2. ARKit, 3. VIRO React?
I know AR detects real-world planes and we can place objects onto them. I have also seen videos of apps where, in ARCore, objects interact with each other.
It is possible, but you do not stack one AR object directly on top of another. You have to manually create a new type and then stack instances of that type on one another.
An example:
You spawn a Platform class instance and then detect the platform in a similar way to how you detect a plane; you can use a raycast for this. Once you have detected the platform, you offset the chair from the platform and spawn it on top of it.
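In ARKit/SceneKit terms, this could look like the sketch below: a scene-graph hit test against the already-placed platform node (rather than a plane raycast), then spawning the chair offset upward. `platformNode` and `makeChairNode()` are assumed helpers, and the 0.01 m offset is arbitrary:

```swift
import ARKit
import SceneKit

// Detect taps on the placed platform node and stack the chair on top of it.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: sceneView)
    // Restrict the hit test to the platform's subtree.
    let hits = sceneView.hitTest(point, options: [.rootNode: platformNode])
    guard let hit = hits.first else { return }
    let chair = makeChairNode()
    var position = hit.worldCoordinates
    position.y += 0.01            // small offset so the chair rests on top
    chair.worldPosition = position
    sceneView.scene.rootNode.addChildNode(chair)
}
```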
I am working on a project that will display objects below the ground using AR Quick Look. However, the AR mode seems to bring everything above the ground based on the bounding box of the objects in the scene.
I have tried using the USDZ directly and composing a simple scene in Reality Composer with the object, or with a simple cube, with exactly the same result. The AR preview mode in Reality Composer shows the object below the ground, or below a (horizontal) image anchor, correctly. However, if I export the scene as a .reality file and open it using AR Quick Look, it brings the object above the ground as well.
Is there a way to achieve showing an object below the detected horizontal plane or image (horizontal) using AR Quick Look?
This is still an issue a year later. I have submitted feedback to Apple, and I suggest you do too. I have suggested adding a checkbox to keep the Y axis persistent. My assumption is that it behaves this way to prevent the object from colliding with the ground, but I don't think that's necessary. It's just a limitation right now.
I am trying to rewrite a simple game (developed by me with Cocos2d+Box2d some time ago) using the Sprite Kit framework. Everything looks much simpler in Sprite Kit, which is great, but I have a problem adjusting the physics world parameters in the new project.

I have noticed that sprites created using exactly the same graphic images (all with basic rect-based bodies) have four times lower mass in Sprite Kit than they had in Cocos2d+Box2d. Setting the bodies' density to 4 solves that, but unfortunately mass is not the main issue: the same 4x multiplier seems to apply to all forces in the physics world. In a test I created a body in Sprite Kit with four times higher mass than in Cocos2d+Box2d and also set the world gravity four times lower; as a result the physics in both projects (one using Cocos2d+Box2d, the other Sprite Kit) behaved similarly.

I can't find anything like PIXEL_TO_METER_RATIO (which was available in Box2d) in Sprite Kit. Is there any option that adjusts the physics world in Sprite Kit to behave like Cocos2d+Box2d without multiplying all the forces, masses, etc.? If I keep the same values for gravity, mass, and forces in Sprite Kit that I was using in Cocos2d+Box2d, everything in the game is simulated too fast. My question is: how should I deal with problems like this when migrating from Cocos2d with Box2d to the Sprite Kit framework?
The only solution is to re-tweak the forces and other settings until things feel right.
Internally Sprite Kit uses Box2D, but we have no way of knowing if and how Apple may have modified it. What is known is that they use different default settings for the Box2D world, which means physics values cannot be ported as-is with the expectation of the same results.
I believe this was discussed in the developer forum (under Sprite Kit), where someone investigated the actual numbers for the changed settings. Note that these are settings in Box2D's code that most users won't even consider modifying, so we have to assume Apple had their reasons to change them in the first place.
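The workaround the question itself describes can be sketched like this. The factor 4 comes from the poster's own measurements, not from any documented constant, and "crate" is a placeholder texture name:

```swift
import SpriteKit

// Compensate for the roughly 4x difference observed when porting from
// Cocos2d+Box2d: raise density by the factor and lower gravity by it.
let compensation: CGFloat = 4.0

let scene = SKScene(size: CGSize(width: 1024, height: 768))
scene.physicsWorld.gravity = CGVector(dx: 0, dy: -9.8 / compensation)

let sprite = SKSpriteNode(imageNamed: "crate")
sprite.physicsBody = SKPhysicsBody(rectangleOf: sprite.size)
sprite.physicsBody?.density = compensation   // restores the old effective mass
scene.addChild(sprite)
```

Forces applied with applyForce(_:) would need the same factor applied; there is no global pixel-to-meter knob exposed by Sprite Kit to do this once.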