I'm working on a project based on Goblin XNA. You can get an idea of what I want to do by checking this image: http://img97.imageshack.us/i/markerdetectioncopy.jpg/
I managed to retrieve finger coordinates using OpenCV and pass them to XNA. What I want to do is check whether there is a collision between the finger coordinates and the object generated by Goblin.
I would really appreciate it if anyone could give me some guidance on this issue.
Thanks
First you need the 2D screen coordinates of the 3D object (using the Viewport.Project function; Unproject goes the other way, from screen to world). Then you only have to do a simple circle collision between the two points.
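A minimal C# sketch of that check, assuming you already have the camera's view and projection matrices and the OpenCV finger position in screen pixels (names such as hitRadius are illustrative):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Sketch: project the object's world position to screen space, then do
    // a circle test against the finger position coming from OpenCV.
    static bool FingerTouchesObject(Viewport viewport, Vector3 objectWorldPos,
        Matrix view, Matrix projection, Vector2 fingerScreenPos, float hitRadius)
    {
        // Viewport.Project maps world space -> screen space (pixels).
        Vector3 screen = viewport.Project(objectWorldPos, projection, view, Matrix.Identity);

        // Ignore positions outside the 0..1 depth range (e.g. behind the camera).
        if (screen.Z < 0f || screen.Z > 1f)
            return false;

        // Simple circle collision between the two 2D points.
        float dx = screen.X - fingerScreenPos.X;
        float dy = screen.Y - fingerScreenPos.Y;
        return dx * dx + dy * dy <= hitRadius * hitRadius;
    }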
Thanks in advance for reading my question. I am really new to ARKit and have followed several tutorials that showed me how to use plane detection and apply different textures to the planes. The feature is really amazing, but here is my question: would it be possible for the player to place the plane all over the desired area first and then interact with the new ground? For example, could I use plane detection to put a grass texture over an area and then drive a real RC car over it, just like driving it on real grass?
I have tried out plane detection on my iPhone 6s, but what I found is that when I put anything from the real world on top of the plane surface, it simply got covered by the plane. Could you please give me some clue as to whether it is possible to make the plane stay on the ground without covering real-world objects?
I think this is what you are searching for:
ARKit hide objects behind walls
Another way, I think, is to track the position of the real-world object, for example with Apple's Turi Create or Core ML (or both), and then avoid drawing your content at the affected position.
Tracking moving objects is not supported, and that's actually what would be needed to make a real object interact with a virtual one.
That said, I would recommend using 2D image recognition to "read" every camera frame and detect the object while it moves in the camera's view space. Look for the AVCaptureVideoDataOutputSampleBufferDelegate protocol on Apple's developer site; a rough sketch follows below.
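Here is that per-frame hook, written as a C# sketch against the Xamarin.iOS binding of the same AVFoundation API (in native Swift you would adopt the protocol directly); the detector call is a hypothetical placeholder:

    using AVFoundation;
    using CoreMedia;

    // Sketch: receive every camera frame so a 2D detector can locate the
    // real-world object in view space.
    class FrameReader : AVCaptureVideoDataOutputSampleBufferDelegate
    {
        public override void DidOutputSampleBuffer(AVCaptureOutput captureOutput,
            CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
        {
            // Run 2D object detection on the frame here, then skip drawing
            // virtual content over the detected region.
            // DetectObject(sampleBuffer); // hypothetical detector hook
            sampleBuffer.Dispose(); // release the buffer so capture doesn't stall
        }
    }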
Share your code and I can help with some ideas.
I am creating points on a detected plane, but sometimes the plane is not tracked anymore (after fast movement, for example) and a hitTest may not return a hit on this plane.
ARKit will return a hit result for every known plane; could ARCore do the same?
Apparently, this notion exists in the Unreal integration (EGoogleARCoreLineTraceChannel::InfinitePlane); could it be made available in the Java API?
Also, to work around this problem, I do a manual ray cast, and for some reason there is a really small offset between my computed position and the hitTest result.
A screen-to-world coordinates API would help to make sure no bias is introduced there. Is that possible?
Thanks in advance for the help!
Julien.
As a current alternative, I used the code from Jonas Jongejan and Dan Moore's AR Drawing to get the right ray origin, and it is working much better.
The secret was to generate two points from the screen point (one on the near plane, one in front of it) and start the ray at touchRay.direction.scale(AppSettings.getStrokeDrawDistance()). I now have a very accurate match between my manual ray cast and the result of the hitTest.
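For reference, the underlying near/far unprojection looks roughly like this; a C# sketch using System.Numerics (row-vector convention, 0–1 NDC depth), whereas the AR Drawing code does the equivalent in Java with its own matrix helpers:

    using System.Numerics;

    // Sketch: unproject two points (near and far plane) under the screen
    // point and build a world-space ray through them.
    static (Vector3 origin, Vector3 direction) ScreenPointToRay(
        float px, float py, float screenW, float screenH,
        Matrix4x4 view, Matrix4x4 projection)
    {
        // Pixel -> normalized device coordinates; screen Y grows downward.
        float ndcX = 2f * px / screenW - 1f;
        float ndcY = 1f - 2f * py / screenH;

        Matrix4x4.Invert(view * projection, out Matrix4x4 invViewProj);

        Vector3 nearPoint = Unproject(new Vector3(ndcX, ndcY, 0f), invViewProj);
        Vector3 farPoint  = Unproject(new Vector3(ndcX, ndcY, 1f), invViewProj);

        // AR Drawing then offsets the ray start along this direction
        // (touchRay.direction.scale(strokeDrawDistance)).
        return (nearPoint, Vector3.Normalize(farPoint - nearPoint));
    }

    static Vector3 Unproject(Vector3 ndc, Matrix4x4 invViewProj)
    {
        Vector4 p = Vector4.Transform(new Vector4(ndc, 1f), invViewProj);
        return new Vector3(p.X, p.Y, p.Z) / p.W; // perspective divide
    }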
I am working on making an iOS game where a player can flick their finger to throw an object. After the object is thrown, I want to animate the trajectory of that object, like in the linked image below. I also want to allow the player to rotate around the thrown object in 3D (think of someone throwing a baseball, or a golfer hitting a ball, and seeing the arc the ball makes in 3D).
My question is what would be the best way of accomplishing this?
I have looked into OpenGL ES 2.0, Core Animation, third-party frameworks like Cocos3D, etc., but I am not sure how low-level I need to go to accomplish this.
Any guidance would be greatly appreciated!
trajectory Image
You can't use Core Animation. Core Animation is 2.5D, not 3D: it allows you to animate flat image tiles in 3-D space. You'll need to use OpenGL, or maybe SpriteKit, or one of the other game APIs like Unity.
Regardless of which framework you choose, you'll need to understand 3-D transformation matrices and display frustums.
Stack Overflow is not a good site for "which framework is the best" type questions, so I'm not going to try to answer that part of the question.
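Whichever framework you choose, the arc itself is just projectile motion sampled over time; the renderer then draws a line strip through the points. A minimal C# sketch (gravity and step count are illustrative):

    using System.Numerics;

    // Sketch: sample points along a ballistic trajectory so the renderer
    // can draw them as a line strip in 3D.
    static Vector3[] SampleTrajectory(Vector3 start, Vector3 velocity,
        float duration, int steps)
    {
        var gravity = new Vector3(0f, -9.81f, 0f);
        var points = new Vector3[steps + 1];

        for (int i = 0; i <= steps; i++)
        {
            float t = duration * i / steps;
            // Classic kinematics: p(t) = p0 + v0*t + 0.5*g*t^2
            points[i] = start + velocity * t + 0.5f * gravity * (t * t);
        }
        return points;
    }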
We are doing a virtual dressing room project, and we can already map .png image files onto body joints. But we need to map 3D clothes to the body, and the final output should be a person (a real person, not an avatar) wearing 3D clothes in a live video output. We don't know how to do this; any help is much appreciated.
Thanks in advance.
Answer for user3588017: it's a bit lengthy, but I'll try to explain how to get your project done. First, play with XNA's basic tutorials if you are totally new to 3D gaming. In my project I only focused on bending hands down. For this you need to create a 3D model in Blender and export it for XNA. I used this link as a start (the very beginning). This is actually the hardest part; after it you will know how to animate your model using maths. Then you need to map the Kinect data as (x, y, z) to the parts of your model.
E.g., map the model's rotation to the Kinect skeleton's rotation. For this I used simple calculations, like measuring the depth of the two shoulders, calculating the rotation angle, and applying it to my cloth model, as sketched below.
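A minimal C# sketch of that angle calculation (the joint coordinates come from the Kinect skeleton; names are illustrative):

    using System;

    // Sketch: estimate torso yaw from the depth difference between the two
    // shoulder joints reported by the Kinect.
    static float TorsoYawRadians(float leftX, float leftZ, float rightX, float rightZ)
    {
        float dx = rightX - leftX; // horizontal distance between the shoulders
        float dz = rightZ - leftZ; // depth difference between the shoulders
        return (float)Math.Atan2(dz, dx); // rotation to apply to the cloth model
    }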
I want to make a plane which covers a 360° angle, with my device camera staying inside this 360° plane. I also want to be able to show some textures on it. When the user moves the camera toward the right, the plane should move toward the left, and vice versa. What I actually want to achieve: I am making a geocaching app in Unity, and I will have to show 2D and 3D objects with the help of the compass. How can I achieve this in Unity? If someone has any ideas, please share them with me. Thanks in advance!
You will need to flip the direction of the face normals of your cylinder to make the texture appear on the inside of the cylinder. Have a look at this Blender model to get an idea. In Blender (the only modelling software I know), you have to select all faces in Edit Mode and then click Flip Direction in the tool shelf. Then you can proceed with UV mapping as usual.
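If you would rather not round-trip through Blender, the same flip can be done at runtime in Unity; a minimal sketch, assuming the cylinder has a MeshFilter (attach this script to it):

    using UnityEngine;

    // Sketch: invert a mesh so its faces (and texture) render from inside.
    public class FlipMeshInside : MonoBehaviour
    {
        void Start()
        {
            Mesh mesh = GetComponent<MeshFilter>().mesh;

            // Reverse the winding order of each triangle so faces point inward.
            int[] tris = mesh.triangles;
            for (int i = 0; i < tris.Length; i += 3)
            {
                int tmp = tris[i];
                tris[i] = tris[i + 2];
                tris[i + 2] = tmp;
            }
            mesh.triangles = tris;

            // Flip the normals to match the new winding.
            Vector3[] normals = mesh.normals;
            for (int i = 0; i < normals.Length; i++)
                normals[i] = -normals[i];
            mesh.normals = normals;
        }
    }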