I want to make a plane covering a full 360 degrees, with my device's camera remaining inside this 360-degree plane. I also need to be able to show some textures on it. When the user moves the camera toward the right, the plane should move toward the left, and vice versa. What I actually want to achieve: I am making a geocaching app in Unity, and I will have to show 2D and 3D objects with the help of the compass. How can I achieve this in Unity? If anyone has any ideas, please share. Thanks in advance!
You will need to flip the direction of the face normals of your cylinder to make the texture appear on the inside of the cylinder. Have a look at this Blender model to get an idea. In Blender (the only modelling software I know) you have to select all faces in edit mode and then click Flip Direction in the tool shelf. Then you can proceed with UV mapping as usual.
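If you would rather flip the cylinder inside-out at runtime instead of in Blender, reversing each triangle's winding order has the same effect. Here is a minimal, hedged sketch; `MeshUtil` and `FlipWinding` are names I made up, and in Unity you would feed `mesh.triangles` through this, write the result back, and call `mesh.RecalculateNormals()`:

```csharp
using System;

public static class MeshUtil
{
    // Reverses the winding order of each triangle so its face points the
    // other way (inward for a cylinder you stand inside of). In Unity you
    // would apply this to mesh.triangles and then call
    // mesh.RecalculateNormals() -- hypothetical usage, adapt to your setup.
    public static int[] FlipWinding(int[] triangles)
    {
        if (triangles.Length % 3 != 0)
            throw new ArgumentException("triangle index count must be a multiple of 3");
        var flipped = (int[])triangles.Clone();
        for (int i = 0; i < flipped.Length; i += 3)
        {
            // Swapping any two indices of a triangle reverses its winding.
            int tmp = flipped[i + 1];
            flipped[i + 1] = flipped[i + 2];
            flipped[i + 2] = tmp;
        }
        return flipped;
    }
}
```

This only changes which side is rendered; the UV coordinates carry over unchanged, so a texture mapped in Blender will still line up.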
I'm building a 3D app that uses SceneKit. My scene will have various 3D objects and a moveable perspective camera.
The user can load a 2D image into the scene, which I will display on a 3D plane using the image as the material.
What I need to be able to do is to initially show the image as if it were actually 2D, where the pixel width and height are the same as the image and it is not distorted by the camera perspective. So basically I need to know how to position that plane in relation to the camera to make it look 2D.
Thanks in advance for any tips :)
It's not clear where you are getting stuck.
If you're just looking for where to start, look at SCNPlane and SCNBillboardConstraint.
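As for making the plane "look 2D", the underlying math is simple: pick the plane's world-unit size to match the image's pixel size, then place it at the distance where the perspective projection maps one world unit to one screen pixel. A sketch of that distance calculation (the class and method names here are illustrative, not part of SceneKit):

```csharp
using System;

public static class BillboardMath
{
    // Distance at which a plane whose world height equals the image's pixel
    // height projects 1:1 onto the screen. Derivation: with vertical field of
    // view fov, projectedPx = worldHeight * (viewportPx / 2) / (d * tan(fov / 2)).
    // Setting projectedPx = worldHeight = imageHeightPx and solving for d gives
    // d = viewportPx / (2 * tan(fov / 2)), independent of the image size.
    public static double PixelPerfectDistance(double viewportHeightPx, double verticalFovRadians)
    {
        return viewportHeightPx / (2.0 * Math.Tan(verticalFovRadians / 2.0));
    }
}
```

Combine this with a billboard constraint so the plane always faces the camera, and the image is undistorted at its native resolution.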
I do not know SceneKit, so what I suggest might not be doable there, but could you perhaps use an orthographic camera projection? It removes a lot of the visual depth from a scene; combining that with some flat lighting might accomplish the look you are going for.
Thanks in advance for reading my question. I am really new to ARKit and have followed several tutorials that showed me how to use plane detection and apply different textures to the planes. The feature is really amazing, but here is my question: would it be possible for the player to place planes over the whole desired area first and then interact with the new ground? For example, could I use plane detection to put a grass texture over an area and then drive a real RC car over it, just like driving it on real grass?
I have tried out plane detection on my iPhone 6s, and what I found is that when I put a real-world object on top of the plane surface, it simply gets covered by the plane. Could you please give me a clue as to whether it is possible to make the plane stay on the ground without covering real-world objects?
I think this is what you are searching for:
ARKit hide objects behind walls
Another way, I think, is to track the position of the real-world object, for example with Apple's Turi Create or Core ML (or both), and then avoid drawing your content at the affected position.
Tracking moving objects is not supported, and that is exactly what would be needed to make a real object interact with a virtual one.
That said, I would recommend using 2D image recognition and "reading" every camera frame to detect the object as it moves through the camera's view. Look for the AVCaptureVideoDataOutputSampleBufferDelegate protocol on Apple's developer site.
Share your code and I could help with some ideas
I am working on making an iOS game where a player can flick their finger to throw an object. After the object is thrown, I want to animate the trajectory of that object like in the linked image below. I also want to allow the player to rotate around the thrown object image in 3D. (think of someone throwing a baseball or a golfer hitting a ball and seeing the arc the ball makes in 3D)
My question is what would be the best way of accomplishing this?
I have looked into OpenGL ES 2.0, Core Animation, 3rd-party frameworks like Cocos3D, etc. But I am not sure how low-level I need to go to accomplish this.
Any guidance would be greatly appreciated!
trajectory Image
You can't use Core Animation. Core Animation is 2.5D, not 3D: it allows you to animate flat image tiles in 3D space. You'll need to use OpenGL, or maybe SpriteKit or one of the other game APIs like Unity.
Regardless of which framework you choose, you'll need to understand 3D transformation matrices and view frustums.
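Whichever framework you pick, the arc itself is plain projectile motion, p(t) = p0 + v0·t + ½·g·t². A small framework-agnostic sketch (the `Vec3` and `Trajectory` names are hypothetical) that samples the path into points you could feed to a line renderer:

```csharp
using System;

public struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
}

public static class Trajectory
{
    // Samples the ballistic arc p(t) = p0 + v0*t - 0.5*g*t^2 (gravity pulling
    // down the Y axis) at evenly spaced times, producing a polyline you can
    // render and orbit around with a 3D camera.
    public static Vec3[] Sample(Vec3 p0, Vec3 v0, double gravity, double flightTime, int count)
    {
        var points = new Vec3[count];
        for (int i = 0; i < count; i++)
        {
            double t = flightTime * i / (count - 1);
            points[i] = new Vec3(
                p0.X + v0.X * t,
                p0.Y + v0.Y * t - 0.5 * gravity * t * t,
                p0.Z + v0.Z * t);
        }
        return points;
    }
}
```

Rotating the camera around the arc is then just orbiting the camera about the curve's midpoint, which every 3D framework mentioned above supports.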
Stack Overflow is not a good site for "which framework is the best" type questions, so I'm not going to try to answer that part of the question.
We are doing a virtual dressing room project, and we can already map .png image files onto body joints. But we need to map 3D clothes onto the body, and the final output should be a person (a real person, not an avatar) wearing 3D clothing in a live video output. We don't know how to do this; any help is much appreciated.
Thanks in advance.
Answer for user3588017: it's lengthy, but I'll try to explain how to get your project done. First, play with XNA's basic tutorials if you are totally new to 3D gaming. In my project I focused only on bending the hands down. For this you need to create a 3D model in Blender and export it to .xna. I used this link as a starting point, from the very beginning. This is actually the hardest part; after it, you know how to animate your model using maths. Then you need to map the Kinect data, as (x, y, z) coordinates, onto the parts of your model.
Ex: map the model's rotation to the rotation the Kinect measures. For this I used simple calculations, like measuring the depth of the shoulders, calculating the rotation angle, and applying it to my cloth model.
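The shoulder-depth calculation described above can be sketched roughly like this; this is my guess at the approach, not the answerer's actual code, and `KinectMapping` is an invented name:

```csharp
using System;

public static class KinectMapping
{
    // Estimates the torso's yaw from how much one shoulder sits deeper in the
    // Kinect depth data than the other: turning away from the sensor makes one
    // shoulder's z value larger. shoulderWidth is the distance between the two
    // shoulder joints; all arguments must use the same units (e.g. meters).
    public static double ShoulderYawRadians(double leftShoulderZ, double rightShoulderZ, double shoulderWidth)
    {
        return Math.Atan2(rightShoulderZ - leftShoulderZ, shoulderWidth);
    }
}
```

The resulting angle can then be applied directly to the cloth model's rotation around its vertical axis each frame.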
I want to track the head of a player in order to move the camera inside XNA.
When the player rotates left or right, the camera inside XNA will respond to this action and will also rotate.
I tried using the head joint from the Skeleton Data and taking its X and Y vector values, but this is not an accurate solution. I need another solution that can rotate the camera inside XNA.
Any suggestions?
You could use the Face Tracking API and watch how a certain point on the user's face (like the nose) moves to decide whether the user looked in a different direction. The points on a user's face are assembled like this:
Then you can check whether the X value changed, and by how much, to determine the rotation.
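One way to turn "the X changed by some amount" into a camera yaw is purely geometric; this sketch is illustrative only (it is not part of the Face Tracking API, and `HeadTracking` is a made-up name):

```csharp
using System;

public static class HeadTracking
{
    // Rough yaw estimate from how far the tracked nose point has drifted
    // horizontally from where it sat when the face looked straight at the
    // sensor. noseOffsetX and faceHalfWidth must share the same units; the
    // ratio is clamped so Asin stays defined under tracking noise.
    public static double YawRadians(double noseOffsetX, double faceHalfWidth)
    {
        double ratio = Math.Max(-1.0, Math.Min(1.0, noseOffsetX / faceHalfWidth));
        return Math.Asin(ratio);
    }
}
```

Feeding this angle into the XNA camera's rotation each frame gives the head-coupled view the question asks for, though you would likely want to smooth it over a few frames.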
(You might want to see Facial Recognition with Kinect)