Recently I have been working on an iOS game and am trying to build a feature like Pokémon GO, where an object stays at a specific real-world position and the player has to find it through the camera view.
So I read some tutorials and got some help from these articles:
Augmented Reality Tutorial for iOS from Ray Wenderlich Blog
Augmented Reality iOS Tutorial: Location Based from Ray Wenderlich Blog
From these tutorials I managed to find the object in the camera view, but only in one device orientation: Landscape Left and Landscape Right work, but when I rotate my device from landscape to portrait, the object runs away and can't be seen in the camera even when I point at the same position.
My problem: how can I calculate the position of an object irrespective of camera orientation, i.e. going from landscape to portrait and vice versa? What is the mathematical calculation for handling the different orientations?
The thing is pretty easy from a mathematical point of view. To achieve this you need to know where the object is in the real world, i.e. you need the GPS coordinates of the virtual object. From those you can compute the azimuth (bearing) of that location as seen from the user.
The next step is to work out which azimuth the user is looking at through the camera. Calculate it in degrees so the result lies in [0, 360).
When you have both results, check whether the object's azimuth falls within your field of view.
For example, if we assume you're looking at azimuth 0 and your field of view is 180 degrees, then you see everything between azimuth 270 and azimuth 90, i.e. the range [270, 360) ∪ [0, 90].
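In Swift, a minimal sketch of both steps might look like this (the function names are mine; the bearing computation is the standard great-circle formula, and the visibility check handles the wrap-around at 0/360):

import CoreLocation

// Bearing from the user's location to the object's location, in degrees [0, 360).
func bearing(from user: CLLocationCoordinate2D, to object: CLLocationCoordinate2D) -> Double {
    let lat1 = user.latitude * .pi / 180
    let lat2 = object.latitude * .pi / 180
    let dLon = (object.longitude - user.longitude) * .pi / 180
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
    return fmod(atan2(y, x) * 180 / .pi + 360, 360) // normalize to [0, 360)
}

// True when the object's azimuth lies inside the camera's horizontal field of
// view, centered on `heading` (all values in degrees).
func isVisible(objectAzimuth: Double, heading: Double, fieldOfView: Double) -> Bool {
    let separation = abs(fmod(objectAzimuth - heading + 540, 360) - 180) // angular distance, 0...180
    return separation <= fieldOfView / 2
}

Note that the camera's horizontal field of view is different in portrait and landscape, so the value you pass for fieldOfView has to be recomputed for the current interface orientation.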
I built a complete structure as a node (with its child nodes) and the user will walk through it using ARKit.
At some point, the user may not be able to continue because of some real obstacle in the real world. So I added a "pause" button which should freeze whatever the user currently sees in front of the camera; the user can then move freely to some other open space, and when he/she releases the pause button, he/she will be able to resume where they left off (only someplace else in the real world).
A while ago I asked about it in the Apple Developer forum and an Apple Frameworks Engineer gave the following reply:
For "freezing" the scene, you could transform the anchor's position (in world coordinates) to camera coordinates, and then anchor your content to the camera. This will give you the effect that the scene is "frozen", i.e., does not move relative to the camera.
I'm currently not using an anchor because I don't necessarily need to find a flat surface. Rather, my node is placed at a certain position relative to where we start at (0,0,0).
My question is: how exactly do I do what the Apple engineer suggested?
I have the following code which I'm still stuck with. When I add the node to the camera (pointOfView, last line of the code below), it does freeze in place, but I can't get it to freeze in the same position and orientation as it was before it was frozen.
@IBAction func pauseButtonClicked(_ sender: UIButton) {
let currentPosition = sceneView.pointOfView?.position
let currentEulerAngles = sceneView.pointOfView?.eulerAngles
var internalNodeTraversal = lastNodeRootPosition - currentPosition! // for now, lastNodeRootPosition is (0,0,0)
internalNodeTraversal.y = lastNodeRootPosition.y + 20 // just so it’s positioned a little higher in front of the camera
myNode?.removeFromParentNode() // remove the node from the Real World view. Looks like this line has no effect and just adding the node as a child to the camera (pointOfView) is enough, but it feels more right to do this anyway.
myNode?.position = internalNodeTraversal // the whole node is moved respectively in the opposite direction from the root to where I’m standing to reposition the camera in my current position inside the node
// myNode?.eulerAngles = (currentEulerAngles! * -1) — this code put the whole node in weird positions so I removed it
myNode?.eulerAngles.y = currentEulerAngles!.y * -1 // opposite orientation of the node so the camera will be oriented in the same direction
myNode?.eulerAngles.x = 0.3 // just tilting it up a little bit to have a better view, more similar to the view as before it was locked to the camera
// I don’t think I need to change the eulerAngles.z
myNode!.convertPosition(internalNodeTraversal, to: sceneView.pointOfView) // I’m not sure I wrote this correctly. Also, this line doesn’t seem to change anything
sceneView.pointOfView?.addChildNode(myNode!) // attaching the node to the camera so it will remain stuck while the user moves around until the button is released
}
So I first calculate where in the node I'm currently standing and then I change the position of the node in the opposite direction so that the camera will now be in that position. That seems to be correct.
Now I need to change the orientation of the node so that it will point in the right direction and here things get funky. I've been trying so many things for days now.
I use the eulerAngles for the orientation. If I set the whole vector multiplied by -1, it would show weird orientations. I ended up only using the eulerAngles.y which is the left/right orientation and I hardcoded the x orientation (up/down).
Ultimately what I have in the code above is the closest that I was able to get. If I'm pointing straight, the freeze will be correct. If I turn just a little bit, the freeze will be pretty close as well. Almost the same as what the user saw before the freeze. But the more I turn, the more the frozen image is off and more slanted. At some point (say I turn 50 or 60 degrees to the side) the whole node is off the camera and cannot be seen.
Somehow I have a feeling that there must be an easier and more correct way to achieve the above.
The Apple engineer wrote to "transform the anchor's position (in world coordinates) to camera coordinates". For that reason I added the "convertPosition" function in my code, but a) I'm not sure I used it correctly and b) it doesn't seem to change anything in my code whether I have that line or not.
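As a side note for readers: convertPosition(_:to:) returns the converted vector instead of mutating the node, which is why that line has no visible effect. Used as presumably intended, it would look something like this (a sketch, keeping the question's names):

// Where the node's origin currently is, expressed in camera coordinates:
let positionInCamera = myNode!.convertPosition(SCNVector3Zero, to: sceneView.pointOfView!)
sceneView.pointOfView?.addChildNode(myNode!)
myNode!.position = positionInCamera // keeps the node at the same world position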
What am I doing wrong?
Any help would be very much appreciated.
Thanks!
I found the solution!
Actually, the root of the problem was something I hadn't even described, because I didn't think it was relevant: I built the AR nodes 2 meters in front of the origin (z = -2) while the center of my node was still at the origin. So when I changed the rotation or eulerAngles, the content rotated around the origin, which meant my nodes swept along a large curve and in fact changed their position as a result.
The solution was to use simdPivot. Instead of changing the position and rotation of the node itself, I created a translation matrix and a rotation matrix at the point of the camera (where the user is standing) and multiplied the two. Using the product as the node's pivot and then adding the node as a child of the camera (pointOfView) freezes the image and shows exactly what the user was seeing before, since the position is unchanged and the rotation is exactly around the user's standing position.
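For reference, a minimal sketch of that approach (my own reconstruction, assuming the node originally sits at the world origin as in the question). The camera's world transform is exactly the product of those translation and rotation matrices, and using it as the pivot makes the node's rendered transform come out as the identity at the instant of pausing:

@IBAction func pauseButtonClicked(_ sender: UIButton) {
    guard let pov = sceneView.pointOfView, let node = myNode else { return }
    // Pivot = camera's world transform (translation * rotation at the user's
    // standing position). As a child of the camera, the node is rendered with
    // cameraNow * identity * pivot⁻¹, which is the identity at the moment of
    // pausing, so the scene appears unchanged.
    node.simdPivot = pov.simdWorldTransform
    node.simdTransform = matrix_identity_float4x4
    node.removeFromParentNode()
    pov.addChildNode(node) // frozen: the node now moves rigidly with the camera
}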
I want to rotate the Apple map based on the user's driving direction.
Any idea or solution for rotating the map that way?
I can collect an array of the last 5-10 locations; based on them, can I calculate a heading or anything else to rotate the map?
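One hedged sketch in Swift: CLLocation already carries a course (degrees clockwise from true north, -1 when unavailable), so while driving you can often skip the manual calculation; when course is invalid you could fall back to a bearing computed from your last two fixes. The numbers below, like the 500 m camera distance, are placeholders:

import MapKit

func rotateMap(_ mapView: MKMapView, along locations: [CLLocation]) {
    guard let latest = locations.last, latest.course >= 0 else { return }
    let camera = MKMapCamera(lookingAtCenter: latest.coordinate,
                             fromDistance: 500,        // meters above the ground
                             pitch: 0,
                             heading: latest.course)   // driving direction points "up"
    mapView.setCamera(camera, animated: true)
}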
I want to add a 3D marker with rotation for showing cars on the map like Uber does, but I can't find any information on adding 3D objects with the Google Maps SDK for iOS.
Would appreciate any help.
Apparently no one else is seeing what the OP and I are seeing, so here's a video of an Uber car turning 90 degrees. Play it frame by frame and you'll notice that it's not a simple image rotation. Either Uber went through the trouble of producing ~360 angles of each vehicle, or it really is a 3D model. Doing 360 images of every car seems foolish to me.
From what I can tell, they are not using 3D objects, and they are not animating between 400 images of a car at different angles either. They're doing a mix of rotating image assets and animating between ~50-70 images of a car at different angles. The illusion is convincing, because it really does look like they used 3D car models!
Look at this GIF of an Uber car turning a corner (Dropbox link):
We can clearly see that the shadow and the car's viewing angle don't update as often as the car's rotation.
Here I overlaid two screenshots with the car at different angles, both using the same car image:
We can see that the map is rotated ~5 degrees, but the car image is perfectly sharp because it hasn't changed; it was simply rotated.
Uber just released a blog post documenting this.
It looks like the vehicles were modeled in 3D software and then image assets depicting different angles were exported for the app. Depending on where the vehicle is on the map and its heading then a different asset is used.
First, they are NOT 3D objects, if that's what you're referring to (it would be possible to create one, but it's a waste of time). They are simply images, created mostly in Photoshop or Illustrator, drawn with 3D perspective (they're also retina-optimized, which is why they look so clear).
The reason you see the car rotate is that the UIImageView holding the image is rotated (mostly using CABasicAnimation), with the angle calculated from the device's location data (the same technology running apps use to track your position), which you can retrieve via Core Location.
It's a process, but very doable. Good luck!
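For the rotation half of that, a minimal sketch (markerImageView is a hypothetical image view pinned to the car's map position; the asset itself never changes, only the layer is spun to the new course):

import UIKit
import CoreLocation

func rotate(_ markerImageView: UIImageView, to course: CLLocationDirection) {
    let radians = CGFloat(course) * .pi / 180
    let spin = CABasicAnimation(keyPath: "transform.rotation.z")
    spin.fromValue = markerImageView.layer.presentation()?
        .value(forKeyPath: "transform.rotation.z")
    spin.toValue = radians
    spin.duration = 0.3
    markerImageView.layer.add(spin, forKey: "rotation")
    // Keep the model layer in sync so the marker stays at the final angle.
    markerImageView.layer.setValue(radians, forKeyPath: "transform.rotation.z")
}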
Thanks, all the answers are valid.
If you want, you can see it running in the video linked below.
You can generate a sprite sheet of around 60 tiles.
How I implemented it and the tools you need:
A 3D source car model.
Blender: animate the camera along an elliptical path so it rotates around the car from a top view down to a bottom view.
Render the 3D marker using the sprite sheet generated with Blender; for the angles, use the bearing change on location updates (see the sketch after this list).
The vehicle needs to be rendered to support most screens, so the base size for each tile was 64 px, scaled according to the screen's DPI.
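Mapping a bearing to a tile is then straightforward; a minimal sketch, assuming 60 tiles covering 360° with tile 0 facing north:

// Index of the sprite-sheet tile whose pre-rendered angle is closest
// to the vehicle's current bearing (in degrees).
func tileIndex(forBearing bearing: Double, tileCount: Int = 60) -> Int {
    let normalized = (bearing.truncatingRemainder(dividingBy: 360) + 360)
        .truncatingRemainder(dividingBy: 360)        // 0 ..< 360
    let step = 360.0 / Double(tileCount)             // degrees per tile
    return Int((normalized / step).rounded()) % tileCount
}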
Resulting implementation:
https://twitter.com/ronfravi/status/1133226618024022016?s=09
I believe a pair of marker images, one being the real marker and the other a darker, blurry shadow, can do the trick more cheaply. Put the shadow marker beneath the real one, shift it along the X and Y axes by whatever amount makes the shadow look right, and finally move and rotate both at the same time (on the web version you may need separately rotated images). That should recreate the illusion.
As @Felix Lapalme has already explained it far better than I could, I'm not diving any deeper into the explanation.
Check out my repo.
I used a DAE model and rotated it according to the heading variable:
https://github.com/hilalalhakani/HHMarker
I achieved this in Xamarin by rendering three.js in a WebView and then sending the image buffer directly to the marker instead of drawing it to the screen. It only took a couple of days. For my use case it was necessary because I want drivers to be able to change the color and kind of car, but if you don't need that functionality, you're better off just using a sequence of pre-rendered images.
If you want to rotate your image as the marker and show a moving object, you can use a .gif image. It could be helpful for you.
Swift 3
let position = CLLocationCoordinate2D(latitude: 51.5, longitude: -0.127)
// Rotate the marker
let degrees = 90.0
let london = GMSMarker(position: position)
london.title = "London"
// Customize the marker image
london.icon = UIImage(named: "YourGifImageName")
london.groundAnchor = CGPoint(x: 0.5, y: 0.5)
london.rotation = degrees
london.map = mapView
For more info, please check here.
I'm trying to use Kudan AR in a project, and I have a couple of questions:
1) The relation of marker size to the scene seems pretty weird to me. For example, I'm using a 150x150 px image as a marker, and in the scene it occupies 150 units! This requires all my objects to be extremely huge, sometimes even extending beyond the camera's far plane, which breaks the augmentation. Is that correct, or am I missing something?
2) I'm trying to use a marker to define the starting position of the augmentation and then switch to markerless tracking for a broader experience. They have sample code using the native iOS lib (https://wiki.kudan.eu/Marker_to_Markerless), but no reference on how to do it in Unity. This is what I'm trying:
markerlessDriver.localScale = new Vector3(markerDriver.localScale.x, markerDriver.localScale.x, markerDriver.localScale.z);
markerlessDriver.localPosition = markerDriver.localPosition;
markerlessDriver.localRotation = markerDriver.localRotation;
target.SetParent(markerlessDriver);
tracker.ChangeTrackingMethod(markerlessTracking);
// from the floor placer.
Vector3 floorPosition; // The current position in 3D space of the floor
Quaternion floorOrientation; // The current orientation of the floor in 3D space, relative to the device
tracker.FloorPlaceGetPose(out floorPosition, out floorOrientation);
tracker.ArbiTrackStart(floorPosition, floorOrientation);
It switches, but the position/rotation of the model goes off. Any idea on how that can be done?
Thanks in advance!
I've been fiddling with SceneKit recently and I wanted to make the following thing:
When creating a Game template from Xcode, you get a scene with a ship.
I wanted to animate this ship and orient it according to the relative attitude of my iPhone after I tap the screen. So for instance, if I hold my iPhone horizontally and tap the screen, that captures the reference attitude of my horizontal iPhone. Then, when I lift it (changing the pitch), I want the ship to orient itself accordingly.
I've been trying to change my ship node's eulerAngles with the attitude's pitch, yaw and roll, as in the following:
CMAttitude *attitude = deviceMotion.attitude;
_ship.eulerAngles = SCNVector3Make(-attitude.pitch, attitude.yaw, attitude.roll);
Whenever I do that, the ship goes back to its original position in the scene. I can't seem to understand how to give it a speed in the direction it's facing without making it reset to its original position when I change its eulerAngles.
Ideally, the ship would have some sort of engine power accelerating it in the direction it's facing, while it would still be affected by gravity. How should I do that? Thanks!
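For the engine part, one hedged sketch in Swift (my own, not from the template): give the ship a dynamic physics body so scene gravity still applies, then apply a force along its facing direction every frame, e.g. from the renderer delegate:

import SceneKit

// Push the ship along the direction it is currently facing.
// simdWorldFront is the node's -Z axis expressed in world coordinates.
func applyThrust(to ship: SCNNode, power: Float) {
    let direction = ship.simdWorldFront * power
    ship.physicsBody?.applyForce(SCNVector3Make(direction.x, direction.y, direction.z),
                                 asImpulse: false)
}

One caveat about the reset: the unmodified Game template already runs a repeat-forever rotate action on the ship, which overwrites eulerAngles every frame, so remove that action before setting the angles yourself.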