Marker scale and switching to markerless (Kudan + Unity) - iOS

I'm trying to use Kudan AR in a project, and I have a couple of questions:
1) The marker size's relation to the scene seems pretty weird to me. For example, I'm using a 150x150 px image as a marker, and when I use it in the scene it occupies 150 units! This forces all my objects to be extremely large, sometimes even extending beyond the camera's far plane, which breaks the augmentation. Is that correct, or am I missing something?
2) I'm trying to use a marker to define the starting position of the augmentation, and then switch to markerless tracking for a broader experience. There is sample code using the native iOS lib (https://wiki.kudan.eu/Marker_to_Markerless), but no reference on how to do it in Unity. This is what I'm trying:
// Match the markerless driver to the marker driver's current transform.
// (Note: the y component copies x here, presumably to keep the scale uniform.)
markerlessDriver.localScale = new Vector3(markerDriver.localScale.x, markerDriver.localScale.x, markerDriver.localScale.z);
markerlessDriver.localPosition = markerDriver.localPosition;
markerlessDriver.localRotation = markerDriver.localRotation;
// Re-parent the content and switch tracking methods.
target.SetParent(markerlessDriver);
tracker.ChangeTrackingMethod(markerlessTracking);
// From the floor placer:
Vector3 floorPosition;       // the current position of the floor in 3D space
Quaternion floorOrientation; // the current orientation of the floor, relative to the device
tracker.FloorPlaceGetPose(out floorPosition, out floorOrientation);
tracker.ArbiTrackStart(floorPosition, floorOrientation);
It switches, but the model's position/rotation goes off. Any idea how this can be done?
Thanks in advance!

Related

ARKit + Core location - points are not fixed on the same places

I'm working on an iOS AR application using ARKit + Core Location. The points displayed on the map using coordinates drift from place to place as I walk, but I need them to stay in the same place.
Here you can see the example of what I mean:
https://drive.google.com/file/d/1DQkTJFc9aChtGrgPJSziZVMgJYXyH9Da/view?usp=sharing
Could you help me handle this issue? How can I keep the points fixed in place using coordinates? Any ideas?
Thanks.
It looks like you attach objects to planes. However, as you move, ARKit extends the existing planes. As a result, if you put points at, for example, the center of a plane, that center keeps updating. You need to recalculate the point's coordinates and place the objects accordingly.
The alternative is not to add objects to planes (or relative to them). If you need to "put" an object on a plane, the best way is to wait until the plane's orientation has settled (it will no longer change significantly as you move), select the point on the plane where you want to put your object, convert that point's coordinates to world coordinates (so that if the plane later changes size, your coordinate is unaffected), and finally add the object to the root node (or another node that is not related to the plane).
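A minimal Swift sketch of that approach with ARKit/SceneKit (my own illustration, not from the answer above), assuming you already have the detected plane's node and a point on it; passing nil to convertPosition(_:to:) converts into world space:

import ARKit
import SceneKit

// Converts a point expressed in the plane node's local space into world
// coordinates, then parents the object to the scene root, so later growth
// or re-centering of the plane cannot move it.
func place(_ object: SCNNode, at localPoint: SCNVector3,
           onPlaneNode planeNode: SCNNode, in sceneView: ARSCNView) {
    let worldPosition = planeNode.convertPosition(localPoint, to: nil)
    object.position = worldPosition
    sceneView.scene.rootNode.addChildNode(object)
}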

ARKit: How to detect only Horizontal floor excluding obstacles

I am developing a horizontal plane detection application using ARKit, and it seems to be working fine. Once the floor is detected, I try to place an SCNPlane, 2 meters high and 2 meters wide, horizontally from the center point (the detected floor). That also works fine when the floor is empty. But if the floor has some objects on it (obstacles like furniture), the SCNPlane is placed over the object instead of on the floor (under the object). How can I detect only the horizontal floor, excluding the objects? Please guide me. Thanks.
When you are searching and have found the floor, ARKit will put out a grid; normally people use some kind of grid image to display this, but some don't want to show it. Once the grid is placed, you place an SCNPlane, which I assume has a physics body, since you say it falls towards the floor/furniture?
You can do this in 3 ways (see the sketch below):
1) Stop the world-tracking configuration's plane detection once the floor has been found.
2) Once the floor has been found, fetch its Y position and make every object fall towards that Y position.
3) Check whether the Y position of a new detection overlaps with the floor detection; if it does, it's the floor, otherwise it's not. (I have not tested this one.)
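A minimal Swift sketch of the first two options, assuming an ARSCNView named sceneView; where you call these from (for example a plane-detection callback) is up to your app, and floorY is a placeholder for wherever you store the detected floor height:

import ARKit

// Option 1: keep world tracking running but stop detecting new planes.
func stopPlaneDetection(in sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = []
    sceneView.session.run(configuration)
}

// Option 2: remember the floor height and pin placed objects to it,
// ignoring the hit's own height (e.g. the top of a piece of furniture).
var floorY: Float?

func placeOnFloor(_ node: SCNNode, at worldPosition: SCNVector3, in sceneView: ARSCNView) {
    guard let y = floorY else { return }
    node.position = SCNVector3(worldPosition.x, y, worldPosition.z)
    sceneView.scene.rootNode.addChildNode(node)
}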

Sceneform and geolocation

Using ARCore and Sceneform, I am trying to add a portrait on a vertical surface (a wall). I am currently using a ViewRenderable from the Sceneform library to do that. It all works perfectly, but I'm now facing 2 problems:
The first problem was that a ViewRenderable renders an Android view in 3D, so I had to rotate the Node 90 degrees around its right vector so it lies flat on the wall. That works, but my second problem is that I need the portrait to always be straight up, aligned with Earth's gravity.
How would it be possible to achieve that?
You probably want to call Pose.extractTranslation() (https://developers.google.com/ar/reference/java/com/google/ar/core/Pose.html#extractTranslation()) when creating the Anchor for your object. This removes the rotation component from the Pose that is added when ARCore handles the touch event.
With https://developers.google.com/ar/reference/java/com/google/ar/core/Session#createAnchor(com.google.ar.core.Pose) you can create a new Anchor from that pose.
If you have an ArFragment it would look like this:
arFragment?.setOnTapArPlaneListener { hitResult, plane, motionEvent ->
    val arSession = arFragment!!.arSceneView.session!!
    // Keep only the translation of the hit pose; extractTranslation()
    // drops the rotation that ARCore added while handling the tap.
    val poseWithoutRotation = hitResult.hitPose.extractTranslation()
    val anchor = arSession.createAnchor(poseWithoutRotation)
    val anchorNode = AnchorNode(anchor)
    ...
After a lot of research, I found that the best solution right now is here.
But that solution would be perfect if you applied a bit of normalisation to the sensor data so it doesn't jump around as often.

Unity ARKit automatically positions terrain on startup

I just started learning ARKit with Unity. I've downloaded the SDK from the Asset Store, imported it, opened the demo scene and added a terrain. I've added it under HitCubeParent as a child:
http://shrani.si/f/40/UP/1q7QqoFl/1/capture.jpg
I've added the Unity AR Hit Test Example script to the terrain and linked HitCubeParent to it:
http://shrani.si/f/6/133/3w5sasQA/1/capture1.jpg
When I build the game on an iPhone, ARKit is working, but one thing that bothers me is that the terrain is positioned automatically when the scene starts (even though I don't tap on the screen). This causes bad positioning, like the terrain floating in the air or similar issues. I would like to modify the kit so that when the scene starts, only the generated blue plane is visible. The user should then align the plane with a table or similar flat surface and tap the screen to position the terrain on that plane.
Like this:
https://www.youtube.com/watch?v=OCzuNnejwy4
Any good tutorials on this? I've searched a lot but couldn't find anything useful.
Disable the Terrain and enable it after the first successful ARHitTestResult. See line 68 in UnityARHitTestExample.cs:
if (HitTestWithResultType (point, resultType))
{
    // the hit has positioned m_HitTransform; enable your terrain here
    return;
}
This is admittedly confusing, since this HitTest method actually positions m_HitTransform and is not merely a test.
Inside that if block you can enable your terrain, after having disabled it in the Awake method.

How can I add a 3d object as a marker on Google Maps like Uber does

I want to add a 3D marker to show cars on the map, with rotation, like Uber does, but I can't find any information on adding 3D objects with the Google Maps SDK for iOS.
Would appreciate any help.
Apparently no one is seeing what the OP and I are seeing, so here's a video of an Uber car turning 90 degrees. Play it frame by frame and you'll notice that it's not a simple image rotation. Either Uber went to the trouble of rendering ~360 angles of each vehicle, or it really is a 3D model. Rendering 360 images of every car seems foolish to me.
From what I can tell, they are not using 3D objects. They are also not animating between 400 images of a car at different angles. They're doing a mix of rotating image assets and animating between ~50-70 images of a car at different angles. The illusion is convincing because it really does look like they used 3D car models!
Look at this GIF of an Uber car turning a corner (Dropbox link):
We can clearly see that the shadow and the car's viewing angle don't update as often as the car's rotation.
Here I overlaid 2 images of the car at different angles, but using the same car image:
We can see that the map is rotated ~5 degrees, but the car image is perfectly sharp because it hasn't changed; it was simply rotated.
Uber just released a blog post documenting this.
It looks like the vehicles were modeled in 3D software and then image assets depicting different angles were exported for the app. Depending on where the vehicle is on the map and its heading, a different asset is used.
First, they are NOT 3D objects, if that's what you're referring to. (It would be possible to create one, but it's a waste of time.) They are simply images created in Photoshop or Illustrator (mostly) that have a 3D perspective (they're also retina-optimized, which is why they look very sharp).
The reason you see the car rotate is that the UIImageView holding the image is rotated (mostly using CABasicAnimation), based on calculations from the device's position and heading (the same technology running apps use to track your location, etc.), which you can retrieve with Core Location.
It's a process, but very doable. Good luck!
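To illustrate that rotation step, here is a minimal Swift sketch (my own illustration, not Uber's code) that spins a marker's UIImageView to follow the course reported by Core Location; carImageView and the animation duration are placeholders:

import UIKit
import CoreLocation

final class CarMarkerRotator: NSObject, CLLocationManagerDelegate {
    private let carImageView: UIImageView   // placeholder: the car image shown on the map
    private let locationManager = CLLocationManager()
    private var lastCourse: CLLocationDirection = 0

    init(carImageView: UIImageView) {
        self.carImageView = carImageView
        super.init()
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        locationManager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        // course is the direction of travel in degrees; negative means invalid.
        guard let course = locations.last?.course, course >= 0 else { return }
        let from = lastCourse * .pi / 180
        let to = course * .pi / 180
        // Animate the layer's z-rotation from the previous heading to the new one.
        let spin = CABasicAnimation(keyPath: "transform.rotation.z")
        spin.fromValue = from
        spin.toValue = to
        spin.duration = 0.3
        carImageView.layer.add(spin, forKey: "rotate")
        carImageView.layer.setValue(to, forKeyPath: "transform.rotation.z")
        lastCourse = course
    }
}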
Thanks, all the answers are valid.
If you want, you can see a video of it running, and how it works, below.
You can generate a sprite sheet (around 60 tiles).
How I implemented it and the tools you need:
1) A 3D source car model.
2) Blender, animating the camera along an elliptical path so it rotates around the car from a top view to a bottom view.
3) Render the 3D marker using the sprites generated with Blender; for angles, use the bearing change on location updates (see the sketch below).
4) The vehicle needs to be rendered to support most screens, so the base size for each tile was 64 px, scaled according to the screen dpi.
Result implementation:
https://twitter.com/ronfravi/status/1133226618024022016?s=09
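A minimal Swift sketch of the bearing-to-tile step described above, assuming the ~60 rendered frames ship as assets named car_0 ... car_59 (hypothetical names) and that you already receive bearing updates:

import GoogleMaps
import UIKit

// Picks the pre-rendered sprite frame closest to the given bearing and
// assigns it to the marker. Assumes tileCount tiles covering 360 degrees.
func updateCarMarker(_ marker: GMSMarker, bearingDegrees: Double, tileCount: Int = 60) {
    let degreesPerTile = 360.0 / Double(tileCount)
    // Normalize the bearing into [0, 360) before bucketing.
    let normalized = (bearingDegrees.truncatingRemainder(dividingBy: 360) + 360)
        .truncatingRemainder(dividingBy: 360)
    let index = Int((normalized / degreesPerTile).rounded()) % tileCount
    marker.icon = UIImage(named: "car_\(index)")   // hypothetical asset names
    marker.groundAnchor = CGPoint(x: 0.5, y: 0.5)  // rotate around the image center
}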
I believe a pair of marker images, one the real marker and the other a darker, blurry shadow, can do the trick more cheaply. Put the shadow marker beneath the marker, shift it along the X and Y axes by an amount that places the shadow appropriately, and then move and rotate both at the same time (on the web version, you may need separate rotated images); that should [re]create the illusion.
Since @Felix Lapalme has already explained it as well as it can be explained, I'm not diving any deeper into it.
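If it helps, a minimal sketch of that shadow-pair idea with the Google Maps SDK for iOS; "car" and "car_shadow" are hypothetical asset names, and the anchor offset is a guess:

import GoogleMaps
import UIKit

// Draws a darker, blurry shadow marker beneath the car marker, slightly
// offset, and rotates both together.
func addCarWithShadow(at position: CLLocationCoordinate2D, bearing: Double,
                      on mapView: GMSMapView) {
    let shadow = GMSMarker(position: position)
    shadow.icon = UIImage(named: "car_shadow")       // hypothetical asset
    shadow.groundAnchor = CGPoint(x: 0.45, y: 0.45)  // shifted to fake a light source
    shadow.rotation = bearing
    shadow.zIndex = 0                                // keep it beneath the car
    shadow.map = mapView

    let car = GMSMarker(position: position)
    car.icon = UIImage(named: "car")                 // hypothetical asset
    car.groundAnchor = CGPoint(x: 0.5, y: 0.5)
    car.rotation = bearing
    car.zIndex = 1
    car.map = mapView
}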
Check out my repo.
I used a DAE model and rotated it according to the heading variable.
https://github.com/hilalalhakani/HHMarker
I achieved this in Xamarin by rendering three.js in a WebView and then sending the image buffer directly to the marker instead of drawing it to the screen. It only took a couple of days, and for my use case it was necessary because I want the drivers to be able to change the color and kind of car; but if you don't need that functionality, you're better off just using a sequence of rendered images.
If you want to rotate your image as the marker and show a moving object, you can use a .gif image. It might be helpful for you.
Swift 3
let position = CLLocationCoordinate2D(latitude: 51.5, longitude: -0.127)
//Rotate a marker
let degrees = 90.0
let london = GMSMarker(position: position)
london.title = "London"
//Customize the marker image
london.icon = UIImage(named: "YourGifImageName")
london.groundAnchor = CGPoint(x: 0.5, y: 0.5)
london.rotation = degrees
london.map = mapView
For more info, please check here.
