I need to pass iOS device-motion data to a WKWebView that loads a spherical panorama using THREE.js. The THREE.js code normally relies on a quaternion calculated from alpha, beta, and gamma, but CMDeviceMotion on iOS does not provide those angles. It does, however, provide a quaternion directly, so I decided to pass that quaternion from Swift to the THREE.js web code like this:
func didUpdateMotion(data: CMDeviceMotion) {
    let quaternion = data.attitude.quaternion
    let js = "iosQuaternion={x:\(quaternion.x),y:\(quaternion.y),z:\(quaternion.z),w:\(quaternion.w)}"
    self.webView.evaluateJavaScript(js, completionHandler: { (result, error) in })
}
Now, here is the part of my web view code where THREE.js uses the quaternion set from iOS:
function animate() {
    window.requestAnimationFrame( animate );
    if ( typeof iosQuaternion === "undefined" ) return;
    updateWithQuaternion();
    renderer.render( scene, camera );
}
updateWithQuaternion = ( function () {
    var lastQuaternion = new THREE.Quaternion();
    return function () {
        if ( typeof iosQuaternion === "undefined" ) return;
        // Copy all four components of the quaternion passed in from iOS.
        scope.object.quaternion.set(
            iosQuaternion.x, iosQuaternion.y, iosQuaternion.z, iosQuaternion.w );
        if ( 8 * ( 1 - lastQuaternion.dot( scope.object.quaternion ) ) > EPS ) {
            lastQuaternion.copy( scope.object.quaternion );
            scope.dispatchEvent( changeEvent );
        }
    };
} )();
The above implementation does give me a working panorama that rotates based on the passed device-motion data, but the direction and initial orientation are completely off. The panorama loads in an incorrect position (it may be facing down when it should face a specific direction), and the rotation direction is affected as a result.
What would be a recommended way to make sure the iOS motion data is correctly mapped onto the spherical panorama driven by THREE.js?
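One thing I suspect is a coordinate-frame mismatch between CoreMotion (right-handed, Z up in its reference frame) and THREE.js (Y up, camera looking down -Z). Below is a minimal sketch of the kind of correction I imagine on the JavaScript side; the -90° rotation about X and the function name applyIOSQuaternion are assumptions, and the exact correction depends on the CMAttitudeReferenceFrame used when starting device-motion updates:
// Sketch only: remap the CoreMotion quaternion into THREE.js's Y-up frame.
// The correction axis/angle here is an assumption and may need adjusting.
var worldCorrection = new THREE.Quaternion()
    .setFromAxisAngle( new THREE.Vector3( 1, 0, 0 ), -Math.PI / 2 );

function applyIOSQuaternion( object ) {
    if ( typeof iosQuaternion === "undefined" ) return;
    var q = new THREE.Quaternion(
        iosQuaternion.x, iosQuaternion.y, iosQuaternion.z, iosQuaternion.w );
    // Pre-multiply by the fixed correction so the panorama starts upright.
    object.quaternion.copy( worldCorrection ).multiply( q );
}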
Related
I am new to AR. I am working on an app using ARCore, based on this project: AR-REMOTE-SUPPORT.
When I draw on my screen it creates the default Android anchor object; I want a line to be drawn instead of the default anchor.
How can I achieve this?
Here is the function which places anchors on the screen:
public void onDrawFrame(GL10 gl) {
// Clear screen to notify driver it should not load any pixels from previous frame.
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
if (mSession == null) {
return;
}
// Notify ARCore session that the view size changed so that the perspective matrix and
// the video background can be properly adjusted.
mDisplayRotationHelper.updateSessionIfNeeded(mSession);
try {
// Obtain the current frame from ARSession. When the configuration is set to
// UpdateMode.BLOCKING (it is by default), this will throttle the rendering to the
// camera framerate.
Frame frame = mSession.update();
Camera camera = frame.getCamera();
// Handle taps. Handling only one tap per frame, as taps are usually low frequency
// compared to frame rate.
MotionEvent tap = queuedSingleTaps.poll();
if (tap != null && camera.getTrackingState() == TrackingState.TRACKING) {
for (HitResult hit : frame.hitTest(tap)) {
// Check if any plane was hit, and if it was hit inside the plane polygon
Trackable trackable = hit.getTrackable();
// Creates an anchor if a plane or an oriented point was hit.
if ((trackable instanceof Plane && ((Plane) trackable).isPoseInPolygon(hit.getHitPose()))
|| (trackable instanceof Point
&& ((Point) trackable).getOrientationMode()
== Point.OrientationMode.ESTIMATED_SURFACE_NORMAL)) {
// Hits are sorted by depth. Consider only closest hit on a plane or oriented point.
// Cap the number of objects created. This avoids overloading both the
// rendering system and ARCore.
if (anchors.size() >= 250) {
anchors.get(0).detach();
anchors.remove(0);
}
// Adding an Anchor tells ARCore that it should track this position in
// space. This anchor is created on the Plane to place the 3D model
// in the correct position relative both to the world and to the plane.
anchors.add(hit.createAnchor());
break;
}
}
}
// Draw background.
mBackgroundRenderer.draw(frame);
// If not tracking, don't draw 3d objects.
if (camera.getTrackingState() == TrackingState.PAUSED) {
return;
}
// Get projection matrix.
float[] projmtx = new float[16];
camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);
// Get camera matrix and draw.
float[] viewmtx = new float[16];
camera.getViewMatrix(viewmtx, 0);
// Compute lighting from average intensity of the image.
final float lightIntensity = frame.getLightEstimate().getPixelIntensity();
if (isShowPointCloud()) {
// Visualize tracked points.
PointCloud pointCloud = frame.acquirePointCloud();
mPointCloud.update(pointCloud);
mPointCloud.draw(viewmtx, projmtx);
// Application is responsible for releasing the point cloud resources after
// using it.
pointCloud.release();
}
// Check if we detected at least one plane. If so, hide the loading message.
if (mMessageSnackbar != null) {
for (Plane plane : mSession.getAllTrackables(Plane.class)) {
if (plane.getType() == Plane.Type.HORIZONTAL_UPWARD_FACING
&& plane.getTrackingState() == TrackingState.TRACKING) {
hideLoadingMessage();
break;
}
}
}
if (isShowPlane()) {
// Visualize planes.
mPlaneRenderer.drawPlanes(
mSession.getAllTrackables(Plane.class), camera.getDisplayOrientedPose(), projmtx);
}
// Visualize anchors created by touch.
float scaleFactor = 1.0f;
for (Anchor anchor : anchors) {
if (anchor.getTrackingState() != TrackingState.TRACKING) {
continue;
}
// Get the current pose of an Anchor in world space. The Anchor pose is updated
// during calls to session.update() as ARCore refines its estimate of the world.
anchor.getPose().toMatrix(mAnchorMatrix, 0);
// Update and draw the model and its shadow.
mVirtualObject.updateModelMatrix(mAnchorMatrix, mScaleFactor);
//mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
}
sendARViewMessage();
} catch (Throwable t) {
// Avoid crashing the application due to unhandled exceptions.
Log.e(TAG, "Exception on the OpenGL thread", t);
}
}
Any help would be appreciated
TIA
One simple way to draw a line in ARCore is to create it between two anchor points.
The line itself is generally also a 3D object.
Here is a tested working example, based on the nice approach in this answer: https://stackoverflow.com/a/52816504/334402
private void drawLine(AnchorNode node1, AnchorNode node2) {
//Draw a line between two AnchorNodes (adapted from https://stackoverflow.com/a/52816504/334402)
Log.d(TAG,"drawLine");
Vector3 point1, point2;
point1 = node1.getWorldPosition();
point2 = node2.getWorldPosition();
//First, find the vector extending between the two points and define a look rotation
//in terms of this Vector.
final Vector3 difference = Vector3.subtract(point1, point2);
final Vector3 directionFromTopToBottom = difference.normalized();
final Quaternion rotationFromAToB =
Quaternion.lookRotation(directionFromTopToBottom, Vector3.up());
MaterialFactory.makeOpaqueWithColor(getApplicationContext(), new Color(0, 255, 244))
.thenAccept(
material -> {
/* Then, create a rectangular prism, using ShapeFactory.makeCube() and use the difference vector
to extend to the necessary length. */
Log.d(TAG,"drawLine insie .thenAccept");
ModelRenderable model = ShapeFactory.makeCube(
new Vector3(.01f, .01f, difference.length()),
Vector3.zero(), material);
/* Last, set the world rotation of the node to the rotation calculated earlier and set the world position to
the midpoint between the given points . */
Anchor lineAnchor = node2.getAnchor();
nodeForLine = new Node();
nodeForLine.setParent(node1);
nodeForLine.setRenderable(model);
nodeForLine.setWorldPosition(Vector3.add(point1, point2).scaled(.5f));
nodeForLine.setWorldRotation(rotationFromAToB);
}
);
}
You can see the full source here: https://github.com/mickod/LineView
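To connect this back to the question, here is a rough sketch of how drawLine() could be wired into tap handling with Sceneform, keeping a reference to the last placed AnchorNode. The names used here (arFragment, lastAnchorNode, onPlaneTap) are placeholders, not part of the linked project:
// Sketch only: keep the previously tapped AnchorNode and draw a line to each new one.
private AnchorNode lastAnchorNode = null;

private void onPlaneTap(HitResult hitResult, ArFragment arFragment) {
    AnchorNode anchorNode = new AnchorNode(hitResult.createAnchor());
    anchorNode.setParent(arFragment.getArSceneView().getScene());

    if (lastAnchorNode != null) {
        // Connect the previous point to the new one using drawLine() above.
        drawLine(lastAnchorNode, anchorNode);
    }
    lastAnchorNode = anchorNode;
}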
I'm making a "geolocational AR app" in which I use Paint() to draw bitmap on my screen with the use of sensors to do some translation so that my image will only be shown at a specific point.
I've done everything
However, the image shakes while I present it.
I really want the effect as the following videos presents
https://www.youtube.com/watch?v=8U3vWETmk2U
I've implemented low pass filter to ease the situation but it is still not as steady as the images from the video.
This is how I achieve AR movement
float dx = (float) ( (canvas.getWidth()/ horizontalFOV) * (Math.toDegrees(orientation[0])-curBearingTo));
float dy = (float) ( (canvas.getHeight()/ verticalFOV) * Math.toDegrees(orientation[1]));
canvas.translate(0.0f, 0.0f-dy);
canvas.translate(0.0f-dx, 0.0f);
The curBearingTo value is the result of the Android location API's location.bearingTo(destination).
And this is how I get Orientation matrix
if (lastAccelerometer != null && lastCompass != null) {
    boolean gotRotation = SensorManager.getRotationMatrix(rotation,
            identity, lastAccelerometer, lastCompass);
    if (gotRotation) {
        // Remap such that the camera is pointing straight down the Y axis.
        SensorManager.remapCoordinateSystem(rotation,
                SensorManager.AXIS_X, SensorManager.AXIS_Z,
                cameraRotation);
        // Compute the orientation vector.
        SensorManager.getOrientation(cameraRotation, orientation);
    }
}
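For reference, the low-pass filtering I mentioned is essentially the standard exponential filter applied to the raw sensor arrays before getRotationMatrix(); a simplified sketch (the ALPHA value and field names are illustrative, not my exact code):
// Exponential low-pass filter (sketch). ALPHA near 0 smooths heavily but lags;
// near 1 it follows the raw signal closely. Applied in onSensorChanged to both sensors.
private static final float ALPHA = 0.15f;

private float[] lowPass(float[] input, float[] output) {
    if (output == null) return input.clone();
    for (int i = 0; i < input.length; i++) {
        output[i] = output[i] + ALPHA * (input[i] - output[i]);
    }
    return output;
}

// Usage (field names assumed):
// lastAccelerometer = lowPass(event.values, lastAccelerometer);
// lastCompass = lowPass(event.values, lastCompass);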
Any suggestion?
I am implementing an AR app for iOS with SceneKit. I want my object to rotate following the device's rotation. For the device-rotation side, I found there is a quaternion property on the CMAttitude class, but I am not sure how I can use it to rotate the object I loaded in the scene. Any ideas?
let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
if motionManager.isDeviceMotionAvailable {
    motionManager.startDeviceMotionUpdates(to: OperationQueue.main) { (devMotion, error) in
        // Change the camera node's Euler angles on the x, y, z axes.
        cameraNode.eulerAngles = SCNVector3(
            -Float((devMotion?.attitude.roll)!) - Float.pi / 2,
            Float((devMotion?.attitude.yaw)!),
            -Float((devMotion?.attitude.pitch)!)
        )
    }
}
I tried this with the Playground app on an iPad.
I have tried using Core Motion, but what I do there is rotate the scnCamera.
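For what it's worth, the quaternion can be assigned almost directly to a node's orientation (SCNQuaternion is just an x, y, z, w vector). A minimal sketch, where boxNode is a placeholder for the loaded object, and a fixed extra rotation may still be needed to reconcile CoreMotion's reference frame with the scene's:
import SceneKit
import CoreMotion

let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0

// Sketch only: drive a node's orientation from CMAttitude's quaternion.
// `boxNode` stands in for whatever node was loaded into the scene.
if motionManager.isDeviceMotionAvailable {
    motionManager.startDeviceMotionUpdates(to: OperationQueue.main) { motion, _ in
        guard let q = motion?.attitude.quaternion else { return }
        boxNode.orientation = SCNQuaternion(x: Float(q.x), y: Float(q.y), z: Float(q.z), w: Float(q.w))
    }
}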
I recently updated my Cordova mobile mapping app from OL3 V3.1.1 to V3.7.0 and then to V3.8.2.
I am using PouchDB to store offline tiles, and with V3.1.1 the tiles were visible.
Here is the code snippet:
OSM_bc_offline_pouchdb = new ol.layer.Tile({
//maxResolution: 5000,
//extent: BC,
//projection: spherical_mercator,
//crossOrigin: 'anonymous',
source: new ol.source.XYZ({
//adapted from: http://jsfiddle.net/gussy/LCNWC/
tileLoadFunction: function (imageTile, src) {
pouchTilesDB_osm_bc_baselayer.getAttachment(src, 'tile', function (err, res) {
if (err && err.error == 'not_found')
return;
//if(!res) return; // ?issue -> causes map refresh on movement to stop
imageTile.getImage().src = window.URL.createObjectURL(res);
});
},
tileUrlFunction: function (coordinate, projection) {
if (coordinate == null)
return undefined;
// OSM NW origin style URL
var z = coordinate[0];
var x = coordinate[1];
var y = coordinate[2];
var imgURL = ["tile", z, x, y].join('_');
return imgURL;
}
})
});
trails_mobileMap.addLayer(OSM_bc_offline_pouchdb);
OSM_bc_offline_pouchdb.setVisible(true);
Moving to either V3.7.0 or V3.8.2 causes the tiles not to display. I have read the API docs and can't see why this would happen.
What in my code needs updating to work with OL V3.8.2?
Thanks,
Peter
Your issue might be related to the changes to ol.TileCoord in OpenLayers 3.7.0. From the release notes:
Until now, the API exposed two different types of ol.TileCoord tile coordinates: internal ones that increase left to right and upward, and transformed ones that may increase downward, as defined by a transform function on the tile grid. With this change, the API now only exposes tile coordinates that increase left to right and upward.
Previously, tile grids created by OpenLayers either had their origin at the top-left or at the bottom-left corner of the extent. To make it easier for application developers to transform tile coordinates to the common XYZ tiling scheme, all tile grids that OpenLayers creates internally have their origin now at the top-left corner of the extent.
This change affects applications that configure a custom tileUrlFunction for an ol.source.Tile. Previously, the tileUrlFunction was called with rather unpredictable tile coordinates, depending on whether a tile coordinate transform took place before calling the tileUrlFunction. Now it is always called with OpenLayers tile coordinates. To transform these into the common XYZ tiling scheme, a custom tileUrlFunction has to change the y value (tile row) of the ol.TileCoord:
var tileUrlFunction = function(tileCoord, pixelRatio, projection) {
var urlTemplate = '{z}/{x}/{y}';
return urlTemplate
.replace('{z}', tileCoord[0].toString())
.replace('{x}', tileCoord[1].toString())
.replace('{y}', (-tileCoord[2] - 1).toString());
}
If this is your issue, try changing your tileUrlFunction to
function (coordinate, projection) {
if (coordinate == null)
return undefined;
// OSM NW origin style URL
var z = coordinate[0];
var x = coordinate[1];
var y = (-coordinate[2] - 1);
var imgURL = ["tile", z, x, y].join('_');
return imgURL;
}
I'm doing some practice with XNA, and I created a class that represents a camera.
My objective is that when the user presses certain keys, the camera (not the target) travels 90 degrees around the X axis, so I can view an object placed in the scene from different angles. At the moment I can move the camera along X, Y, and Z without problems.
Currently, to set up my camera I use the following lines of code:
public void SetUpCamera()
{
#region ## SET DEFAULTS ##
this.FieldOfViewAngle = 45.0f;
this.AspectRatio =1f;
this.NearPlane = 1.0f;
this.FarPlane = 10000.0f;
#endregion
this.ProjectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(this.FieldOfViewAngle), this.AspectRatio, this.NearPlane, this.FarPlane); // pass the aspect ratio as a float (integer 16 / 9 truncates to 1)
this.ViewMatrix = Matrix.CreateLookAt(new Vector3(this.PositionX, this.PositionY, this.PositionZ), new Vector3(this.TargetX, this.TargetY, this.TargetZ), Vector3.Up);
}
I have this method to move the camera:
public void UpdateView()
{
this.ViewMatrix = Matrix.CreateLookAt(new Vector3(this.PositionX, this.PositionY, this.PositionZ), new Vector3(this.TargetX, this.TargetY, this.TargetZ), Vector3.Up);
}
Then in the game's Update handler I have the following code:
if (keyboardstate.IsKeyDown(Keys.NumPad9))
{
this.GameCamera.PositionZ -= 1.0f;
}
if (keyboardstate.IsKeyDown(Keys.NumPad3))
{
this.GameCamera.PositionZ += 1.0f;
}
this.GameCamera.UpdateView();
I would like to know how to make this 90-degree camera movement so it circles around an object that I placed in the scene.
To better explain the camera movement, here is a YouTube video that shows exactly the movement I'm trying to describe (see from second 14):
http://www.youtube.com/watch?v=19mbKZ0I5u4
Assuming the camera in the video is orbiting the car, here is how you would accomplish that in XNA.
For the sake of readability, we'll just use vectors instead of their individual components. So 'target' means a Vector3 that includes TargetX, TargetY, and TargetZ; the same goes for the camera's position. You can break the X, Y, Z values out into fields and build Vector3s from them to plug into this code later if you want, but really it would be best to work at the vector level instead of the component level.
//To orbit the car (target)
cameraPosition = Vector3.Transform(cameraPosition - target, Matrix.CreateRotationY(0.01f)) + target;
this.ViewMatrix = Matrix.CreateLookAt(cameraPosition, target, Vector3.Up);
Since all matrix rotations act about an axis that intersects the world origin, to use a rotation matrix to rotate the camera around the car, the object of rotation has to be shifted so that the target (and thus the rotation axis) sits at the world origin. The subtraction cameraPosition - target accomplishes that. Now cameraPosition can be rotated a little. Once it has been rotated, it needs to be shifted back to the scene at hand; that's what the '+ target' at the end of the line is for.
The 0.01f can be adjusted to whatever rotation rate suits you.
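Tying that into the camera class from the question, here is a rough sketch of what the Update-handler code might look like, assuming the question's component fields (PositionX, TargetX, etc.) and arbitrarily chosen NumPad4/NumPad6 keys for orbiting left and right:
// Sketch: orbit the camera around its target while NumPad4 or NumPad6 is held.
// Assumes the field names from the question; the 0.01f step is arbitrary.
if (keyboardstate.IsKeyDown(Keys.NumPad4) || keyboardstate.IsKeyDown(Keys.NumPad6))
{
    float step = keyboardstate.IsKeyDown(Keys.NumPad4) ? 0.01f : -0.01f;

    Vector3 cameraPosition = new Vector3(
        this.GameCamera.PositionX, this.GameCamera.PositionY, this.GameCamera.PositionZ);
    Vector3 target = new Vector3(
        this.GameCamera.TargetX, this.GameCamera.TargetY, this.GameCamera.TargetZ);

    // Shift so the target sits at the origin, rotate about Y, then shift back.
    cameraPosition = Vector3.Transform(cameraPosition - target, Matrix.CreateRotationY(step)) + target;

    this.GameCamera.PositionX = cameraPosition.X;
    this.GameCamera.PositionY = cameraPosition.Y;
    this.GameCamera.PositionZ = cameraPosition.Z;
}
this.GameCamera.UpdateView();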