How do we use Sceneform along with ARCore Cloud Anchors? - arcore

Every time I try to host an anchor with session.hostCloudAnchor(anchor); it throws a NotTrackingException.
How can we host and resolve the anchor that we get from arFragment.setOnTapArPlaneListener?
This is the snippet of the code I was using:
arFragment.setOnTapArPlaneListener(
    (HitResult hitResult, Plane plane, MotionEvent motionEvent) -> {
        Camera camera = arFragment.getArSceneView().getArFrame().getCamera();
        TrackingState cameraTrackingState = camera.getTrackingState();
        if (andyRenderable == null) {
            return;
        }
        if (plane.getType() != Type.HORIZONTAL_UPWARD_FACING) {
            return;
        }
        if (cameraTrackingState == TrackingState.TRACKING && session != null) {
            // Create the Anchor.
            anchor = hitResult.createAnchor();
            try {
                session.hostCloudAnchor(anchor);
            } catch (NotTrackingException e) {
                e.printStackTrace();
            }
            setNewAnchor(anchor);
            appAnchorState = AppAnchorState.HOSTING;
            Toast.makeText(HelloSceneformActivity.this, "Now, hosting anchor", Toast.LENGTH_SHORT)
                    .show();
            AnchorNode anchorNode = new AnchorNode(anchor);
            anchorNode.setParent(arFragment.getArSceneView().getScene());
            // Create the transformable andy and add it to the anchor.
            TransformableNode andy = new TransformableNode(arFragment.getTransformationSystem());
            andy.setParent(anchorNode);
            andy.setRenderable(andyRenderable);
            andy.select();
            checkUpdatedAnchor();
        }
    });

You need to check the anchor's tracking state and only then proceed with hosting:
override fun onTapPlane(hitResult: HitResult, plane: Plane, motionEvent: MotionEvent?) {
    if (plane.type == Plane.Type.HORIZONTAL_UPWARD_FACING) {
        val anchor = hitResult.createAnchor()
        val anchorNode = AnchorNode(anchor)
        anchorNode.setParent(arFragment.arSceneView.scene)
        if (anchor.trackingState == TrackingState.TRACKING) {
            viewModel.hostAnchorToCloud(anchor)
        }
    }
}
Before hosting an anchor:
Try to look at the anchor from different angles.
Move around the anchor for at least a few seconds.
Make sure you are not too far away from the anchor.
Refer to: https://developers.google.com/ar/develop/java/cloud-anchors/cloud-anchors-developer-guide-android
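For completeness, here is a minimal sketch of the polling step that the question's checkUpdatedAnchor() call hints at. It is not the asker's actual method (its body is never shown); it reuses the anchor and appAnchorState fields from the question, assumes hypothetical HOSTED/NONE enum values, and relies on two documented ARCore behaviors: session.hostCloudAnchor() returns a new Anchor whose getCloudAnchorState() you poll, and session.resolveCloudAnchor(id) works the same way on the receiving device.
// Inside HelloSceneformActivity (names taken from the question; the HOSTED and
// NONE states are illustrative, since AppAnchorState's values are not shown).
private Anchor anchor;
private AppAnchorState appAnchorState = AppAnchorState.NONE;

private void hostAnchor(Anchor newAnchor) {
    // hostCloudAnchor() returns a NEW anchor; poll that one, not the original.
    anchor = session.hostCloudAnchor(newAnchor);
    appAnchorState = AppAnchorState.HOSTING;
}

// Poll once per frame, e.g.:
// arFragment.getArSceneView().getScene().addOnUpdateListener(t -> checkUpdatedAnchor());
private void checkUpdatedAnchor() {
    if (appAnchorState != AppAnchorState.HOSTING) {
        return;
    }
    Anchor.CloudAnchorState state = anchor.getCloudAnchorState();
    if (state == Anchor.CloudAnchorState.SUCCESS) {
        appAnchorState = AppAnchorState.HOSTED;
        String cloudAnchorId = anchor.getCloudAnchorId();
        // Share this id; the other device calls session.resolveCloudAnchor(cloudAnchorId)
        // and polls getCloudAnchorState() on the returned anchor in the same way.
    } else if (state.isError()) {
        appAnchorState = AppAnchorState.NONE;
        Log.e("CloudAnchors", "Hosting failed: " + state);
    }
}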

Related

Detect a screen touch outside the MultiSpinnerSearch view

I have created an Android app with Xamarin.Android. I have a MultiSpinnerSearch in a fragment, and when it is opened, all the items inside it are preselected. The problem: if the user touches the screen outside the spinner, the spinner closes and all the items end up in my list. I don't want that; unless the user clicks "OK" in the spinner, no items should be added to my list. So I tried to handle the touch event to prevent the selection of items on a screen touch, but it didn't work. Here is the code I tried:
public override bool DispatchTouchEvent(MotionEvent ev)
{
    if (ev.Action == MotionEventActions.Down)
    {
        View v = CurrentFocus;
        if (v is MultiSpinnerSearch)
        {
            Rect outRect = new Rect();
            v.GetGlobalVisibleRect(outRect);
            if (!outRect.Contains((int)ev.RawX, (int)ev.RawY))
            {
                Toast.MakeText(this, "shgsg", ToastLength.Long).Show();
            }
        }
    }
    return base.DispatchTouchEvent(ev);
}
I tried this in my main activity, but it didn't work. Then I tried this in my fragment, in the OnTouch listener interface:
if (e.Action == MotionEventActions.Down)
{
    if (labors_dropdown.IsFocused == true)
    {
        Android.Graphics.Rect rect = new Rect();
        labors_dropdown.GetGlobalVisibleRect(rect);
        if (!rect.Contains((int)e.RawX, (int)e.RawY))
        {
            Toast.MakeText(this.Context, "gfgf", ToastLength.Short).Show();
        }
    }
}
That didn't work either. What should I do? Thanks in advance.
You could try the below method:
public override bool DispatchTouchEvent(MotionEvent ev)
{
    if (ev.Action == MotionEventActions.Down)
    {
        View v = FindViewById<MultiSpinnerSearch>(Resource.Id.xxxxx);
        // Use RawX/RawY here: GetLocationOnScreen below returns screen
        // coordinates, so the touch point must be in screen coordinates too.
        if (!IsTouchPointInView(v, (int)ev.RawX, (int)ev.RawY))
        {
            Toast.MakeText(this, "shgsg", ToastLength.Long).Show();
        }
    }
    return base.DispatchTouchEvent(ev);
}

private bool IsTouchPointInView(View targetView, int currentX, int currentY)
{
    if (targetView == null)
    {
        return false;
    }
    // Compute the view's bounds in screen coordinates.
    int[] location = new int[2];
    targetView.GetLocationOnScreen(location);
    int left = location[0];
    int top = location[1];
    int right = left + targetView.MeasuredWidth;
    int bottom = top + targetView.MeasuredHeight;
    return currentX >= left && currentX <= right && currentY >= top && currentY <= bottom;
}

Manipulate JavaCamera2View to set parameters for the camera device - OpenCV in Android

Software: Android 4.1.1, OpenCV 4.5.0
To briefly summarize my project: I want to continuously monitor the frequency of a flickering LED light source using the rolling-shutter effect of a CMOS camera in an Android smartphone. In the video/image stream, a light-dark stripe pattern should be visible, which I want to analyze with OpenCV.
If you are interested in the method, you can find more information here:
RollingLight: Light-to-Camera Communications
Now the actual problem: I need to set the camera parameters to fixed values before I start analyzing with OpenCV. The exposure time should be as short as possible, the ISO (sensitivity) should get a medium value, and the focus should be set as close as possible.
For this I made the following changes (marked with // comments) in the methods initializeCamera() and createCameraPreviewSession() from OpenCV's JavaCamera2View.java file.
initializeCamera() - from JavaCamera2View.java
public int minExposure = 0;
public int maxExposure = 0;
public long valueExposure = minExposure;
public float valueFocus = 0;
public int minIso = 0;
public int maxIso = 0;
public int valueIso = minIso;
public long valueFrameDuration = 0;

protected boolean initializeCamera() {
    Log.i(LOGTAG, "initializeCamera");
    CameraManager manager = (CameraManager) getContext().getSystemService(Context.CAMERA_SERVICE);
    try {
        String camList[] = manager.getCameraIdList();
        if (camList.length == 0) {
            Log.e(LOGTAG, "Error: camera isn't detected.");
            return false;
        }
        if (mCameraIndex == CameraBridgeViewBase.CAMERA_ID_ANY) {
            mCameraID = camList[0];
        } else {
            for (String cameraID : camList) {
                CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraID);
                if ((mCameraIndex == CameraBridgeViewBase.CAMERA_ID_BACK &&
                        characteristics.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_BACK) ||
                    (mCameraIndex == CameraBridgeViewBase.CAMERA_ID_FRONT &&
                        characteristics.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_FRONT)
                ) {
                    mCameraID = cameraID;
                    break;
                }
            }
        }
        if (mCameraID != null) {
            Log.i(LOGTAG, "Opening camera: " + mCameraID);
            // I added this code to get the parameters ------------------------------------------
            CameraManager mCameraManager = (CameraManager) getContext().getSystemService(Context.CAMERA_SERVICE);
            try {
                CameraCharacteristics mCameraCharacteristics = mCameraManager.getCameraCharacteristics(mCameraID);
                valueFocus = mCameraCharacteristics.get(CameraCharacteristics.LENS_INFO_MINIMUM_FOCUS_DISTANCE);
                Range<Integer> rangeExposure = mCameraCharacteristics.get(CameraCharacteristics.CONTROL_AE_COMPENSATION_RANGE);
                minExposure = rangeExposure.getLower();
                maxExposure = rangeExposure.getUpper();
                Range<Integer> rangeIso = mCameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE);
                minIso = rangeIso.getLower();
                maxIso = rangeIso.getUpper();
                valueFrameDuration = mCameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_MAX_FRAME_DURATION);
            } catch (CameraAccessException e) {
                Log.e(LOGTAG, "calcPreviewSize - Camera Access Exception", e);
            } catch (IllegalArgumentException e) {
                Log.e(LOGTAG, "calcPreviewSize - Illegal Argument Exception", e);
            } catch (SecurityException e) {
                Log.e(LOGTAG, "calcPreviewSize - Security Exception", e);
            }
            // end of code -----------------------------------------------------------------------
            manager.openCamera(mCameraID, mStateCallback, mBackgroundHandler);
        } else { // make JavaCamera2View behave in the same way as JavaCameraView
            Log.i(LOGTAG, "Trying to open camera with the value (" + mCameraIndex + ")");
            if (mCameraIndex < camList.length) {
                mCameraID = camList[mCameraIndex];
                manager.openCamera(mCameraID, mStateCallback, mBackgroundHandler);
            } else {
                // CAMERA_DISCONNECTED is used when the camera id is no longer valid
                throw new CameraAccessException(CameraAccessException.CAMERA_DISCONNECTED);
            }
        }
        return true;
    } catch (CameraAccessException e) {
        Log.e(LOGTAG, "OpenCamera - Camera Access Exception", e);
    } catch (IllegalArgumentException e) {
        Log.e(LOGTAG, "OpenCamera - Illegal Argument Exception", e);
    } catch (SecurityException e) {
        Log.e(LOGTAG, "OpenCamera - Security Exception", e);
    }
    return false;
}
createCameraPreviewSession() - from JavaCamera2View.java
private void createCameraPreviewSession() {
    final int w = mPreviewSize.getWidth(), h = mPreviewSize.getHeight();
    Log.i(LOGTAG, "createCameraPreviewSession(" + w + "x" + h + ")");
    if (w < 0 || h < 0)
        return;
    try {
        if (null == mCameraDevice) {
            Log.e(LOGTAG, "createCameraPreviewSession: camera isn't opened");
            return;
        }
        if (null != mCaptureSession) {
            Log.e(LOGTAG, "createCameraPreviewSession: mCaptureSession is already started");
            return;
        }
        mImageReader = ImageReader.newInstance(w, h, mPreviewFormat, 2);
        mImageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
            @Override
            public void onImageAvailable(ImageReader reader) {
                Image image = reader.acquireLatestImage();
                if (image == null)
                    return;
                // sanity checks - 3 planes
                Image.Plane[] planes = image.getPlanes();
                assert (planes.length == 3);
                assert (image.getFormat() == mPreviewFormat);
                JavaCamera2Frame tempFrame = new JavaCamera2Frame(image);
                deliverAndDrawFrame(tempFrame);
                tempFrame.release();
                image.close();
            }
        }, mBackgroundHandler);
        Surface surface = mImageReader.getSurface();
        mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_MANUAL);
        mPreviewRequestBuilder.addTarget(surface);
        mCameraDevice.createCaptureSession(Arrays.asList(surface),
            new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(CameraCaptureSession cameraCaptureSession) {
                    Log.i(LOGTAG, "createCaptureSession::onConfigured");
                    if (null == mCameraDevice) {
                        return; // camera is already closed
                    }
                    mCaptureSession = cameraCaptureSession;
                    try {
                        // I added this code to set the parameters ----------------------------
                        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_OFF);
                        mPreviewRequestBuilder.set(CaptureRequest.LENS_FOCUS_DISTANCE, valueFocus);
                        mPreviewRequestBuilder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, valueExposure);
                        mPreviewRequestBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, valueIso);
                        mPreviewRequestBuilder.set(CaptureRequest.SENSOR_FRAME_DURATION, valueFrameDuration);
                        // end of code --------------------------------------------------------
                        mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), null, mBackgroundHandler);
                        Log.i(LOGTAG, "CameraPreviewSession has been started");
                    } catch (Exception e) {
                        Log.e(LOGTAG, "createCaptureSession failed", e);
                    }
                }
                @Override
                public void onConfigureFailed(CameraCaptureSession cameraCaptureSession) {
                    Log.e(LOGTAG, "createCameraPreviewSession failed");
                }
            },
            null
        );
    } catch (CameraAccessException e) {
        Log.e(LOGTAG, "createCameraPreviewSession", e);
    }
}
Unfortunately, the edits above do not work. The console shows:
java.lang.NullPointerException: Attempt to invoke virtual method 'float java.lang.Float.floatValue()' on a null object reference
And that is probably not the only problem.
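That particular trace matches unboxing a null Float: CameraCharacteristics.get() returns null for keys a device does not support, and LENS_INFO_MINIMUM_FOCUS_DISTANCE is exactly such a key on fixed-focus cameras. Below is a minimal null-safe sketch of the characteristics block above; the fallback values are illustrative assumptions, and note that CONTROL_AE_COMPENSATION_RANGE is a range of EV steps, not the nanosecond exposure time that SENSOR_EXPOSURE_TIME expects.
// Null-safe reads: CameraCharacteristics.get() returns null for unsupported keys,
// and unboxing that null is exactly the NPE shown above.
Float minFocus = mCameraCharacteristics.get(CameraCharacteristics.LENS_INFO_MINIMUM_FOCUS_DISTANCE);
valueFocus = (minFocus != null) ? minFocus : 0f; // 0f = focus at infinity (fixed-focus)

// SENSOR_INFO_EXPOSURE_TIME_RANGE is the key that matches SENSOR_EXPOSURE_TIME (nanoseconds).
Range<Long> exposureRange = mCameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_EXPOSURE_TIME_RANGE);
if (exposureRange != null) {
    valueExposure = exposureRange.getLower(); // shortest supported exposure, in ns
}

Range<Integer> isoRange = mCameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE);
if (isoRange != null) {
    minIso = isoRange.getLower();
    maxIso = isoRange.getUpper();
    valueIso = (minIso + maxIso) / 2; // a medium sensitivity, as described above
}

Long maxFrameDuration = mCameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_MAX_FRAME_DURATION);
if (maxFrameDuration != null) {
    valueFrameDuration = maxFrameDuration;
}

// Manual sensor values only take effect on devices that advertise the
// MANUAL_SENSOR capability (REQUEST_AVAILABLE_CAPABILITIES); LEGACY devices ignore them.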
I have already read numerous posts about the interaction between OpenCV and Android's Camera2 (CameraX) API. Regarding my question, however, this post unfortunately sums it up: "OpenCV will not work with android.camera2". But that post is a few years old, and I hope there is a workaround for the problem by now.
1.) Do you know how I can set fixed camera parameters and then do an analysis in OpenCV? Could you explain the steps to me?
2.) Are there any existing projects available as a reference?
3.) I need the highest possible frame rate from the camera. Otherwise I would have thought I could work with getBitmap() and forward the image to OpenCV, but the frame rate is really bad, and I am also not sure how to fix the camera parameters before taking the photo. Do you know any alternatives?
Thanks in advance for your support. If you need more info, I will be happy to share it.
--
April 2021: the questions are still unanswered. If someone doesn't answer them in detail but has links that could help me, I would really appreciate it.

Unity - Disable AR HitTest after initial placement

I am using the ARKit plugin for Unity, leveraging UnityARHitTestExample.cs.
After I place my object into the world scene, I want to stop ARKit from trying to place the object again every time I touch the screen. Can someone please help?
There are a number of ways you can achieve this, although perhaps the simplest is creating a boolean to determine whether or not your model has been placed.
First of all, you would create the boolean noted above, e.g.:
private bool modelPlaced = false;
Then you would set this to true within the HitTestWithResultType function once your model has been placed:
bool HitTestWithResultType (ARPoint point, ARHitTestResultType resultTypes)
{
    List<ARHitTestResult> hitResults = UnityARSessionNativeInterface.GetARSessionNativeInterface ().HitTest (point, resultTypes);
    if (hitResults.Count > 0) {
        foreach (var hitResult in hitResults) {
            //1. If Our Model Hasn't Been Placed, Set Its Transform From The HitTest WorldTransform
            if (!modelPlaced) {
                m_HitTransform.position = UnityARMatrixOps.GetPosition (hitResult.worldTransform);
                m_HitTransform.rotation = UnityARMatrixOps.GetRotation (hitResult.worldTransform);
                Debug.Log (string.Format ("x:{0:0.######} y:{1:0.######} z:{2:0.######}", m_HitTransform.position.x, m_HitTransform.position.y, m_HitTransform.position.z));
                //2. Prevent Our Model From Being Positioned Again
                modelPlaced = true;
            }
            return true;
        }
    }
    return false;
}
And then in the Update() function:
void Update () {
    //Only Run The HitTest If We Haven't Placed Our Model
    if (!modelPlaced) {
        if (Input.touchCount > 0 && m_HitTransform != null)
        {
            var touch = Input.GetTouch(0);
            if (touch.phase == TouchPhase.Began || touch.phase == TouchPhase.Moved)
            {
                var screenPosition = Camera.main.ScreenToViewportPoint(touch.position);
                ARPoint point = new ARPoint {
                    x = screenPosition.x,
                    y = screenPosition.y
                };
                ARHitTestResultType[] resultTypes = {
                    ARHitTestResultType.ARHitTestResultTypeExistingPlaneUsingExtent,
                };
                foreach (ARHitTestResultType resultType in resultTypes)
                {
                    if (HitTestWithResultType (point, resultType))
                    {
                        return;
                    }
                }
            }
        }
    }
}
Hope it helps...

Difficulty updating InkPresenter visual after removing strokes?

I am creating an inkcanvas (CustomInkCanvas) that receives Gestures. At different times during its use, I am placing additional panels over different parts of the inkcanvas. All is well, and the part of the CustomInkCanvas that is not covered by another panel responds appropriately to ink and gestures.
However, occasionally a gesture is not recognized, so in the default case of the gesture handler I am trying to remove the ink from the CustomInkCanvas, even when it is not the uppermost panel.
How is this done?
Note: I have tried everything I can think of, including:
Dispatcher with Background update as:
cink.InkPresenter.Dispatcher.Invoke(DispatcherPriority.Background, EmptyDelegate);
Clearing the strokes with:
Strokes.Clear();
cink.InkPresenter.Strokes.Clear();
Invalidating the visual with:
cink.InkPresenter.InvalidateVisual();
cink.InvalidateVisual();
And even:
foreach (Stroke s in Strokes)
{
    cink.InkPresenter.Strokes.Remove(s);
}
Here is the full code...
void inkCanvas_Gesture(object sender, InkCanvasGestureEventArgs e)
{
CustomInkCanvas cink = sender as CustomInkCanvas;
ReadOnlyCollection<GestureRecognitionResult> gestureResults = e.GetGestureRecognitionResults();
StylusPointCollection styluspoints = e.Strokes[0].StylusPoints;
TextBlock tb; // instance of the textBlock being used by the InkCanvas.
Point editpoint; // user point to use for the start of editing.
TextPointer at; // textpointer that corresponds to the lowestpoint of the gesture.
Run parentrun; // the selected run containing the lowest point.
// return if there is no textBlock.
tb = GetVisualChild<TextBlock>(cink);
if (tb == null) return;
// Check the first recognition result for a gesture.
isWriting = false;
if (gestureResults[0].RecognitionConfidence == RecognitionConfidence.Strong)
{
switch (gestureResults[0].ApplicationGesture)
{
#region [Writing]
default:
bool AllowInking;
editpoint = GetEditorPoint(styluspoints, EditorPoints.Writing);
at = tb.GetPositionFromPoint(editpoint, true);
parentrun = tb.InputHitTest(editpoint) as Run;
if (parentrun == null)
{
AllowInking = true;
TextPointer At = tb.ContentEnd;
Here = (Run)At.GetAdjacentElement(LogicalDirection.Backward);
}
else
{
Here = parentrun;
AllowInking = String.IsNullOrWhiteSpace(parentrun.Text);
}
*** THIS FAILS TO REMOVE THE INK FROM THE DISPLAY ???? *********
if (AllowInking == false)
{
foreach (Stroke s in Strokes)
{
cink.InkPresenter.Strokes.Remove(s);
}
// remove ink from display
// Strokes.Clear();
// cink.InkPresenter.Strokes.Clear();
cink.InkPresenter.InvalidateVisual();
cink.InkPresenter.Dispatcher.Invoke(DispatcherPriority.Background, EmptyDelegate);
return;
}
// stop the InkCanvas from recognizing gestures
EditingMode = InkCanvasEditingMode.Ink;
isWriting = true;
break;
#endregion
}
}
}
private static Action EmptyDelegate = delegate() { };
Thanks in advance for any help.
It would be nice to get a guru response to this, but for anybody else getting here: apparently the strokes that make up the gesture have not yet been added to the InkCanvas, so there is nothing to remove or clear from within the gesture handler. Strokes are only added to the InkCanvas AFTER the gesture handler runs. The solution this newbie ended up with was to set a flag when ink was not allowed, and then act on it in the StrokesChanged handler:
if (AllowInking == false)
{
    ClearStrokes = true;
    return;
}

void Strokes_StrokesChanged(object sender, StrokeCollectionChangedEventArgs e)
{
    if (ClearStrokes == true)
    {
        ClearStrokes = false;
        Strokes.Clear();
        return;
    }
}
All works now. Is there a better way?

Set an OnClickListener for an SVG element

Say I have an SVG element. How do I add an onClickListener?
solved, see below.
I'm going to guess you mean a FieldChangeListener rather than an OnClickListener (wrong platform ;). SVGImage isn't part of the RIM-developed objects, so unfortunately you won't be able to. Anything that can have a FieldChangeListener has to be a subclass of the net.rim.device.api.ui.Field class.
Just in case someone's interested in how it's done...
try {
    InputStream inputStream = getClass().getResourceAsStream("/svg/sphere1.svg");
    _image = (SVGImage)SVGImage.createImage(inputStream, null);
    _animator = SVGAnimator.createAnimator(_image, "net.rim.device.api.ui.Field");
    _document = _image.getDocument();
    _svg123 = (SVGElement)_document.getElementById("123");
}
catch (IOException e) { e.printStackTrace(); }

Field _svgField = (Field)_animator.getTargetComponent();
_svgField.setBackground(blackBackground);
add(_svgField);
_svg123.addEventListener("click", this, false);
_svg123.addEventListener("DOMFocusIn", this, false);
_svg123.addEventListener("DOMFocusOut", this, false);

public void handleEvent(Event evt) {
    // Compare strings with equals(), not ==, which only compares references.
    if (_svg123 == evt.getCurrentTarget() && "click".equals(evt.getType())) {
        Dialog.alert("You clicked 123");
    }
    if (_svg123 == evt.getCurrentTarget() && "DOMFocusIn".equals(evt.getType())) {
        ((SVGElement) _document.getElementById("outStroke123")).setTrait("fill", "#FF0000");
    }
    if (_svg123 == evt.getCurrentTarget() && "DOMFocusOut".equals(evt.getType())) {
        ((SVGElement) _document.getElementById("outStroke123")).setTrait("fill", "#2F4F75");
    }
}