Detecting ARKit compatible device from user agent - ios

We would like to enable a feature that allows a model to be viewed using a deep link to our ARKit app from a web page.
Has anyone found a way to determine whether a device is ARKit compatible using the user agent string or any other browser-based mechanism?
Thanks!

Apple seems to use the following code to show/hide the "Visit this page on iOS 12 to try AR Quick Look" banner on https://developer.apple.com/arkit/gallery/:
    (function () {
        var isRelAR = false;
        var a = document.createElement('a');

        if (a.relList.supports('ar')) {
            isRelAR = true;
        }

        document.documentElement.classList.add(isRelAR ? 'relar' : 'no-relar');
    })();
The interesting part, of course, is:

    var isRelAR = false;
    var a = document.createElement('a');

    if (a.relList.supports('ar')) {
        isRelAR = true;
    }
Act accordingly based on the value of isRelAR.
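For example, a minimal sketch of how you might wire this into a page (the element id, model URL, and fallback URL are hypothetical, and the relList probe is guarded because older browsers don't implement it at all):

    // Guarded variant of Apple's check: older browsers may not implement
    // relList (or relList.supports) at all, so probe before calling it.
    var a = document.createElement('a');
    var supportsAR = !!(a.relList && a.relList.supports && a.relList.supports('ar'));

    // "ar-link", the .usdz path, and the fallback URL are made up for this example.
    var link = document.getElementById('ar-link');
    if (link) {
        if (supportsAR) {
            link.rel = 'ar';                   // Safari opens the model in AR Quick Look
            link.href = 'models/product.usdz';
        } else {
            link.href = '/viewer/product';     // plain 3D viewer fallback
        }
    }

Note that for AR Quick Look itself, Safari expects the rel="ar" anchor to directly wrap an img element.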

Safari doesn’t expose any of the required hardware information for that.
If you already have a companion iOS app for your website, another option might be to still provide some non-AR experience for your content, so that the website has something to link to in all cases.
For example, AR furniture catalogs seem to be a thing now. But if the device isn’t ARKit capable, you could still provide a 3D model of each furniture piece linked from your website, letting the user spin it around and zoom in on it with touch gestures instead of placing it in AR.

Related

Q: Disable "zoom in/out" on scroll at scene

We use the base repo "Roomle UI", which is built on the "Roomle Web SDK". We are currently customizing it and integrating it into our website. We would like to deactivate the automatic "zoom in" via scrolling, because it interrupts the intended user flow. Unfortunately, we haven't yet found a way to do this while keeping the classic functionality like drag and drop.
Do you have any suggestions for handling this?
Currently that's not possible. Please create a feature request here: https://roomle.atlassian.net/servicedesk/customer/portal/4/group/5/create/24
What you could try (but be aware that this relies on private APIs, which can break at any time in the future) is the following:
!! Warning: the next snippet overrides private APIs !!
    window.deactivated = true;

    // Keep one reference to the input manager so the bind target and the
    // overridden method belong to the same object.
    var inputManager = RoomleConfigurator._sceneManager._cameraControl._inputManager;
    var oldOnMouseWheel = inputManager._onMouseWheel.bind(inputManager);

    inputManager._onMouseWheel = function () {
        console.log('!!!!WARNING WE CHANGED A PRIVATE METHOD!!!!');
        if (window.deactivated) {
            return;
        }
        oldOnMouseWheel(...arguments);
    };
Then, to deactivate or reactivate the zoom, you just set window.deactivated to true or false, as in the sketch below.
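A small usage sketch, for example driven by a hypothetical checkbox on the page (the element id is made up for illustration):

    // "zoom-toggle" is a hypothetical checkbox; any UI event could flip the flag.
    document.getElementById('zoom-toggle').addEventListener('change', function (e) {
        // unchecked = scroll zoom suppressed, checked = original behavior restored
        window.deactivated = !e.target.checked;
    });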
But as a reminder: these are private APIs, and they will break eventually.

How to unfocus from a WebView in UWP

I'm working on a UWP app that hosts a WebView which runs in a separate process.
    var webView = new Windows.UI.Xaml.Controls.WebView(WebViewExecutionMode.SeparateProcess);
This results in a behavior where, if the WebView has focus, the containing app can't regain it on its own simply by calling Focus() on one of its UI elements.
The app supports keyboard shortcuts that may move focus to different elements, but this doesn't work correctly while focus is held by the WebView. The target element appears to receive focus, yet the process itself doesn't seem to be activated (since the real focus resides in a different process, I suppose...).
I'm currently trying to activate the app programmatically through protocol registration in an attempt to regain focus.
I added a declaration in the app manifest for a custom protocol, mycustomprotocol, coupled with the following activation override:
    protected override void OnActivated(IActivatedEventArgs args)
    {
        // Cast to the protocol-specific args to get at the Uri
        if (args is ProtocolActivatedEventArgs protocolArgs &&
            protocolArgs.Uri.Scheme == "mycustomprotocol")
        { }
    }
And the following code to invoke the activation:
    var result = await Windows.System.Launcher.LaunchUriAsync(new Uri("mycustomprotocol:"));
This seems to work only on some computers. On others (not while debugging the app, only when it runs unattached), instead of regaining focus, the app's taskbar icon just flashes orange.
I've created a sample project showing the problem and the semi-working solution here.
Any insight on any of this would be great.
I can reproduce your issue. I found that when the focus is switched with the mouse, it can be transferred to the TextBlock, so you could work around the problem by simulating mouse input.
Please use the following code instead of FocusTarget.Focus(FocusState.Programmatic):
    InputInjector inputInjector = InputInjector.TryCreate();

    var infoDown = new InjectedInputMouseInfo();
    // Move the pointer over your target element by adjusting DeltaX/DeltaY.
    infoDown.DeltaX = 10;   // change to match your layout
    infoDown.DeltaY = -150; // change to match your layout
    infoDown.MouseOptions = InjectedInputMouseOptions.LeftDown;

    var infoUp = new InjectedInputMouseInfo();
    infoUp.DeltaX = 0;
    infoUp.DeltaY = 0;
    infoUp.MouseOptions = InjectedInputMouseOptions.LeftUp;

    inputInjector.InjectMouseInput(new[] { infoDown, infoUp });
Note: If you use the input injection APIs, you need to add the inputInjectionBrokered capability to your Package.appxmanifest.
But this is a restricted capability: an app that declares it can't pass Store verification, so you can't publish it in the Store.
I've been in discussions with a WebView software engineer. The problem is that the separate process still wants to own focus if you try to move the focus away from the webview. His solution is to ask the other process' web engine to give up focus with the following call:
    _ = webView.InvokeScriptAsync("eval", new string[] { "window.departFocus('up', { originLeft: 0, originTop: 0, originWidth: 0, originHeight: 0 });" });
You can call it before trying to change the focus to your target. I ran various tests and it works consistently.

ARKit: how to correctly check for supported devices?

I'm using ARKit, but it's not core functionality in my app, so I'm not setting the arkit key in UIRequiredDeviceCapabilities. I'm guarding with #available(iOS 11.0, *), but ARKit also requires an A9 processor or later (that is, iPhone 6s or newer...).
What's the best way to check for that? I've found a workaround that involves checking the device model in several places, but it seems overly complicated. And would that be rejected in App Store review?
You should check the isSupported boolean provided by the ARConfiguration class for this.
From the Apple Developer Documentation:
isSupported
A Boolean value indicating whether the current device supports this session configuration class.
All ARKit configurations require an iOS device with an A9 or later processor. If your app otherwise supports other devices and offers augmented reality as a secondary feature, use this property to determine whether to offer AR-based features to the user.
Just check ARConfiguration availability:

    if ARConfiguration.isSupported {
        // Great! Let's offer the ARKit experience.
    } else {
        // Sorry, this device doesn't support ARKit.
    }
Here is a solution for checking AR support on Android (React Native, looking for the ARCore package among the installed apps):

    // Get an array containing the installed packages
    var installedApps = require("react-native-installed-packages");
    let appArray = installedApps.getApps();

    // Check the array for the ARCore package
    var arPackage = appArray.find(function (_package) {
        return _package == "com.google.ar.core";
    });

    if (typeof arPackage === "undefined") {
        console.log("AR not supported");
    } else {
        console.log("AR supported");
    }

AudioContext.createMediaStreamSource alternative for iOS?

I've developed an app using Cordova and the Web Audio API, that allows the user to plug in headphones, press the phone against their heart, and hear their own heartbeat.
It does this by using audio filter nodes.
    // Set up userMedia
    context = new (window.AudioContext || window.webkitAudioContext)();

    navigator.getUserMedia = (navigator.getUserMedia ||
        navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia ||
        navigator.msGetUserMedia);

    navigator.getUserMedia(
        { audio: true },
        userMediaSuccess,
        function (e) {
            alert("error2 " + e.message);
        });

    function userMediaSuccess(stream) {
        // set microphone as input
        input = context.createMediaStreamSource(stream);

        // amplify the incoming sounds
        volume = context.createGain();
        volume.gain.value = 10;

        // filter out sounds below 25Hz
        lowPass = context.createBiquadFilter();
        lowPass.type = 'lowpass';
        lowPass.frequency.value = 25;

        // filter out sounds above 425Hz
        highPass = context.createBiquadFilter();
        highPass.type = 'highpass';
        highPass.frequency.value = 425;

        // apply the filters and amplification to the microphone input
        input.connect(lowPass);
        input.connect(highPass);
        input.connect(volume);

        // send the result of these filters to the phone's speakers
        highPass.connect(context.destination);
        lowPass.connect(context.destination);
        volume.connect(context.destination);
    }
It runs fine when I deploy to Android, but it seems most of these features aren't available in iOS mobile browsers.
I managed to get getUserMedia working using the iosRTC plugin, but createMediaStreamSource is still "not a function."
So, I'm looking for an alternative to the Web Audio API that can filter out frequencies, or if there are any plugins I could use, that would be perfect.
There's no way to do this on the web on iOS. You'd need a native app, since Apple doesn't support audio input in Safari.
Did you try using:

    document.addEventListener('deviceready', function () {
        // Just for iOS devices.
        if (window.device.platform === 'iOS') {
            cordova.plugins.iosrtc.registerGlobals();
        }
    });
You asked this question quite a while ago, but sadly createMediaStreamSource is still not supported in Safari Mobile (will it ever be?).
As previously said, a plugin is the only way to achieve this, and there is actually a Cordova/PhoneGap plugin that does exactly that: cordova-plugin-audioinput gives you access to sound from the microphone, either through the Web Audio API or via callbacks that deliver raw audio data chunks, and it supports iOS as well as Android.
Since I don't want to post the same answer twice, I'll instead point you to the following answer here on stackoverflow, where you'll also find a code example: https://stackoverflow.com/a/38464815/6609803
I'm the creator of the plugin and any feedback is appreciated.
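As an illustration, here is a minimal sketch of the plugin's Web Audio mode, based on its documented audioinput.start/connect API (treat the exact configuration values as assumptions):

    document.addEventListener('deviceready', function () {
        var ctx = new (window.AudioContext || window.webkitAudioContext)();

        // Capture the microphone and expose it as a Web Audio source node.
        audioinput.start({
            streamToWebAudio: true, // pipe capture into Web Audio instead of raw callbacks
            audioContext: ctx
        });

        // The filter chain from the question could be attached here; connecting
        // straight to the speakers is just a smoke test.
        audioinput.connect(ctx.destination);
    });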
Good news: there is now full support in iOS Safari:
https://developer.mozilla.org/en-US/docs/Web/API/AudioContext/createMediaStreamSource
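With that support, the capture can now be done with the standard APIs (a minimal sketch; the gain/biquad filter chain from the question would slot in where noted):

    var ctx = new (window.AudioContext || window.webkitAudioContext)();

    // Modern replacement for the deprecated navigator.getUserMedia call above.
    // Note: iOS Safari generally requires the AudioContext to be created or
    // resumed from a user gesture, e.g. inside a tap handler.
    navigator.mediaDevices.getUserMedia({ audio: true })
        .then(function (stream) {
            var input = ctx.createMediaStreamSource(stream);
            // ...insert the gain/filter chain from the question here...
            input.connect(ctx.destination);
        })
        .catch(function (e) {
            alert("getUserMedia error: " + e.message);
        });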

Flex/Flash Builder/ActionScript/AIR/Mobile iOS: How to take video using the camera and/or browse for & view/access video stored in the "Camera Roll"

My understanding currently is that:
CameraUI
I can use the CameraUI to access the built in camera for MediaType.VIDEO and that delegates to the built-in video camera app and lets me record a video. My app does that now.
When I stop recording and click the "Use" button, I am returned to my app and theoretically I have a valid MediaPromise.
iOS does -not- provide a valid/usable URL/filename for the recorded video (or for photos), so I would have to use a Loader to bring in/use/access the 'recorded' video... AND iOS does not actually create a file anywhere on the device, most importantly not in the Camera Roll, where one would expect it given the normal behavior of the system camera/video app.
The documentation says that the Loader can load various image types and SWFs, but nothing about video data, so I conclude that I cannot actually use the CameraUI to generate a valid MediaPromise that I can then pass to a Loader class (or similar) to read in the information created by the system camera and then manipulate it (upload it, save it to applicationStorageDirectory, and/or display it in one of the two video player components available in the API).
CameraRoll
I can have video entities in the iOS Camera Roll, but the AS3/AIR 3.5 CameraRoll class won't let me view/access/reference them in any way.
Normal File I/O
All my attempts to use the AIR 3.5 File classes to browse to the storage location of the iOS Camera Roll have been rebuffed.
------- Questions -------
Am I correct in believing that there is a way to take video but no way to use the video that's been captured (no way to use the resulting MediaPromise successfully)?
I believe you can take video and access it on Android, but there's nothing in the documentation that says you cannot on iOS.
Am I correct in believing that iOS sandboxes apps so that they cannot browse to video/photo storage using standard file I/O, but only through the apparently non-workable means I've tried (CameraUI & CameraRoll)?
Am I wrong to think that these are rather obvious needs, achievable via the Xcode/Objective-C route, that the AIR Mobile framework does not support, either because Apple blocks the functionality or because Adobe has failed to meet reasonable expectations?
One item of ironic note: if I use the iOS system camera app to record a video, a thumbnail of that video then appears in the Gallery/Camera Roll, and of course I can share it, view it, or whatever... If I use AIR's CameraRoll.browseForImage(), provided I haven't used the camera to take another image since, the folder icon shown for the pictures folder uses the thumbnail of the last object added, in this case the video I took. But if I then enter the folder, the video cannot be found. It's teasing us. It knows it's there, but it is apparently forbidden fruit.
I can't answer all your questions, so this entry may not be acceptable, but I found this page while searching for a solution to some of the problems you described and thought someone else might find this answer (partially) useful.
To save the movie you just took, you need to open the promise and read the data from it.
iOS won't save the file anywhere, so MediaPromise.file is always null.
This is my solution to the problem:
    private var camera:CameraUI;
    private var dataInput:IDataInput;

    public function recordVideo():void
    {
        // Start the camera and ask for a video
        camera = new CameraUI();
        camera.addEventListener(MediaEvent.COMPLETE, onCameraComplete);
        camera.launch(MediaType.VIDEO);
    }

    private function onCameraComplete(event:MediaEvent):void
    {
        // event.data is a MediaPromise, and MediaPromise.open() returns an IDataInput.
        // Cast it to a dispatcher and wait until reading is complete.
        dataInput = event.data.open();
        var dispatcher:IEventDispatcher = IEventDispatcher(dataInput);
        dispatcher.addEventListener(Event.COMPLETE, onDataInputComplete);
    }

    private function onDataInputComplete(event:Event):void
    {
        // We can do whatever we want with the data; here we write it to a file.
        // Note: the destination below is an example path in app storage; a bare
        // `new File()` has no path and cannot be opened for writing.
        var file:File = File.applicationStorageDirectory.resolvePath("recorded.mov");
        var bytes:ByteArray = new ByteArray();
        var stream:FileStream = new FileStream();

        // Read the data from the opened MediaPromise
        dataInput.readBytes(bytes);
        stream.open(file, FileMode.WRITE);
        stream.writeBytes(bytes, 0, bytes.length);
        stream.close();
    }
Also, I'm still looking for a way to put the movie into the CameraRoll.
