I want to figure out best practices for the following. Most of the SDKs and APIs I use give me two app IDs: one for live and one for testing. What is the best way to set this up so that I only have to change one variable and the app knows which app IDs to load? Example:
// When app is live
kFACEBOOK_APP_ID = @"12345_live";
// When app is in development
kFACEBOOK_APP_ID = @"12345_dev";

APP_LIVE = TRUE;
[FBSettings setDefaultAppID:kFACEBOOK_APP_ID];

This should send @"12345_live".
You could do this with some basic programming fundamentals.
Treat the following code as pseudocode:
APP_LIVE = true;

if (APP_LIVE) {
    // Application is in LIVE mode
    kFACEBOOK_APP_ID = @"12345_live";
    kOTHER_APP_ID = @"43124312_live";
} else {
    // Application is in DEV mode
    kFACEBOOK_APP_ID = @"12345_dev";
    kOTHER_APP_ID = @"43124312_dev";
}

// Regardless of application state (LIVE/DEV), the appropriate ID is now set
[FBSettings setDefaultAppID:kFACEBOOK_APP_ID];
This code can be used in place of your current [FBSettings setDefaultAppID:] call.
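For completeness, here is a minimal Objective-C sketch of the same idea (the flag, constant names, and IDs are placeholders taken from the question, not a real configuration):

// Single switch that decides which environment the app targets.
static BOOL const APP_LIVE = YES;

// IDs for both environments (placeholders).
static NSString * const kFacebookAppIDLive = @"12345_live";
static NSString * const kFacebookAppIDDev  = @"12345_dev";

- (void)configureFacebook {
    // Pick the ID that matches the current environment and hand it to the SDK.
    NSString *facebookAppID = APP_LIVE ? kFacebookAppIDLive : kFacebookAppIDDev;
    [FBSettings setDefaultAppID:facebookAppID];
}

This way, flipping APP_LIVE is the only change needed to switch every SDK over to the other set of IDs.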
We would like to enable a feature that allows a model to be viewed using a deep link to our ARKit app from a web page.
Has anyone found a way to detect whether a device is ARKit compatible using the user agent string or any other browser-based mechanism?
Thanks!
Apple seems to use the following code to show/hide the "Visit this page on iOS 12 to try AR Quick Look" message on https://developer.apple.com/arkit/gallery/:
(function () {
    var isRelAR = false;
    var a = document.createElement('a');
    if (a.relList.supports('ar')) {
        isRelAR = true;
    }
    document.documentElement.classList.add(isRelAR ? 'relar' : 'no-relar');
})();
The interesting part, of course, being:
var isRelAR = false;
var a = document.createElement('a');
if (a.relList.supports('ar')) {
    isRelAR = true;
}
Act accordingly based on the value of isRelAR.
Safari doesn’t expose any of the required hardware information for that.
If you already have a companion iOS app for your website, another option might be to still provide some non-AR experience for your content, so that the website has something to link to in all cases.
For example, AR furniture catalogs seem to be a thing now. But if the device isn’t ARKit capable, you could still provide a 3D model of each furniture piece linked from your website, letting the user spin it around and zoom in on it with touch gestures instead of placing it in AR.
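If you do take the companion-app route, the in-app check is straightforward. Here is a rough Objective-C sketch (the presentation methods are hypothetical) that branches on ARWorldTrackingConfiguration's isSupported:

#import <ARKit/ARKit.h>

- (void)showModelAtURL:(NSURL *)modelURL {
    if ([ARWorldTrackingConfiguration isSupported]) {
        // ARKit-capable device: place the model in AR.
        [self presentARViewerForModel:modelURL];   // hypothetical method
    } else {
        // Fallback: plain 3D viewer with touch gestures.
        [self present3DViewerForModel:modelURL];   // hypothetical method
    }
}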
I've developed an app using Cordova and the Web Audio API that allows the user to plug in headphones, press the phone against their heart, and hear their own heartbeat.
It does this by using audio filter nodes.
// Setup userMedia
context = new (window.AudioContext || window.webkitAudioContext);
navigator.getUserMedia = (navigator.getUserMedia ||
                          navigator.webkitGetUserMedia ||
                          navigator.mozGetUserMedia ||
                          navigator.msGetUserMedia);
navigator.getUserMedia(
    {audio: true},
    userMediaSuccess,
    function(e) {
        alert("error2 " + e.message);
    });

function userMediaSuccess(stream)
{
    // set microphone as input
    input = context.createMediaStreamSource(stream);
    // amplify the incoming sounds
    volume = context.createGain();
    volume.gain.value = 10;
    // filter out sounds below 25Hz
    lowPass = context.createBiquadFilter();
    lowPass.type = 'lowpass';
    lowPass.frequency.value = 25;
    // filter out sounds above 425Hz
    highPass = context.createBiquadFilter();
    highPass.type = 'highpass';
    highPass.frequency.value = 425;
    // apply the filters and amplification to microphone input
    input.connect(lowPass);
    input.connect(highPass);
    input.connect(volume);
    // send the result of these filters to the phone's speakers
    highPass.connect(context.destination);
    lowPass.connect(context.destination);
    volume.connect(context.destination);
}
It runs fine when I deploy to Android, but it seems most of these features aren't available on iOS mobile browsers.
I managed to make getUserMedia function using the iosRTC plugin, but createMediaStreamSource is still "not a function."
So I'm looking for an alternative to the Web Audio API that can filter out frequencies; or, if there are any plugins I could use, that would be perfect.
There's no way to do this on the web on iOS. You'd need a native app, since Apple doesn't support audio input in Safari.
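To illustrate what a native approach could look like, here is a rough Objective-C sketch using AVAudioEngine and AVAudioUnitEQ to route the microphone through a 25–425 Hz band (the band comes from the question's comments; session setup and error handling are minimal, and this is a sketch rather than a drop-in replacement):

#import <AVFoundation/AVFoundation.h>

// Allow simultaneous recording and playback.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioUnitEQ *eq = [[AVAudioUnitEQ alloc] initWithNumberOfBands:2];

// High-pass at 25 Hz and low-pass at 425 Hz, i.e. keep roughly 25–425 Hz.
eq.bands[0].filterType = AVAudioUnitEQFilterTypeHighPass;
eq.bands[0].frequency  = 25.0;
eq.bands[0].bypass     = NO;
eq.bands[1].filterType = AVAudioUnitEQFilterTypeLowPass;
eq.bands[1].frequency  = 425.0;
eq.bands[1].bypass     = NO;

// Microphone -> EQ -> speaker.
[engine attachNode:eq];
AVAudioFormat *format = [engine.inputNode outputFormatForBus:0];
[engine connect:engine.inputNode to:eq format:format];
[engine connect:eq to:engine.mainMixerNode format:format];

NSError *error = nil;
if (![engine startAndReturnError:&error]) {
    NSLog(@"Could not start audio engine: %@", error);
}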
Did you try using the following?
document.addEventListener('deviceready', function () {
    // Just for iOS devices.
    if (window.device.platform === 'iOS') {
        cordova.plugins.iosrtc.registerGlobals();
    }
});
You asked this question quite a while ago, but sadly createMediaStreamSource is still not supported in Safari Mobile (will it ever be?).
As previously said, a plugin is the only way to achieve this, and there is actually a Cordova/PhoneGap plugin that does exactly that. cordova-plugin-audioinput gives you access to the sound from the microphone, either through the Web Audio API or via callbacks that deliver raw audio data chunks, and it supports iOS as well as Android.
Since I don't want to post the same answer twice, I'll instead point you to the following answer here on Stack Overflow, where you'll also find a code example: https://stackoverflow.com/a/38464815/6609803
I'm the creator of the plugin and any feedback is appreciated.
Good news: createMediaStreamSource is now fully supported in iOS Safari:
https://developer.mozilla.org/en-US/docs/Web/API/AudioContext/createMediaStreamSource
I'm working on an iOS text-to-speech app and trying to add an option to use the Alex voice, which is new in iOS 9. I need to determine whether or not the user has downloaded the Alex voice in Settings -> Accessibility. I can't seem to find out how to do this.
if ([AVSpeechSynthesisVoice voiceWithIdentifier:AVSpeechSynthesisVoiceIdentifierAlex] == "Not Found") {
    // Do something...
}
The reason is that the standard voices for other languages play back at a certain rate, different from the Alex voice. So I have a working app, but if the user hasn't downloaded the voice, iOS automatically falls back to a basic voice, which plays back at the incorrect rate. If I can detect that the voice hasn't been downloaded, I can compensate for the difference and/or advise the user.
OK, so I guess I was overthinking this. The solution was simple.
if (![AVSpeechSynthesisVoice voiceWithIdentifier:AVSpeechSynthesisVoiceIdentifierAlex]) {
    // Normalize the speech rate since the user hasn't downloaded the voice, and/or trigger a notification that they need to go into Settings and download it.
}
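For example, the normalization branch might look something like this (the text string, synthesizer object, and the 0.9 factor are placeholders):

AVSpeechUtterance *utterance = [AVSpeechUtterance speechUtteranceWithString:text];
if (![AVSpeechSynthesisVoice voiceWithIdentifier:AVSpeechSynthesisVoiceIdentifierAlex]) {
    // Alex isn't installed: stay on the default voice but compensate the rate.
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate * 0.9; // placeholder factor
} else {
    utterance.voice = [AVSpeechSynthesisVoice voiceWithIdentifier:AVSpeechSynthesisVoiceIdentifierAlex];
}
[synthesizer speakUtterance:utterance];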
Thanks to everyone who looked at this and to @CeceXX for the edit. Hope this helps someone else.
Here's one way to do it. Let's stick with Alex as an example:
- (void)checkForAlex {
    // Is Alex installed?
    BOOL alexInstalled = NO;
    NSArray *voices = [AVSpeechSynthesisVoice speechVoices];
    for (id voiceName in voices) {
        if ([[voiceName valueForKey:@"name"] isEqualToString:@"Alex"]) {
            alexInstalled = YES;
        }
    }
    // React accordingly
    if (alexInstalled) {
        NSLog(@"Alex is installed on this device.");
    } else {
        NSLog(@"Alex is not installed on this device.");
    }
}
This method loops through all installed voices and queries each voice's name. If Alex is among them, he's installed.
Other values you can query are "language" (which returns a language code like en-US) and "quality" (1 = standard, 2 = enhanced).
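For reference, a small sketch that logs those values for every installed voice:

for (AVSpeechSynthesisVoice *voice in [AVSpeechSynthesisVoice speechVoices]) {
    // quality is 1 (standard/default) or 2 (enhanced); language is a code such as en-US.
    NSLog(@"%@ (%@), quality: %ld", voice.name, voice.language, (long)voice.quality);
}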
The code below was developed for shared memory. When it is used between two services or between two user apps, it works pretty well, but when the memory is created in a service, the application cannot find the memory. What is wrong with this code?
In the service:
mmf = MemoryMappedFile.CreateNew("ALFMap", 10000);
bool mutexCreated;
Mutex mutex = new Mutex(true, "ALFMutex", out mutexCreated);
stream = mmf.CreateViewStream(0, 1000);
BinaryWriter writer = new BinaryWriter(stream);
writer.Write("I am reza dadkhah");
mutex.ReleaseMutex();
In the user app:
using (MemoryMappedFile mmf = MemoryMappedFile.OpenExisting("ALFMap", MemoryMappedFileRights.FullControl))
{
    Mutex mutex = Mutex.OpenExisting("ALFMutex");
    mutex.WaitOne();
    using (MemoryMappedViewStream stream = mmf.CreateViewStream(0, 1000))
    {
        BinaryReader reader = new BinaryReader(stream);
        textBox1.Text = reader.ReadString();
    }
    mutex.ReleaseMutex();
}
Have you tried writing the content of the memory to a file from your application? Try that first to confirm whether the value is actually in the memory.
I have an app on the App Store, and I want to make some changes that will not affect users who previously downloaded my app.
Is there a way to determine if the user has previously downloaded my app?
In case anyone is still wondering, a great solution to this problem (assuming you don't already have it in place) is using the Keychain, which persists through app installs/uninstalls. This library allows you to access the Keychain using NSDictionary-like syntax:
https://github.com/nicklockwood/FXKeychain
So you could implement a function like this:
- (BOOL)alreadyInstalled
{
    NSString *installDate = [[FXKeychain defaultKeychain] objectForKey:@"InstallDate"];
    if (!installDate)
    {
        NSString *newInstallDate = [NSString stringWithFormat:@"%.0f", [[NSDate date] timeIntervalSince1970]];
        [[FXKeychain defaultKeychain] setObject:newInstallDate forKey:@"InstallDate"];
        return NO;
    }
    return YES;
}
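A possible call site (hypothetical), gating the new behaviour so existing users are unaffected:

if ([self alreadyInstalled]) {
    // Existing install: keep the old behaviour.
} else {
    // Fresh install: enable the new behaviour.
}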
I don't know a great way to do this, but there are some tricks you can use, e.g.:

1. Look for some data that your application generates. If the data already exists, then it's not a fresh install (or a previous update already completed).
2. Prepare yourself for this going forward, even if that means issuing an intermediate update to your application, then go back to #1 (a sketch of this idea follows below). See: How to tell if an iOS application has been newly installed or updated?
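As a rough sketch of the version-tracking idea in point 2 (the defaults key is hypothetical), you could record the bundle version on each launch and compare it on the next one:

// Compare the stored version against the current bundle version on launch.
NSString *currentVersion = [NSBundle mainBundle].infoDictionary[@"CFBundleShortVersionString"];
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
NSString *storedVersion = [defaults stringForKey:@"LastRunVersion"]; // hypothetical key

if (storedVersion == nil) {
    // No record yet: first launch after install, or first launch of a build that tracks this.
} else if (![storedVersion isEqualToString:currentVersion]) {
    // The app was updated since the last launch.
}
[defaults setObject:currentVersion forKey:@"LastRunVersion"];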