https://developer.mozilla.org/en-US/Add-ons/SDK/Tutorials/Listening_for_load_and_unload#options.loadReason provides an (indirect) way to listen for an add-on upgrade event (loadReason == “upgrade”).
Is there a programmatic way to detect whether the upgrade was user-initiated, or was an auto-update?
a) User-initiated (user goes to my web site and installs the latest version of the add-on): once the new version is loaded, I’d like to pop a new tab/URL that says “Thank you for upgrading...”
b) Auto-update (update.rdf): No user messaging.
However, in both the a) and b) scenarios, my exports.main() is seeing loadReason == “upgrade”, so I don’t know how to distinguish between the two.
Any workaround suggestions?
TIA
This is a cool question. I'm not sure myself, but at https://developer.mozilla.org/en-US/Add-ons/Add-on_Manager/Addon there is a property called foreignInstall. To access it, do this:
const { Cu } = require('chrome'); // in an SDK add-on, Components.utils comes from the chrome module
Cu.import('resource://gre/modules/AddonManager.jsm');
AddonManager.getAddonByID('NativeShot@jetpack', function(addon) {
    console.log(addon.foreignInstall);
});
I'm not sure what qualifies as "third party installation".
During my tests on Pepper, I found it difficult to keep a continuous collaborative dialog running.
In particular, after about 10 minutes, the ALSpeechRecognition engine seems to stop working.
In other words, Pepper's dialog panel stays empty and/or the robot does not understand my words, even though the same structure worked a few minutes earlier.
I tried to stop and restart it (i.e., the engine) via SSH terminal, by using:
qicli call ALSpeechRecognition.pause 1
qicli call ALSpeechRecognition.pause 0
This should restart the engine according to the guidelines shown here, but it does not work.
Thank you so much guys.
Sincerely,
Giovanni
According to the tutorial, starting and stopping the speech recognition engine is done by subscribing/unsubscribing it.
The recommended way to do this is to unsubscribe and then subscribe back to it. For me, it also worked to change the speech recognition language and then change it back to the previous one.
Luis is right. To do so, just create a function like the one below and call it whenever the ActiveListening event from the ALSpeechRecognition module comes back false. Note: use the ALMemory module to read data from ALSpeechRecognition.
from naoqi import ALProxy  # NAOqi Python SDK

asr_service = ALProxy("ALSpeechRecognition", ip, port)
memory = ALProxy("ALMemory", ip, port)

def reset():
    # restart the engine by unsubscribing and re-subscribing
    asr_service.unsubscribe("ASR_Engine")
    asr_service.subscribe("ASR_Engine")

ALS = memory.getData("ALSpeechRecognition/ActiveListening")
if ALS == False:
    reset()
How do I consume data from, and push data to, a websocket using Fable? I found this GitHub issue which suggests it can be done, but I cannot find any documentation or examples of how to achieve this.
For anyone who finds this question later via Google, here's the response that @Lawrence received from Maxime Mangel when he asked this question on Gitter:
Hello @lawrencetaylor, you can find here an old sample using websockets with FableArch.
Don't consider the code 100% correct because it's from an older version of fable-arch.
This code should however show you how to use websockets with fable-arch logic.
https://github.com/fable-compiler/fable-arch/commit/abe432881c701d2df65e864476bfa12cf7cf9343
First you create the websocket here.
Here you can see how to send a message over a websocket.
And here how to listen on a websocket.
I've copied the code he mentioned below, so that anyone who finds this question later will be able to read it without having to follow those links. Credit for the code below goes to Maxime Mangel, not me.
Websocket creation
let webSocket =
    WebSocket.Create("wss://echo.websocket.org")
Sending a message over a websocket
webSocket.send("Hello, socket!")
Listening on a websocket
let webSocketProducer push =
    webSocket.addEventListener_message(
        Func<_,_>(fun e ->
            push(ReceivedEcho (unbox e.data))
            null
        )
    )

createApp Model.initial view update
|> withProducer webSocketProducer
|> start renderer
NOTE: ReceivedEcho in the above code is one of the cases of the Action discriminated union, which is a standard pattern in the fable-arch way of doing things. And withProducer is a function from fable-arch. See http://fable.io/fable-arch/samples/clock/index.html for a simple example of how to use withProducer.
My motivation: I'm writing an app to help with some quantified self / time tracking type things. I'd like to use electron to record information about which app I am currently using.
Is there a way to get information about other apps in Electron? Can you at least pull information about another app that currently has focus? For instance, if the user is browsing a webpage in Chrome, it would be great to know that A) they're using chrome and B) the title of the webpage they're viewing.
During my research I found this question:
Which app has the focus when a global shortcut is triggered
It looks like the author there is using the NodObjC library to get this information on OS X. In addition to any approaches others are using to solve this problem, I'm particularly curious whether Electron itself has any way of exposing this information without resorting to outside libraries.
In a limited way, yes: you can get some of this information using Electron's desktopCapturer.getSources() method.
This will not get every program running on the machine. It will only get whatever Chromium deems to be a video-capturable source, which generally equates to any active program that has a GUI window (e.g., on the taskbar on Windows).
desktopCapturer.getSources({
  types: ['window', 'screen']
}, (error, sources) => {
  if (error) throw error;
  for (let i = 0; i < sources.length; ++i) {
    console.log(sources[i]); // each source has an id, a name and a thumbnail
  }
});
No, Electron doesn't provide an API to obtain information about other apps. You'll need to access the native platform APIs directly to obtain that information. For example Tockler seems to do so via shell scripts, though personally I prefer accessing native APIs directly via native Node addons/modules or node-ffi-napi.
2022 answer
Andy Baird's answer is definitely the better native Electron approach, though that syntax is now outdated or incomplete. Here's a complete working code snippet; it assumes you're running from the renderer using the remote module in a recent Electron version (13+):
require('@electron/remote').desktopCapturer.getSources({
  types: ['window', 'screen']
}).then(sources => {
  for (const thisSource of sources) {
    console.log(thisSource.name);
  }
});
The other answers here are for the rendering side - it might be helpful to do this in the main process:
const { desktopCapturer } = require('electron')

desktopCapturer.getSources({ types: ['window', 'screen'] }).then(sources => {
  for (const source of sources) {
    console.log("Window: ", source.id, source.name);
  }
})
Seems like there is probably an existing cordova solution for this, but I can't find it other than the cordovaBadge plugin
- https://github.com/katzer/cordova-plugin-badge
- http://ngcordova.com/docs/plugins/badge/
Problem is I have had to remove this plugin due to conflicts with LocalNotifications. (Seems like I can't have them both)
The cordovaBadge has a simple .hasPermission() method. Is there anything else in the cordova library that can do this?
You could use the isRemoteNotificationsEnabled() method of cordova-diagnostic-plugin:
cordova.plugins.diagnostic.isRemoteNotificationsEnabled(function(isEnabled){
    console.log("Push notifications are " + (isEnabled ? "enabled" : "disabled"));
}, function(error){
    console.error("An error occurred: " + error);
});
Dear @vargen_, you're right!
I'm using this plugin together with 'phonegap-plugin-push', just to catch some new features exposed by the #md repo (see push actions...).
The Diagnostic plugin doesn't let you know whether permission has been requested or not (like the 'not_determined' response of the 'cordova.plugins.diagnostic.getRemoteNotificationsAuthorizationStatus' method, which is available ONLY on iOS).
Strangely, when the app is installed for the first time (Android), the push permission is reported as 'true' (even though no permission modal was shown) -- tested on Android 8.1.0.
Instead, if we use the 'RECEIVE_WAP_PUSH' value of the cordova.plugins.diagnostic.permissionStatus object, as in the example provided at https://www.npmjs.com/package/cordova.plugins.diagnostic#requestruntimepermission, the permission always seems to be 'DENIED_ALWAYS'.
I'm very confused about how to handle the 'first time' case for this plugin on Android.
I want to know whether my users are browsing a page in my rails application with
a tablet or
a mobile device or
a desktop computer
I dug through many different solutions. Here are my favorites:
The ua-parser gem: https://github.com/ua-parser/uap-ruby which seems to be very clean, but unfortunately it always reports Other when I use parsed_string.device - I can detect the OS and browser with it very well.
Writing it from scratch
Writing it from scratch ended up in something like this:
if request.user_agent.downcase.match(/ipad/)
  @os = "tablet" # check iPad first: iPad user agents can also contain "mobile"
elsif request.user_agent.downcase.match(/mobile|android|iphone|blackberry|iemobile|kindle/)
  @os = "mobile"
elsif request.user_agent.downcase.match(/mac os|windows/)
  @os = "desktop"
end
However, what I miss is a complete documentation of the user agent 'device' definitions.
For example:
What patterns do I need to look at if my user is browsing on a tablet/mobile device or desktop? I can't just guess and checking e.g. the ua-parser regex is not helping me either (very complicated): https://github.com/tobie/ua-parser/blob/master/regexes.yaml
Is there any simple solution to solve my problem?
How does Google Analytics do it? I tried to research this but could not find an answer. They also display devices (desktop/tablet/mobile).
The browser gem has a suggestion to add exactly this, but until that is added you can still use the gem to figure it out using browser.device.
I'm looking to do the 2nd option, because I need it as lean-and-mean as possible. There's a lot of information in the User-Agent string that I just don't need, and I don't want a function that tries to parse it all. Just simply: bot, desktop, tablet, mobile and other.
It's kind of a lot to read, but I'm looking for keywords using this extensive list.
So far, the following keywords seem to work for me. They're regular expressions in PHP, but you'll get the idea.
// try to find crawlers
// https://developers.whatismybrowser.com/useragents/explore/software_type_specific/crawler/
if (preg_match('/(bot\/|spider|crawler|slurp|pinterest|favicon)/i', $userAgent) === 1)
    return ['type' => 'crawler'];

// try to find tablets
// https://developers.whatismybrowser.com/useragents/explore/hardware_type_specific/tablet/
// https://developers.whatismybrowser.com/useragents/explore/hardware_type_specific/ebook-reader/
if (preg_match('/(ipad| sm-t| gt-p| gt-n|wt19m-fi|nexus 7| silk\/|kindle| nook )/i', $userAgent) === 1)
    return ['type' => 'tablet'];

// try to find mobiles
// https://developers.whatismybrowser.com/useragents/explore/hardware_type_specific/mobile/
// https://developers.whatismybrowser.com/useragents/explore/hardware_type_specific/phone/
if (preg_match('/(android|iphone|mobile|opera mini|windows phone|blackberry|netfront)/i', $userAgent) === 1)
    return ['type' => 'mobile'];

// try to find desktops
// https://developers.whatismybrowser.com/useragents/explore/hardware_type_specific/computer/
if (preg_match('/(windows nt|macintosh|x11; linux|linux x86)/i', $userAgent) === 1)
    return ['type' => 'desktop'];

return ['type' => 'other'];
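Since the question is about a Rails app, the same keyword checks can be ported to Ruby almost one-to-one (a sketch using the exact regexes above, with the same precedence):

```ruby
# Near one-to-one Ruby port of the PHP keyword checks above.
# Same regexes, same precedence: crawler, then tablet, then mobile,
# then desktop, falling back to "other".
def device_type(user_agent)
  case user_agent
  when /bot\/|spider|crawler|slurp|pinterest|favicon/i
    "crawler"
  when /ipad| sm-t| gt-p| gt-n|wt19m-fi|nexus 7| silk\/|kindle| nook /i
    "tablet"
  when /android|iphone|mobile|opera mini|windows phone|blackberry|netfront/i
    "mobile"
  when /windows nt|macintosh|x11; linux|linux x86/i
    "desktop"
  else
    "other"
  end
end
```

In a controller you would call it as `device_type(request.user_agent)`. Note that the tablet check must come before the mobile check, because iPad user agents also contain "Mobile".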