Passing arguments to a running Electron app

I have found some search results about using app.makeSingleInstance with CLI arguments, but it seems that this API has been removed.
Is there any other way to send a string to an already running Electron app?

One strategy is to have your external program write to a file that your Electron app knows about. Your Electron app can then watch that file for changes and read it to get the string:
import * as fs from "fs";

fs.watch("shared/path.txt", { persistent: false }, (eventType: string) => {
    if (eventType === "change") {
        // Read the watched path itself; the fileName argument passed to the
        // callback is only a base name, not a usable path.
        const myString: string = fs.readFileSync("shared/path.txt", { encoding: "utf8" });
    }
});
I used the synchronous readFileSync for simplicity, but you might want to consider the async version.
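A minimal sketch of the async variant, assuming Node 10+ for the promise-based fs API:
const fs = require("fs");

fs.watch("shared/path.txt", { persistent: false }, async (eventType) => {
    if (eventType === "change") {
        // Same idea as above, but without blocking the event loop.
        const myString = await fs.promises.readFile("shared/path.txt", "utf8");
        console.log("received:", myString);
    }
});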
Second, you'll need to consider the case where the external app writes so quickly that the fs.watch callback is triggered only once for two writes. Could you miss a change?
Otherwise, I don't believe there's an Electron-native way of getting this information from an external app. If you were able to start the external app from your Electron app, then you could just do cp.spawn(...) and use its stdout pipe to listen for messages.
If shared memory were a thing in Node, then you could use that, but unfortunately it's not.
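That said, newer Electron versions did replace makeSingleInstance: app.requestSingleInstanceLock() together with the second-instance event delivers the second instance's command line to the already-running instance. A minimal sketch, assuming Electron 4+:
const { app } = require("electron");

if (!app.requestSingleInstanceLock()) {
    // Another instance already holds the lock; quit and let it receive
    // our command line via its 'second-instance' event.
    app.quit();
} else {
    app.on("second-instance", (event, argv, workingDirectory) => {
        // argv is the full command line of the instance that just launched.
        console.log("arguments from second instance:", argv);
    });
}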

Ultimately, the most elegant solution to my particular problem was to add an HTTP API endpoint to the Electron app using koa.
const Koa = require("koa");
const koa = new Koa();

let mainWindow;

function createWindow() {
    // ... create the BrowserWindow and assign it to mainWindow here ...
    const startServer = function() {
        koa.use(async ctx => {
            mainWindow.show();
            console.log("text received", ctx.request.query.text);
            ctx.body = ctx.request.query.text;
        });
        koa.listen(3456);
    };
    startServer();
}
Now I can easily send text to Electron from outside the app using the following URL:
http://localhost:3456/?text=myText

Related

How to access a path in an Electron preload script?

In my Electron app, I'm storing some settings data locally using the 'electron-json-storage' module.
However, in order to access this data, I must first find out the local path.
I'm using app.getPath('userData') for this. You can see what my preload script looks like below:
const { contextBridge, ipcRenderer, app } = require('electron');
let os = require("os");

contextBridge.exposeInMainWorld('electron', {
    ipcRenderer: {
        myPing() {
            ipcRenderer.send('ipc-example', 'ping');
        },
        on(channel, func) {
            const validChannels = ['ipc-example'];
            if (validChannels.includes(channel)) {
                ipcRenderer.on(channel, (event, ...args) => func(...args));
            }
        },
        once(channel, func) {
            const validChannels = ['ipc-example'];
            if (validChannels.includes(channel)) {
                ipcRenderer.once(channel, (event, ...args) => func(...args));
            }
        },
    },
    os: {
        cpuCount: os.cpus().length,
    },
    storage: {
        localPath: app.getPath('userData'),
    },
});
Unfortunately, it seems app isn't accessible in the preload script, because the dev tools show the following error:
Uncaught Error: Cannot read property 'getPath' of undefined
Do I need to send the app path from the main script using a preload IPC event, or is there any way to access it in the preload script itself?
Thanks!
The problem is that the app module is only available in the main process. Since preload scripts run in the renderer, you cannot access app there and have to ask your main process for the path via IPC.
As a side note, it's probably preferable to use IPC just once (when your preload script is initialised, before you define your main-world functions) and cache the result the main process returns.
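A minimal sketch of that pattern (the channel name get-user-data-path is hypothetical):
// main.js - the main process answers the path request.
const { app, ipcMain } = require('electron');

ipcMain.on('get-user-data-path', (event) => {
    event.returnValue = app.getPath('userData');
});

// preload.js - fetch the path synchronously once at initialisation,
// then expose only the cached value.
const { contextBridge, ipcRenderer } = require('electron');

const userDataPath = ipcRenderer.sendSync('get-user-data-path');

contextBridge.exposeInMainWorld('electron', {
    storage: {
        localPath: userDataPath,
    },
});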

How to populate Adal8Service's configOptions using config settings in Angular 7 and load them when the application loads

I am using import { Adal8Service, Adal8HTTPService } from 'adal-angular8'; for Azure authentication, with the below in app.module.ts:
export function appInit(appConfigService: AppInitService) {
    return (): any => {
        appConfigService.getApplicationConfig().subscribe((res) => {
            sessionStorage.setItem("appConfig", JSON.stringify(res));
            timeout(500);
        });
    };
}
My getApplicationConfig() is below:
public getApplicationConfig() {
    return this.http.get('assets/config.json');
}
and the below in the providers array:
AuthenticationService,
AppInitService,
{
    provide: APP_INITIALIZER,
    useFactory: appInit,
    deps: [AppInitService],
    multi: true
},
Adal8Service,
{
    provide: Adal8HTTPService,
    useFactory: Adal8HTTPService.factory,
    deps: [HttpClient, Adal8Service],
    multi: true
},
The problem here is that the appInit function does not block application loading (even after removing the timeout()), and execution proceeds straight to
this.adalService.init(this.adalConfig);
this.adalService.handleWindowCallback();
(where this.adalConfig = sessionStorage.getItem("appConfig")).
If I refresh the page, I am redirected to the Azure AD login page properly, and if I hardcode all the configOptions values in this.adalService.init(...) it works fine. How do I make the application block until the configuration is loaded? I am storing the config values under /assets/config.json, along with other config values for the application. I am not sure what I am doing wrong here. I did try reading the "json" file directly, but then I would have to change it before moving to production. Is the way I use APP_INITIALIZER correct? Please point me in the right direction.
The problem is not related to ADAL but to how asynchronous functions work in JavaScript.
To block execution of the function, you can either write a function that waits until the response is returned by the HTTP request, or use a library like waitfor-ES6 to do that for you.
The change needs to be made here:
export function appInit(appConfigService: AppInitService) {
    return (): any => {
        response = yield wait.for(appConfigService.getApplicationConfig);
        sessionStorage.setItem("appConfig", JSON.stringify(response));
    };
}
Please note this is not the exact change, but the direction of the change that you will need to make. Hope this helps.
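For reference, the stock Angular way to make APP_INITIALIZER wait is to return a Promise from the factory, which Angular awaits before bootstrapping. A minimal sketch, assuming getApplicationConfig() returns an Observable and the project is on RxJS 6:
export function appInit(appConfigService: AppInitService) {
    // Angular blocks bootstrap until this Promise resolves.
    return (): Promise<void> =>
        appConfigService.getApplicationConfig()
            .toPromise()
            .then((res) => {
                sessionStorage.setItem('appConfig', JSON.stringify(res));
            });
}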

Downloading whole websites with k6

I'm currently evaluating whether k6 fits our load testing needs. We have a fairly traditional website architecture that uses Apache web servers with PHP and a MySQL database. Sending simple HTTP requests with k6 looks simple enough and I think we will be able to test all major functionality with it, as we don't rely on JavaScript that much and most pages are static.
However, I'm unsure how to deal with resources (stylesheets, images, etc.) that are referenced in the HTML that is returned in the requests. We need to load them as well, as this sometimes leads to database requests, which must be part of the load test.
Is there some out-of-the-box functionality in k6 that allows you to load all the resources like a browser would? I'm aware that k6 does NOT render the page and I don't need it to. I only need to request all the resources inside the HTML.
You basically have two options, both with their caveats:
Record your session - you can either export a HAR directly from the browser, or use an extension made for your browser (there are versions for both Firefox and Chrome). Both should be usable without a k6 cloud account; you just need to set them to download the HAR, and they will automatically (and somewhat silently) download it when you hit stop. Then use either the built-in k6 HAR converter (which is deprecated, but still works) or the newer har-to-k6 converter.
This method is particularly good if you have a lot of pages and/or resources, and it even works for single-page-style applications, since it just takes what the browser requested as a HAR and transforms it into a script. If there is nothing dynamic that needs to be entered (username/password), the final script can usually be used as is.
The biggest problem with this approach is that if you add a CSS file you need to redo the whole exercise. This is even more problematic if your CSS/JS file names change on every build or something like that. That is what the next method is good for:
Use parseHTML and then find the elements you care about and make a request for them.
import http from "k6/http";
import { parseHTML } from "k6/html";

export default function() {
    const res = http.get("https://stackoverflow.com");
    const doc = parseHTML(res.body);
    doc.find("link").toArray().forEach(function (item) {
        console.log(item.attr("href"));
        // make an http.get for each one,
        // or add them to an array and make one batch request
    });
}
will produce:
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/favicon.ico?v=4f32ecc8f43d
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a
INFO[0001] /opensearch.xml
INFO[0001] https://cdn.sstatic.net/Shared/stacks.css?v=53507c7c6e93
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/primary.css?v=d3fa9a72fd53
INFO[0001] https://cdn.sstatic.net/Shared/Product/product.css?v=c9b2e1772562
INFO[0001] /feeds
INFO[0001] https://cdn.sstatic.net/Shared/Channels/channels.css?v=f9809e9ffa90
As you can see, some of the URLs are relative rather than absolute, so you will need to handle that; and in this example only some of them are CSS files, so more filtering is probably needed.
The problem here is that you need to write the code yourself, and if you add a relative link or something else new, you need to handle it. Luckily k6 is scriptable, so you can reuse the code :D. One way to resolve the relative URLs is sketched below.
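A minimal string-based sketch (the resolveUrl helper is an assumption, not something k6 ships; older k6 versions also lack a built-in URL constructor, hence the plain string handling):
// Resolve a possibly-relative href against the URL of the page it came from.
function resolveUrl(href, pageUrl) {
    if (/^https?:\/\//.test(href)) return href;              // already absolute
    if (href.startsWith('//')) return 'https:' + href;       // protocol-relative (assumes https)
    const origin = pageUrl.split('/').slice(0, 3).join('/'); // "https://host"
    if (href.startsWith('/')) return origin + href;          // root-relative
    // path-relative: drop everything after the last '/' in the page path
    const base = pageUrl.length > origin.length ? pageUrl.replace(/\/[^/]*$/, '/') : origin + '/';
    return base + href;
}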
I've followed Михаил Стойков's suggestion and written my own function to load resources. You can choose how resources are loaded (batch or sequential GETs) with options.concurrentResourceLoading.
import http from 'k6/http';
import { check } from 'k6';
// options, resolveUrl and createHeader are defined elsewhere in my script.

/**
 * @param {http.RefinedResponse<http.ResponseType>} response
 */
export function getResources(response) {
    // Collect every href/src that isn't a plain anchor link.
    const resources = [];
    response
        .html()
        .find('*[href]:not(a)')
        .each((index, element) => {
            resources.push(element.attributes().href.value);
        });
    response
        .html()
        .find('*[src]:not(a)')
        .each((index, element) => {
            resources.push(element.attributes().src.value);
        });
    if (options.concurrentResourceLoading) {
        const responses = http.batch(
            resources.map((r) => {
                return ['GET', resolveUrl(r, response.url), null, {
                    headers: createHeader()
                }];
            })
        );
        responses.forEach((res) => {
            check(res, {
                'resource returns status 200': (r) => r.status === 200,
            });
        });
    } else {
        resources.forEach((r) => {
            const res = http.get(resolveUrl(r, response.url), {
                headers: createHeader(),
            });
            check(res, {
                'resource returns status 200': (r) => r.status === 200,
            });
        });
    }
}
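A hedged usage sketch for the helper above (test.k6.io is k6's public demo site; getResources is assumed to be in scope, e.g. imported from the module above):
import http from 'k6/http';

export default function () {
    const response = http.get('https://test.k6.io/');
    // Fetch every stylesheet, script and image referenced by the page.
    getResources(response);
}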

Service worker to save form data when browser is offline

I am new to Service Workers, and have had a look through the various bits of documentation (Google, Mozilla, serviceworke.rs, Github, StackOverflow questions). The most helpful is the ServiceWorkers cookbook.
Most of the documentation seems to point to caching entire pages so that the app works completely offline, or redirecting the user to an offline page until the browser can redirect to the internet.
What I want to do, however, is store my form data locally so my web app can upload it to the server when the user's connection is restored. Which "recipe" should I use? I think it is Request Deferrer. Do I need anything else to ensure that Request Deferrer will work (apart from the service worker detector script in my web page)? Any hints and tips much appreciated.
Console errors
The Request Deferrer recipe and code don't seem to work on their own, as they don't include file caching. I have added some caching for the service worker library files, but I am still getting this error when I submit the form while offline:
Console: {"lineNumber":0,"message":
"The FetchEvent for [the form URL] resulted in a network error response:
the promise was rejected.","message_level":2,"sourceIdentifier":1,"sourceURL":""}
My Service Worker
/* eslint-env es6 */
/* eslint no-unused-vars: 0 */
/* global importScripts, ServiceWorkerWare, localforage */
importScripts('/js/lib/ServiceWorkerWare.js');
importScripts('/js/lib/localforage.js');

// Determine the root for the routes. I.e., if the Service Worker URL is http://example.com/path/to/sw.js, then the root is http://example.com/path/to/
var root = (function() {
    var tokens = (self.location + '').split('/');
    tokens[tokens.length - 1] = '';
    return tokens.join('/');
})();

// By using Mozilla's ServiceWorkerWare we can quickly set up some routes for a virtual server. It is convenient to review the virtual server recipe before reading this.
var worker = new ServiceWorkerWare();

// So here is the idea. We will check if we are online or not. In case we are not online, enqueue the request and provide a fake response.
// Else, flush the queue and let the new request reach the network.
// This function factory does exactly that.
function tryOrFallback(fakeResponse) {
    // Return a handler that…
    return function(req, res) {
        // If offline, enqueue and answer with the fake response.
        if (!navigator.onLine) {
            console.log('No network availability, enqueuing');
            return enqueue(req).then(function() {
                // As the fake response will be reused but Response objects are one use only,
                // we need to clone it each time we use it.
                return fakeResponse.clone();
            });
        }
        // If online, flush the queue and answer from network.
        console.log('Network available! Flushing queue.');
        return flushQueue().then(function() {
            return fetch(req);
        });
    };
}
// A fake response with a joke for when there is no connection. A real implementation could cache the last collection of updates and keep a local model. For simplicity, that is not implemented here.
worker.get(root + 'api/updates?*', tryOrFallback(new Response(
    JSON.stringify([{
        text: 'You are offline.',
        author: 'Oxford Brookes University',
        id: 1,
        isSticky: true
    }]),
    { headers: { 'Content-Type': 'application/json' } }
)));
// For deletion, let's simulate that all went OK. Notice we are omitting the body of the response. Trying to add a body to a response with status 204 throws an error.
worker.delete(root + 'api/updates/:id?*', tryOrFallback(new Response(null, {
    status: 204
})));

// Creation is another story. We cannot reach the server, so we cannot get the id for the new updates.
// No problem: just say we accept the creation and we will process it later, as soon as we recover connectivity.
worker.post(root + 'api/updates?*', tryOrFallback(new Response(null, {
    status: 202
})));
// Start the service worker.
worker.init();

// By using Mozilla's localforage db wrapper, we can count on a fast setup for a versatile key-value database. We use it to store the queue of deferred requests.

// Enqueue consists of adding a request to the list. Due to the limitations of IndexedDB, Request and Response objects can not be saved, so we need an alternative representation. This is why we call serialize().
function enqueue(request) {
    return serialize(request).then(function(serialized) {
        // Return the chain so enqueue() only resolves once the queue has been persisted.
        return localforage.getItem('queue').then(function(queue) {
            /* eslint no-param-reassign: 0 */
            queue = queue || [];
            queue.push(serialized);
            return localforage.setItem('queue', queue).then(function() {
                console.log(serialized.method, serialized.url, 'enqueued!');
            });
        });
    });
}
// Flush is a little more complicated. It consists of getting the elements of the queue in order and sending each one, keeping track of the requests not yet sent.
// Before sending a request we need to recreate it from the alternative representation stored in IndexedDB.
function flushQueue() {
    // Get the queue
    return localforage.getItem('queue').then(function(queue) {
        /* eslint no-param-reassign: 0 */
        queue = queue || [];
        // If empty, nothing to do!
        if (!queue.length) {
            return Promise.resolve();
        }
        // Else, send the requests in order…
        console.log('Sending ', queue.length, ' requests...');
        return sendInOrder(queue).then(function() {
            // Requires error handling. Actually, this assumes all the requests in the queue succeed when reaching the network.
            // It should really empty the queue step by step, only popping from the queue when a request completes successfully.
            return localforage.setItem('queue', []);
        });
    });
}
// Send the requests inside the queue in order, waiting for the current one before sending the next.
function sendInOrder(requests) {
    // The reduce() chains one promise per serialized request, not allowing progress to the next one until the current one completes.
    var sending = requests.reduce(function(prevPromise, serialized) {
        console.log('Sending', serialized.method, serialized.url);
        return prevPromise.then(function() {
            return deserialize(serialized).then(function(request) {
                return fetch(request);
            });
        });
    }, Promise.resolve());
    return sending;
}
// Serialize is a little bit convoluted because headers is not a simple object.
function serialize(request) {
    var headers = {};
    // for(... of ...) is ES6 notation, but the current browsers supporting service workers support it as well, and it is the only way of retrieving all the headers.
    for (var entry of request.headers.entries()) {
        headers[entry[0]] = entry[1];
    }
    var serialized = {
        url: request.url,
        headers: headers,
        method: request.method,
        mode: request.mode,
        credentials: request.credentials,
        cache: request.cache,
        redirect: request.redirect,
        referrer: request.referrer
    };
    // Only if the method is not GET or HEAD is the request allowed to have a body.
    if (request.method !== 'GET' && request.method !== 'HEAD') {
        return request.clone().text().then(function(body) {
            serialized.body = body;
            return Promise.resolve(serialized);
        });
    }
    return Promise.resolve(serialized);
}

// Compared to that, deserialize is pretty simple.
function deserialize(data) {
    return Promise.resolve(new Request(data.url, data));
}
var CACHE = 'cache-only';

// On install, cache some resources.
self.addEventListener('install', function(evt) {
    console.log('The service worker is being installed.');
    // Ask the service worker to keep installing until the returned promise resolves.
    evt.waitUntil(precache());
});

// On fetch, use a cache-only strategy.
self.addEventListener('fetch', function(evt) {
    console.log('The service worker is serving the asset.');
    evt.respondWith(fromCache(evt.request));
});

// Open a cache and use `addAll()` with an array of assets to add all of them
// to the cache. Return a promise that resolves when all the assets are added.
function precache() {
    return caches.open(CACHE).then(function (cache) {
        return cache.addAll([
            '/js/lib/ServiceWorkerWare.js',
            '/js/lib/localforage.js',
            '/js/settings.js'
        ]);
    });
}

// Open the cache where the assets were stored and search for the requested
// resource. Notice that in case of no match, cache.match() still resolves,
// but with `undefined` as value, which we turn into a rejection.
function fromCache(request) {
    return caches.open(CACHE).then(function (cache) {
        return cache.match(request).then(function (matching) {
            return matching || Promise.reject('no-match');
        });
    });
}
Here is the error message I am getting in Chrome when I go offline:
(A similar error occurred in Firefox; it falls over at line 409 of ServiceWorkerWare.js.)
ServiceWorkerWare.prototype.executeMiddleware = function (middleware, request) {
    var response = this.runMiddleware(middleware, 0, request, null);
    response.catch(function (error) { console.error(error); });
    return response;
};
This is a little more advanced than beginner level. But you will need to detect when you are offline or in a "lie-fi" state (connected, but barely). Instead of POSTing data to an API or endpoint, you need to queue that data to be synced when you are back online.
This is what the Background Sync API should help with. However, it is not supported across the board just yet. Plus Safari…
So maybe a good strategy is to persist your data in IndexedDB and, when you can connect (background sync fires an event for this), POST the data. It gets a little more complex for browsers that don't support service workers (Safari) or don't yet have Background Sync (though that will level out very soon).
As always, design your code to be a progressive enhancement, which can be tricky, but worth it in the end. A sketch of the Background Sync pattern follows.
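A minimal sketch of that pattern; the tag name 'submit-form' and the sendQueuedFormData() helper are hypothetical:
// Page: persist the form data to IndexedDB first, then request a sync.
navigator.serviceWorker.ready.then(function (registration) {
    return registration.sync.register('submit-form');
});

// Service worker: the 'sync' event fires once connectivity returns.
self.addEventListener('sync', function (event) {
    if (event.tag === 'submit-form') {
        // sendQueuedFormData() would read the queued entries from IndexedDB
        // and POST them to the server.
        event.waitUntil(sendQueuedFormData());
    }
});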
Service Workers tend to cache the static HTML, CSS, JavaScript, and image files.
I need to use PouchDB and sync it with CouchDB. Why CouchDB?
- CouchDB is a NoSQL database consisting of a number of documents created with JSON.
- It has versioning (each document has a _rev property with the last modified date).
- It can be synchronised with PouchDB, a JavaScript database that stores data locally in the browser using IndexedDB. This allows us to create offline applications.
- The two databases are both "master" copies of the data.
PouchDB is a local JavaScript implementation of CouchDB.
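A minimal sync sketch (the database names are hypothetical, and it assumes the PouchDB script is loaded):
// Local browser database (IndexedDB under the hood) and a remote CouchDB.
var localDB = new PouchDB('forms');
var remoteDB = new PouchDB('http://localhost:5984/forms');

// Two-way, continuous replication that retries when connectivity returns.
localDB.sync(remoteDB, { live: true, retry: true })
    .on('change', function (info) {
        console.log('replicated', info);
    })
    .on('error', function (err) {
        console.error('sync error', err);
    });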
I still need a better answer than my partial notes towards a solution!
Yes, this type of service worker is the correct one to use for saving form data offline.
I have now edited it and understand it better. It caches the form data and loads it on the page so the user can see what they have entered.
It is worth noting that the paths to the library files will need editing to reflect your local directory structure, e.g. in my setup:
importScripts('/js/lib/ServiceWorkerWare.js');
importScripts('/js/lib/localforage.js');
The script still fails when offline, however, as it isn't caching the library files. (Update to follow when I figure out the caching.)
Just discovered an extra debugging tool for service workers (apart from the console): chrome://serviceworker-internals/. In it, you can start and stop service workers, view console messages, and see the resources used by a service worker.

YUI3 and socket.io

Just a simple question:
I am using the YUI3 framework for my website and want to use the socket.io framework.
The challenge now is to use socket.io with YUI3.
As of now I am using the socket.io logic inside the YUI sandbox and it's working fine.
BUT can there be any drawback to this approach? If yes, then how should I integrate the two?
Here is the snippet of code:
<script type="text/javascript">
    YUI().use('my-slide', 'node', 'event', 'transition', function (Y) {
        // connecting to the nodejs server running on port 7001 for dynamic updates
        var broadcast = io.connect('http://localhost:7001/getlatestbroadcast');
        broadcast.on('status', function(data) {
            // some socket logic here
        });
        // Setting listener
        broadcast.on('moreData', function(data) {
            // some socket logic here
        });
    });
</script>
What you're doing definitely works, and there's no problem in using it that way unless you have a conflict with some other variable named io. A slightly more effective way of using Socket.IO (or any other external module in YUI) is to namespace it on the Y object instead:
YUI({
    modules: {
        'socket.io': {
            fullpath: '/socket.io/socket.io.js'
        }
    },
    onProgress: function (e) {
        if (e.data[0].name === 'socket.io') {
            YUI.add('socket.io', function (Y) {
                Y.Socket = io;
            });
        }
    }
}).use('socket.io', function (Y) {
    var socket = Y.Socket.connect('http://localhost');
    socket.on('news', function (data) {
        console.log(data);
        socket.emit('my other event', { my: 'data' });
    });
});
This takes the example from the socket.io website and lets you namespace it as Y.Socket. That way, Y.Socket is only available when you specifically do YUI().use('socket.io'), which helps keep your code organized and loaded in the correct order, thanks to the YUI Loader.
Also, feel free to check out the Socket Model Sync YUI Gallery module I created, if you're looking for an easier way to integrate your YUI App Framework application with Socket.IO.
Hope this helps, and let me know if you have any more questions about integrating the two!
