How to differentiate two ipc renderers in one ipc main (same channel) - electron

I want to detect the specific frame/BrowserWindow. I have one main process and two browser windows, and all three send messages to each other using the same channel. In ipcMain I need to detect which one a message came from. I saw that the ipcMain event has a property named frameId, but when I use it I get undefined.
ipcMain.once("postMessage", (event, message) => {
  if (!activeRequest) return;
  activeRequest.json(message).send();
});

You can get the id of the WebContents that sent the message from the main process by accessing the sender object on the event object, which is the first argument. Note that event.sender is itself the sending WebContents, so you read its id directly:
console.log(event.sender.id);
You can also pass the id of the window that the event is coming from via the renderer process.
// in the renderer process do this
electron.ipcRenderer.send("new-message", {
  winId: electron.remote.getCurrentWebContents().id,
  message: "Hi"
});
When the main process receives this event, you just have to access the winId property on the message object.
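For illustration, a minimal main-process handler for the snippet above might look like this (the channel name new-message matches the renderer code; the rest is a sketch):

const { ipcMain } = require('electron');

ipcMain.on('new-message', (event, payload) => {
  // payload.winId was set by the renderer; event.sender is the WebContents
  // that sent the message, so the two ids should agree.
  console.log('"' + payload.message + '" came from WebContents ' + payload.winId);
  console.log(event.sender.id === payload.winId); // true
});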

Either you can pass the identity in the IPC message payload, or you can get the window's WebContents id via the IPC message's sender object.

Twilio Send & Wait for Reply widget

I'm using an incoming call flow that starts call recording, asks a bunch of questions, and gathers responses; at the end of the call it stops recording and sends an SMS to the caller using the Send & Wait For Reply widget. A response is expected, and based on what's in the body of the incoming reply it calls a function.
All this works except that I am not receiving a response back from the caller.
My concurrent call setting = off
Incoming trigger = incoming call
The flow is tied to a phone number (voice)
I'm not sure how to get a reply back into the same flow. Do I need to attach something to the messaging section of the phone number?
Any guidance would be appreciated.
A Studio flow execution represents one call, one SMS session, or one invocation of the REST API trigger. Since the call is what initiated your flow, the execution will terminate when the call ends.
But you can work around this by using a function that gets invoked when the recording is done. This function can then use the Twilio APIs to fetch contextual information from the call and trigger the REST API interface of the same flow (but with a different trigger).
I created a small example that does something similar:
The flow is triggered by a call, starts a recording, and gathers data
There is a recording callback URL that points to my function
// This is your new function. To start, set the name and path on the left.
exports.handler = function (context, event, callback) {
  console.log(`Recording ${event.RecordingSid} state changed to ${event.RecordingStatus}.`);
  if (event.RecordingStatus === "completed") {
    const client = context.getTwilioClient();
    // Fetch the call to recover the caller's number, then trigger a new
    // REST API execution of the same flow, passing the recording URL along.
    // Note the inner promise chain is returned so .catch() sees its errors.
    return client.calls(event.CallSid).fetch()
      .then(call => {
        return client.studio.v2.flows(<Flow ID>)
          .executions
          .create({
            to: call.from,
            from: <YOUR NUMBER>,
            parameters: {
              RecordingUrl: event.RecordingUrl,
            }
          })
          .then(execution => {
            console.log(`Triggered execution ${execution.sid}`);
            return callback(null, "OK");
          });
      })
      .catch(callback);
  }
  return callback(null, "OK");
};
You can find the ID of your flow in the console (or when you click on the root element and check the Flow Configuration).
The REST API trigger starts a second flow execution that reads the parameters and uses them to send a text message.
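As a side note, a parameter passed to the REST API trigger this way should then be available to the widgets of the second flow as a Liquid variable, e.g. {{flow.data.RecordingUrl}} in this example.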

How to get app name from electron renderer?

I'm using an Electron React boilerplate and I'm interested in getting the app name from inside one of my React component files.
I've tried this
import {app} from 'electron'
const name = app.getName()
but I get the error
hot update failed for module "./app/containers/Root.tsx". Last file processed: "./app/components/Modal/Text.tsx".
I'm guessing electron cannot be accessed from the "renderer"? What's the best way to access this data?
TL;DR: Send a message to the main process and have it send back a message containing the app name.
You're correct that you can't get the app name from the renderer.
So I'd send a message to the main process requesting the app name, then have the main process send back the app name.
This is how the code would look:
renderer.js
var ipcRenderer = require('electron').ipcRenderer;

// Send message.
ipcRenderer.send('asynchronous-message', 'getAppName');

// Receive app name.
ipcRenderer.on('asynchronous-message', function (evt, messageObj) {
  console.log(messageObj); // This will contain the app name.
});
main.js
var ipcMain = require('electron').ipcMain;
var app = require('electron').app;

ipcMain.on('asynchronous-message', function (evt, messageObj) {
  // Send message back to renderer. Note evt.sender is already the
  // WebContents that sent the message, so call send() on it directly.
  if (messageObj === 'getAppName') {
    evt.sender.send('asynchronous-message', app.getName());
  }
});
Docs for ipcMain.
Docs for ipcRenderer.
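As an aside, on newer Electron versions (7 and up) the same round trip can be written with ipcRenderer.invoke and ipcMain.handle, which pair each request with its response so you don't reuse one channel in both directions. A minimal sketch:

// main.js
const { app, ipcMain } = require('electron');
ipcMain.handle('get-app-name', () => app.getName());

// renderer.js
const { ipcRenderer } = require('electron');
ipcRenderer.invoke('get-app-name').then(function (name) {
  console.log(name); // the app name
});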

Redirect API call fetches from Service Worker

This is a really annoying issue. I am using a third-party login in my application. When a user logs in through the third party, it redirects to an API call on the server.
ex: /api/signin/github?code=test&state=test
For some strange reason this API call is getting fetched from the service worker instead of going to the server, which handles the login logic.
Without seeing your service worker's fetch event handler, it's hard to say exactly what code is responsible for that.
In general, though, if there are URLs that you never want the service worker to respond to, you can just avoid calling event.respondWith(...) when they trigger a fetch. There are lots of ways to arrange that, but an early return is straightforward:
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (url.pathname === '/api/signin/github') {
    // By returning without calling event.respondWith(),
    // the request will be handled by the normal browser
    // network stack.
    return;
  }
  // Your fetch event response generation logic goes here.
  event.respondWith(...);
});

Managing Server Side Events with a Service Worker

I am building a web app to display on my iPad to control my Raspberry Pi, which acts as an audio recorder. Part of the need is to maintain an EventSource open so that the server can send Server Side Events. A specific instance of the app can grab control of the recording process, but will lose control if the server sees the SSE link close. This is just protection against a client disappearing and leaving the control held (control of the process does need to be renewed at least every 5 minutes, but I don't really want to wait that long in the normal case of someone just closing the browser tab).
Part of my need is to push the browser to the background so I can then open up the camera and record a video.
I built this app and had it almost working; see https://github.com/akc42/pi_record.git (master branch).
Until I pushed the browser to the background and found iOS shut down the page and broke the SSE link.
I tried restructuring to use a private web worker to manage the SSE link, passing messages between the web worker and the main JavaScript thread. Again it was almost working (see the workers branch of the above repository), but that got shut down too!
My last thought is to use a service worker, but how to structure the app?
Clearly the service worker must act as a client to the server for the server side events. It must keep the connection open, but it also needs to keep track of multiple tabs in the browser which may or may not try and grab control of the interface, and only allow one tab to do so.
I can think of three approaches, but it's difficult to see which is better. I have never even seen any mention of approaches 2 and 3 below, yet it seems to me that one of those two might actually be the simplest.
Approach 1
Move the code I have now for separate web workers into the service worker. However, we will need to add some form of ID to the message passing between window and service worker, so I can record which tab actually grabbed control of the interface and therefore exclude other tabs from doing so (i.e. simulate a failed attempt to take control).
As far as I can work out, MessageEvent.ports[0] could be a unique object which I could store in a Map somewhere, but I am not entirely convinced that the MessageChannel wouldn't close if the browser moved to the background.
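For what it's worth, a minimal sketch of that bookkeeping, assuming the window posts one end of a MessageChannel together with an id it chose for itself (the id and take-control fields are made-up names):

const ports = new Map();   // id -> MessagePort for each known tab
let controllingId = null;  // which tab currently holds control

self.addEventListener('message', (event) => {
  const port = event.ports[0];  // MessagePort transferred by the window
  const id = event.data.id;     // id the window chose for itself
  ports.set(id, port);
  if (event.data.type === 'take-control') {
    if (controllingId === null) controllingId = id;
    port.postMessage({ granted: controllingId === id });
  }
});

Bear in mind that worker-scope state like ports and controllingId is lost whenever the browser kills the idle service worker, which turns out to matter, as the answer below explains.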
Approach 2
Have a set of phantom URLs in the service worker that simulate all the different message types (and parameters) that were previously sent by the tab to its private web worker.
The fetch event provides a clientId (which I can use to distinguish who actually grabbed control), which I can then use to do clients.get(clientId).postMessage() (or clients.matchAll() when a broadcast response is needed).
Code would be something like
self.addEventListener('fetch', (event) => {
  const requestURL = new URL(event.request.url);
  if (/^\/api\//.test(requestURL.pathname)) {
    event.respondWith(fetch(event.request)); // all api requests are a direct pass through
  } else if (/^\/service\//.test(requestURL.pathname)) {
    /*
      process these like message passing, with one extra to say the client is going away.
    */
    if (urlRecognised) {
      event.respondWith(new Response('OK', {status: 200}));
    } else {
      event.respondWith(new Response(`Unknown request ${requestURL.pathname}`, {status: 404}));
    }
  } else {
    event.respondWith((async () => {
      const cache = await caches.open('recorder');
      const cachedResponse = await cache.match(event.request);
      const networkResponsePromise = fetch(event.request);
      event.waitUntil((async () => {
        const networkResponse = await networkResponsePromise;
        await cache.put(event.request, networkResponse.clone());
      })());
      // Return the cached response if we have one, otherwise return the network response.
      return cachedResponse || networkResponsePromise;
    })()); // note: the async functions must be invoked; respondWith and waitUntil expect promises
  }
});
The top of the fetch event just passes the standard API requests made by the client straight through. I can't cache these (although I could be more sophisticated and perhaps pre-reject those not supported).
The second section matches phantom URLs of the form /service/something.
The last section is taken from Jake Archibald's offline cookbook and tries to use the cache, but updates the cache in the background if any of the static files have changed.
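To make the phantom-URL idea concrete, here is a sketch of what the /service/ branch above might do with event.clientId (take-control is a made-up endpoint name; the other message types would follow the same pattern):

let controllingClientId = null; // worker-scope: which client holds control

// inside the /service/ branch of the fetch handler:
if (requestURL.pathname === '/service/take-control') {
  if (controllingClientId === null || controllingClientId === event.clientId) {
    controllingClientId = event.clientId;
    event.respondWith(new Response('OK', { status: 200 }));
  } else {
    event.respondWith(new Response('Held by another tab', { status: 409 }));
  }
}

// and a broadcast back to every open tab would look like:
// self.clients.matchAll().then((all) =>
//   all.forEach((client) => client.postMessage({ type: 'status-changed' })));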
Approach 3
Similar to the approach above, in that we would have phantom URLs and use the clientId as a unique marker, but it would actually try to simulate a Server Side Event stream with one URL.
I'm thinking the code would be more like:
...
} else if (/^\/service\//.test(requestURL.pathname)) {
  const stream = new TransformStream();
  const writer = stream.writable.getWriter(); // note: "writable", not "writeable"
  event.respondWith((async () => {
    const streamFinishedPromise = new Promise((resolve, reject) => {
      (async () => {
        try {
          // nextMessageFromServerSideEventStream() is a stand-in for however
          // the worker pulls the next server-sent message
          while (true) writer.write(await nextMessageFromServerSideEventStream());
        } catch (e) {
          writer.close();
          resolve();
        }
      })();
    });
    event.waitUntil((async () => {
      /* eventually close the link */
      await streamFinishedPromise;
    })());
    return new Response(stream.readable, {status: 200}); // probably need event-stream headers too
  })());
}
I am thinking that Approach 2 could be the simplest given where I am now, but I am concerned that I can find nothing, when searching for how to use service workers, that discusses this phantom-URL approach.
Can anyone comment on any of these approaches and provide guidance on how best to program the tricky bits? (For instance, does the Approach 1 MessageChannel close when the browser is moved to the background on an iPad? How do you really keep a response channel open, and does it get closed when the browser moves to the background in Approach 3?)
The simple truth is that none of these approaches will work. What I didn't realise when I asked the question is that a service worker is re-run by the browser whenever there is something to do, and that run only lasts for the length of time it takes to process an event. Although event.waitUntil can prolong that, the only reference to how long I can find says the browser is still at liberty to cancel it if it appears it might never complete. I can't imagine that over a period of several hours it won't get cancelled, so an EventSource will close, effectively terminating its link to the server.
So my only option to achieve what I want is to have the server carry on when the EventSource closes, and find some other mechanism to release resources held on behalf of the client.

A better way to detect a change in a parse class?

Currently I set up a timer that every 2 seconds makes a query to a Parse class to see if any data has changed. If the data has changed it calls the refreshData method so my view can be updated with the new Parse data.
So whenever data is updated in the Parse class, it will almost instantly be updated in the app.
The problem is this causes a lot of unnecessary web traffic, which I need to avoid.
What can I do to replace the timer with something that detects when data is changed in the Parse class and then tells the app to call the refreshData method?
afterSave Triggers
// this will trigger every time a className object changes
Parse.Cloud.afterSave("className", function(request) {
  // Do some stuff here, like making an HTTP request or sending a
  // 'data has changed' push to installed mobile devices
  console.log("Object has been added/updated: " + request.object.id);
});
https://parse.com/docs/js/guide#cloud-code-aftersave-triggers
You first need to deploy Cloud Code; then it will handle your problem :-)
In some cases, you may want to perform some action, such as a push, after an object has been saved. You can do this by registering a handler with the afterSave method. For example, suppose you want to keep track of the number of comments on a blog post. You can do that by writing a function like this:
Parse.Cloud.afterSave("Comment", function(request) {
  var query = new Parse.Query("Post");
  query.get(request.object.get("post").id, {
    success: function(post) {
      post.increment("comments");
      post.save();
    },
    error: function(error) {
      console.error("Got an error " + error.code + " : " + error.message);
    }
  });
});
The client will receive a successful response to the save request after the handler terminates, regardless of how it terminates. For instance, the client will receive a successful response even if the handler throws an exception. Any errors that occurred while running the handler can be found in the Cloud Code log.
If you want to use afterSave for a predefined class in the Parse JavaScript SDK (e.g. Parse.User), you should not pass a String for the first argument. Instead, you should pass the class itself.
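For example, hooking saves of the built-in user class looks like this:

Parse.Cloud.afterSave(Parse.User, function(request) {
  // runs after any user object is saved
  console.log("User " + request.object.id + " was saved");
});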
I'm not sure if my solution will fit your needs, but using a beforeSave trigger within Cloud Code, combined with dirty keys, will save you time and queries: http://blog.parse.com/learn/engineering/parse-objects-dirtykeys/
With dirty keys you can detect when a particular field has changed on your class, and then build a new trigger and do whatever you need once that happens.
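A minimal sketch of that pattern, assuming a hypothetical status field you care about (old-style Cloud Code passes a response object to beforeSave):

Parse.Cloud.beforeSave("className", function(request, response) {
  if (request.object.dirty("status")) {
    // the 'status' field is about to change; react here,
    // e.g. queue a push notification to clients
  }
  response.success(); // let the save proceed
});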
