Accessing the DOM between two renderers in Electron

Is it possible to do this in Electron:
I want to duplicate a video to a 2nd screen.
This is easily done by invoking the following 50 times per second:
canvas_context_2nd_screen.drawImage(video_1st_screen, 0, 0, width_canvas_2nd, height_canvas_2nd);
but in Electron I have to communicate via IPC...
Any ideas? Is it possible in NW.js?

It is supported in NW.js, as DOM windows are in the same renderer process by default and the Node.js context is shared between them:
http://docs.nwjs.io/en/latest/For%20Users/Advanced/JavaScript%20Contexts%20in%20NW.js/
This can be changed so that they are separate, though.
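As an illustration, here is a minimal NW.js sketch of the idea, assuming a video element with id "video" in the main window and a canvas with id "mirror" in second.html (the IDs and the file name are made up):

const video = document.getElementById('video');

nw.Window.open('second.html', { width: 800, height: 600 }, (win) => {
  // Wait until the second window's DOM has loaded.
  win.on('loaded', () => {
    // Both windows live in the same renderer process, so we can reach
    // into the second window's DOM directly, no IPC needed.
    const canvas = win.window.document.getElementById('mirror');
    const ctx = canvas.getContext('2d');

    // Redraw roughly 50 times per second.
    setInterval(() => {
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    }, 20);
  });
});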

Related

What is the advantage of using angular with electronJS

I want to create a desktop application compatible with other OSes. For that I'm using Electron with Angular. Since both are frameworks, will it affect performance or loading time? Is deploying easy, and can we use all the features of Angular, like routing, when we use it with Electron?
Electron uses Chromium and Node.js, which is the reason why it is compatible with other OSes. You can talk with the Node.js process from your Angular app, which opens up some possibilities, for example opening native file dialogs to let the user choose files. Electron also already abstracts some platform-specific operations, like getting the user home directory to save configuration files.
You can use routing just like in any Angular app, and I think you can use most features like you would normally, but don't take my word on that one.
I would not say combining those two affects your loading time. During development you have to build your Angular app before Electron can start up and use those files, but in production the Angular app is already built and ready to be loaded, so they don't hinder each other.
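As a rough sketch of the native file-dialog example mentioned above, the main process can expose a handler over IPC and the Angular side can invoke it. The channel name 'open-file' and the nodeIntegration setup are assumptions, not part of any official template:

// main.js (Electron main process)
const { app, BrowserWindow, ipcMain, dialog } = require('electron');

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: { nodeIntegration: true, contextIsolation: false },
  });
  win.loadFile('dist/index.html'); // the built Angular app

  // Let the renderer ask the main process for a native file dialog.
  ipcMain.handle('open-file', async () => {
    const result = await dialog.showOpenDialog({ properties: ['openFile'] });
    return result.filePaths;
  });
});

// In an Angular service (renderer process):
//   const { ipcRenderer } = window.require('electron');
//   const paths = await ipcRenderer.invoke('open-file');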

DOM ready event inside Electron how?

This is a silly question, but I could not find a clear answer on the web.
Do I need to listen to a special "DOM ready" event if my app is running within a BrowserWindow in Electron? For instance, in a Cordova/PhoneGap app I read that you can start doing things after the deviceready event.
I would like to know how this is done in Electron. Is one of the following enough?
document.addEventListener("DOMContentLoaded", startApp);
window.addEventListener("load", startApp);
Thank you.
Cordova has deviceready because it has both native code and JavaScript code, and potentially JavaScript might run before the native code has finished loading.
You don't have the same problem in Electron. You have a main process (main.js) which creates your BrowserWindow, so by the time any client-side JavaScript is running, your main process has definitely already started because it was the thing that created your browser window in the first place!
Inside the browser window the same events fire as they would on a normal webpage. So if you wanted to, you could use DOMContentLoaded or load (for the difference see this article on MDN), just as you would for a regular web application. But you certainly don't need to wait for either of them before calling any of the Electron APIs.
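A minimal sketch of a renderer script (startApp is just a placeholder for your own initialization code):

// renderer.js -- runs inside the BrowserWindow, just like a normal webpage
function startApp() {
  // Electron/Node APIs were already usable before this event fired;
  // waiting for it only matters if you need the DOM to be parsed.
  document.body.textContent = 'App started';
}

document.addEventListener('DOMContentLoaded', startApp);
// window.addEventListener('load', startApp); // fires later, after images and styles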

Node.js, renderer process and main process for Electron

I'm trying to see if I understand how Electron's implementation of Node.js is done and how it interacts with the app. From my understanding, the startup web page has a JavaScript file that runs as a "renderer" process. Code in this script can also access any of the Node.js APIs. To create new browser windows, code in the renderer script uses new BrowserWindow, and each window in turn has its own renderer script.
Code in the renderer scripts runs under Node.js and as such any code written in these scripts cannot communicate with script code in the browser's web page.
Is all of this true or am I wrong on something?
The Electron main process can create new windows (with BrowserWindow), and each of those windows has a renderer process. You can use IPC to send messages between a renderer process and the main process. To send a message from one renderer process to another, there are plugins for that, or you can simply relay the message through the main process.
The format/appearance of each window is controlled via HTML and CSS. Part of creating a window is specifying the HTML file to load.
More info can be found in this other SO question. That question references this repo, which has further details.
Lastly, the consensus seems to be to put as much in the renderer as possible.
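A minimal sketch of the relay pattern mentioned above, with made-up file and channel names:

// main.js (main process)
const { app, BrowserWindow, ipcMain } = require('electron');

app.whenReady().then(() => {
  const prefs = { webPreferences: { nodeIntegration: true, contextIsolation: false } };
  const winA = new BrowserWindow(prefs);
  const winB = new BrowserWindow(prefs);
  winA.loadFile('a.html');
  winB.loadFile('b.html');

  // Relay a message from one renderer to the other through the main process.
  ipcMain.on('to-other-window', (event, payload) => {
    winB.webContents.send('from-other-window', payload);
  });
});

// a.html's renderer script:
//   const { ipcRenderer } = require('electron');
//   ipcRenderer.send('to-other-window', { hello: 'from window A' });

// b.html's renderer script:
//   ipcRenderer.on('from-other-window', (event, payload) => console.log(payload));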
For more clarification: by
"Code in the renderer scripts runs under Node.js and as such any code written in these scripts cannot communicate with script code in the browser's web page."
are you asking if an Electron app can interact with a separate web browser?

VSCode extension IPC with UI inside HTML preview

I wish to develop a unit test runner extension for VSCode. The extension should display discovered tests grouped into an expandable hierarchy, annotate run status, display output and errors for each test, provide run/debug commands at different levels, and of course show the red/green bar.
Roughly separating this into "model" and "view", I plan to implement the model in the extension process, and I plan to implement the view as an HTML preview based on a TextDocumentContentProvider. (Is there a better approach?)
Now, the model and the view should communicate with each other. I want to implement the view as a single-page application. The view will send commands to the model, and the model will send events to the view (or the view will poll the model for events). The view will update itself according to the received events.
My question is, what communication technique should I use? Can the HTML page inside the HTML preview access VSCode/Atom/Electron/Node APIs? Can I share object instances, or do some lightweight IPC? So far I haven't figured this out.
I've found that I can invoke VSCode commands or refresh the entire page when the user clicks a link with its href set to a specific scheme (command:// or the one I registered for my TextDocumentContentProvider).
I did succeed in opening an HTTP listener (http.createServer) in the extension process and communicating through XMLHttpRequest on the HTML preview side. But that looks like heavy overkill to me.
I wonder if there are more appropriate ways to do this?
Almenon is referring to the currently proposed Webview API that was released in version 1.21 (Feb 2018). For the time being, this appears to be a much better approach for HTML previews. But in order to use the API, there are special instructions. From the release notes:
These APIs are still proposed, so in order to use it, you must opt into it by adding a "enableProposedApi": true to package.json and you'll have to copy the vscode.proposed.d.ts into your extension project.
What isn't clarified (and probably should be) is how to add the downloaded declaration file to a project. One way to do it is to place the file in $/node_modules/vscode, next to vscode.d.ts, which is generated during postinstall. Then add the following line to the top of vscode.d.ts:
/// <reference path="vscode.proposed.d.ts" />
That will link the type declaration files. To make this part of the installation process, write a build task to do it and then call it in the vscode:postinstall script in package.json.
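A minimal sketch of such a build step, written as a Node script (say scripts/link-proposed-api.js); the paths and the postinstall command shown are assumptions that depend on your project layout:

const fs = require('fs');
const path = require('path');

const vscodeDir = path.join(__dirname, '..', 'node_modules', 'vscode');
const dts = path.join(vscodeDir, 'vscode.d.ts');

// Copy the downloaded declaration file next to vscode.d.ts ...
fs.copyFileSync(
  path.join(__dirname, '..', 'vscode.proposed.d.ts'),
  path.join(vscodeDir, 'vscode.proposed.d.ts')
);

// ... and prepend the reference line if it is not there yet.
const reference = '/// <reference path="vscode.proposed.d.ts" />\n';
const current = fs.readFileSync(dts, 'utf8');
if (!current.startsWith(reference)) {
  fs.writeFileSync(dts, reference + current);
}

// package.json (example): "vscode:postinstall": "node ./node_modules/vscode/bin/install && node scripts/link-proposed-api.js"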
VSCode has a new API that makes this easier. You can find the new API here:
https://github.com/Microsoft/vscode/issues/43713
To try the new API:
Add "enableProposedApi": true to your package.json
Manually download vscode.proposed.d.ts and add it to your project: https://raw.githubusercontent.com/Microsoft/vscode/master/src/vs/vscode.proposed.d.ts
Run your extension with the latest VS Code insiders build
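Once enabled, the webview API supports exactly the kind of two-way messaging the question asks about. A minimal sketch, with made-up view type, title, and message shapes:

const vscode = require('vscode');

function activate(context) {
  const panel = vscode.window.createWebviewPanel(
    'testRunner',            // internal view type
    'Test Runner',           // tab title
    vscode.ViewColumn.One,
    { enableScripts: true }  // allow scripts inside the webview
  );

  panel.webview.html = `<!DOCTYPE html>
    <html><body>
      <button onclick="run()">Run tests</button>
      <script>
        const vscodeApi = acquireVsCodeApi();
        function run() { vscodeApi.postMessage({ command: 'runTests' }); }
        window.addEventListener('message', (e) => console.log('event from extension:', e.data));
      </script>
    </body></html>`;

  // Extension ("model") side: receive commands from the view, send back events.
  panel.webview.onDidReceiveMessage((msg) => {
    if (msg.command === 'runTests') {
      panel.webview.postMessage({ event: 'testsStarted' });
    }
  }, undefined, context.subscriptions);
}

module.exports = { activate };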

Arrange Cytoscape network windows using the API (Version 3)

I'm developing a bundle app for Cytoscape 3. In this app I need functionality very similar to the built-in View > Arrange Network Windows > Grid, or Ctrl+G.
However, I cannot seem to find anything in Cytoscape's API that allows me to arrange network windows.
The source code behind the built-in functionality can be found here: https://github.com/cytoscape/cytoscape-impl/blob/cbd6ae7202a2137d0224862aa371b82c1ec9a7a7/swing-application-impl/src/main/java/org/cytoscape/internal/view/CyDesktopManager.java#L81
As you can see I need a reference to the JDesktopPane, how do I get this through the API?
I don't think there is a clean API way of achieving this. You can, however, do it as follows:
In your activator you are able to retrieve a CySwingApplication reference (getService(bc, CySwingApplication.class)), from which you can call the method .getJFrame(). You can recursively scan through all Swing Container components until you find a component of type JDesktopPane. When you call .getAllFrames() on the JDesktopPane you can do whatever you want with your network windows (JInternalFrame).
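A minimal Java sketch of that recursive scan (the helper class and method names are made up; only the Swing and CySwingApplication calls come from the answer above):

import java.awt.Component;
import java.awt.Container;
import javax.swing.JDesktopPane;

public class DesktopPaneFinder {

    // Walk the Swing component tree until a JDesktopPane turns up.
    public static JDesktopPane findDesktopPane(Container root) {
        for (Component child : root.getComponents()) {
            if (child instanceof JDesktopPane) {
                return (JDesktopPane) child;
            }
            if (child instanceof Container) {
                JDesktopPane found = findDesktopPane((Container) child);
                if (found != null) {
                    return found;
                }
            }
        }
        return null;
    }

    // Usage, e.g. from your CyActivator:
    //   CySwingApplication swingApp = getService(bc, CySwingApplication.class);
    //   JDesktopPane desktop = findDesktopPane(swingApp.getJFrame());
    //   JInternalFrame[] frames = desktop.getAllFrames();
}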
