How to pass File in electron contextBridge

I could not find documentation on what Electron's contextBridge does to API arguments, but it obviously does something.
This is the gist of it:
// preload.js
contextBridge.exposeInMainWorld('fileCache', {
  put (file) {
    console.log(file) // ==> {}
  }
})

// web app
window.fileCache.put(new File([], 'foo.txt'))
How should I pass a File (or any Blob or Buffer) argument? (Converting to a string is not an option for performance reasons: these are 20+ MB files...)
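One workaround, assuming the bridge behaves as documented: contextBridge serializes arguments roughly along the lines of the structured clone algorithm, and while a File does not survive the trip (it arrives as an empty object, as seen above), a plain ArrayBuffer does, without any string conversion. A minimal sketch; the fileCache/put names are from the question, everything else is an assumption:

// preload.js, sketch: accept the raw bytes plus a name instead of the File itself
contextBridge.exposeInMainWorld('fileCache', {
  put (name, arrayBuffer) {
    // Buffer.from(arrayBuffer) wraps the bytes without a string conversion
    const buffer = Buffer.from(arrayBuffer)
    console.log(name, buffer.length)
  }
})

// web app: unwrap the File before crossing the bridge
const file = new File(['hello'], 'foo.txt')
file.arrayBuffer().then(buf => window.fileCache.put(file.name, buf))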

Related

CDK not updating

Running cdk deploy after updating my Stack:
export function createTaskXXXX (stackScope: Construct, workflowContext: WorkflowContext) {
  const lambdaXXXX = new lambda.Function(stackScope, 'XXXXFunction', {
    runtime: Globals.LAMBDA_RUNTIME,
    memorySize: Globals.LAMBDA_MEMORY_MAX,
    code: lambda.Code.fromAsset(CDK_MODULE_ASSETS_PATH),
    handler: 'xxxx-handler.handler',
    timeout: Duration.minutes(Globals.LAMBDA_DURATION_2MIN),
    environment: {
      YYYY_ENV: (workflowContext.production) ? 'prod' : 'test',
      YYYY_A_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/`,
      YYYY_B_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/` // <--- added
    }
  })

  lambdaXXXX.addToRolePolicy(new iam.PolicyStatement({
    effect: Effect.ALLOW,
    actions: ['s3:PutObject'],
    resources: [
      `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/*`,
      `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/*` // <---- added
    ]
  }))
I realize that those changes are not reflected in stack.template.json:
...
  "Runtime": "nodejs12.x",
  "Environment": {
    "Variables": {
      "YYYY_ENV": "test",
      "YYYY_A_LOCATION": "s3://.../adata-workflow/split-input/"
    }
  },
  "MemorySize": 3008,
  "Timeout": 120
}
...
I have cleaned cdk.out and tried deploy --force, but I never see any updates.
Is deleting the stack and redeploying the only remaining alternative, or am I missing something? At the very least, synth should generate different results.
(I also changed to CDK 1.65.0 on my local system to match the package.json.)
Thanks.
EDITED: I did a fresh git clone of the project, ran npm install and cdk synth again, and finally saw the changes. I would like not to have to do this every time; any light on what could be blocking correct synth generation?
EDITED 2: After diffing the bad old project against the fresh clone where synth worked, I realized that some of my .ts project files (for example cdk.ts, my App definition) also had .js and .d.ts replicas, such as cdk.js and cdk.d.ts. Could I have run some command by mistake that compiled the TypeScript? I will continue to investigate. Thanks to all answers.
Because CDK uses CloudFormation, it computes a ChangeSet to decide what to update. That is to say, if CloudFormation doesn't think anything has changed, it won't touch that resource.
This can, of course, be very annoying, as sometimes it decides nothing has changed when there actually is a change. I find this most often with Layers, when using some form of makefile to generate the zips for the layers: even though it makes a "new" zip, whatever mechanism decides whether the zip is updated (compression, hashing, etc.) reports it as the same.
You can get around this by putting a datetime in the description. The description is assigned at synth time (which is part of cdk deploy), so stamping it with the current datetime makes the resource differ on every deployment.
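A minimal sketch of that trick; the construct name and the other props here are hypothetical:

// sketch: force CloudFormation to see a change by stamping the description at synth time
const now = new Date().toISOString();
const fn = new lambda.Function(this, 'MyFunction', {
  runtime: lambda.Runtime.NODEJS_14_X,     // hypothetical runtime
  code: lambda.Code.fromAsset('./lambda'), // hypothetical asset path
  handler: 'index.handler',
  description: `deployed at ${now}`,       // differs on every synth, so the resource always updates
});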
You can also use cdk diff to see what it thinks the changes are.
And finally... always remember to save your file before deployments as, depending on your IDE, it may not be available to the command line ;)
Looking at the code, I would expect it to update, but I don't know why it doesn't.
One workaround is to comment out the Lambda part and deploy once, then uncomment it and deploy again, so the Lambda is recreated.
This is how I do it. Works nicely so far. Basically you can do the following:
Push your lambda code as a zip file to an S3 bucket. The bucket must have versioning enabled.
The CDK code below will do the following:
Create a custom resource. It basically calls s3.listObjectVersions for my lambda zip file in S3. I grab the first returned value, which seems to be the most recent object version all the time (I cannot confirm this with the documentation though). I also create a role for the custom resource.
Create the lambda and specify the code as the zip file in s3 AND THE OBJECT VERSION RETURNED BY THE CUSTOM RESOURCE! That is the most important part.
Create a new lambda version.
Then the lambda's code updates when you deploy the CDK stack!
const versionIdKey = 'Versions.0.VersionId';
const isLatestKey = 'Versions.0.IsLatest';
const now = new Date().toISOString();

const role = new Role(this, 'custom-resource-role', {
  assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
});
role.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess')); // you can make this more specific

// I'm not 100% sure this gives you the most recent first, but it seems to be doing that every time for me. I can't find anything in the docs about it...
const awsSdkCall: AwsSdkCall = {
  action: "listObjectVersions",
  parameters: {
    Bucket: buildOutputBucket.bucketName, // S3 bucket with the zip file containing the lambda code
    MaxKeys: 1,
    Prefix: LAMBDA_S3_KEY, // S3 key of the zip file containing the lambda code
  },
  physicalResourceId: PhysicalResourceId.of(buildOutputBucket.bucketName),
  region: 'us-east-1', // or whatever region
  service: "S3",
  outputPaths: [versionIdKey, isLatestKey]
};

const customResourceName = 'get-object-version';
const customResourceId = `${customResourceName}-${now}`; // not sure if `now` is necessary...
const response = new AwsCustomResource(this, customResourceId, {
  functionName: customResourceName,
  installLatestAwsSdk: true,
  onCreate: awsSdkCall,
  onUpdate: awsSdkCall,
  policy: AwsCustomResourcePolicy.fromSdkCalls({resources: AwsCustomResourcePolicy.ANY_RESOURCE}), // you can make this more specific
  resourceType: "Custom::ListObjectVersions",
  role: role
});

const fn = new Function(this, 'my-lambda', {
  functionName: 'my-lambda',
  description: `${response.getResponseField(versionIdKey)}-${now}`,
  runtime: Runtime.NODEJS_14_X,
  memorySize: 1024,
  timeout: Duration.seconds(5),
  handler: 'index.handler',
  code: Code.fromBucket(buildOutputBucket, LAMBDA_S3_KEY, response.getResponseField(versionIdKey)), // This is where the magic happens. You tell CDK to use a specific S3 object version when updating the lambda.
  currentVersionOptions: {
    removalPolicy: RemovalPolicy.DESTROY,
  },
});

new Version(this, `version-${now}`, { // not sure if `now` is necessary...
  lambda: fn,
  removalPolicy: RemovalPolicy.DESTROY
});
Do note:
For this to work, you have to upload your lambda zip code to S3 before each cdk deploy. It can be the same code as before; the S3 bucket versioning will still create a new version. I use CodePipeline to do this as part of additional automation.
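For illustration, a hedged sketch of that pre-deploy upload step using the AWS SDK for JavaScript v2; the bucket name, key, and local path are all hypothetical:

const { S3 } = require('aws-sdk');
const fs = require('fs');

// upload the freshly built zip; with bucket versioning enabled, S3 records a new object version
new S3({ region: 'us-east-1' })
  .putObject({
    Bucket: 'my-build-output-bucket', // hypothetical bucket, versioning must be enabled
    Key: 'lambda/code.zip',           // hypothetical LAMBDA_S3_KEY
    Body: fs.readFileSync('./dist/code.zip'),
  })
  .promise()
  .then(res => console.log('uploaded object version', res.VersionId));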

Electron: accessing the browser window of the single instance when called a second time

I have been able to successfully create an Electron app that loads the web app (using window.loadURL).
I have been able to pass some command line params to the web app using window.webContents.send .... On the web app, the JavaScript receives the parameter and updates the screen.
I am using (2) by opening a file (right-clicking on it in the directory) through file association, using process.argv[1]. This too works.
What I would like is that if I right-click on a second file, it is passed on to the same Electron instance. I am having some issues with this.
I have used the recommended approach for preventing multiple instances as below:
...
let myWindow = null

const gotTheLock = app.requestSingleInstanceLock()
if (!gotTheLock) {
  // I do not want to quit
  app.quit()
} else {
  app.on('second-instance', (event, commandLine, workingDirectory) => {
    ...
  })
}
In the above logic, when the program is unable to get the lock, the boilerplate code quits. That works fine, in the sense that a second window does not open. But in my case, I would like to take the process.argv[1] of the second request and pass it to the web program in the existing instance.
I have not been successful in getting a handle to the browserWindow of the other instance. I would not want to work with multiple windows, where each window would load the web app again. The current web app can update multiple tabs in the same window based on different parameters; that is handled in the web app itself.
What could be the solution? Thanks
I got it working. It was staring me in the face and I did not see it. Adding a few logs helped. Something like this is a solution:
...
let myWindow = null;
...

function createWindow() {
  ....
  return win;
}

function processParams(...) {
  ...
}

const gotTheLock = app.requestSingleInstanceLock()
if (!gotTheLock) {
  app.quit()
} else {
  app.on('second-instance', (event, commandLine, workingDirectory) => {
    // .. this is called the second time
    // process any commandLine params
    processParams(...)
    ...
  });

  app.whenReady()
    .then(_ => {
      myWindow = createWindow();
      // this is called the first time
      // process any argv params
      processParams(...);
    });
}
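For completeness, a sketch of what processParams might look like when forwarding the path from the second instance to the already-open window; the 'open-file-param' channel name and the argv indexing are assumptions:

function processParams (argv) {
  // with file associations, the file path usually arrives as the last command line argument
  const filePath = argv[argv.length - 1];
  if (filePath && myWindow) {
    // hypothetical IPC channel; the web app listens for it and opens a new tab
    myWindow.webContents.send('open-file-param', filePath);
    if (myWindow.isMinimized()) myWindow.restore();
    myWindow.focus();
  }
}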

Make Changes To sw-precache service-worker

I am using sw-precache, and I understand that in order to extend the service-worker.js file I need to do this (as detailed in the service-worker.js file itself):
// This file should be overwritten as part of your build process.
// If you need to extend the behavior of the generated service worker, the best approach is to write
// additional code and include it using the importScripts option:
// https://github.com/GoogleChrome/sw-precache#importscripts-arraystring
but I do not know where to add the importScripts() code. Does it go in the service-worker.js file? Surely that gets overwritten on each project build.
Just include it like so:
importScripts: ['custom-offline-import.js']
Here is an example of a config that includes the importScripts option at the end:
var packageJson = require('../package.json');
var swPrecache = require('../lib/sw-precache.js');
var path = require('path');

function writeServiceWorkerFile(rootDir, handleFetch, callback) {
  var config = {
    cacheId: packageJson.name,
    runtimeCaching: [{
      // See https://github.com/GoogleChrome/sw-toolbox#methods
      urlPattern: /runtime-caching/,
      handler: 'cacheFirst'
    }],
    staticFileGlobs: [rootDir + '/**/*.{js,html,css,png,jpg,gif}'],
    stripPrefix: rootDir,
    importScripts: ['custom-offline-import.js']
  };
  swPrecache.write(path.join(rootDir, 'service-worker.js'), config, callback);
}
Hope I could help you!
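For reference, the imported file is just a plain service-worker script that the generated worker pulls in via importScripts(), so it survives rebuilds. A minimal sketch of what it might contain; the handler below is purely an assumption:

// custom-offline-import.js: loaded into the generated service worker, not overwritten by builds
self.addEventListener('fetch', function (event) {
  // extend the generated worker here, e.g. with custom offline fallbacks
  console.log('custom import saw a request for', event.request.url);
});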
Add it to the sw-precache-config.js file.
Good example here:
Rewrite URL offline when using a service worker

Copying default external configuration on first run of Grails web app

In our Grails web applications, we'd like to use external configuration files so that we can change the configuration without releasing a new version. We'd also like these files to be outside of the application directory so that they stay unchanged during continuous integration.
The last thing we need to do is to make sure the external configuration files exist. If they don't, then we'd like to create them, fill them with predefined content (production environment defaults) and then use them as if they existed before. This allows any administrator to change settings of the application without detailed knowledge of the options actually available.
For this purpose, there's a couple of files within web-app/WEB-INF/conf ready to be copied to the external configuration location upon the first run of the application.
So far so good. But we need to do this before the application is initialized so that production-related modifications to data sources definitions are taken into account.
I can do the copy-and-load operation inside the Config.groovy file, but I don't know the absolute location of the WEB-INF/conf directory at the moment.
How can I get the location during this early phase of initialization? Is there any other solution to the problem?
There is a best practice for this.
In general, never write to the folder where the application is deployed. You have no control over it. The next rollout will remove everything you wrote there.
Instead, leverage the built-in configuration capabilities that the real pros use (Spring and/or JPA).
JNDI is the norm for looking up resources like databases, files and URLs.
Operations will have to configure JNDI, but they appreciate the attention.
They also need an initial set of configuration files, and they should be prepared to make changes at times, as required by the development team.
As always, all configuration files should be in your source code repo.
I finally managed to solve this myself by using Java's ability to locate resources placed on the classpath.
I took the .groovy files that are later to be copied outside, placed them in the grails-app/conf directory (which is on the classpath), and appended a suffix to their names so that they wouldn't get compiled when packaging the application. So now I have *Config.groovy files containing configuration defaults (for all environments) and *Config.groovy.production files containing defaults for the production environment (overriding the precompiled defaults).
Now, Config.groovy starts like this:
grails.config.defaults.locations = [ EmailConfig, AccessConfig, LogConfig, SecurityConfig ]

environments {
  production {
    grails.config.locations = ConfigUtils.getExternalConfigFiles(
      '.production',
      "${userHome}${File.separator}.config${File.separator}${appName}",
      'AccessConfig.groovy',
      'Config.groovy',
      'DataSource.groovy',
      'EmailConfig.groovy',
      'LogConfig.groovy',
      'SecurityConfig.groovy'
    )
  }
}
Then the ConfigUtils class:
import org.apache.commons.io.FileUtils

import java.util.logging.Logger

public class ConfigUtils {

    // Log4j may not be initialized yet, so use java.util.logging
    private static final Logger LOG = Logger.getGlobal()

    public static def getExternalConfigFiles(final String defaultSuffix, final String externalConfigFilesLocation, final String... externalConfigFiles) {
        final def externalConfigFilesDir = new File(externalConfigFilesLocation)
        LOG.info "Loading configuration from ${externalConfigFilesDir}"
        if (!externalConfigFilesDir.exists()) {
            LOG.warning "${externalConfigFilesDir} not found. Creating..."
            try {
                externalConfigFilesDir.mkdirs()
            } catch (e) {
                LOG.severe "Failed to create external configuration storage. Default configuration will be used."
                e.printStackTrace()
                return []
            }
        }
        final def cl = ConfigUtils.class.getClassLoader()
        def result = []
        externalConfigFiles.each {
            final def file = new File(externalConfigFilesDir, it)
            if (file.exists()) {
                result << file.toURI().toURL()
                return
            }
            def error = false // not final: it is reassigned below
            final def defaultFileURL = cl.getResource(it + defaultSuffix)
            final def defaultFile
            if (defaultFileURL) {
                defaultFile = new File(defaultFileURL.toURI())
                error = !defaultFile.exists()
            } else {
                error = true
            }
            if (error) {
                LOG.severe "Neither ${file} nor ${defaultFile} exists. Skipping..."
                return
            }
            LOG.warning "${file} does not exist. Copying ${defaultFile} -> ${file}..."
            try {
                FileUtils.copyFile(defaultFile, file)
            } catch (e) {
                LOG.severe "Couldn't copy ${defaultFile} -> ${file}. Skipping..."
                e.printStackTrace()
                return
            }
            result << file.toURI().toURL()
        }
        return result
    }
}

Open external file with Electron

I have a running Electron app and it is working great so far. For context, I need to run/open an external file, a Go binary that will do some background tasks.
Basically it will act as a backend, exposing an API that the Electron app will consume.
So far this is what I've got:
I tried to open the file the "Node way" using child_process, but I failed to open even a sample txt file, probably due to path issues.
The Electron API exposes an open-file event, but it lacks documentation/examples and I don't know if it could be useful.
That's it.
How do I open an external file in Electron?
There are a couple of APIs you may want to study up on and see which helps you.
fs
The fs module allows you to open files for reading and writing directly.
var fs = require('fs');

fs.readFile(p, 'utf8', function (err, data) {
  if (err) return console.log(err);
  // data is the contents of the text file we just read
});
path
The path module allows you to build and parse paths in a platform agnostic way.
var path = require('path');
var p = path.join(__dirname, '..', 'game.config');
shell
The shell API is an Electron-only API that you can use to shell-execute a file at a given path, which will open the file with the OS default application.
const {shell} = require('electron');
// Open a local file in the default app
shell.openItem('c:\\example.txt');
// Open a URL in the default way
shell.openExternal('https://github.com');
child_process
Assuming that your Go binary is an executable, you would use child_process.spawn to call it and communicate with it. This is a Node API.
var path = require('path');
var spawn = require('child_process').spawn;
var child = spawn(path.join(__dirname, '..', 'mygoap.exe'), ['game.config', '--debug']);
// attach events, etc.
addon
If your Go binary isn't an executable, you will need to make a native addon wrapper.
Maybe you are looking for this?
dialog.showOpenDialog; refer to: https://www.electronjs.org/docs/api/dialog
If you are using electron#13.1.0, you can do it like this (in modern Electron, dialog.showOpenDialog returns a Promise rather than taking a callback):

const { dialog } = require('electron')

dialog.showOpenDialog({ properties: ['openFile', 'multiSelections'] })
  .then(result => {
    console.info(result.filePaths) // => the absolute paths of the selected files
  })

When the above code is triggered, you see an "open file" dialog (with a different native style on Windows/macOS/Linux).
Electron allows the use of Node.js packages.
In other words, import Node packages as if you were in Node, e.g.:
var fs = require('fs');
To run the Go binary, you can make use of the child_process module. The documentation is thorough.
Edit: You have to solve the path differences. The open-file event is a client-side event, triggered by the window. It is not what you want here.
I was also totally struggling with this issue, and almost seven years later the documentation is still not clear about the Linux case.
On Linux, it gets the same treatment as Windows in this regard, which means you have to look at the process.argv global in the main process. The first value in the array is the path that fired the app; the second value, if one exists, holds the path of the file that requested the app to be opened. For example, here is the output for my test case:
Array(2)
  0: "/opt/Blueprint/b-test"
  1: "/home/husayngonzalez/2022-01-20.md"
  length: 2
So when you create a new window, you check the length of process.argv: if it is greater than 1 (i.e. 2), you have a path that was requested to be opened with your app.
This assumes your application is packaged with the ability to process those files, and that the operating system is set up to ask your application to open them.
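A minimal sketch of that check in the main process; createWindow and the 'open-file-param' channel are hypothetical:

const { app } = require('electron');

app.whenReady().then(() => {
  const win = createWindow(); // hypothetical helper that builds the BrowserWindow
  // on Windows/Linux, the file path that triggered the launch arrives in process.argv
  if (process.argv.length > 1) {
    win.webContents.send('open-file-param', process.argv[1]); // hypothetical IPC channel
  }
});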
I know this doesn't exactly meet your specification, but it does cleanly separate your Go binary from the Electron application.
The way I have done it is to expose the Go binary as a web service, like this:
package main

import (
    "fmt"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    // TODO: put your call here instead of the Fprintf
    fmt.Fprintf(w, "HI there from Go Web Svc. %s", r.URL.Path[1:])
}

func main() {
    http.HandleFunc("/api/someMethod", handler)
    http.ListenAndServe(":8080", nil)
}
Then from Electron just make AJAX calls to the web service with a JavaScript function, like this (you could use jQuery, but I find this pure JS works fine):
function get(url, responseType) {
  return new Promise(function(resolve, reject) {
    var request = new XMLHttpRequest();
    request.open('GET', url);
    request.responseType = responseType;
    request.onload = function() {
      if (request.status == 200) {
        resolve(request.response);
      } else {
        reject(Error(request.statusText));
      }
    };
    request.onerror = function() {
      reject(Error("Network Error"));
    };
    request.send();
  });
}
With that method you could do something like:

get('http://localhost:8080/api/someMethod', 'text')
  .then(function(x) {
    console.log(x);
  });
