Electron ES6 module import

Electron 3.0.0-beta.1
Node 10.2.0
Chromium 66.0.3359.181
The problem I'm having is importing a module. I created the following protocol:
protocol.registerFileProtocol('client', (request, callback) => {
  var url = request.url.substr(8);
  callback({path: path.join(__dirname, url)});
});
The output of the protocol is the correct path
"/Users/adviner/Projects/Client/src/ClientsApp/app.js"
I have the following module app.js with the following code:
export function square() {
  return 'hello';
}
in my index.html I import the module like so:
<script type="module">
  import { square } from 'client://app.js';
  console.log(square());
</script>
But I keep getting the error:
app.js/:1 Failed to load module script: The server responded with a non-JavaScript MIME type of "". Strict MIME type checking is enforced for module scripts per HTML spec.
I've done searches but can't seem to find a solution. Can anyone suggest a way to make this work?
Thanks

This is a tricky question and I will refer to Electron#12011 and this GitHub Gist for a deeper explanation, but the core learning is that the corresponding HTML spec disallows import via file:// (for XSS reasons) and a protocol must have the MIME types defined.
The file protocol you use, client://, has to set the correct MIME types when serving the files. Currently I would guess they are not set when you define the protocol, thus you receive a The server responded with a non-JavaScript MIME type of "" error; the gist above has a code sample on how to do it via protocol.registerBufferProtocol.
Edit: I just want to emphasize that the other answers here only cover the absolute minimum implementation, with no consideration of exceptions, security, or future changes. I highly recommend taking the time to read through the gist I linked.
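To illustrate the core idea, here is a minimal sketch of a buffer protocol that sets an explicit MIME type per file extension (the extension map and the error handling are my assumptions, not the gist's exact code):
const { protocol } = require('electron')
const fs = require('fs')
const path = require('path')

// Assumed minimal extension-to-MIME map; extend as needed.
const mimeByExt = { '.js': 'text/javascript', '.html': 'text/html', '.css': 'text/css' }

protocol.registerBufferProtocol('client', (request, callback) => {
  const file = path.join(__dirname, request.url.slice('client://'.length))
  fs.readFile(file, (err, data) => {
    if (err) return callback({ error: -6 }) // net::ERR_FILE_NOT_FOUND
    callback({ mimeType: mimeByExt[path.extname(file)] || 'text/plain', data })
  })
})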

To confirm: this is there for security reasons.
However, in the event that you just need to get it deployed:
Change "target": "es2015" to "target": "es5" in your tsconfig.json file

Quick Solution:
const { app, protocol } = require( 'electron' )
const nfs = require( 'fs' )
const npjoin = require( 'path' ).join
const es6Path = npjoin( __dirname, 'www' )

// <= v4.x
// protocol.registerStandardSchemes( [ 'es6' ] )

// >= v5.x
protocol.registerSchemesAsPrivileged([
  { scheme: 'es6', privileges: { standard: true } }
])

app.on( 'ready', () => {
  protocol.registerBufferProtocol( 'es6', ( req, cb ) => {
    nfs.readFile(
      npjoin( es6Path, req.url.replace( 'es6://', '' ) ),
      ( e, b ) => { cb( { mimeType: 'text/javascript', data: b } ) }
    )
  })
})
<script type="module" src="es6://main.js"></script>

Based on flcoder's solution for older Electron versions.
Electron 5.0
const { app, protocol } = require('electron')
const nfs = require('fs')
const npjoin = require('path').join
const es6Path = npjoin(__dirname, 'www')

protocol.registerSchemesAsPrivileged([{ scheme: 'es6', privileges: { standard: true, secure: true } }])

app.on('ready', async () => {
  protocol.registerBufferProtocol('es6', (req, cb) => {
    nfs.readFile(
      npjoin(es6Path, req.url.replace('es6://', '')),
      (e, b) => { cb({ mimeType: 'text/javascript', data: b }) }
    )
  })
  await createWindow() // assumes you define createWindow() elsewhere
})
Attention! The path always seems to be transformed to lowercase
<script type="module" src="es6://path/main.js"></script>
Sorry Viziionary, not enough reputation to answer the comment.

I've now done it like this:
https://gist.github.com/jogibear9988/3349784b875c7d487bf4f43e3e071612
My problem was that I also wanted to support modules that are imported via non-relative paths, so I don't need to transpile my code.

Related

aws cdk lambda, appconfig typescript example please?

Can anyone provide or point at an AWS-CDK/typescript example provisioning an AWS AppConfig app/env/config consumed by a lambda please? Could not find anything meaningful online.
A little late to this party, but if anyone is coming to this question looking for a quick answer, here you go:
AppConfig Does Not Have Any Official L2 Constructs
This means that one has to work with L1 constructs currently vended here
This is ripe for someone authoring an L2/L3 construct (I am working on vending this custom construct soon - so keep an eye out on updates here).
This does not mean it's hard to use AppConfig in CDK; it's just that, unlike with L2/L3 constructs, we have to dig deeper into setting AppConfig up by reading its documentation.
A Very Simple Example (YMMV)
Here is a custom construct I have to setup AppConfig:
import {
  CfnApplication,
  CfnConfigurationProfile,
  CfnDeployment,
  CfnDeploymentStrategy,
  CfnEnvironment,
  CfnHostedConfigurationVersion,
} from "aws-cdk-lib/aws-appconfig";
import { Construct } from "constructs";

// 1. https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appconfig-readme.html - there are no L2 constructs for AppConfig.
// 2. https://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-working.html - this is the CDK code from this console setup guide.
export class AppConfig extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // create a new app config application.
    const testAppConfigApp: CfnApplication = new CfnApplication(
      this,
      "TestAppConfigApp",
      {
        name: "TestAppConfigApp",
      }
    );

    // you can customize this as per your needs.
    const immediateDeploymentStrategy = new CfnDeploymentStrategy(
      this,
      "DeployStrategy",
      {
        name: "ImmediateDeployment",
        deploymentDurationInMinutes: 0,
        growthFactor: 100,
        replicateTo: "NONE",
        finalBakeTimeInMinutes: 0,
      }
    );

    // setup an app config env
    const appConfigEnv: CfnEnvironment = new CfnEnvironment(
      this,
      "AppConfigEnv",
      {
        applicationId: testAppConfigApp.ref,
        // can be anything that makes sense for your use case.
        name: "Production",
      }
    );

    // setup config profile
    const appConfigProfile: CfnConfigurationProfile = new CfnConfigurationProfile(
      this,
      "ConfigurationProfile",
      {
        name: "TestAppConfigProfile",
        applicationId: testAppConfigApp.ref,
        // we want AppConfig to manage the configuration profile, unless we need it from SSM or S3.
        locationUri: "hosted",
        // This can also be "AWS.AppConfig.FeatureFlags"
        type: "AWS.Freeform",
      }
    );

    // Update AppConfig
    const configVersion: CfnHostedConfigurationVersion = new CfnHostedConfigurationVersion(
      this,
      "HostedConfigurationVersion",
      {
        applicationId: testAppConfigApp.ref,
        configurationProfileId: appConfigProfile.ref,
        content: JSON.stringify({
          someAppConfigResource: "SomeAppConfigResourceValue",
          //... add more as needed.
        }),
        // https://www.rfc-editor.org/rfc/rfc9110.html#name-content-type
        contentType: "application/json",
      }
    );

    // Perform deployment.
    new CfnDeployment(this, "Deployment", {
      applicationId: testAppConfigApp.ref,
      configurationProfileId: appConfigProfile.ref,
      configurationVersion: configVersion.ref,
      deploymentStrategyId: immediateDeploymentStrategy.ref,
      environmentId: appConfigEnv.ref,
    });
  }
}
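To use this construct, instantiate it from a stack; a minimal wiring sketch (the stack name and the import path are my assumptions):
import { App, Stack } from "aws-cdk-lib";
import { AppConfig } from "./app-config"; // hypothetical path to the construct above

const app = new App();
const stack = new Stack(app, "AppConfigDemoStack");
new AppConfig(stack, "AppConfig");
app.synth();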
Here is what goes inside my lambda handler (please note that you should have lambda layers enabled for AppConfig extension, see more information here):
const http = require('http');

exports.handler = async (event) => {
  const res = await new Promise((resolve, reject) => {
    http.get(
      "http://localhost:2772/applications/TestAppConfigApp/environments/Production/configurations/TestAppConfigProfile",
      resolve
    );
  });
  let configData = await new Promise((resolve, reject) => {
    let data = '';
    res.on('data', chunk => data += chunk);
    res.on('error', err => reject(err));
    res.on('end', () => resolve(data));
  });
  const parsedConfigData = JSON.parse(configData);
  return parsedConfigData;
};
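Since the handler above reads configuration from the AppConfig Lambda extension's local endpoint on port 2772, the function also needs the extension layer attached; a hedged CDK sketch (the function name, asset path, and layer ARN are placeholders you must adapt for your region):
import { Code, Function as LambdaFunction, LayerVersion, Runtime } from "aws-cdk-lib/aws-lambda";

// Inside a stack or construct; names and the asset path are assumptions.
const configConsumer = new LambdaFunction(this, "ConfigConsumer", {
  runtime: Runtime.NODEJS_18_X,
  handler: "index.handler",
  code: Code.fromAsset("lambda"),
});

// Placeholder ARN: look up the AWS AppConfig extension layer ARN for your region.
configConsumer.addLayers(
  LayerVersion.fromLayerVersionArn(
    this,
    "AppConfigExtensionLayer",
    "arn:aws:lambda:<region>:<account>:layer:AWS-AppConfig-Extension:<version>"
  )
);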
EDIT: As promised, I released a new npm library: https://www.npmjs.com/package/cdk-appconfig. As of this update it is still pre-release, but time permitting I will release a 1.x soon. This will allow people to check it out or fork it if need be. Please share feedback or open PRs if you'd like to collaborate.
I have no experience with AWS AppConfig, but I would start here and here.

Workbox precache doesn't precache

I'm attempting to implement workbox to precache image and video assets on a website.
I've created a service worker file. It appears to be successfully referenced and used in my application. The service worker:
import { clientsClaim, setCacheNameDetails } from 'workbox-core';
import { precacheAndRoute, addRoute } from 'workbox-precaching';

const context = self; // eslint-disable-line no-restricted-globals

setCacheNameDetails({ precache: 'app' });

// eslint-disable-next-line no-restricted-globals, no-underscore-dangle
precacheAndRoute(self.__WB_MANIFEST);

context.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('dive').then((cache) => {
      console.log(cache);
    }),
  );
});

context.addEventListener('activate', (event) => {
  console.log('sw active');
});

context.addEventListener('fetch', async (event) => {
  console.log(event.request.url);
});

context.addEventListener('message', ({ data }) => {
  const { type, payload } = data;
  if (type === 'cache') {
    payload.forEach((url) => {
      addRoute(url);
    });
    const manifest = payload.map((url) => (
      {
        url,
        revision: null,
      }
    ));
    console.log('attempting to precache and route manifest', JSON.stringify(manifest));
    precacheAndRoute(manifest);
  }
});

context.skipWaiting();
clientsClaim();
The application uses workbox-window to load, reference and message the service worker. The app looks like:
import { Workbox } from 'workbox-window';

const workbox = new Workbox('/sw.js');
workbox.register();
workbox.messageSW({
  type: 'cache',
  payload: [
    { url: 'https://media0.giphy.com/media/Ju7l5y9osyymQ/giphy.gif' }
  ],
});
This project uses Vue with vue-cli. It has a webpack config which allows plugins to be added to webpack. The config looks like:
const { InjectManifest } = require('workbox-webpack-plugin');
const path = require('path');

module.exports = {
  configureWebpack: {
    plugins: [
      new InjectManifest({
        swSrc: path.join(__dirname, 'lib/services/Asset-Cache.serviceworker.js'),
        swDest: 'Asset-Cache.serviceworker.js',
      }),
    ],
  },
};
I can see that messages are successfully sent to the service worker and contain the correct payload. BUT the assets never show up in the Chrome dev tools cache storage, and I never see any workbox logging related to the assets sent via messageSW. I've also tested by disabling my internet; the assets don't appear to be loading into the cache. What am I doing wrong here?
I found the workbox docs to be a little unclear, so I have also tried deleting the message event handler from the service worker and instead sending messages to the service worker like this:
workbox.messageSW({
  type: 'CACHE_URLS',
  payload: { urlsToCache: ['https://media0.giphy.com/media/Ju7l5y9osyymQ/giphy.gif'] },
});
This also appears to have no effect on the cache.
The precache portion of precacheAndRoute() works by adding install and activate listeners to the current service worker, taking advantage of the service worker lifecycle. This will effectively be a no-op if the service worker has already finished installing and activating by the time it's called, which may be the case if you trigger it conditionally via a message handler.
We should probably warn folks about this ineffective usage of precacheAndRoute() in our Workbox development builds; I've filed an issue to track that.
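If the goal is to cache URLs on demand after the service worker has activated, one workaround is to write to the cache directly in the message handler instead of calling precacheAndRoute(); a minimal sketch (the cache name and the message shape matching the question's code are assumptions):
// In the service worker: cache requested URLs immediately via the Cache API.
self.addEventListener('message', (event) => {
  const { type, payload } = event.data;
  if (type === 'cache') {
    event.waitUntil(
      caches.open('runtime-assets').then((cache) =>
        Promise.all(
          payload.map(({ url }) =>
            // no-cors lets cross-origin assets be stored as opaque responses.
            fetch(url, { mode: 'no-cors' }).then((response) => cache.put(url, response))
          )
        )
      )
    );
  }
});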

Is there any way to track an event using firebase in electron + react

I want to ask how to send an event using Firebase and Electron.js. A friend of mine has a problem using Firebase Analytics with Electron: Electron doesn't seem to send any events to the debugger console. When I look at the network tab, the function doesn't appear to send anything, even though the log text successfully appears in the console. Can someone help me figure this out? Any workaround will do, since he said he tried to implement the solutions in these topics:
firebase-analytics-log-event-not-working-in-production-build-of-electron
electron-google-analytics
This is the error I got when trying to use the solution in point 2.
For information, my friend used this boilerplate: electron-react-boilerplate
The solutions above still failed. Can someone help me solve this?
EDIT 1:
As you can see in the images above, the first image is my friend's code; when you run it, it gives a very basic example like in the second image, with a button to send an event.
Just for information, he used this Firebase package:
https://www.npmjs.com/package/firebase
You can intercept the HTTP protocol and handle your static content through the provided methods, which would allow you to use the http:// protocol for the content URLs. That should make Firebase Analytics work as described in the first question.
References
Protocol interception documentation.
Example
This is an example of how you can serve a local app loaded over the HTTP protocol, simulating regular browser behavior, so that a bundled web application can use http:// URLs. This will allow you to add Firebase Analytics. HTTP data upload support is rudimentary, but you can extend it on your own depending on your goals.
index.js
const {app, BrowserWindow, protocol} = require('electron')
const http = require('http')
const {createReadStream, promises: fs} = require('fs')
const path = require('path')
const {PassThrough} = require('stream')
const mime = require('mime')

const MY_HOST = 'somehostname.example'

app.whenReady()
.then(async () => {
  await protocol.interceptStreamProtocol('http', (request, callback) => {
    const url = new URL(request.url)
    const {hostname} = url
    const isLocal = hostname === MY_HOST

    if (isLocal) {
      serveLocalSite({...request, url}, callback)
    }
    else {
      serveRegularSite({...request, url}, callback)
    }
  })

  const win = new BrowserWindow()
  win.loadURL(`http://${MY_HOST}/index.html`)
})
.catch((error) => {
  console.error(error)
  app.exit(1)
})

async function serveLocalSite(request, callback) {
  try {
    const {pathname} = request.url
    const filepath = path.join(__dirname, path.resolve('/', pathname))
    const stat = await fs.stat(filepath)

    if (stat.isFile() !== true) {
      throw new Error('Not a file')
    }

    callback(
      createResponse(
        200,
        {
          'content-type': mime.getType(path.extname(pathname)),
          'content-length': stat.size,
        },
        createReadStream(filepath)
      )
    )
  }
  catch (err) {
    callback(
      errorResponse(err)
    )
  }
}

function serveRegularSite(request, callback) {
  try {
    console.log(request)

    const req = http.request({
      host: request.url.hostname,
      port: request.url.port,
      path: request.url.pathname + request.url.search,
      method: request.method,
      headers: request.headers,
    })

    if (request.uploadData) {
      // uploadData is an array of chunks; rudimentary support as noted above.
      request.uploadData.forEach(({bytes}) => bytes && req.write(bytes))
    }

    req.on('error', (error) => {
      callback(
        errorResponse(error)
      )
    })

    req.on('response', (res) => {
      console.log(res.statusCode, res.headers)
      callback(
        createResponse(
          res.statusCode,
          res.headers,
          res,
        )
      )
    })

    req.end()
  }
  catch (err) {
    callback(
      errorResponse(err)
    )
  }
}

function toStream(body) {
  const stream = new PassThrough()
  stream.write(body)
  stream.end()
  return stream
}

function errorResponse(error) {
  return createResponse(
    500,
    {
      'content-type': 'text/plain;charset=utf8',
    },
    error.stack
  )
}

function createResponse(statusCode, headers, body) {
  // Only compute content-length for string/buffer bodies; streams may be chunked.
  if (typeof body !== 'object' && 'content-length' in headers === false) {
    headers['content-length'] = Buffer.byteLength(body)
  }

  return {
    statusCode,
    headers,
    data: typeof body === 'object' ? body : toStream(body),
  }
}
MY_HOST is any non-existent host (like something.example) or a host controlled by the admin (in my case it could be electron-app.rumk.in). This host serves as a replacement for localhost.
index.html
<html>
<body>
Hello
</body>
</html>

How can I intercept requests against file and change them to http protocol?

In Electron, is it possible to intercept requests against file:/// and redirect them to http?
I have checked the Electron protocol page, but it's not obvious whether this is supported.
You could use protocol.registerHttpProtocol with the scheme file to intercept file: requests, and instead make an HTTP request.
Example (untested):
const {app, protocol} = require('electron')
const path = require('path')

app.on('ready', () => {
  protocol.registerHttpProtocol('file', (request, callback) => {
    const url = request.url.substr(8)
    callback({url: 'http://example.com/' + url})
  }, (error) => {
    if (error) console.error('Failed to register protocol')
  })
})
Note: this sample may need refining as the file path may include the drive letter, which would be invalid for an HTTP request.
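A hedged refinement sketch for that case (example.com remains the placeholder target):
protocol.registerHttpProtocol('file', (request, callback) => {
  // Derive a clean path from the file:// URL, stripping a Windows drive letter if present.
  const { pathname } = new URL(request.url)  // e.g. '/C:/app/index.html' on Windows
  const cleanPath = pathname.replace(/^\/[A-Za-z]:/, '')
  callback({url: 'http://example.com' + cleanPath})
})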
There is another way I was able to address this, and interestingly the word "intercept" in the question has a lot to do with it :)
There is a function interceptHttpProtocol() on the protocol object you can use.
Sample code:
app.on("ready", () => {
protocol.interceptHttpProtocol("http", function(request, callback) {
var parsedUri = url.parse(request.url);
var filePath = path.join(__dirname, parsedUri.pathname);
request.url = "file://" + filePath;
callback(request);
});
var mainWindow = new BrowserWindow();
mainWindow.loadURL("http://localhost/index.html");
});
Hope that helps

How can one split API documentation in multiple files using Swagger 2.0

According to the Swagger 2.0 spec, it might be possible to do this. I am referencing a PathObject using $ref, which points to another file. We used to be able to do this nicely with Swagger 1.2, but Swagger-UI does not seem to be able to read the referenced PathObject from another file.
Is this part of the spec too new and not yet supported? Is there a way to split each path's documentation into another file?
{
  "swagger": "2.0",
  "basePath": "/rest/json",
  "schemes": [
    "http",
    "https"
  ],
  "info": {
    "title": "REST APIs",
    "description": "desc",
    "version": "1.0"
  },
  "paths": {
    "/time": {
      "$ref": "anotherfile.json"
    }
  }
}
To support multiple files, your libraries have to support dereferencing the $ref field. But I would not recommend delivering the swagger file with unresolved references. Our swagger definition has around 30-40 files; delivering them via HTTP/1.1 could slow down any reading application.
Since we are building JavaScript libs too, we already had a Node.js-based build system using gulp. For the node package manager (npm) you can find some libraries supporting dereferencing to build one big swagger file.
Our base file looks like this (shortened):
swagger: '2.0'
info:
  version: 2.0.0
  title: App
  description: Example
basePath: /api/2
paths:
  $ref: "routes.json"
definitions:
  example:
    $ref: "schema/example.json"
The routes.json is generated from our routing file. For this we use a gulp target implementing swagger-jsdoc like this:
var gulp = require('gulp');
var fs = require('fs');
var gutil = require('gulp-util');
var swaggerJSDoc = require('swagger-jsdoc');

gulp.task('routes-swagger', [], function (done) {
  var options = {
    swaggerDefinition: {
      info: {
        title: 'Routes only, do not use, only for reference',
        version: '1.0.0',
      },
    },
    apis: ['./routing.php'], // Path to the API docs
  };
  var swaggerSpec = swaggerJSDoc(options);
  fs.writeFile('public/doc/routes.json', JSON.stringify(swaggerSpec.paths, null, "\t"), function (error) {
    if (error) {
      gutil.log(gutil.colors.red(error));
    } else {
      gutil.log(gutil.colors.green("Successfully generated routes include."));
      done();
    }
  });
});
And for generating the swagger file, we use a build task implementing SwaggerParser like this:
var gulp = require('gulp');
var bootprint = require('bootprint');
var bootprintSwagger = require('bootprint-swagger');
var SwaggerParser = require('swagger-parser');
var gutil = require('gulp-util');
var fs = require('fs');

gulp.task('swagger', ['routes-swagger'], function () {
  SwaggerParser.bundle('public/doc/swagger.yaml', {
    "cache": {
      "fs": false
    }
  })
  .then(function(api) {
    fs.writeFile('public/doc/swagger.json', JSON.stringify(api, null, "\t"), function (error) {
      if (error) {
        gutil.log(gutil.colors.red(error));
      } else {
        gutil.log("Bundled API %s, Version: %s", gutil.colors.magenta(api.info.title), api.info.version);
      }
    });
  })
  .catch(function(err) {
    gutil.log(gutil.colors.red.bold(err));
  });
});
With this implementation we can maintain a rather large swagger specification, and we are not restricted to a specific programming language or framework implementation, since we define the paths in comments next to the real routing definitions. (Note: the gulp tasks are split across multiple files too.)
While it may theoretically be possible to do that in the future, the solution is still not fully baked into the supporting tools, so for now I'd highly recommend keeping it in one file.
If you're looking for a way to manage and navigate the Swagger definition, I'd recommend using the YAML format of the spec, where you can add comments; that may ease navigation and splitting of a large definition.
You can also use the JSON Refs library to resolve such a multi-file Swagger spec.
I've written about it in this blog post.
There is also this GitHub repo demonstrating how all of this works.
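A minimal sketch of what resolving with json-refs could look like (API usage per the library's README; the file layout and js-yaml v3 are assumptions):
var JsonRefs = require('json-refs');
var YAML = require('js-yaml'); // js-yaml v3 API assumed (safeLoad)
var fs = require('fs');

// Resolve all $refs starting from a root document on disk.
JsonRefs.resolveRefsAt('./spec/index.yaml', {
  // Parse YAML payloads fetched for relative references.
  loaderOptions: {
    processContent: function (res, callback) {
      callback(null, YAML.safeLoad(res.text));
    }
  }
}).then(function (results) {
  fs.writeFileSync('./spec/swagger.json', JSON.stringify(results.resolved, null, 2));
}).catch(function (err) {
  console.error(err.stack);
});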
My solution to this problem is to use the package below to solve the reference issue:
https://www.npmjs.com/package/json-schema-ref-parser
Here is the code snippet for generating the Swagger UI using that library. I was using Express.js for my Node server.
import express from 'express';
import * as path from 'path';
import refParser from '@apidevtools/json-schema-ref-parser';
import swaggerUi from 'swagger-ui-express';

const port = 3100;
const app = express();

app.get('/', async (req, res) => {
  res.redirect('/api-docs')
});

app.use(
  '/api-docs',
  async function (req: express.Request, res: express.Response, next: express.NextFunction) {
    const schemaFilePath = path.join(__dirname, 'schema', 'openapi.yml');
    try {
      // Resolve $ref in schema
      const swaggerDocument = await refParser.dereference(schemaFilePath);
      (req as any).swaggerDoc = swaggerDocument;
      next();
    } catch (err) {
      console.error(err);
      next(err);
    }
  },
  swaggerUi.serve,
  swaggerUi.setup()
);

app.listen(port, () => console.log(`Local web server listening on port ${port}!`));
Take a look at my GitHub repository to see how it works.
