Make Changes To sw-precache service-worker

I am using sw-precache and I understand that in order to edit the service-worker.js file I need to do this (as detailed in the service-worker.js file)
// This file should be overwritten as part of your build process.
// If you need to extend the behavior of the generated service worker, the best approach is to write
// additional code and include it using the importScripts option:
// https://github.com/GoogleChrome/sw-precache#importscripts-arraystring
but I do not know where to add the importScripts() code. Does it go in the service-worker.js file? Surely that gets overridden on each project build.

Just include it like so:
importScripts: ['custom-offline-import.js']
Here is an example with a config including the importScripts option at the end:
var packageJson = require('../package.json');
var swPrecache = require('../lib/sw-precache.js');
var path = require('path');
function writeServiceWorkerFile(rootDir, handleFetch, callback) {
  var config = {
    cacheId: packageJson.name,
    runtimeCaching: [{
      // See https://github.com/GoogleChrome/sw-toolbox#methods
      urlPattern: /runtime-caching/,
      handler: 'cacheFirst'
    }],
    staticFileGlobs: [rootDir + '/**/*.{js,html,css,png,jpg,gif}'],
    stripPrefix: rootDir,
    importScripts: ['custom-offline-import.js']
  };
  swPrecache.write(path.join(rootDir, 'service-worker.js'), config, callback);
}
Hope I could help you!

Add it to the sw-precache-config.js file.
Good example here:
Rewrite URL offline when using a service worker
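For reference, a minimal sketch of what that sw-precache-config.js could look like (the globs and file names here are illustrative, not from the question):
// sw-precache-config.js (sketch)
module.exports = {
  staticFileGlobs: [
    'build/**/*.{js,html,css,png,jpg,gif}'
  ],
  stripPrefix: 'build/',
  // Extra service-worker logic that survives regeneration of service-worker.js:
  importScripts: ['custom-offline-import.js']
};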

Nuxt 3 environment variables for static site [duplicate]

I have .env file in the project root, and in my nuxt config I am using variables to configure ReCaptcha like this:
import dotenv from 'dotenv'
dotenv.config()
export default {
  modules: [
    ['@nuxtjs/recaptcha', {
      siteKey: process.env.RECAPTCHA_SITE_KEY,
      version: 3,
      size: 'compact'
    }],
  ]
}
and in .env like this:
RECAPTCHA_SITE_KEY=6L....
but the application always failed with console log error:
ReCaptcha error: No key provided
When I hard-code the ReCaptcha key directly like this: siteKey: 6L.... the app starts working, so I guess the problem is with reading the .env props in nuxt.config.
Do you have any idea how to fix it?
EDIT:
I tried updating my nuxt.config per @kissu's recommendation and the example I found here: https://www.npmjs.com/package/@nuxtjs/recaptcha
so here is the new nuxt.config, which is also not working:
export default {
  modules: [
    '@nuxtjs/recaptcha',
  ],
  publicRuntimeConfig: {
    recaptcha: {
      siteKey: process.env.RECAPTCHA_SITE_KEY,
      version: 3,
      size: 'compact'
    }
  }
}
If your Nuxt version is 2.13 or above, you don't need to use @nuxtjs/dotenv or anything alike because it is already baked into the framework.
To use some variables, you need to have an .env file at the root of your project. This one should be ignored by git. You can then input some keys there like
PUBLIC_VARIABLE="https://my-cool-website.com"
PRIVATE_TOKEN="1234qwer"
In your nuxt.config.js, you have to input those into 2 objects, depending on your use case, either publicRuntimeConfig or privateRuntimeConfig:
export default {
  publicRuntimeConfig: {
    myPublicVariable: process.env.PUBLIC_VARIABLE,
  },
  privateRuntimeConfig: {
    myPrivateToken: process.env.PRIVATE_TOKEN
  }
}
Differences: publicRuntimeConfig can basically be used anywhere, while privateRuntimeConfig can only be used during SSR (a key can only stay private if not shipped to the browser).
A popular use case for the privateRuntimeConfig is to use it for nuxtServerInit or during the build process (either yarn build or yarn generate) to populate the app with headless CMS' API calls.
More info can be found on this blog post: https://nuxtjs.org/blog/moving-from-nuxtjs-dotenv-to-runtime-config/
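For instance, a hedged sketch of the nuxtServerInit use case mentioned above (the store file path is illustrative; myPrivateToken mirrors the privateRuntimeConfig example):
// store/index.js (sketch, Nuxt 2.13+)
export const actions = {
  async nuxtServerInit({ commit }, { $config }) {
    // Runs on the server only, so the private value is available here
    // and is never shipped to the browser.
    console.log($config.myPrivateToken)
  }
}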
Then, you will be able to access it in any .vue file directly with
this.$config.myPublicVariable
You can access it in Nuxt's /plugins too, with this syntax:
export default ({ $axios, $config: { myPublicVariable } }) => {
  $axios.defaults.baseURL = myPublicVariable
}
If you need this variable for a Nuxt module or in any key in your nuxt.config.js file, write it directly with
process.env.PRIVATE_TOKEN
Sometimes, the syntax may differ a bit, in this case refer to your Nuxt module documentation.
// for @nuxtjs/gtm
publicRuntimeConfig: {
  gtm: {
    id: process.env.GOOGLE_TAG_MANAGER_ID
  }
},
PS: if you do use target: server (default value), you can yarn build and yarn start to deploy your app to production. Then, change any environment variables that you'd like and yarn start again. There will be no need for a rebuild. Hence the name RuntimeConfig!
Nuxt3 update
As mentioned here and in the docs, you can use the following for Nuxt3
nuxt.config.js
import { defineNuxtConfig } from 'nuxt3'

export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      secret: process.env.SECRET,
    }
  }
})
In any component
<script setup lang="ts">
const config = useRuntimeConfig()
config.secret
</script>
In a composable like /composables/test.js as shown in this comment
export default () => {
  const config = useRuntimeConfig()
  console.log(config.secret)
}
Here is the official doc for that part.
You can also use the env property with Nuxt
nuxt.config.js:
export default {
  // Environment variables
  env: {
    myVariable: process.env.NUXT_ENV_MY_VAR
  },
  ...
}
Then in your plugin:
const myVar = process.env.myVariable
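For instance, a minimal plugin sketch using that value (the file name and the injected helper name are illustrative):
// plugins/my-var.js (sketch)
const myVar = process.env.myVariable // inlined at build time via the env property

export default (context, inject) => {
  // expose it as this.$myVar in components / context.$myVar elsewhere
  inject('myVar', myVar)
}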
It's very easy. Here is an example with axios/nuxt.
Define your variable in the .env file:
baseUrl=http://localhost:1337
Add the variable in the nuxt.config.js in an env-object (and use it in the axios config):
export default {
  env: { baseUrl: process.env.baseUrl },
  axios: { baseURL: process.env.baseUrl },
}
Use the env variable in any file like so:
console.log(process.env.baseUrl)
Note that console.log(process.env) will output {} but console.log(process.env.baseUrl) will still output your value!
For Nuxt 3 RC11, in the nuxt.config.ts file:
export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      locale: {
        defaultLocale: process.env.NUXT_I18N_LOCALE,
        fallbackLocale: process.env.NUXT_I18N_FALLBACK_LOCALE,
      }
    }
  },
  ...
and in .env file:
NUXT_I18N_LOCALE=tr
NUXT_I18N_FALLBACK_LOCALE=en
public: is very important; otherwise it cannot be read and you get an undefined error.
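As a sketch of how that nested public config would then be read in a component (assuming the keys above; the access path simply mirrors the config structure):
<script setup>
// Sketch: useRuntimeConfig() exposes the nested values under `public`.
const config = useRuntimeConfig()
console.log(config.public.locale.defaultLocale)  // "tr" with the .env above
console.log(config.public.locale.fallbackLocale) // "en" with the .env above
</script>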
For v3 there is a precise description in the official docs
You define a runtimeConfig entry in your nuxt.config.[ts,js] which works as initial / default value:
export default defineNuxtConfig({
  runtimeConfig: {
    recaptchaSiteKey: 'default value' // This key is "private" and will only be available on the server side
  }
})
You can also use env vars to initialize the runtimeConfig, but that value is written statically at build time.
But you can override the value at runtime by using the following env var:
NUXT_RECAPTCHA_SITE_KEY=SOMETHING DYNAMIC
If you need to use the config on client-side, you need to use the public property.
export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      recaptchaSiteKey: 'default value' // will also be exposed to the client-side
    }
  }
})
Notice the PUBLIC part in the env var:
NUXT_PUBLIC_RECAPTCHA_SITE_KEY=SOMETHING DYNAMIC
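For completeness, a sketch of reading that public key on the client side (the component code is illustrative):
<script setup>
// Sketch: public runtime config is available in the browser via useRuntimeConfig().
const { public: { recaptchaSiteKey } } = useRuntimeConfig()
console.log(recaptchaSiteKey) // the default value, or the NUXT_PUBLIC_... override at runtime
</script>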
It is somewhat strange, but we can't access process.env in Nuxt 3.
In Nuxt 3 we are invited to use the runtime config, but this is not always convenient, because the Nuxt application context is required.
But in a situation where we have some plain library and we don't want to wrap it in plugins or composable functions, declaring global variables through vite / webpack works best:
// nuxt.config.ts
export default defineNuxtConfig({
  vite: {
    define: {
      MY_API_URL: JSON.stringify(process.env.MY_API_URL)
    }
  }
})
And then you can use it in any file without any extra ceremony:
// some-file.ts
console.log('global var:', MY_API_URL) // replaced by vite/webpack with the real value

CDK not updating

Running cdk deploy after updating my Stack:
export function createTaskXXXX (stackScope: Construct, workflowContext: WorkflowContext) {
  const lambdaXXXX = new lambda.Function(stackScope, 'XXXXFunction', {
    runtime: Globals.LAMBDA_RUNTIME,
    memorySize: Globals.LAMBDA_MEMORY_MAX,
    code: lambda.Code.fromAsset(CDK_MODULE_ASSETS_PATH),
    handler: 'xxxx-handler.handler',
    timeout: Duration.minutes(Globals.LAMBDA_DURATION_2MIN),
    environment: {
      YYYY_ENV: (workflowContext.production) ? 'prod' : 'test',
      YYYY_A_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/`,
      YYYY_B_LOCATION: `s3://${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/` // <--- added
    }
  })

  lambdaXXXX.addToRolePolicy(new iam.PolicyStatement({
    effect: Effect.ALLOW,
    actions: ['s3:PutObject'],
    resources: [
      `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/adata-workflow/split-input/*`,
      `arn:aws:s3:::${workflowContext.S3ImportDataBucket}/bdata-workflow/split-input/*` // <---- added
    ]
  }))
}
I realize that those changes are not reflected in stack.template.json:
...
"Runtime": "nodejs12.x",
"Environment": {
  "Variables": {
    "YYYY_ENV": "test",
    "YYYY_A_LOCATION": "s3://.../adata-workflow/split-input/"
  }
},
"MemorySize": 3008,
"Timeout": 120
}
...
I have cleaned cdk.out and tried deploy --force, but I never see any updates.
Is deleting the stack and redeploying the only remaining alternative, or am I missing something? I think synth, at least, should generate different results.
(I also changed to cdk 1.65.0 on my local system to match the package.json.)
Thanks.
EDITED: I git cloned the project, ran npm install and cdk synth again, and finally saw the changes. I would like not to have to do this every time; any light on what could be blocking correct synth generation?
EDITED 2: After a diff between the bad old project and the fresh clone where synth worked, I realized that some of my project files with a .ts extension (for example cdk.ts, my App definition) also had replicas with .js and .d.ts extensions, such as cdk.js and cdk.d.ts. Could I have run some command by mistake that compiled the TypeScript? I will continue to investigate; thanks to all answers.
Because CDK uses CloudFormation, it performs an action to determine a ChangeSet. That is to say, if it doesn't think anything has changed, it won't change that resource.
This can, of course, be very annoying, as sometimes it thinks things are the same and doesn't update when there actually is a change. I find this most often with Layers and using some form of makefile to generate the zips for the layers: even though it makes a 'new' zip, whatever mechanism is used to decide whether the zip changed (compression, hashing, etc.) still sees it as the same.
You can get around this by updating the description with a datetime. The description is assigned at synth (which is part of cdk deploy), so putting the current datetime into it guarantees the resource is seen as changed on every deployment.
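For example, a sketch against the question's own construct; the only addition is the description line, which changes at every synth and therefore always produces a diff:
const lambdaXXXX = new lambda.Function(stackScope, 'XXXXFunction', {
  runtime: Globals.LAMBDA_RUNTIME,
  code: lambda.Code.fromAsset(CDK_MODULE_ASSETS_PATH),
  handler: 'xxxx-handler.handler',
  // A timestamp in the description forces CloudFormation to see a change:
  description: `XXXX task handler, synthesized ${new Date().toISOString()}`,
})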
You can also use cdk diff to see what it thinks the changes are.
And finally... always remember to save your file before deployments as, depending on your IDE, it may not be available to the command line ;)
Looking at the code, I would expect it to update, but I don't know why it doesn't.
As a workaround, you can comment out the Lambda part and deploy once, then uncomment it and deploy again, which recreates the Lambda.
This is how I do it. Works nicely so far. Basically you can do the following:
Push your lambda code as a zip file to an S3 bucket. The bucket must have versioning enabled.
The CDK code below will do the following:
Create a custom resource. It basically calls s3.listObjectVersions for my lambda zip file in S3. I grab the first returned value, which seems to be the most recent object version all the time (I cannot confirm this with the documentation though). I also create a role for the custom resource.
Create the lambda and specify the code as the zip file in s3 AND THE OBJECT VERSION RETURNED BY THE CUSTOM RESOURCE! That is the most important part.
Create a new lambda version.
Then the lambda's code updates when you deploy the CDK stack!
const versionIdKey = 'Versions.0.VersionId';
const isLatestKey = 'Versions.0.IsLatest';
const now = new Date().toISOString();

const role = new Role(this, 'custom-resource-role', {
  assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
});
role.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess')); // you can make this more specific

// I'm not 100% sure this gives you the most recent first, but it seems to be doing that every time for me. I can't find anything in the docs about it...
const awsSdkCall: AwsSdkCall = {
  action: "listObjectVersions",
  parameters: {
    Bucket: buildOutputBucket.bucketName, // S3 bucket with the zip file containing the lambda code.
    MaxKeys: 1,
    Prefix: LAMBDA_S3_KEY, // S3 key of the zip file containing the lambda code
  },
  physicalResourceId: PhysicalResourceId.of(buildOutputBucket.bucketName),
  region: 'us-east-1', // or whatever region
  service: "S3",
  outputPaths: [versionIdKey, isLatestKey]
};

const customResourceName = 'get-object-version';
const customResourceId = `${customResourceName}-${now}`; // not sure if `now` is necessary...
const response = new AwsCustomResource(this, customResourceId, {
  functionName: customResourceName,
  installLatestAwsSdk: true,
  onCreate: awsSdkCall,
  onUpdate: awsSdkCall,
  policy: AwsCustomResourcePolicy.fromSdkCalls({resources: AwsCustomResourcePolicy.ANY_RESOURCE}), // you can make this more specific
  resourceType: "Custom::ListObjectVersions",
  role: role
});

const fn = new Function(this, 'my-lambda', {
  functionName: 'my-lambda',
  description: `${response.getResponseField(versionIdKey)}-${now}`,
  runtime: Runtime.NODEJS_14_X,
  memorySize: 1024,
  timeout: Duration.seconds(5),
  handler: 'index.handler',
  code: Code.fromBucket(buildOutputBucket, LAMBDA_S3_KEY, response.getResponseField(versionIdKey)), // This is where the magic happens. You tell CDK to use a specific S3 object version when updating the lambda.
  currentVersionOptions: {
    removalPolicy: RemovalPolicy.DESTROY,
  },
});

new Version(this, `version-${now}`, { // not sure if `now` is necessary...
  lambda: fn,
  removalPolicy: RemovalPolicy.DESTROY
});
Do note:
For this to work, you have to upload your lambda zip code to S3 before each cdk deploy. This can be the same code as before, but the s3 bucket versioning will create a new version. I use code pipeline to do this as part of additional automation.
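For reference, a sketch of that pre-deploy upload step using the AWS SDK for JavaScript (v2); the bucket, key, and file names here are illustrative and must match what the stack reads:
// upload-lambda-zip.js (sketch) — run before `cdk deploy`
const { S3 } = require('aws-sdk');
const { createReadStream } = require('fs');

new S3().upload({
  Bucket: 'my-build-output-bucket',  // must have versioning enabled
  Key: 'lambda/my-lambda.zip',       // same key as LAMBDA_S3_KEY in the stack
  Body: createReadStream('dist/my-lambda.zip'),
}).promise()
  .then(() => console.log('uploaded a new object version'))
  .catch((err) => { console.error(err); process.exit(1); });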

Copying default external configuration on first run of Grails web app

In our Grails web applications, we'd like to use external configuration files so that we can change the configuration without releasing a new version. We'd also like these files to be outside of the application directory so that they stay unchanged during continuous integration.
The last thing we need to do is to make sure the external configuration files exist. If they don't, then we'd like to create them, fill them with predefined content (production environment defaults) and then use them as if they existed before. This allows any administrator to change settings of the application without detailed knowledge of the options actually available.
For this purpose, there's a couple of files within web-app/WEB-INF/conf ready to be copied to the external configuration location upon the first run of the application.
So far so good. But we need to do this before the application is initialized so that production-related modifications to data sources definitions are taken into account.
I can do the copy-and-load operation inside the Config.groovy file, but I don't know the absolute location of the WEB-INF/conf directory at the moment.
How can I get the location during this early phase of initialization? Is there any other solution to the problem?
There is a best practice for this.
In general, never write to the folder where the application is deployed. You have no control over it. The next rollout will remove everything you wrote there.
Instead, leverage the built-in configuration capabilities the real pros use (Spring and/or JPA).
JNDI is the norm for looking up resources like databases, files and URLs.
Operations will have to configure JNDI, but they appreciate the attention.
They also need an initial set of configuration files, and be prepared to make changes at times as required by the development team.
As always, all configuration files should be in your source code repo.
I finally managed to solve this myself by using the Java's ability to locate resources placed on the classpath.
I took the .groovy files later to be copied outside, placed them into the grails-app/conf directory (which is on the classpath) and appended a suffix to their name so that they wouldn't get compiled upon packaging the application. So now I have *Config.groovy files containing configuration defaults (for all environments) and *Config.groovy.production files containing defaults for production environment (overriding the precompiled defaults).
Now - Config.groovy starts like this:
grails.config.defaults.locations = [ EmailConfig, AccessConfig, LogConfig, SecurityConfig ]
environments {
  production {
    grails.config.locations = ConfigUtils.getExternalConfigFiles(
      '.production',
      "${userHome}${File.separator}.config${File.separator}${appName}",
      'AccessConfig.groovy',
      'Config.groovy',
      'DataSource.groovy',
      'EmailConfig.groovy',
      'LogConfig.groovy',
      'SecurityConfig.groovy'
    )
  }
}
Then the ConfigUtils class:
public class ConfigUtils {
  // Log4j may not be initialized yet
  private static final Logger LOG = Logger.getGlobal()

  public static def getExternalConfigFiles(final String defaultSuffix, final String externalConfigFilesLocation, final String... externalConfigFiles) {
    final def externalConfigFilesDir = new File(externalConfigFilesLocation)
    LOG.info "Loading configuration from ${externalConfigFilesDir}"
    if (!externalConfigFilesDir.exists()) {
      LOG.warning "${externalConfigFilesDir} not found. Creating..."
      try {
        externalConfigFilesDir.mkdirs()
      } catch (e) {
        LOG.severe "Failed to create external configuration storage. Default configuration will be used."
        e.printStackTrace()
        return []
      }
    }
    final def cl = ConfigUtils.class.getClassLoader()
    def result = []
    externalConfigFiles.each {
      final def file = new File(externalConfigFilesDir, it)
      if (file.exists()) {
        result << file.toURI().toURL()
        return
      }
      def error = false // not final: it may be reassigned below
      final def defaultFileURL = cl.getResource(it + defaultSuffix)
      def defaultFile
      if (defaultFileURL) {
        defaultFile = new File(defaultFileURL.toURI())
        error = !defaultFile.exists()
      } else {
        error = true
      }
      if (error) {
        LOG.severe "Neither of ${file} or ${defaultFile} exists. Skipping..."
        return
      }
      LOG.warning "${file} does not exist. Copying ${defaultFile} -> ${file}..."
      try {
        FileUtils.copyFile(defaultFile, file)
      } catch (e) {
        LOG.severe "Couldn't copy ${defaultFile} -> ${file}. Skipping..."
        e.printStackTrace()
        return
      }
      result << file.toURI().toURL()
    }
    return result
  }
}

Open external file with Electron

I have a running Electron app and it is working great so far. For context, I need to run/open an external file, which is a Golang binary that will do some background tasks.
Basically it will act as a backend, exposing an API that the Electron app will consume.
So far this is where I've got:
I tried to open the file the "node way" using child_process, but I failed opening even a sample txt file, probably due to path issues.
The Electron API exposes an open-file event, but it lacks documentation/examples and I don't know if it could be useful.
That's it.
How do I open an external file in Electron?
There are a couple of APIs you may want to study up on and see which helps you.
fs
The fs module allows you to open files for reading and writing directly.
var fs = require('fs');
fs.readFile(p, 'utf8', function (err, data) {
  if (err) return console.log(err);
  // data is the contents of the text file we just read
});
path
The path module allows you to build and parse paths in a platform agnostic way.
var path = require('path');
var p = path.join(__dirname, '..', 'game.config');
shell
The shell api is an electron only api that you can use to shell execute a file at a given path, which will use the OS default application to open the file.
const {shell} = require('electron');
// Open a local file in the default app
shell.openItem('c:\\example.txt');
// Open a URL in the default way
shell.openExternal('https://github.com');
child_process
Assuming that your golang binary is an executable then you would use child_process.spawn to call it and communicate with it. This is a node api.
var path = require('path');
var spawn = require('child_process').spawn;
var child = spawn(path.join(__dirname, '..', 'mygoap.exe'), ['game.config', '--debug']);
// attach events, etc.
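// For example, a sketch of attaching those events (the output handling here is illustrative):
child.stdout.on('data', function (chunk) {
  console.log('go binary output: ' + chunk.toString());
});
child.stderr.on('data', function (chunk) {
  console.error('go binary error output: ' + chunk.toString());
});
child.on('close', function (code) {
  console.log('go binary exited with code ' + code);
});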
addon
If your golang binary isn't an executable then you will need to make a native addon wrapper.
Maybe you are looking for this?
dialog.showOpenDialog; refer to: https://www.electronjs.org/docs/api/dialog
If using electron@13.1.0, you can do it like this:
const { dialog } = require('electron')

// In Electron 13, showOpenDialog returns a Promise rather than taking a callback:
dialog.showOpenDialog({ properties: ['openFile', 'multiSelections'] }).then(result => {
  console.info(result.filePaths) // => this gives the absolute paths of the selected files.
})
When the above code is triggered, you can see an "open file dialog" like this (different view style for Windows/Mac/Linux).
Electron allows the use of nodejs packages.
In other words, import node packages as if you were in node, e.g.:
var fs = require('fs');
To run the golang binary, you can make use of the child_process module. The documentation is thorough.
Edit: You have to solve the path differences. The open-file event is a client-side event, triggered by the window. Not what you want here.
I was also totally struggling with this issue, and almost seven years later the documentation is still not clear about how this works on Linux.
So, on Linux it falls under the Windows treatment in this regard, which means you have to look into the process.argv global in the main process. The first value in the array is the path that launched the app. The second value, if one exists, holds the path of the file that the app was asked to open. For example, here is the output for my test case:
Array(2)
0: "/opt/Blueprint/b-test"
1: "/home/husayngonzalez/2022-01-20.md"
length: 2
So, when you're creating a new window, you check the length of process.argv; if it is more than 1, i.e. 2, it means you have a path that was requested to be opened with your app.
This assumes you have your application packaged with the ability to process those files, and that you have set the operating system to open those files with your application.
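A sketch of what that check could look like in the main process (window options and the IPC channel name are illustrative; note that in unpackaged dev runs the argv indexes shift):
const { app, BrowserWindow } = require('electron')

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 800, height: 600 })
  win.loadFile('index.html')

  // On Windows/Linux, a file opened "with" the app shows up as an extra argv entry.
  if (process.argv.length >= 2) {
    const fileToOpen = process.argv[1]
    win.webContents.once('did-finish-load', () => {
      win.webContents.send('open-file', fileToOpen) // hand the path to the renderer
    })
  }
})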
I know this doesn't exactly meet your specification, but it does cleanly separate your golang binary and Electron application.
The way I have done it is to expose the golang binary as a web service. Like this
package main

import (
    "fmt"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    //TODO: put your call here instead of the Fprintf
    fmt.Fprintf(w, "HI there from Go Web Svc. %s", r.URL.Path[1:])
}

func main() {
    http.HandleFunc("/api/someMethod", handler)
    http.ListenAndServe(":8080", nil)
}
Then from Electron just make ajax calls to the web service with a JavaScript function, like this (you could use jQuery, but I find this pure JS works fine):
function get(url, responseType) {
  return new Promise(function(resolve, reject) {
    var request = new XMLHttpRequest();
    request.open('GET', url);
    request.responseType = responseType;
    request.onload = function() {
      if (request.status == 200) {
        resolve(request.response);
      } else {
        reject(Error(request.statusText));
      }
    };
    request.onerror = function() {
      reject(Error("Network Error"));
    };
    request.send();
  });
}
With that method you could do something like
get('localhost/api/somemethod', 'text')
  .then(function(x){
    console.log(x);
  });

Externalized grails.serverURL not accessible from Config.groovy

I have an application where the config is externalized. In Config.groovy, I'm updating
grails.config.locations=[file:/.../myapp-log4j.groovy, file:/.../myapp-config.properties]
That works fine for datasources and such. But later in Config.groovy, I have:
springws {
  wsdl {
    MyApp {
      // In this case the wsdl will be available at <grails.serverURL>/services/v1/myapp/myapp-v1.wsdl
      wsdlName = 'myapp-v1'
      xsds = '/WEB-INF/myapp.xsd'
      portTypeName = 'myappPort'
      serviceName = 'myappService'
      locationUri = "${grails.serverURL}/services/v1/myapp"
      targetNamespace = 'http://www..../myapp/v1/definitions'
    }
  }
}
And ${grails.serverURL} contains [:] which is not what is in my config file. The config file contains (among the datasource details):
grails.serverURL=http://samiel:9011/xid
My guess would be that the updated grails.config.locations is only used after I return from Config.groovy.
So, what are my options to setup my web service details based on the externalized serverURL ?
This is what I get when I run your example (just confirming your starting position):
def testExternalConfig() {
  println "grails.serverURL: ${ConfigurationHolder.config.grails.serverURL}"
  println "springws.wsdl.MyApp.locationUri ${ConfigurationHolder.config.springws.wsdl.MyApp.locationUri}"
}
--Output from testExternalConfig--
grails.serverURL: http://samiel:9011/xid
springws.wsdl.MyApp.locationUri http://localhost:8080/soGrails/services/v1/myapp
Like you said, Config.groovy does not see the value set in the external config. I believe that Grails processes external configs after Config.groovy, and this test appears to confirm that. The logic is that you likely have external config file values that you want to take precedence over the config in the war file.
Fix is to override the full property in myapp-config.properties:
grails.serverURL=http://samiel:9011/xid
springws.wsdl.MyApp.locationUri=http://samiel:9011/xid/services/v1/myapp
With that change I get this:
--Output from testExternalConfig--
grails.serverURL: http://samiel:9011/xid
springws.wsdl.MyApp.locationUri http://samiel:9011/xid/services/v1/myapp
