Though I am familiar with the sw-precache and sw-toolbox libraries, I am still puzzled about how to handle dynamic dependencies while building a Progressive Web App with Angular.
I have pre-cached bundle.js, bundle.css, and some static templates for my application using the sw-precache build process. (Assume all my templates (.html files) depend on bundle.js and bundle.css.)
What if I update a template that is not pre-cached? How do I ensure bundle.js/bundle.css are updated in conjunction with my template?
What if I update a template that is already pre-cached? Will it always be updated in conjunction with the pre-cached bundle.js/bundle.css files?
Last use case: my template is already pre-cached, but it references a script file via a script tag's src attribute, and this script file is not cached anywhere. Assume I make changes to both the template and the script file. What caching approach should I follow to ensure the template file is updated in conjunction with the script file?
We are not following a pure app-shell architecture; it's a single-page app designed using Angular.js.
sw-precache picks up file changes and generates a new service worker, so when you deploy updates to bundle.js or bundle.css you should be deploying your newly generated sw-precache service worker as well.
Just as a general idea: hash all the resources you're precaching, then compute a hash of hashes and include that digest in the service worker. When one of the dependencies changes, the digest changes, your service worker updates, and a new install event is triggered. Does that make sense?
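As an illustration only, here is a minimal Node build-step sketch of that hash-of-hashes idea. The file names and the __DIGEST__ placeholder are assumptions for the example, and note that sw-precache already computes per-file hashes internally when it generates the worker:

    // build-digest.js -- sketch of a "hash of hashes" build step.
    // Asset names below are placeholders; list your real precached files.
    const crypto = require('crypto');
    const fs = require('fs');

    const precached = ['bundle.js', 'bundle.css', 'templates/home.html'];

    // Hash each asset, then hash the concatenation of the per-file hashes.
    const fileHashes = precached.map(p =>
      crypto.createHash('sha256').update(fs.readFileSync(p)).digest('hex'));
    const digest = crypto.createHash('sha256')
      .update(fileHashes.join('')).digest('hex');

    // Stamp the digest into the worker source. Any change to any dependency
    // changes the digest, the worker file becomes byte-different, and the
    // browser re-runs the install event, refreshing the precached set.
    const sw = fs.readFileSync('sw-template.js', 'utf8')
      .replace('__DIGEST__', digest);
    fs.writeFileSync('service-worker.js', sw);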
What are the persistence options for FitNesse files? So far it seems like the file system is the only thing supported. There does appear to be an out-of-date database plugin. Is there anything else that is supported (S3, a database, etc.)? Is there a way to control where files are persisted when using the file system?
I believe there is very little in that area. The location of the files can be controlled using a command-line option. See http://fitnesse.org/FitNesse.FullReferenceGuide.UserGuide.QuickReferenceGuide#FitNesseCommandLINE
java -jar fitnesse-standalone.jar -d /path/to/fitnesse/root
The way I've used the FitNesse wiki is as a local development tool, with the pages on the file system. Once I'm satisfied with the tests, I commit them to version control (e.g. git) so that they become part of the (integration) test pipeline (e.g. they are run as part of the project's CI/CD pipeline).
I believe there is a plugin that will automatically commit every save action to Git, but I've never used it. Saving each edit action just pollutes version control, in my opinion: I only want to see tests after they have been checked/completed, and that tends not to be every save.
Working in a shared wiki environment (which is where I would expect a non-file-system approach to fit in), you run into the same problem, I expect. Developing automated tests is a development task that requires some iterations before it is 'done', and not all attempts reach that 'done' state. So using shared storage for wiki persistence creates noise in the test set: which tests form the current reference set that should pass, and which are work in progress?
If you are working on a larger project where new features are developed together with their automated tests, it becomes even more important to know which test changes belong to which features/changes. Having the tests on the file system, in version control, allows you to develop tests in sync with code changes in the same branch. This is what I would recommend.
I'm working on a Scala app (built with Maven) where the UI is HTML and JavaScript and the back end is a REST API. For deployment, the HTML/JavaScript will just get thrown into nginx as static resources, but for development I just want something that serves the files from local disk. Other teams use gulp-connect for this, but I'm hoping to avoid adding a second build tool (i.e., gulp) to my stack if I can.
What are my options here? I see there's an nginx plugin for Maven, but it's poorly documented. NanoHttpd seems promising, but it looks like I'd have to write my own Maven plugin.
I asked the following two questions in JIRA Answers, but have received no reply so far:
Question 1
Question 2
Basically, my question is: what's the best way to make changes in a JIRA production environment?
This will be a rather general answer, but this is how I do it:
I avoid modifying JIRA production files. When I need to (e.g. for mail templates), I keep them under source control along with the plugin, in a 'deploy' directory that mimics the JIRA directory structure, so it is possible to grab it and deploy it with copy & paste.
I frequently use JavaScript to decorate screens with custom behavior.
Schema changes, custom fields and other metadata are all created in code.
Keep everything in the plugin and leverage the plugin versioning system. The plugin should be able to check whether everything it needs is in place, and when it is not, it should be able to upgrade incrementally (see the sketch after this list).
For JIRA configuration it is the same: the plugin should check whether it has everything it needs, but you can also keep configuration changes in some Excel file and put it under source control.
My approach is to have everything possible in source control and to modify production files only when absolutely necessary. Do as much in code as possible.
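As a sketch of the incremental-upgrade idea from the list above (SettingsStore and UpgradeTask here are stand-ins made up for the example, not a real JIRA API; a real plugin would use whatever persistent settings storage its framework provides):

    // Stand-in for the plugin's persistent settings storage.
    interface SettingsStore {
        int getSchemaVersion()        // 0 if the plugin has never run
        void setSchemaVersion(int v)
    }

    class UpgradeTask {
        int targetVersion             // version this task upgrades *to*
        Closure action                // creates custom fields, schema, etc.
    }

    // Run every task newer than the stored version, in ascending order,
    // bumping the stored version after each one so a partially failed
    // upgrade can simply be re-run and resume where it stopped.
    def upgrade(SettingsStore store, List<UpgradeTask> tasks) {
        tasks.sort { it.targetVersion }.each { task ->
            if (task.targetVersion > store.schemaVersion) {
                task.action.call()
                store.schemaVersion = task.targetVersion
            }
        }
    }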
While using the DB Migration plugin I came across an interesting question. In our regular WAR deployments we repeatedly need to run certain scripts that update data to accommodate our changed code. While we can still run these externally, we were trying to find a way to make them part of the DB migration process.
Now, one set of these scripts can be converted into migration scripts, added inside a grailsChange section, and they run pretty seamlessly. There is another set of scripts, though, which is problematic for a couple of reasons.
These scripts are run time and again, so we would have to keep changing the id with every run, since we don't want to duplicate the code and thereby lose the original changes.
We pass params to these scripts from the command line, and with the approach above we would have to add them to the scripts themselves, which causes maintainability issues.
So my question is: is there a more elegant way to trigger external Grails or Groovy scripts from within the DB migration scripts, such that every time we need to run a script file we can create the changelog with the updated call and tag it with the app?
I think there was a post on Stack Overflow about this a while back, but I cannot, for the life of me, find it any more. Any help would be appreciated.
Thanks
Are the scripts something you could add to BootStrap.groovy? That would probably be the simplest option. Just use groovy.sql.Sql to run the scripts.
Another, more functional and flexible option would be to create a service that runs the scripts (via groovy.sql.Sql) and a domain class that tracks which scripts have been run. You could trigger the service from BootStrap.groovy, and the service could consult the migrations domain class to see whether a script has already been run. You could even go as far as putting a secured front end on this mechanism so a script file can be uploaded and executed at runtime.
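A minimal sketch of that second option (the class, property and file names are made up for illustration; dataSource is the standard Grails datasource bean, and the naive split on ';' assumes simple SQL scripts):

    import groovy.sql.Sql

    // grails-app/domain/... -- tracks which scripts have already run.
    class ScriptMigration {
        String name
        Date dateExecuted = new Date()
        static constraints = { name unique: true }
    }

    // grails-app/services/... -- runs a SQL script at most once, records it.
    class ScriptMigrationService {
        def dataSource   // injected by Grails

        void runOnce(String name, File scriptFile) {
            if (ScriptMigration.findByName(name)) return   // already ran
            def sql = new Sql(dataSource)
            try {
                // naive split; fine for simple statement-per-line scripts
                scriptFile.text.split(';').each { stmt ->
                    if (stmt.trim()) sql.execute(stmt)
                }
                new ScriptMigration(name: name).save(flush: true, failOnError: true)
            } finally {
                sql.close()
            }
        }
    }

In BootStrap.groovy's init closure you would then call something like scriptMigrationService.runOnce('2014-06-order-fixup', new File('scripts/order-fixup.sql')), and the unique-name domain record makes reruns no-ops.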
Let me know more details of what you want and I can try to be more detailed in my response.
I have a Rails application which I now plan to deploy in many instances to different domains. Originally I intended it to be on only one domain.
I realize that for each domain, I have to replace all the hard-coded values in various places. These include:
asset host path (assets reside on the same domain)
whenever gem's :application setting (since two domains can share the same server, and this avoids crontab update clashes)
some tasks that use curl against the app's own address to trigger events
CarrierWave needs a hard-coded value when computing an image's full URL without the request object.
Question
Is there a strategy for setting this up, so that:
the setting is not committed to source control (like database.yml.example)
code outside Rails can access it (the whenever gem does not load the Rails environment)
the way the domain is accessed is consistent everywhere
One approach you can take is to have a YAML file with per-deployment properties. You could even check a development version in and have your deploy scripts overwrite it with the correct version.
Typically I'd put that configuration file in shared/config (assuming a Capistrano-style layout) and then symlink it into the current release during the deploy.
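A minimal sketch of that setup (the file name deploy_settings.yml and its keys are illustrative):

    # config/deploy_settings.yml -- per deployment, not committed;
    # commit a deploy_settings.yml.example instead.
    domain: example.com
    asset_host: https://example.com
    application: myapp_example_com

    # config/initializers/app_settings.rb -- plain YAML plus a tiny loader,
    # so non-Rails code (e.g. whenever's schedule.rb) can read the same
    # file without booting Rails.
    require 'yaml'

    APP_SETTINGS = YAML.load_file(
      File.expand_path('../deploy_settings.yml', __dir__))

In schedule.rb, the whenever file could do the same YAML.load_file and then set :application from it, so all the hard-coded spots (asset host, curl tasks, CarrierWave URL) read the domain from one place.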