maxAgeSeconds in sw-toolbox not working - service-worker

I'm using sw-toolbox to build a progressive web app. All resources are cached as expected, but they are not expiring even though maxAgeSeconds is set.
toolbox.router.get("/test/(.*)", toolbox.cacheFirst, {
  cache: {
    maxEntries: 5,
    name: "list",
    maxAgeSeconds: 300,
    debug: true
  }
});
I'm using the cacheFirst strategy and have verified that the resources are being served from the service worker cache.
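For what it's worth, one way to double-check what is actually stored (a quick sketch run from the page or the DevTools console, assuming sw-toolbox uses the configured cache name "list" verbatim) is to list the cached requests via the Cache Storage API:
// List the entries stored under the configured cache name. If stale entries
// never drop out of this list after maxAgeSeconds, expiration is not being applied.
caches.open("list")
  .then(cache => cache.keys())
  .then(requests => console.log(requests.map(request => request.url)))
  .catch(err => console.error("Could not read cache:", err));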

Related

Nestjs with Neo4j graphQL library causing RangeError: Maximum call stack size exceeded

I am trying to learn a new concept for myself - Graph databases.
I also decided to try out GraphQL with it.
Learning two new technologies at once... what could possibly go wrong?
I created a wee apollo+neo4j+neo4j-graphql library project and it works. I could make custom resolvers, the data saves to the DB...everything works great.
So I decided to add some structure to this wee project, and what is better than an Angular-style structure, right? So... I found NestJS.
My question is: how do you marry NestJS with the Neo4j GraphQL library?
I just keep getting:
RangeError: Maximum call stack size exceeded
With this code:
const driver = neo4j.driver(
  "bolt://localhost:7687",
  neo4j.auth.basic("neo4j", "Qwertyui1!")
);
const neoSchema = new Neo4jGraphQL({
  typeDefs,
  driver,
  config: {
    jwt: {
      secret: "secret"
    }
  }
});
let graphSchema: GraphQLSchema = neoSchema.schema;
console.error(graphSchema);
@Module({
  imports: [
    GraphQLModule.forRoot({
      playground: false,
      plugins: [ApolloServerPluginLandingPageLocalDefault()],
      debug: false,
      disableHealthCheck: true,
      // typePaths: ['./models/*.graphql'],
      definitions: {
        path: null
      },
      schema: graphSchema
    })
  ],
Here is the source code:
https://github.com/mcgri/neo4jWithNestjs

Unable to access Azure blob SAS uri in SwaggerUIBundle url list

I'm trying to create a SwaggerUIBundle where the urls point to Azure Blob Storage container files.
For testing purposes I have hard-coded the urls in my index.jsp file like this:
// Begin Swagger UI call region
const ui = SwaggerUIBundle({
  urls: [
    {url: "https://backendsa.blob.core.windows.net/swagger-consolidation/*****", name: "SwaggerConsolidation"},
    {url: "https://backendsa.blob.core.windows.net/swagger-consolidation/*****2", name: "SwaggerConsolidation2"},
  ],
  dom_id: '#swagger-ui',
  deepLinking: true,
  spec: location.host,
  presets: [
    SwaggerUIBundle.presets.apis,
    SwaggerUIStandalonePreset
  ],
  plugins: [
    SwaggerUIBundle.plugins.DownloadUrl
  ],
  layout: "StandaloneLayout"
})
// End Swagger UI call region
These urls are SAS urls pointing to Azure Blob Storage files, and they are accessible when hit directly on an open network.
But when I deploy the code, it gives the error below:
Fetch error
NetworkError when attempting to fetch resource. https://backendsa.blob.core.windows.net/swagger-consolidation/*****
Fetch error
Possible cross-origin (CORS) issue? The URL origin (https://backendsa.blob.core.windows.net) does not match the page (https://router-sc.dev-wus.digitalservices.com). Check the server returns the correct 'Access-Control-Allow-*' headers.
Any insight into the issue would be helpful.
According to the error you provided, you need to configure CORS for Azure Blob Storage. Because the Swagger UI application is a SPA, calling a REST API on a domain different from your website's will cause a CORS issue. Regarding how to configure it, please refer to the document.
For example
Allowed origins: *
Allowed verbs: DELETE,GET,HEAD,MERGE,POST,OPTIONS,PUT
Allowed headers: *
Exposed headers: *
Maximum age (seconds): 86400
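If you would rather configure this from code than from the portal, a minimal sketch using the @azure/storage-blob SDK might look like the following (the connection-string environment variable and the function name are assumptions for illustration, not part of the original answer):
const { BlobServiceClient } = require("@azure/storage-blob");

// Assumed: the storage account connection string is available in an
// environment variable; adjust to however you manage credentials.
const serviceClient = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING
);

async function configureCors() {
  // Mirrors the portal settings listed above; consider narrowing
  // allowedOrigins to the Swagger UI host instead of "*".
  await serviceClient.setProperties({
    cors: [
      {
        allowedOrigins: "*",
        allowedMethods: "DELETE,GET,HEAD,MERGE,POST,OPTIONS,PUT",
        allowedHeaders: "*",
        exposedHeaders: "*",
        maxAgeInSeconds: 86400
      }
    ]
  });
}

configureCors().catch(console.error);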

How do I include files that sw-precache misses

Context
I'm using parcel-plugin-sw-precache which wraps around sw-precache to make it work with Parcel.js. Everything was working as expected, and I have been testing my offline app.
Problem
I added react-pdf.js to my project, and one of this library's dependencies doesn't get added to the service worker when it is generated by sw-precache. I know this because the file "pdf.worker.entry.7ce4fb6a.js" gives a 304 error when I switch to offline mode.
What I have tried
I'm trying to add the file manually to the package.json parcel-plugin-sw-precache config using this code:
"sw-precache": {
"maximumFileSizeToCacheInBytes": 10485760,
"staticFileGlobs": [
"/pdf.worker.entry.7ce4fb6a.js"
]
},
I'm not sure whether the file path should be relative to package.json or relative to the generated service worker. In any case, the manually specified file doesn't get added to the generated service worker as I would expect, as seen below.
self.__precacheManifest = [{
  "url": "index.html",
  "revision": "ac5ace7a43a0fef7ae65fd3119987d1f"
}, {
  "url": "castly.e31bb0bc.css",
  "revision": "657409f7159cb857b9409c44a15e653f"
}, {
  "url": "castly.e31bb0bc.js",
  "revision": "018d4664d809ec19d167421b359823ad"
}, {
  "url": "/",
  "revision": "af5513bb330deae3098ab289d69a40c7"
}]
The question
If sw-precache or parcel-plugin-sw-precache seems to be missing some files, how can I make sure they get added to the generated service worker?
In my exploration for an answer, I gave up on using parcel-plugin-sw-precache and instead switched to Workbox. If you are interested in creating an offline app with Parcel.js, I recommend Workbox, as it is the next generation of sw-precache.
Here is how I got it working:
Learning
Learn what Workbox is and what it does with this code lab.
Implementing
1) Install the Workbox CLI globally.
2) Create a placeholder service worker in the root directory, e.g. sw_shell.js.
- The shell is a holding file. The Workbox wizard will pick it up and generate a new sw.js file automatically.
3) Add the following code to sw_shell.js:
importScripts("https://storage.googleapis.com/workbox-cdn/releases/3.6.3/workbox-sw.js");

if (workbox) {
  workbox.skipWaiting();
  workbox.clientsClaim();
  workbox.precaching.suppressWarnings();
  // The next line came from the code lab
  workbox.precaching.precacheAndRoute([]);
  workbox.routing.registerNavigationRoute("/index.html");
} else {
  console.log(`Boo! Workbox didn't load 😬`);
}
4) Run this command from a command line opened in your project's root directory:
workbox wizard --injectManifest
5) Follow the steps in the wizard. For dev purposes, point the "root web app" to your Parcel dist folder. Workbox does its magic and picks up those files to be hashed into a new sw.js file.
6) The wizard will ask for your existing sw.js file. In my case I use sw_shell.js.
- Workbox picks up sw_shell.js.
- It generates a new sw.js file in the location specified when running the wizard and injects the files needed to run offline.
In my case I let the new sw.js be generated in my root folder, because Parcel picks it up automatically via the registration script in my index.js:
'use strict';

if ('serviceWorker' in navigator) {
  window.addEventListener('load', function() {
    navigator.serviceWorker.register('sw.js').then(function(reg) {
      console.log('Worker registration started');
      reg.onupdatefound = function() {
        console.log('update found');
        var installingWorker = reg.installing;
        installingWorker.onstatechange = function() {
          console.log('installing worker');
          switch (installingWorker.state) {
            case 'installed':
              if (navigator.serviceWorker.controller) {
                console.log('New or updated content is available.');
              } else {
                console.log('Content is now available offline!');
              }
              break;
            case 'redundant':
              console.error('The installing service worker became redundant.');
              break;
          }
        };
      };
    }).catch(function(e) {
      console.error('Error during service worker registration:', e);
    });
  });
}
7) Add workbox injectManifest to your package.json to make sure Workbox picks up any changes to your files:
"scripts": {
"start": "parcel index.html workbox injectManifest"
}
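For reference, the workbox-config.js that the wizard writes out ends up looking roughly like this (the paths below are assumptions based on the Parcel setup described above, not the exact values from the original project):
// workbox-config.js (sketch): swSrc is the shell worker from step 2,
// swDest is the sw.js that index.js registers.
module.exports = {
  globDirectory: "dist/",
  globPatterns: ["**/*.{html,js,css}"],
  swSrc: "sw_shell.js",
  swDest: "sw.js",
  // Large assets such as the pdf.js worker can exceed the default 2 MB limit,
  // so raise it here (mirrors the sw-precache setting in the question above).
  maximumFileSizeToCacheInBytes: 10 * 1024 * 1024
};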
Let me know if you want to know more about this. There is a video here that helped me a little bit also.

Workbox Stale-while-revalidate strategy always returns response from network call instead of cache

I am using workbox-webpack-plugin; below is the code in my webpack config:
new GenerateSW({
  runtimeCaching: [
    {
      urlPattern: new RegExp('^https://devapi\.mysite\.xyz/'),
      handler: 'staleWhileRevalidate',
      options: {
        cacheableResponse: {
          statuses: [200]
        }
      }
    }
  ]
})
Below is the flow of the stale-while-revalidate strategy as per the Google docs.
I am calling the API cross-domain, and what I observed is that each time the response returned to the UI comes from the network call, not from the cache.
I am expecting that when the same API is called a second time, I should get the response from the cache, and then the cache should be updated from the network response.
I think all the info in this "Handle Third Party Requests" guide should help.
In particular, make sure that your remote server is using CORS, or else you'll get back a response that has a status of 0. You're explicitly configuring the cacheableResponse plugin to only cache responses with a status of 200.
For anyone stumbling upon this now, the correct snippet should be as follows. Workbox expects the handler name StaleWhileRevalidate, not staleWhileRevalidate.
new GenerateSW({
  runtimeCaching: [
    {
      urlPattern: new RegExp('^https://devapi\.mysite\.xyz/'),
      handler: 'StaleWhileRevalidate',
      options: {
        cacheableResponse: {
          statuses: [200]
        }
      }
    }
  ]
})
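As a follow-up to the first answer: if the third-party API cannot send CORS headers and you explicitly want to cache the resulting opaque responses, one variant (a sketch, not part of either answer above, and only useful if an opaque response is actually usable by your page) is to also allow status 0:
new GenerateSW({
  runtimeCaching: [
    {
      urlPattern: new RegExp('^https://devapi\\.mysite\\.xyz/'),
      handler: 'StaleWhileRevalidate',
      options: {
        cacheableResponse: {
          // 0 covers opaque (no-cors) cross-origin responses, 200 covers normal ones.
          statuses: [0, 200]
        }
      }
    }
  ]
})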

Rails - Access sidekiq status via API

I'm using the sidekiq and sidekiq-status gems for workers and tracking their progress on the web UI: /sidekiq/statuses.
Individual workers are tracked at /sidekiq/statuses/job_id.
How can I access progress info from a frontend via API?
On GET /sidekiq/stats I get this response:
{
  "sidekiq": {
    "processed": 805,
    "failed": 62,
    "busy": 3,
    "processes": 1,
    "enqueued": 0,
    "scheduled": 0,
    "retries": 1,
    "dead": 0,
    "default_latency": 0
  },
  "redis": {
    "redis_version": "3.0.6",
    "uptime_in_days": "0",
    "connected_clients": "24",
    "used_memory_human": "1.06M",
    "used_memory_peak_human": "2.00M"
  },
  "server_utc_time": "14:50:29 UTC"
}
Can I do a similar thing for /statuses/job_id?
In case you want to get stats for individual Sidekiq queues, I have written a small gem, sidekiq_queue_metrics, to do so. It provides individual queue stats in the Web UI as well as an API to fetch the metrics.
require 'sidekiq_queue_metrics'
Sidekiq.configure_server do |config|
  Sidekiq::QueueMetrics.init(config)
end
API to fetch stats:
Sidekiq::QueueMetrics.fetch
#=> {
"mailer_queue" => {"processed" => 5, "failed" => 1, "enqueued" => 2, "in_retry" => 0},
"default_queue" => {"processed" => 10, "failed" => 0, "enqueued" => 1, "in_retry" => 1}
}
Looking at the Sidekiq::Web source, the only endpoints I see returning JSON are /sidekiq/stats and /sidekiq/stats/queues.
Remember, Sidekiq has many API helpers for use in your Ruby code. There's no reason you can't just create your own controller to pass job info to the frontend using the Ruby API. E.g. Sidekiq::Queue.new("high-queue").find_job(jid)
This also has the advantage of letting you set up more fine-grained user access control over the data rather than letting any user of your frontend have access to the Sidekiq API.
If you plan on making heavy use of this, you might think about upgrading to Pro which includes a Pro API with a more efficient Sidekiq::JobSet#find_job(jid) method.
Finally, if there's no API helper in Sidekiq remember that Sidekiq is all just Redis in the backend and you could write your own Redis queries to fetch the right data in the shape you want.
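To answer the original frontend question concretely: since /sidekiq/stats already returns JSON (as shown above), a minimal sketch from the browser could be as simple as the following (this assumes Sidekiq::Web is mounted at /sidekiq and that the request is made from a session that is allowed to access it):
// Poll the JSON stats endpoint exposed by Sidekiq::Web and log the counters.
async function fetchSidekiqStats() {
  const response = await fetch("/sidekiq/stats", {
    headers: { Accept: "application/json" },
    credentials: "same-origin" // reuse the session that authorises /sidekiq
  });
  if (!response.ok) {
    throw new Error(`Stats request failed: ${response.status}`);
  }
  const { sidekiq } = await response.json();
  console.log(`processed=${sidekiq.processed} failed=${sidekiq.failed} busy=${sidekiq.busy}`);
}

fetchSidekiqStats().catch(console.error);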

Resources