MongoDB default data load - docker

I have a problem with MongoDB.
I have 2 replicas of NodeJS and 1 MongoDB. The default data is always loaded twice into the database. How can I fix this?
I have a databaseLoader.js function which loads the data into the DB:
mongoose.promise = Promise;
mongoose.set('useCreateIndex', true);
mongoose.set('useFindAndModify', false);
mongoose.connect(MONGODB_URI, { useNewUrlParser: true })
  .then(
    () => {
      logger.info('Successfully connected to mongoDB');
      loader.loadDefaultData()
        .then(response => {
        });
    },
  )
  .catch(err => {
    logger.error('Connection to MongoDB could not be established');
  });

I don't know why you run 2 replicas that both load the demo data and connect to the DB, but if you have 2 replicas in your deployment, each replica runs independently, which means the demo data will be loaded twice.
If you have an application and want to check for DB connectivity before the application starts, you can use an initContainer:
Init Containers are exactly like regular Containers, except:
They always run to completion.
Each one must complete successfully before the next one is started.
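If both replicas really do need to run the loader, another option is to make loadDefaultData idempotent so that a second run changes nothing. This is only a sketch, assuming a hypothetical DefaultItem model and seed array; adapt it to your own schema:
const mongoose = require('mongoose');

// Hypothetical model and seed data, for illustration only
const DefaultItem = mongoose.model('DefaultItem', new mongoose.Schema({
  key: { type: String, unique: true },
  value: String,
}));

const seedData = [
  { key: 'greeting', value: 'hello' },
  { key: 'farewell', value: 'goodbye' },
];

async function loadDefaultData() {
  // Upsert each document: a second replica re-applies the same values
  // instead of inserting duplicates.
  for (const item of seedData) {
    await DefaultItem.updateOne({ key: item.key }, { $set: item }, { upsert: true });
  }
}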

Cloud Run Outbound API Calls being throttled?

I have an instance that on some requests needs to make multiple calls to an external API, sometimes up to 2,000+ calls.
When running my application locally, each call to the external API returns in under 200ms every time without fail, and the entire process for all 2,000 calls takes approximately 15 seconds.
I noticed running on cloud run, my API calls are split into three categories:
approximately 1/4 take ~200ms, the same as local; about 1/2 take exactly 12007ms; and some take exactly 63000ms. The whole process takes 20+ minutes for these same API calls.
I have tried batching, using async/eachLimit with the limit set to 10, 20, 50... but the same thing occurs.
The endpoint, data, and calls are identical locally to what's on Cloud Run. Cloud Run is running through a NAT with a static IP as well (which may be intercepting/throttling?).
Adding my Docker image to a VM in the same VPC/NAT gateway has the same effect as my local setup (all calls < 200ms and everything completes in < 15 seconds).
Has anyone encountered this in Cloud Run, and how do I get around it?
Here is the snippet I'm running (note I have played around with the limit; it's 50 in the snippet below):
const productsWithAvailabilities = await mapLimit(products, 50, async product => {
  console.time(product.ProductID)
  const availabilityOther = await productClient.execute(
    'GetAvailabilityA',
    {
      productid: product.ProductID,
      viewid: 'WEB',
      connectid: this.pricingToken,
    },
    false,
  )
  let AvailabilityOther
  try {
    AvailabilityOther = availabilityOther?.GetAvailabilityAResult?.diffgram?.Warehouses?.ProductAvailability?.map(
      a => ({
        ...a,
        QtyAvail: parseFloat(a.QtyAvail),
        QtyOnHand: parseFloat(a.QtyOnHand),
        QtyOnOrder: parseFloat(a.QtyOnOrder),
        QtyInTransit: parseFloat(a.QtyInTransit),
        Available: parseFloat(a.QtyAvail),
      }),
    )
  } catch (e) {
    console.log({ product, e })
  }
  console.timeEnd(product.ProductID)
  return {
    ...product,
    AvailabilityOther,
    Availability: AvailabilityOther?.find(a => a.LocationID === LOCATIONS.MEL),
    QtyAvailableOther: AvailabilityOther?.filter(a => a.LocationID !== LOCATIONS.MEL)
      .map(a => a.QtyAvail)
      .reduce((result, current) => result + current, 0),
  }
})
console.timeEnd('availabilities')
return productsWithAvailabilities as MoProProduct[]
}
ProductClient.execute makes a SOAP POST request using the Node 'soap' library.
VPC Connector throughput is 200-1000 Mbps (the default setting), and I have a single external IP/router connected to the NAT.

Do I need to do more than add env variables in the docker-compose file in order to use them across other containers?

I define the URL for my backend service container in my docker-compose.yaml.
environment:
  PORT: 80
  VUE_APP_BACKEND_URL: "mm_backend:8080"
When the containers spin up, I inspect my frontend container and can verify that the env variable was set correctly as shown below.
However, when I attempt to use my frontend service to connect to my backend (retrieve data), the network tab shows that VUE_APP_BACKEND_URL is undefined.
The environment variable is used as follows in my Vue.js code:
getOwners(){
  fetch(`${process.env.VUE_APP_BACKEND_URL}/owners`, defaultOptions)
    .then((response) => {
      return response.json();
    })
    .then((data) => {
      data.forEach((element) => {
        var entry = {
          value: element.id,
          text: `${element.display_name} (${element.name})`
        }
        this.owners.push(entry)
      })
    })
}
Any assistance is appreciated.
This won't work because process.env.someKey is not available in the browser. In other words, docker-compose won't help much if you want to pass an env variable into your front-end application. The simplest approach is to define the backendUrl in one place in the code itself and use it to make API calls. If you are not happy doing this, there are already some good answers/solutions available on Stack Overflow for this same problem.
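A minimal sketch of that one-place approach; the file name and URL below are placeholders, not values from the question:
// config.js - hypothetical single source of truth for the backend URL.
// Note that the browser resolves this URL, not the container, so it has to
// be an address the browser can reach, not the internal compose service name.
const backendUrl = 'http://localhost:8080'

export default backendUrl

// usage in a component:
//   import backendUrl from './config'
//   fetch(`${backendUrl}/owners`, defaultOptions)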

Workbox precache putting items inside -temp cache, and not using it later [duplicate]

To enable my app to run offline, during installation the service worker should:
fetch a list of URLs from an async API
reformat the response
add all URLs in the response to the precache
For this task I use Google's Workbox in combination with Webpack.
The problem: while the service worker successfully caches all the Webpack assets (which tells me that Workbox basically does what it should), it does not wait for the async API call to cache the additional remote assets. They are simply ignored and neither cached nor ever fetched over the network.
Here is my service worker code:
importScripts('https://storage.googleapis.com/workbox-cdn/releases/3.1.0/workbox-sw.js');
workbox.skipWaiting();
workbox.clientsClaim();
self.addEventListener('install', (event) => {
  const precacheController = new workbox.precaching.PrecacheController();
  const preInstallUrl = 'https://www.myapiurl/Assets';
  event.waitUntil(fetch(preInstallUrl)
    .then(response => response.json())
    .then((Assets) => {
      Object.keys(Assets.data.Assets).forEach((key) => {
        precacheController.addToCacheList([Assets.data.Assets[key]]);
      });
    })
  );
});
self.__precacheManifest = [].concat(self.__precacheManifest || []);
workbox.precaching.suppressWarnings();
workbox.precaching.precacheAndRoute(self.__precacheManifest, {});
workbox.routing.registerRoute(
  /^.*\.(jpg|JPG|gif|GIF|png|PNG|eot|woff(2)?|ttf|svg)$/,
  workbox.strategies.cacheFirst({
    cacheName: 'image-cache',
    plugins: [
      new workbox.cacheableResponse.Plugin({ statuses: [0, 200] }),
      new workbox.expiration.Plugin({ maxEntries: 600 }),
    ],
  }),
  'GET'
);
And this is my webpack configuration for the workbox:
new InjectManifest({
  swDest: 'sw.js',
  swSrc: './src/sw.js',
  globPatterns: ['dist/*.{js,png,html,css,gif,GIF,PNG,JPG,jpeg,woff,woff2,ttf,svg,eot}'],
  maximumFileSizeToCacheInBytes: 5 * 1024 * 1024,
})
It looks like you're creating your own PrecacheController instance and also using the precacheAndRoute(), which aren't actually intended to be used together (not super well explained in the docs, it's only mentioned in this one place).
The problem is the helper methods on workbox.precaching.* actually create their own PrecacheController instance under the hood. Since you're creating your own PrecacheController instance and also calling workbox.precaching.precacheAndRoute([...]), you'll end up with two PrecacheController instances that aren't working together.
From your code sample, it looks like you're creating a PrecacheController instance because you want to load your list of files to precache at runtime. That's fine, but if you're going to do that, there are a few things to be aware of:
Your SW might not update
Service worker updates are usually triggered when you call navigator.serviceWorker.register() and the browser detects that the service worker file has changed. That means if you change what /Assets returns but the contents of your service worker file haven't changed, your service worker won't update. This is why most people hard-code their precache list in their service worker (since any change to those files will trigger a new service worker installation).
You'll have to manually add your own routes
I mentioned before that workbox.precaching.precacheAndRoute([...]) creates its own PrecacheController instance under the hood. It also adds its own fetch listener manually to respond to requests. That means if you're not using precacheAndRoute(), you'll have to create your own router and define your own routes. Here are the docs on how to create routes: https://developers.google.com/web/tools/workbox/modules/workbox-routing.
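For example, here is a minimal sketch of a hand-rolled fetch listener (plain service worker API rather than workbox-routing; not from the original answer) that serves whatever has already been cached and falls back to the network:
self.addEventListener('fetch', (event) => {
  event.respondWith(
    // Try the caches first (including whatever PrecacheController populated),
    // then fall back to the network.
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});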
I realised my mistake; I hope this helps others as well. The problem was that I did not call precacheController.install() manually. While this function is executed automatically, it does not wait for additional precache entries that are added asynchronously, which is why it needs to be called after all the entries have been added. Here is the working code:
importScripts('https://storage.googleapis.com/workbox-cdn/releases/3.1.0/workbox-sw.js');
workbox.skipWaiting();
workbox.clientsClaim();
const precacheController = new workbox.precaching.PrecacheController();
// Hook into the install event
self.addEventListener('install', (event) => {
  // Get the API URL passed as a query parameter to the service worker
  const preInstallUrl = new URL(location).searchParams.get('preInstallUrl');
  // Fetch the precaching URLs and attach them to the cache list
  const assetsLoaded = fetch(preInstallUrl)
    .then(response => response.json())
    .then((values) => {
      Object.keys(values.data.Assets).forEach((key) => {
        precacheController.addToCacheList([values.data.Assets[key]]);
      });
    })
    .then(() => {
      // After all assets are added, install them
      precacheController.install();
    });
  event.waitUntil(assetsLoaded);
});
self.__precacheManifest = [].concat(self.__precacheManifest || []);
workbox.precaching.suppressWarnings();
workbox.precaching.precacheAndRoute(self.__precacheManifest, {});
workbox.routing.registerRoute(
  /^.*\.(jpg|JPG|gif|GIF|png|PNG|eot|woff(2)?|ttf|svg)$/,
  workbox.strategies.cacheFirst({
    cacheName: 'image-cache',
    plugins: [
      new workbox.cacheableResponse.Plugin({ statuses: [0, 200] }),
      new workbox.expiration.Plugin({ maxEntries: 600 }),
    ],
  }),
  'GET'
);
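For completeness, a sketch of the registration side, which the answer does not show: the install handler reads preInstallUrl from the service worker's own URL, so it has to be passed as a query parameter when registering (the API URL here is the one from the question):
// In the page's main JavaScript
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register(
    '/sw.js?preInstallUrl=' + encodeURIComponent('https://www.myapiurl/Assets')
  );
}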

How can a docker service know about all other containers of the same service?

I'm working on a file sync Docker microservice. Basically I will have a file-sync service that is global to the swarm (one on each node). Each container in the service needs to peer with all the other containers on different nodes. Files will be distributed across the nodes, not a complete duplicate copy. Some files will reside on only certain nodes. I want to be able to selectively copy a subset of the files from one node to another.
How can I get a list of the endpoints of all the other containers so the microservice can peer with them? This needs to happen programmatically.
On a related note, I'm wondering if a file-sync microservice is the best route for the solution I'm working on.
Basically I have some videos a user has uploaded. I want to be able to encode them into different formats. I was planning on having the video encoding node have the file-sync service pull the files, encode the videos, and then use the file-sync to push the encoded files back to the same server. I know I can use some kind of object store but that isn't available to me with bare metal dedicated servers and I'd rather not deal with OpenStack if I don't need to.
Thanks to @johnharris85 for the suggestion. Docker swarm exposes a tasks.<service-name> DNS entry that resolves to the IPs of all of the service's tasks, so each container can discover its peers by resolving it. For anyone else who is interested, I created a snippet that can be used in Node:
https://gist.github.com/brennancheung/62d2abe16569e600d2be5e9495c85331
const dns = require('dns')

// Resolve tasks.<serviceName>, which Docker swarm maps to the IPs of every
// task (container) in the service, and return the IPv4 addresses.
function lookup (serviceName) {
  const tasks = `tasks.${serviceName}`
  return new Promise((resolve, reject) => {
    dns.lookup(tasks, { all: true }, (err, addresses) => {
      if (err) {
        return reject(err)
      }
      const filtered = addresses.filter(address => address.family === 4)
      const ips = filtered.map(x => x.address)
      resolve(ips)
    })
  })
}

async function main () {
  const result = await lookup('hello')
  console.log(result)
}

main()

How do I use ServiceWorker without a separate JS file?

We create service workers by
navigator.serviceWorker.register('sw.js', { scope: '/' });
We can create new Workers without an external file like this,
var worker = function () { console.log('worker called'); };
var blob = new Blob(['(', worker.toString(), ')()'], {
  type: 'application/javascript'
});
var bloburl = URL.createObjectURL(blob);
var w = new Worker(bloburl);
With the approach of using blob to create ServiceWorkers, we will get a Security Error as the bloburl would be blob:chrome-extension..., and the origin won't be supported by Service Workers.
Is it possible to create a service worker without external file and use the scope as / ?
I would strongly recommend not trying to find a way around the requirement that the service worker implementation code live in a standalone file. There's a very important part of the service worker lifecycle, updates, that relies on your browser being able to fetch your registered service worker JavaScript resource periodically and do a byte-for-byte comparison to see if anything has changed.
If something has changed in your service worker code, then the new code will be considered the installing service worker, and the old service worker code will eventually be considered the redundant service worker as soon as all pages that have the old code registered are unloaded/closed.
While a bit difficult to wrap your head around at first, understanding and making use of the different service worker lifecycle states/events is important if you're concerned about cache management. If it weren't for this update logic, once you registered a service worker for a given scope, it would never give up control, and you'd be stuck if you had a bug in your code or needed to add new functionality.
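As an illustration (a sketch using the standard registration API, not code from the question), you can watch those lifecycle states from the page:
navigator.serviceWorker.register('/sw.js').then((registration) => {
  // Fires when the browser's byte-for-byte check finds a changed sw.js
  registration.addEventListener('updatefound', () => {
    const newWorker = registration.installing;
    newWorker.addEventListener('statechange', () => {
      // States progress: installing -> installed -> activating -> activated,
      // or 'redundant' when a worker is replaced or fails to install.
      console.log('service worker state:', newWorker.state);
    });
  });
});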
One hacky way is to have the same JavaScript file detect its context and act both as the ServiceWorker and as the script that registers it.
HTML
<script src="main.js"></script>
main.js
if (!this.document) {
  self.addEventListener('install', function (e) {
    console.log('service worker installation');
  });
} else {
  navigator.serviceWorker.register('main.js')
}
To avoid maintaining all of this in one big main.js file, we could use:
if (!this.document) {
  // service worker js
  importScripts('sw.js');
} else {
  // load document.js by injecting a script tag
}
But this might come back to using a separate sw.js file for the service worker as the better solution. The single-file approach would mainly be helpful if you want a single entry point for the scripts.
