I'm very new to containers/Docker and it is absolutely changing my life so far, but I have one question I'm not really finding a solid answer on. I apologize in advance if this question is too basic/silly :)
In my Node app I use a queue to spawn a Docker container with a browser to screenshot some pages before terminating the container and processing the next item in the queue (eventually it will process several items concurrently).
In my Dockerfile I added a COPY instruction to bake a static copy of my codebase into the image used by the container, including the Chrome browser etc.
If I view, say, 100,000 pages, the browser's cache would build up. So my question is: does Docker create a fresh version of the data from the COPY instruction for every container launched? I know I can clear the browser's cache on launch etc., but I'm more curious to know whether every launch is a pristine copy of the initial build, so that nothing is shared/cached with subsequent launches.
Yes. Every run is a vanilla copy of the initial build. Each container gets its own writable layer on top of the read-only image, so anything written at runtime (such as a browser cache) disappears when the container is removed. If you need data to be persistent, you can use volumes. Check this official guide for more information: Manage data in Docker
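As a hedged sketch of the difference (the image name, volume name, and mount path below are illustrative assumptions, not from your setup):

    # each run starts from the image exactly as the COPY instruction built it
    docker run --rm my-screenshot-image
    # only if you DO want something to survive between runs, mount a named volume
    docker run --rm -v shots:/app/output my-screenshot-image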
A while back I created an instance of MariaDB inside a Docker container on a machine running Ubuntu. I've since learned that I'll need to update some settings to keep things running smoothly, but when I created the image, I did not specify any .cnf volumes. How do I update/create a .cnf file for this image? I'm a complete newb when it comes to Docker, so please spoon-feed me.
I've tried accessing the file from within the container, but there are no text editors installed.
The defaults of MariaDB work pretty much out of the box (container) for small instances. You should only need to change settings when problems occur.
If you have spare memory you can increase your innodb_buffer_pool_size.
With the mariadb container, you don't need to edit the .cnf files; you can just add a few options on the command line per the docs (which you should definitely read).
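For example, a sketch against the official mariadb image (the container name and password are placeholders): anything placed after the image name is passed to the database server as a startup option.

    docker run --name some-mariadb -e MARIADB_ROOT_PASSWORD=my-secret-pw -d \
        mariadb:latest --innodb-buffer-pool-size=1G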
I recommend using the defaults for a while, and if you encounter problems, post a new question on dba.stackexchange.com with show global status output and specifics on the queries that are slow (show create table TBLNAME / explain QUERY).
I would like to enable caching in ArangoDB automatically when my app starts.
I'm using docker-compose to start the whole thing, but apparently there's no simple parameter to enable caching in the official ArangoDB image.
According to the docs, all the files in /docker-entrypoint-initdb.d/ are executed at container start. So I added a .js file with this code:
require('@arangodb/aql/cache').properties({mode: 'on'});
It is indeed executed, but caching doesn't seem to be enabled (from what I can see with arangosh inside the container).
My app is a JS app using arangojs, so if I can do it this way, I'd be happy too.
Thanks!
According to the performance and server config docs, you can enable caching in several ways.
Your method of adding require("@arangodb/aql/cache").properties({ mode: "on" }); to a .js file in the /docker-entrypoint-initdb.d/ directory should work, but keep an eye on the logs. You may need to redirect log output with a different driver (journald, syslog, etc.) to see what's going on. Make sure to run the command via arangosh to see if it works.
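One hedged way to check from the host, assuming the container is named arangodb and the root password is secret (both assumptions):

    docker exec -it arangodb arangosh --server.password secret \
        --javascript.execute-string 'print(require("@arangodb/aql/cache").properties())'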
If that's a bust, you might want to see if there is a way to pass parameters at runtime (such as --query.cache-mode on). Unfortunately, I don't use Docker Compose, so I can't give you direct advice here, but try something like -e QUERY.CACHE-MODE=ON.
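That said, Docker Compose can usually append startup options through the service's command. A hedged sketch (the service name and password are illustrative):

    services:
      arangodb:
        image: arangodb:latest
        environment:
          - ARANGO_ROOT_PASSWORD=secret
        # arguments after "arangod" are passed through as startup options
        command: ["arangod", "--query.cache-mode", "on"]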
If there isn't a way to pass params, then you could modify the config file: /etc/arangodb3/arangod.conf.
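In that file, the --query.cache-mode option should map to an INI-style section, per ArangoDB's configuration file conventions:

    [query]
    cache-mode = on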
And don't forget about the REST API methods for system management. You can view and alter the AQL cache configuration in the Web UI under Support -> Rest API -> AQL.
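The equivalent HTTP call, as a hedged sketch (endpoint per the documented query-cache API; host and credentials are assumptions):

    curl -u root:secret -X PUT http://localhost:8529/_api/query-cache/properties \
        -d '{"mode": "on"}'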
One thing to keep in mind - I'm not sure if the caching settings are global or tied to a specific database. View the configuration on multiple databases (including _system) to test the settings.
C:\share is a shared folder.
C:\share\electron-v13.0.1-win32-x64, \\192.168.1.10\share\electron-v13.0.1-win32-x64 and Z:\electron-v13.0.1-win32-x64 are the same folder.
The Electron app launches correctly when I run C:\share\electron-v13.0.1-win32-x64\electron.exe.
However, the Electron app does not launch correctly when I run Z:\electron-v13.0.1-win32-x64\electron.exe.
According to Task Manager, the Electron processes are running.
However, Electron's window is not shown.
Can Electron run correctly from a shared folder?
It should be safer to use it locally (from C:\share). Mapped drives behave very differently from the local filesystem, and their implementations can differ in their settings as well:
https://wiki.samba.org/index.php/Time_Synchronisation
https://www.truenas.com/community/threads/issue-with-modified-timestamps-on-windows-file-copy.82649/
https://help.2brightsparks.com/support/solutions/articles/43000335953-the-last-modification-date-and-time-are-wrong
If I understand correctly, you are just mapping back your own shared folder. Overall, Windows server configurations have felt more consistent to me, but the protocol has changed over time as well:
https://en.wikipedia.org/wiki/Server_Message_Block
I do not understand the network sharing protocols well enough to give you an exact answer as to why you have this problem, but I know enough to tell you that mounted shared folders are not like your own local filesystem. In many cases the differences do not matter and the user experience is great, but in some cases these minute differences break things in mysterious ways, even when shares are mapped/mounted almost like a regular/local drive. This problem is not exclusive to Electron.
And that is a problem with a lot of things over SMB (mainly binaries/tools): the shared folder might be backed by a different filesystem, with different permissions and privileges (or a completely different permission structure underneath if it is a different filesystem). Remote folders might have issues with inotify delivering events on file updates, or might miss a changed file (for example, touch on Linux is meant to update a file's date, but through a shared folder the date updates might be delayed/rounded). I think at one point even Makefiles were misbehaving because they depended on access dates working the way they would locally.
Another problem with tools is shareability: can the tool handle multiple instances running from the same location? Is it saving something into ./tmp or some other file that could conflict with another user running it at the same time?
Overall, I tend to use shares for data (and have had a few issues with them as well), but I only run applications from a remote share if they are known not to cause trouble.
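If running locally fixes it, one hedged workaround is to mirror the app to a local folder before launching it (the destination path is illustrative):

    robocopy Z:\electron-v13.0.1-win32-x64 %LOCALAPPDATA%\electron-app /MIR
    %LOCALAPPDATA%\electron-app\electron.exe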
I'm new to Docker, so I want to know the best approach for using it. I have a project that needs three components to work:
A JBoss application server
PostgreSQL
A Spring Boot application
So, based on that, my questions are:
1) Should I have one Docker image for each component mentioned above? If yes, why not just put them all together? My idea of Docker is to simplify the deployment of an application, so putting everything together would make it easy to install this app in another environment, right?
2) If yes (one Docker image per component): Spring Boot is just a "java -jar" command, so is it really necessary to have a Docker image for it?
3) In the case of PostgreSQL, should I have an image with all my database structure and data, or just vanilla PostgreSQL without anything?
To answer your questions:
1) Should I have one Docker image for each component mentioned above? If yes, why not just put them all together? My idea of Docker is to simplify the deployment of an application, so putting everything together would make it easy to install this app in another environment, right?
It is best to keep them as separate components (see the compose sketch after this list) so that:
You can isolate issues (helps with debugging)
You can selectively scale (horizontally) specific stateless components when you run on k8s or Docker Swarm
You can set hardware limits (RAM, CPU, etc.) per component
You can use different base images (might be useful for optimizations)
You can build & test your components independently
The list goes on
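As promised, a hedged docker-compose.yml sketch of the three-component split; image names, ports, and credentials are illustrative assumptions:

    services:
      jboss:
        image: quay.io/wildfly/wildfly:latest    # JBoss/WildFly application server
        ports:
          - "8080:8080"
      app:
        build: ./spring-boot-app                 # your "java -jar" app in its own image
        depends_on:
          - db
      db:
        image: postgres:16                       # vanilla PostgreSQL
        environment:
          POSTGRES_PASSWORD: example
        volumes:
          - pgdata:/var/lib/postgresql/data      # keep data out of the image
    volumes:
      pgdata: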
2) If yes (one Docker image per component): Spring Boot is just a "java -jar" command, so is it really necessary to have a Docker image for it?
Please check the list mentioned above (why it's best to separate) and see if it fits your use case. Note that bundling it into an existing component will affect your scaling strategy.
Example: if you run 3 instances of the JBoss component with the Spring Boot app bundled in, you will spawn 3 instances of both of them, which you might not want.
3) In the case of PostgreSQL, should I have an image with all my database structure and data, or just vanilla PostgreSQL without anything?
I'd recommend using vanilla postgres and mounting your structure & data to a host volume, so that nothing gets lost when the container is recreated.
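A hedged sketch: the schema is loaded through the postgres image's init-script directory (these run only on first start with an empty data directory), and the data lives in a named volume. File and volume names are illustrative:

    docker run -d --name mydb \
        -e POSTGRES_PASSWORD=example \
        -v "$PWD/schema.sql":/docker-entrypoint-initdb.d/schema.sql \
        -v pgdata:/var/lib/postgresql/data \
        postgres:16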
I hope this helps you in some way.
Not sure if this is a potential bug or me doing something wrong.
I'm using a CLI-based app, @angular/service-worker 5.1.0 and @angular/cli 1.6.0.
Implementation of the SW is exactly by the book
I will try to describe what's going on:
Consider an app with a running service worker. The SW caches the assets listed in ngsw.json.
Now, I deploy new ng build --prod files with new bundle hashes.
The currently running app's ServiceWorker will keep loading the old cached files unless explicitly asked to update via the SwUpdate service. That's fine.
But here's the thing: upon opening a new tab and loading a new instance of the app, it still loads the old files. In the network log there is no fetch of a new ngsw.json.
Do both tabs use the same ServiceWorker?
How does the ServiceWorker know when to check for a new ngsw.json?
The most bizarre thing:
Sometimes upon hitting F5 the ServiceWorker still loads the old files. Sometimes it loads the new files. Sometimes it tries to fetch files with the old hash and fails (404)!
I haven't been able to figure out any pattern so far.
Is it possible browser caching is causing problems? I tried setting server response headers to no-cache and Expires: 0, but it made no difference.
At the time of this answer, the latest version of ngsw (@angular/service-worker) is 8.2.13.
Do both tabs use the same ServiceWorker?
Yes, both tabs activate the same service worker. This is done for data integrity: if you had one service worker processing requests differently from another across different tabs, it would become a nightmare to maintain.
How does the ServiceWorker know when to check for new ngsw.json?
ngsw.json is regenerated when you run a production build, as you've identified: ng build --prod. The service worker doesn't consider an update to ngsw.json an update to the service worker itself. As such, it can update the service worker's cache without creating a new version of the service worker, and a simple refresh should suffice, without needing to close the browser tabs.
If the service worker itself is updated, a new service worker gets installed, but it won't activate until all associated browser clients (tabs) have been closed, which makes it safe to update.
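If you'd rather take control of the update than wait for all tabs to close, here's a hedged TypeScript sketch using the SwUpdate service (the available/activateUpdate API of that era; the service class and reload strategy are illustrative, not the only way):

    import { Injectable } from '@angular/core';
    import { SwUpdate } from '@angular/service-worker';

    @Injectable()
    export class UpdateService {
      constructor(updates: SwUpdate) {
        // emits once the SW has fetched a new ngsw.json and cached the new version
        updates.available.subscribe(() => {
          // switch this client to the new version, then reload to serve the new files
          updates.activateUpdate().then(() => document.location.reload());
        });
      }
    }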
Sometimes upon hitting F5 the ServiceWorker still loads the old files. Sometimes it loads the new files. Sometimes it tries to fetch files with the old hash and fails (404)!
The refresh button doesn't behave ordinarily when it comes to service workers. If you're doing a hard reload, you will bypass the service worker and request new data. This action won't necessarily update the service worker's cache. As a result, you might find the next refresh loads old data, retrieved from the cache associated with the old service worker.
Is it possible browser caching is causing problems?
To manually invalidate the service worker's cache, head into DevTools > Application (tab) > Cache storage and delete its contents.