Updating Raspberry Pi with Docker containers running

I recently started using Docker on a Raspberry Pi. I am using a set of Docker containers that run constantly, e.g. Pi-hole, Node-RED, Mosquitto, etc. I know that if you have to restart the Raspberry Pi, you should stop the containers first and then start them again once the Pi has rebooted. But in some other tutorials I see they check for updates to the Raspberry Pi OS with sudo apt update and install the updates if they are available. I want to know the following:
Should I stop the containers before checking for and installing updates for the Raspberry Pi OS?
Do you need to stop all the containers that depend on each other before updating any one of them, or just the container that needs to be updated?

Should I stop the containers before checking for and installing updates for the Raspberry Pi OS?
Generally, no, updates on the host won't have any impact on your containers. The exception is if you install an update to Docker, which may restart the Docker daemon. This can cause all your containers to exit, although they will come back up automatically if you have configured them with a restart policy. And of course a kernel update will require a host reboot before it becomes active.
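For example, a restart policy can be set when a container is created, or added to one that already exists (the image here is just an example; use whatever you actually run):

    # run a container that comes back up automatically after a daemon restart or reboot
    docker run -d --restart unless-stopped --name mosquitto eclipse-mosquitto
    # or add a restart policy to a container that already exists
    docker update --restart unless-stopped mosquitto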
Do you need to stop all the containers that depend on each other before updating any one of them, or just the container that needs to be updated?
This really depends on how you have designed your applications.
Consider a web application that talks to a database. If you were to remove the database container and create a new one with updated software or configuration, depending on how the web application is written:
It might crash, requiring you to restart it manually.
If you have configured it with a restart policy, Docker might take care of restarting it for you.
If the application has reconnect/retry logic, it might simply wait until the database is back up and available.
The application itself might require an update before it is able to run correctly.
It's up to you to understand the dependencies in your software stack and determine how best to handle component upgrades.
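For example, replacing only the database container of a web app/database pair might look like this (container, network, volume, and image names are placeholders, and whether the web container needs the final restart depends on its reconnect logic):

    # pull the newer image and replace only the database container, keeping its data volume
    docker pull postgres:16
    docker stop mydb
    docker rm mydb
    docker run -d --name mydb --network mynet \
      -e POSTGRES_PASSWORD=example \
      -v dbdata:/var/lib/postgresql/data \
      --restart unless-stopped postgres:16
    # if the web app has no reconnect/retry logic, restart it as well
    docker restart mywebapp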

Related

Running Docker Desktop on Windows 10 cannot restart containers after system restart

I am running Docker Desktop Version 2.1.0.0 (36874) on a Windows 10 environment.
I am using two separate container compositions, one of these binding to port 8081 on my machine, and the other binding to 9990 and 8787.
After a system restart, I am unable to start these container compositions again, because the ports are already bound.
So far, I have tried multiple approaches to solve this:
manually stop all containers prior to system shutdown
manually stop and remove all containers prior to system shutdown
the above, plus explicitly stopping the docker application prior to system shutdown
removing all containers after system startup and prior to restart
pruning the networks after container removal
restart docker app prior to restarting containers (this worked up until the last update)
I did fiddle around with the compose files and the configuration, but that would be too much detail to go into right now; none of it helped.
What I recently found was, directly after a system startup and prior to starting any container, that the process com.docker.backend was already listening on the bound ports. This is confusing, as the containers were shut down prior to system shutdown and are not run with a restart policy.
So I explicitly quit the Docker Desktop app, and the process still remained, and it still bound the ports.
After manually killing the process as administrator from the power shell, and restarting the docker desktop application, my containers were able to start again.
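For reference, the manual workaround looked roughly like this from an elevated PowerShell (the port is just the one from my setup):

    # find out which process is holding the port (8081 in my case)
    netstat -ano | findstr :8081
    # kill the leftover backend process, then start Docker Desktop again
    Stop-Process -Name "com.docker.backend" -Force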
Has anyone else had this problem? Does anyone know a "fix" for this at all?
And, of course, is this even the right page to ask? As this is not strictly programming, I am unsure.
Docker setup gets screwed up sometimes, so try deleting %appdata%\Docker.
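From PowerShell that would be something along the lines of (note this wipes Docker Desktop's settings, so back the folder up first if you need them):

    # removes Docker Desktop's local state so it is recreated on next start
    Remove-Item -Recurse -Force "$env:APPDATA\Docker"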
The problem went away after the update to version 2.1.0.1 (37199)

What's the purpose of the node-modules container in wolkenkit?

That container is built when deploying the application.
Looks like its purpose is to share dependencies across modules.
It looks like it is started as a container, but apparently nothing is running in it, a bit like an init container.
The console says it starts/stops that component when using the respective wolkenkit start and wolkenkit stop commands.
On startup:
On shutdown:
When you docker ps, that container cannot be found:
Can someone explain these components?
When starting a wolkenkit application, the application is boxed in a number of Docker containers, and these containers are then started along with a few other containers that provide the infrastructure, such as databases, a message queue, ...
The reason why the application is split into several Docker containers is because wolkenkit builds upon the CQRS pattern, which suggests separating the read side of an application from the application's write side, and hence there is one container for the read side, and one for the write side (actually there are a few more, but you get the picture).
Now, since you may develop on an operating system other than Linux, the wolkenkit application may run under a different operating system than the one you develop on, as within Docker it's always Linux. This means that the start command cannot simply copy the node_modules folder into the containers, as it may contain binary modules which would then not be compatible (imagine installing on Windows on the host, but running on Linux within Docker).
To avoid issues here, wolkenkit runs an npm install when starting the application inside of the containers. The problem now is that if wolkenkit did this in every single container, the start would be super slow (it's not the fastest thing on earth anyway, due to all the Docker building and starting that's happening under the hood). So wolkenkit tries to optimize this as much as possible.
One concept here is to run npm install only once, inside of a container of its own. This is the node-modules container you encountered. This container is then linked as a volume to all the containers that contain the application's code. This way you only have to run npm install once, but multiple containers can use the outcome of this command.
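In plain Docker terms this is the data-volume-container pattern. A generic sketch, not necessarily the exact commands wolkenkit runs under the hood (the image and container names here are made up), would be:

    # a container that exists only to hold the node_modules volume; it is created but never started
    docker create -v /app/node_modules --name node-modules my-app-image
    # the application containers mount that volume instead of running npm install themselves
    docker run -d --name read-side  --volumes-from node-modules my-app-image
    docker run -d --name write-side --volumes-from node-modules my-app-image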
Since this container now contains data, but no code, it only has to be there, it doesn't actually do anything. This is why it gets created, but is not run.
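That is also why it doesn't show up in a plain docker ps, which only lists running containers; it should still be listed when you include stopped and created ones:

    # created-but-never-started containers only show up with -a
    docker ps -a --filter "name=node-modules"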
I hope this makes it a little bit clearer, and I was able to answer your question :-)
PS: Please note that I am one of the core developers of wolkenkit, so take my answer with a grain of salt.

Start Docker silently on Windows 10

I have installed Docker Edge 18.06.01-ce on Microsoft Windows 10 with Fall Creators Update.
I want to find a way to configure Docker silently, for example from a command console. Unfortunately, I can't use Docker features without the Docker daemon running in the background.
Normally I would open Docker for Windows.exe - it manages the Hyper-V machine MobyLinuxVM, creates network switches, etc. The problem is that this application runs in the tray and has a GUI.
Is there any way to start this application from a command console and wait until it has configured everything (until Docker works correctly)? I have already tried creating a new machine manually and starting the Docker service, but it does not work - I suspect that I need to map the new machine somehow to the new console window (docker-machine env?).
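Roughly, what I am looking for would be something like the sketch below (the install path is just the default, and the polling loop is only my guess at how to "wait until Docker works correctly"):

    # start the Docker Desktop GUI process without interacting with it
    Start-Process "C:\Program Files\Docker\Docker\Docker for Windows.exe"
    # poll until the daemon answers, i.e. until "docker info" exits successfully
    do {
        Start-Sleep -Seconds 5
        docker info *> $null
    } while ($LASTEXITCODE -ne 0)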

Windows 10 Docker Network DNS doesn't work after reboot

I'm not sure if this is an issue with the current version of Windows Docker network or poor configuration and misunderstanding on my part, but I have the following setup:
2 Docker containers (built using the Microsoft/ASP.NET image as a base) running a .NET MVC application in each.
1 Docker container running SQL server (built using the Microsoft/mssql-server-windows image)
When I create all 3 containers everything works great; I can attach and ping all of the other containers using their names without any issue. The applications run and can communicate with each other as I hoped.
However, when I reboot my machine and start all the containers again they can no longer ping/communicate with each other using their names (using IP addresses is fine).
I've tried this on the default NAT network and also tried replacing the NAT network with my own custom NAT network.
To resolve the issue I have to run the force network disconnect command for each container as such:
docker network disconnect nat <containername> --force
And then I have to reconnect each container to the network before starting them up. All containers can then ping/communicate with each other using their names as well as their IP addresses.
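Concretely, that means something like the following for each container (the container name is just an example):

    # force-disconnect from the NAT network, reconnect, then start the container again
    docker network disconnect nat webapp1 --force
    docker network connect nat webapp1
    docker start webapp1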
FYI, this is a development environment but I was hoping to do something similar in Azure using a Windows Server 2016 VM, although I don't quite know what the best network configuration is for live production yet as I need to have multiple applications (in separate containers) on the same node accessed via their own subdomains.
Any help or guidance would be great.
I'm not sure, in part because this question was asked several months before any other example I've run into, but this sounds very similar to the problem described at https://github.com/docker/for-win/issues/1038.
Basically, there appears to be a problem introduced with the 1709 update to Windows 10 which results in a scenario where Hyper-V networking doesn't work the way it ought to.
There appear to be two common ways of working around this problem: Turning off "Fast Start" in the Control Panel => Power Options => System Settings, or restarting Docker for Windows and any containers after booting. I also thought I saw something on a Microsoft blog post indicating that the underlying problem has now been resolved and will be included in an update to Windows 10, but alas I can no longer find that information or the specific version number in which the problem was (theoretically) resolved. It may well be the delayed 1803 "Spring Creators Update" release.

How do you install something that needs restart in a Dockerfile?

Suppose I have installation instructions as follows:
Do something.
Reboot your machine.
Do something else.
How do I express that in a Dockerfile?
This entirely depends on why they require a reboot. For Linux, rebooting a machine would typically indicate a kernel modification, though it's possible it's for something simpler, like a change in user permissions (which would be handled by logging out and back in again). If the install is trying to make an OS-level change to the kernel, it should fail if done inside of a container. By default, containers isolate and restrict what the application can do to the running host OS, which would otherwise impact the host or other running containers.
If the reboot is to force the application service to restart, you should realize that this design doesn't map well to a container, since each RUN command runs just that command in an isolated environment. Because only that command is run, any OS services that would normally be started on OS bootup (cron, sendmail, or your application) will not be started in the container. Therefore, you'll need to find a way to run the installation command in addition to restarting any dependent services.
The last scenario I can think of is that they want different user permissions to take effect for the logged-in user. In that case, the next RUN command will run the requested command with any changed access from prior RUN commands. So there's no need to take any specific action of your own to do a reboot; simply perform the install steps as if there were a complete restart between each step.
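As a rough sketch of how such instructions translate (the package and commands are invented for illustration), the "install, reboot, continue" steps simply become consecutive Dockerfile instructions, and whatever would normally start at boot becomes the container's command:

    FROM ubuntu:18.04
    # "Do something": install the software
    RUN apt-get update && apt-get install -y my-package
    # "Do something else": no reboot needed; this RUN already sees the results of the previous layer
    RUN my-package --configure
    # whatever would normally start on boot runs as the container's main process instead
    CMD ["my-package", "--serve"]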
