I am running Docker Desktop Version 2.1.0.0 (36874) on a Windows 10 environment.
I am using two separate Docker Compose projects: one binds to port 8081 on my machine, and the other binds to ports 9990 and 8787.
After a system restart, I am unable to start these compositions again, because the ports are already bound.
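For context, the relevant parts of the two compose files look roughly like this (service names and images are placeholders; only the host ports come from my setup):

    # compose file of the first project (hypothetical service name/image)
    services:
      service-a:
        image: example/service-a
        ports:
          - "8081:8081"

    # compose file of the second project (hypothetical service name/image)
    services:
      service-b:
        image: example/service-b
        ports:
          - "9990:9990"
          - "8787:8787"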
So far, I have tried multiple approaches to solve this:
manually stopping all containers prior to system shutdown
manually stopping and removing all containers prior to system shutdown
the above, plus explicitly quitting the Docker Desktop application prior to system shutdown
removing all containers after system startup, before starting them again
pruning the networks after removing the containers
restarting the Docker app before restarting the containers (this worked up until the last update)
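For reference, the list above corresponds roughly to commands like these (run from each compose project's directory; adjust to your own setup):

    # stop, or stop and remove, the compositions before shutting down
    docker-compose stop
    docker-compose down
    # remove leftover containers after startup, before starting anything
    docker container prune -f
    # prune the networks after removing the containers
    docker network prune -f
    # then restart Docker Desktop and bring the compositions back up
    docker-compose up -d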
I also fiddled around with the compose files and the configuration, but that would be too much detail to go into right now; none of these attempts helped.
What I recently found is that, directly after a system startup and before any container is started, the process com.docker.backend is already listening on the ports in question. This is confusing, as the containers were shut down before the system shutdown and are not run with a restart policy.
So I explicitly quit the Docker Desktop app, but the process remained and kept the ports bound.
After manually killing the process as administrator from PowerShell and restarting the Docker Desktop application, my containers were able to start again.
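For anyone hitting the same thing, the manual workaround looks roughly like this in an elevated PowerShell (port 8081 is just one of my ports; replace <pid> with the value returned by the first command):

    # find the PID of the process listening on the port
    Get-NetTCPConnection -LocalPort 8081 -State Listen | Select-Object -ExpandProperty OwningProcess
    # kill that process, then restart Docker Desktop
    Stop-Process -Id <pid> -Force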
Has anyone else had this problem? Does anyone know a "fix" for this at all?
And, of course, is this even the right page to ask? As this is not strictly programming, I am unsure.
Docker setup gets screwed up sometimes, so try deleting %appdata%\Docker.
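If you prefer to do that from PowerShell, something like this should work (note that it wipes all local Docker Desktop settings, so note down anything you need first):

    # remove Docker Desktop's settings/data folder; it is recreated on the next start
    Remove-Item -Recurse -Force "$env:APPDATA\Docker"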
The problem went away after the update to version 2.1.0.1 (37199)
Related
I have a multi-service environment hosted with Docker Swarm, with multiple stacks deployed. All the running containers run a Spring Boot application. The issue is that all my stacks get restarted on their own. Now, I know that in the compose file I have set restart_policy to on-failure, hence the automatic restart. The problem is that when the services are restarted, I get errors from a particular service, and this breaks everything.
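For context, the relevant part of the compose file looks roughly like this (service name and image are placeholders; only the restart condition reflects my setup):

    version: "3.7"
    services:
      my-spring-service:
        image: example/spring-app:latest   # placeholder image
        deploy:
          restart_policy:
            condition: on-failure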
I am not able to figure out what actually happens.
I did quite a lot of research and checked the following:
The Docker daemon is not being restarted; I double-checked this via the daemon's uptime.
I checked docker service ps <Service_ID>, and there I can see the service showing shutdown and starting, but no other information.
I checked docker service logs <Service_ID>, but there is no error in there either.
I checked for a resource crunch; I can assure you that there were plenty of resources available at the host as well as at each container level.
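For reference, the checks above map roughly to commands like these (the service ID is a placeholder, and the uptime and resource checks can of course be done in other ways):

    # daemon uptime, to confirm the daemon itself was not restarted
    systemctl status docker
    # task history of the service, which shows the shutdown/starting states
    docker service ps <Service_ID> --no-trunc
    # service logs
    docker service logs <Service_ID>
    # container-level resource usage
    docker stats --no-stream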
Can someone help me figure out where exactly to find logs for this event? Any other thoughts on this?
My host is actually a VM hosted on VMware vCenter.
After a lot of research and going through all the Docker logs, I could not find the solution. Later on, I discovered that a memory snapshot was being taken for backup every 24 hours.
Here is what I observe:
Whenever we take a snapshot, all Docker services running on the host restart automatically. There are no errors; they just restart gracefully.
I found other questions describing this problem with VMware snapshots.
As far as I know, when we take a snapshot, it points to a different memory location and saves the previous one. I have not been able to find out exactly why this happens, but yes, this was the root cause of the problem. If anyone is a VMware snapshot expert, please chime in.
I'm trying to spin up a fresh server using the AzerothCore Docker installation guide. I have completed all of the early installation steps, up until running the containers. Upon running the containers (for worldserver and authserver), I see the following output from the containers. It appears the destination of the world and auth servers in dist/bin is missing; how can I resolve this issue?
Check your Docker settings and make sure you have allocated enough memory. If the containers have too little memory, they will not finish the compile. Also check whether you have build issues.
I recently started using Docker on a Raspberry Pi. I am running a set of containers constantly, e.g. Pi-hole, Node-RED, Mosquitto, etc. I know that if you have to restart the Raspberry Pi, you should stop the containers first and then restart them once the Pi has rebooted. But in some tutorials I see people check for updates to the Raspberry Pi OS using sudo apt update and install them if they are available. I want to know the following:
Should I stop the containers before checking for and installing updates to the Raspberry Pi OS?
Do you need to stop all the containers that depend on each other before updating any one of them, or just the container that needs to be updated?
Should I stop the containers before checking for and installing updates to the Raspberry Pi OS?
Generally, no, updates on the host won't have any impact on your containers. The exception is if you install an update to Docker, which may restart the Docker daemon. This can cause all your containers to exit, although they will come back up automatically if you have configured them with a restart policy. And of course a kernel update will require a host reboot before it becomes active.
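For example, a restart policy is set when the container is created (or via restart: in a compose file). A rough sketch, using Pi-hole as an example (image and options are illustrative, not a recommended configuration):

    # run Pi-hole so Docker brings it back automatically after a daemon
    # restart or a host reboot (unless you stopped it yourself)
    docker run -d --name pihole --restart unless-stopped pihole/pihole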
Do you need to stop all the containers that depend on each other before updating any one of them, or just the container that needs to be updated?
This really depends on how you have designed your applications.
Consider a web application that talks to a database. If you were to remove the database container and create a new one with updated software or configuration, depending on how the web application is written:
It might crash, requiring you to restart it manually.
If you have configured it with a restart policy, Docker might take care of restarting it for you.
If the application has reconnect/retry logic, it might simply wait until the database is back up and available.
The application itself might require an update before it is able to run correctly.
It's up to you to understand the dependencies in your software stack and determine how best to handle component upgrades.
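As a sketch of the web-plus-database example above (service names and images are hypothetical): a restart policy and a startup dependency in compose cover the "Docker restarts it for you" case, but not application-level reconnect logic or the need for an application update.

    services:
      db:
        image: postgres:13          # hypothetical database image
      web:
        image: example/web-app      # hypothetical application image
        depends_on:
          - db                      # start-order hint only, not a readiness check
        restart: unless-stopped     # Docker restarts web if it crashes when db is replaced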
Upon a fresh start of my Ubuntu (on a VirtualBox VM), I can build my images normally. Then, very inconsistently (it can be the next time I try to build, or the 10th time), the build hangs forever after running the command docker build .
The Dockerfiles are in directories with 5-10 other files (which rules out the issue of a massive number of files slowing Docker down while it locates the Dockerfile, as seen in other posts).
If I try to build a new, very simple Dockerfile (to rule out any syntax error), it also hangs whenever it hangs with my project's Dockerfiles.
Besides, I am running minikube --driver=none and my images are used for deployments in Kubernetes (with the none driver it is not required to run eval $(minikube docker-env)).
The only reliable fix is to stop the VM on which my Ubuntu is running and start it again; it then consistently lets me build my images at least once, after which the issue comes back inconsistently.
This fix is quite inconvenient as I need to stop everything I am doing and it takes a bit of time.
I have tried to run docker system prune and to delete all the images already built.
What log could I check to find out what is going on when the build hangs?
Any idea of the origin of the issue?
Thanks a lot!
OK, this bug was vicious.
Sometimes, when I need to check how the nginx server in one of the containers behaves, I open the VM's graphical interface and start Firefox to have a look.
I only figured out today that Firefox shows a pop-up after a while, asking for the admin password in order to access the keychain. It turns out Docker does not build anything while this pop-up is open. Closing it or entering the password fixed my issue...
In another terminal window, check the Docker daemon log using sudo journalctl -fu docker.service while the build command is hanging. You can also check the list of running processes during the build execution.
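Concretely, something along these lines in a second terminal while the build is hanging:

    # follow the Docker daemon log
    sudo journalctl -fu docker.service
    # list running processes to see what the build is waiting on
    ps aux | grep -i docker
    # see which containers are currently running
    docker ps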
This issue is really strange, and I have no idea how to resolve it. I'm running Windows 10 with WSL 2. When I start Docker using the "WSL 2 based engine", it brings back some deleted folders out of nowhere. I delete them every single time, and after I've restarted my PC and started Docker again, the folders are back.
There are absolutely no Docker containers running ("docker ps" returns nothing), so it can't be that some rogue volume definition is being run along with a container. The folders also only appear once I start up Docker.
The directory in which the zombie folders appear is also the source of a Mutagen volume when the containers are running, but as I said, no containers are running.
I ran into the same problem and resolved it by running Troubleshoot from Docker Desktop for Windows and purging the data on WSL 2.
I've not found a solution to selectively clear just that one folder that keeps getting raised from the dead.
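If you would rather do the purge from the command line than through the Troubleshoot screen, it is roughly this (note that it wipes all Docker Desktop data, not just that one folder; docker-desktop and docker-desktop-data are the default distro names and might differ on your machine):

    # stop all WSL distros
    wsl --shutdown
    # remove Docker Desktop's WSL distros; they are recreated the next time Docker Desktop starts
    wsl --unregister docker-desktop
    wsl --unregister docker-desktop-data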