NiFi 1.19 keeps restarting within Docker container

I have Docker Desktop on Windows 10, and I run Apache NiFi 1.19 in a container.
NiFi keeps restarting by itself, without leaving any useful log message, exception, or anything else I can trace back with.
Any ideas what could be going wrong? I have tried many things, including this, but nothing helped.

OK, finally: the problem was the built-in Hyper-V backend of Docker (i.e. the default VM environment that comes with Docker Desktop).
I simply switched to the "Use the WSL 2 based engine" option and the problem went away.
From the Docker Desktop dashboard, go to Settings and, in the General tab, select "Use the WSL 2 based engine".
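If you hit the same symptom, a quick sanity check (a sketch; the container name nifi below is a placeholder for your own) is to confirm the WSL 2 backend is actually active and to see whether the container is being killed rather than exiting on its own:

    # With the WSL 2 based engine enabled, Docker Desktop's own distro(s) show up under version 2.
    wsl --list --verbose

    # Last lines NiFi printed before the restart.
    docker logs --tail 100 nifi

    # Did the container exit by itself, or was it OOM-killed inside the VM?
    docker inspect --format '{{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' nifi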

Related

Deployment of a web application in an AKS Windows node pool

I have deployed an OWIN-hosted web application in AKS (Windows node pool). The container is in a running state, but I am not able to hit the application. There might be runtime exceptions or errors, but I am not able to figure out where to see such errors on an AKS Windows node.
Please help me.
So, the ideal way here would be to use kubectl logs (which go to Azure Monitor, if you have that enabled). However, Windows containers don't forward their logs to stdout by default; you have to use Log Monitor for that. Essentially, you have to enable Log Monitor in your container image to be able to get the logs out of the container just as you do with Linux containers. I blogged about it here: https://techcommunity.microsoft.com/t5/itops-talk-blog/troubleshooting-windows-containers-apps-on-azure-kubernetes/ba-p/3269767
The other thing you can try is to use kubectl exec to run a command inside the container and get its output.
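For reference, a minimal sketch of both approaches (the pod name, namespace and log path are placeholders):

    # Whatever the container writes to stdout (empty until Log Monitor is wired up).
    kubectl logs my-owin-pod -n my-namespace
    # Logs from the previous container instance, useful if the pod restarted.
    kubectl logs my-owin-pod -n my-namespace --previous

    # Run a command inside the Windows container and read a log file directly.
    kubectl exec -it my-owin-pod -n my-namespace -- cmd /c "type C:\app\logs\app.log"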

All Docker stacks are restarting automatically

I have a multi-service environment hosted with Docker Swarm. There are multiple stacks, and every running container hosts a Spring Boot application. The issue is that all my stacks get restarted on their own. Now, I know that in the compose file I set restart_policy to on-failure, hence the automatic restart. The problem is that when the services restart, I get errors from a particular service, and this breaks everything.
I am not able to figure out what actually happens.
I did quite a lot of research and found out about these things.
The Docker daemon is not restarting; I double-checked this with the daemon's uptime.
I checked docker service ps <Service_ID>, and I can see the service showing Shutdown and Starting, but no other information.
I checked docker service logs <Service_ID>, but there is no error there either.
I checked for a resource crunch; I can assure you that there were plenty of resources available at the host level as well as at each container level.
Can someone help with where exactly to find logs for this event? Any other thoughts on this?
My host is actually a VM hosted on VMware vCenter.
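For reference, roughly the commands behind the checks above (the service name is a placeholder):

    # Has the Docker daemon itself restarted?
    systemctl status docker --no-pager

    # Task history for one service: shows the Shutdown/Starting transitions and their timestamps.
    docker service ps my_stack_my_service --no-trunc

    # Service logs with timestamps, to line restarts up against other events.
    docker service logs --timestamps my_stack_my_service

    # Container-level events on this node (streams; Ctrl-C to stop).
    docker events --since 24h --filter type=container --filter event=die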
After a lot of research and going through all the Docker logs, I could not find the solution. Later on, I discovered that a memory snapshot was being taken for backup every 24 hours.
Here is what I observe:
Whenever we take a snapshot, all Docker services running on the host restart automatically. There are no errors; they just restart gracefully.
I found some existing questions describing this problem with VMware snapshots.
As far as I know, when we take a memory snapshot the VM's memory is redirected to a different location and the previous state is preserved. I have not been able to find out exactly why this causes the restarts, but yes, this was the root cause of the problem. If anyone is a VMware snapshots expert, please let us know.
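If you suspect the same cause, one way to check (assuming a systemd-based Linux host with VMware Tools running as vmtoolsd) is to line the restarts up against the backup window:

    # Docker engine messages around the snapshot time.
    sudo journalctl -u docker.service --since "24 hours ago"

    # VMware Tools activity; snapshot/quiesce operations are usually logged here.
    sudo journalctl -u vmtoolsd --since "24 hours ago"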

Docker build hangs immediately (with minikube and Ubuntu, with not many files in the directory)

Upon a fresh start of my Ubuntu (in a VirtualBox VM), I can build my images normally. Then, very inconsistently, it can be the next time I try to build or the 10th time, docker build . will hang forever.
The Dockerfiles are in directories with 5-10 other files (which rules out the issue of a massive number of files slowing Docker down while it locates the Dockerfile, as seen in other posts).
If I try to build a new, very simple Dockerfile (to eliminate any syntax error), it also hangs whenever it hangs with my project's Dockerfiles.
Besides, I am running minikube with --driver=none and my images are used for deployments in Kubernetes (with the none driver it's not required to run eval $(minikube docker-env)).
The only reliable fix is to stop the VM my Ubuntu runs on and start it again; it will then consistently let me build my images at least once, before the issue comes back inconsistently.
This fix is quite inconvenient, as I need to stop everything I am doing, and it takes a bit of time.
I have tried running docker system prune and deleting all the images already built.
What logs could I check to find out what is going on when the build hangs?
Any idea of the origin of the issue?
Thanks a lot!
OK, this bug was vicious.
Sometimes, when I need to check how my nginx server behaves in one of the containers, I open the VM's graphical interface and start Firefox to have a look.
I only figured out today that Firefox prompts a pop-up after a while, asking for the admin password in order to access the keychain. It turns out Docker does not build anything while this pop-up is open. Closing it or filling in the password fixed my issue...
In another terminal window, check the Docker daemon log using sudo journalctl -fu docker.service while the build command is hanging. You can also check the list of running processes during the build.
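A rough sketch of those checks (standard commands, nothing specific to this setup):

    # Follow the Docker daemon log while the hang is reproduced in another terminal.
    sudo journalctl -fu docker.service

    # See what docker/containerd processes are doing at that moment.
    ps aux | grep -E 'docker|containerd'

    # Confirm the daemon still answers API calls at all.
    docker info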

Zombie folders coming back from the dead with Docker running on WSL2

This issue is really strange, but I have no idea how to resolve it. I'm running Windows 10 with WSL2 on it. When I start Docker with the "WSL 2 based engine", it brings back some deleted folders out of nowhere. I delete them every single time, and after I've restarted my PC and started Docker again, the folders are back.
There are absolutely no Docker containers running (docker ps returns nothing), so it can't be some rogue volume definition being run along with a container. The folders also only appear once I start up Docker.
The directory inside which the zombie folders appear is also the source of a Mutagen volume when the containers are running, but as I said, no containers are running.
I ran into the same problem and resolved it by running Troubleshoot from Docker Desktop for Windows and purging the WSL 2 data.
I've not found a way to selectively clear just the one folder that keeps getting raised from the dead.
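Before purging, it can be worth confirming that nothing on the Docker side still references the directory; a rough checklist (all standard commands, the volume name is a placeholder):

    # Any container, running or stopped, that might recreate the folder?
    docker ps -a

    # Any leftover named volumes, and where do they point?
    docker volume ls
    docker volume inspect my-volume

    # Shut down the whole WSL backend; Docker Desktop restarts it on demand.
    wsl --shutdown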

Docker Desktop Kubernetes Unable to connect to the server: EOF

Earlier today I increased my Docker Desktop resources, but ever since it restarted, Kubernetes has not been able to complete its startup. Whenever I try to run a kubectl command, I get Unable to connect to the server: EOF in response.
I had thought it started because I hadn't deleted a Helm chart before adjusting the resource values in Settings, so those resources had been assigned to the pods instead of the Kubernetes API server. But I have not been able to fix the issue.
This is what I have tried thus far:
Restarting Docker again
Resetting Kubernetes
Resetting Docker to factory settings
Deleting the VM in Hyper-V and restarting Docker
Uninstalling and reinstalling Docker Desktop
Deleting the pki folder and restarting Docker
Setting the KUBECONFIG environment variable
Deleting .kube/config and restarting
Another clean reinstall of Docker Desktop
But Kubernetes does not complete its startup, so I still get Unable to connect to the server: EOF in response.
Is there anything I haven't tried yet?
I'll share that what solved this for me was the Docker Desktop settings feature "Reset Kubernetes Cluster". I know that #shenyongo said that a "reset kubernetes" didn't work, and I suppose they meant this.
But for the sake of other readers who may find this, I had this same error message (with Docker Desktop on Windows 11, using WSL 2), and the solution for me was indeed to do this:
open the Settings page (in Docker Desktop, right-click its icon in the status tray)
then choose "Kubernetes" on the left
then choose "Reset Kubernetes Cluster"
Yes, it warns that "all stacks and Kubernetes resources will be deleted", but as nothing else had worked for me (and I wasn't worried about losing much), I tried it, and it did the trick. In moments, all my k8s functionality was back to working.
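After the reset, a quick way to confirm the cluster is healthy again (standard kubectl commands; docker-desktop is the context name Docker Desktop creates by default):

    # Point kubectl at the Docker Desktop cluster and check that the control plane answers.
    kubectl config use-context docker-desktop
    kubectl cluster-info

    # The node and system pods should come back Ready/Running within a few minutes.
    kubectl get nodes
    kubectl get pods -n kube-system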
As background, k8s had been working fine for me for some time; then one day I found I was getting this error. I searched and searched and found lots of folks asking about it but not getting answers, let alone this answer. To be clear, like the OP here I had tried restarting Docker Desktop, restarting the host machine, and even downloading and installing an available Docker Desktop update (I was only a bit behind), and none of those worked. I didn't proceed through ALL the steps shenyongo did, as I thought I'd try this first, and the reset worked.
Hope that may help others. I realize some may fear losing something, but this underscores the power of declarative vs. imperative k8s configuration: it SHOULD be easy to recreate almost everything if necessary. I realize it may not be so for everyone.
