I have two containers running on Ubuntu Server 22.04 LTS.
One of them runs Selenium Grid, and the second is a Python container that connects to the Selenium container mentioned above.
How can I get these two containers restarted correctly after a system poweroff or reboot?
I tried this:
docker update --restart [on-failure|always|unless-stopped] container_grid
docker update --restart [on-failure|always|unless-stopped] container_python
The Selenium Grid container restarts correctly, but the Python container keeps restarting in a loop.
My guess is that it cannot, for some reason, establish a connection to the Selenium container, so it exits with code 1 and keeps restarting.
How can I avoid this? Is there a solution that adds a delay or sets the order in which the containers restart after the system comes up? Or should I simply add some delay in the Python code, because there is no simple solution to this?
I am not a software developer but an automation engineer, so could somebody help me with a solution? Maybe it would be Docker Compose or something else.
Thanks in advance.
So, I solved this problem via crontab.
The Selenium container starts in accordance with the "--restart on-failure" option.
My Python container starts with a delay, in accordance with this crontab entry:
@reboot sleep 20 && docker start [python_container]
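For anyone who would rather go the Docker Compose route the question mentions, a minimal sketch could look like the following (the image and service names are placeholders for your setup; note that depends_on only orders startup, it does not wait for the Grid to be ready, so keeping a restart policy or a retry loop in the Python code is still advisable):

version: "3"
services:
  grid:
    image: selenium/hub
    restart: on-failure
  python_app:
    image: my_python_image    # placeholder for your Python image
    depends_on:
      - grid                  # start the Grid container first
    restart: on-failure       # retry until the connection succeeds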
Related
I would like to stop my running docker container after a specific time, let's say 2 hrs after startup. So far my research has led to the following solutions. I just wanted to know if there were better ways to do it.
Use a cron job to stop the container by calling the docker stop command.
Use an entry point like sleep 5000, but this does not suit my use case.
Use --stop-timeout in the docker run command; I believe this is just the maximum timeout given for the container to shut down gracefully. Am I missing something here?
You can use the timeout command that's part of the coreutils package, which is already installed in the Debian images (and probably many others).
This will run the container for 30 seconds and then stop it:
docker run debian timeout 30 tail -f /dev/null
Basically, add timeout 7200 in front of the command you want to run in the container, and it'll be killed after 2 hours.
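So for the two-hour case from the question, a hedged sketch (my-image and my-command are placeholders for your image and its usual command) would be:

docker run my-image timeout 7200 my-command

Note that GNU timeout exits with status 124 when the time limit is reached, so the stopped container will show a non-zero exit code in docker ps -a.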
I have two nodes with Docker. Zookeeper, Mesos, and Spark are installed in each Docker container. I specified the slaves in the "slaves" file so that I have just one master and one slave. Also, I have a "docker-compose.yaml" on each node at the same path. I run "docker-compose up" on each node. Then on the master node, inside Docker, I run the dispatcher:
/home/spark/sbin/start-mesos-dispatcher.sh --master mesos://150.20.11.136:5050
After that, I run my program with this command:
/home/spark/bin/spark-submit --name test_mesos --master mesos://150.20.11.136:5050 --executor-cores 4 --executor-memory 6G --files iran2.npy --py-files a.zip myprogram.py
When running my program, I get this error:
Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
I have searched a lot. I disabled the firewall; the first time it worked, but now it does not work at all. Moreover, I opened all the ports in Docker via "expose" in the docker-compose file. I decreased the resources in the submit command. But none of these solved my problem.
Would you please tell me what I am doing wrong?
Any help would be appreciated.
Thanks in advance.
I ran Docker with this command and my program ran without any errors. But it took a lot of time; I do not know, maybe that is because of Mesos.
sudo docker run --network host -it ubuntu_mesos_spark_python3.6_oraclient
I hope this helps solve others' problems.
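If you would rather keep the docker-compose.yaml workflow from the question, the compose equivalent of --network host is network_mode; a minimal sketch (the service name is a placeholder, and note that host networking cannot be combined with port mappings):

services:
  spark_node:
    image: ubuntu_mesos_spark_python3.6_oraclient
    network_mode: host    # share the host's network stack, as --network host does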
I pulled the centos6 image and made a container from it. I got its bash with:
$ docker run -i -t centos:centos6 /bin/bash
On the centos6 container, I could use the "service" command without any problem. But when I pulled and used the centos7 image:
$ docker run -i -t centos:centos7 /bin/bash
Neither "service" nor "systemctl" worked. The error message is:
Failed to get D-Bus connection: Operation not permitted
My question is:
1. How are people developing without "service" and "systemctl" commands?
2. If I want to use, for example, httpd.service on the centos7 container, what should I do? Or maybe running services on a container is not recommended?
There is no process supervisor running inside either container. The service command in your CentOS 6 container works by virtue of the fact that it just runs a script from /etc/init.d, which by design ultimately launches a command in the background and returns control to you.
CentOS 7 uses systemd, and systemd is not running inside your container, so there is nothing for systemctl to talk to.
In either situation, using the service or systemctl command is generally the wrong thing to do: you want to run a single application, and you want to run it in the foreground, so that your container continues to run (from Docker's perspective, a command that goes into the background has exited, and if that was pid 1 in the container, the container will exit).
How are people developing without "service" and "systemctl" commands?
They are starting their programs directly, by consulting the necessary documentation to figure out the appropriate command line.
If I want to use, for example, httpd.service on the centos7 container, what should I do? Or maybe running services on a container is not recommended?
You would start the httpd binary using something like:
CMD ["httpd", "-DFOREGROUND"]
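In a complete minimal Dockerfile, that might look like the sketch below (assuming the stock Apache httpd package from the CentOS repositories):

FROM centos:centos7
RUN yum -y install httpd && yum clean all   # install Apache
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]               # run in the foreground as PID 1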
If you would like to stick with the service/systemctl commands to start/stop services, then you can do that in a centos7 container by using the docker-systemctl-replacement script.
I had some deployment scripts that were using the service start/stop commands on a real machine, and they work fine with a container, without any further modification. When you put the systemctl.py script into the CMD, it will simply start all enabled services, somewhat like the init process on a real machine.
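A hedged sketch of that approach in a Dockerfile, assuming the systemctl.py script from the docker-systemctl-replacement repository has been copied next to the Dockerfile (check the project's README for the exact paths and options it currently recommends):

FROM centos:centos7
RUN yum -y install httpd
COPY systemctl.py /usr/bin/systemctl   # replace the real systemctl
RUN systemctl enable httpd             # the script records enabled services
EXPOSE 80
CMD ["/usr/bin/systemctl"]             # starts all enabled services, init-like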
systemd is included but not enabled by default in the CentOS 7 Docker image. It is mentioned on the repository page, along with steps to enable it.
https://hub.docker.com/_/centos/
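Condensed from that page (see the link for the full Dockerfile, which also prunes unit files that cannot work in a container), the idea is a systemd-enabled image whose init is /usr/sbin/init:

FROM centos:7
ENV container docker          # tells systemd it is inside a container
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]        # systemd as PID 1

Built and then run with the host's cgroup filesystem mounted read-only, something like (my-systemd-image is a placeholder for whatever you tag the build as):

docker run -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro my-systemd-image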
I have a container called sqlcontainer1.
The image is "microsoft/mssql-server-linux:2017-latest".
I restored a .bak file to it and now I can use it for local development.
And I can even see it from SSMS (SQL Server Management Studio). Great!
The problem is that after I reboot, the container status says "Exited".
The only way I can see to restart it is to type:
docker start -ai sqlcontainer1
Then no command prompt is ever returned, so I have to open another command prompt and type:
docker ps -a
to see the status is now "UP 7 minutes".
OK, I'm glad it's up, and I can now connect back with SSMS and work from there (although I am wondering why it says 7 minutes when I've only had it up for seconds).
Good.
But there has to be a better way.
I just want two commands like this:
docker start containerName
docker stop containerName
Is there anything like this?
If I can get that far then I would like to look into a proper restart policy.
You can set a container to restart=always when you create it, or afterwards you can update it with:
docker update --restart=always <container>
Then the container will always run on startup of your computer.
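Putting that together for the container in the question: docker start without the -ai flags starts the container detached, so the two simple commands you asked for already behave the way you want:

docker update --restart=always sqlcontainer1   # survive reboots from now on
docker start sqlcontainer1                     # starts detached, prompt returns
docker stop sqlcontainer1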
Like most docker users, I periodically need to connect to a running container and execute various arbitrary commands via bash.
I'm using 17.06-CE with an Ubuntu 16.04 image, and as far as I understand, the only way to do this without installing ssh into the container is via docker exec -it <container_name> bash
However, as is well-documented, for each bash shell process you generate, you leave a zombie process behind when your connection is interrupted. If you connect to your container often, you end up with 1000s of idle shells - a most undesirable outcome!
How can I ensure these zombie shell processes are killed upon disconnection - as they would be over ssh?
One way is to make sure a Linux init process runs in your container.
In recent versions of Docker there is an --init option to docker run that should do this. It uses tini as the init process; tini can also be used on its own in previous versions.
Another option is something like the phusion-baseimage project, which provides a base Docker image with this capability and many others (it might be overkill).
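A concrete sketch of the --init route (assuming Docker 1.13 or newer; the image and container name are illustrative):

docker run -d --init --name myshellbox ubuntu:16.04 sleep infinity
docker exec -it myshellbox bash

With --init, tini runs as PID 1 inside the container and reaps the orphaned bash processes that interrupted docker exec sessions leave behind.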