It's my first time working with macOS. A MacBook Pro serves as a replacement for my server.
I want to run a Python script stored inside a Docker image from a cron job every hour.
I am familiar with cron syntax.
My crontab -e file looks like:
0 * * * * docker run -d --rm ImageName
Sadly, no image or container is started and I don't know why. I tried it with sudo and other things as well. My cron daemon is running.
Any ideas why?
My second question would be:
Is it a better approach to create a new container every hour and destroy it afterwards, or to reuse the same container every hour?
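One common cause worth checking (this is an assumption, not something visible in the question): cron runs jobs with a minimal PATH, and the docker CLI installed by Docker Desktop on macOS typically lives in /usr/local/bin, which cron's default PATH may not include. A crontab sketch using an absolute path, with output redirected to a log file for debugging:

```
# Sketch: adjust the path to whatever `which docker` prints on your machine.
0 * * * * /usr/local/bin/docker run -d --rm ImageName >> /tmp/docker-cron.log 2>&1
```

Checking the log file after the next full hour should show either the container ID or the actual error cron ran into.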
Related
I have two containers running on Ubuntu Server 22.04 LTS.
One of them is Selenium Grid and the second is a Python container that works in connection with the aforementioned Selenium container.
How can I get these two containers to restart correctly after a system poweroff or reboot?
I tried this:
docker update --restart [on-failure|always|unless-stopped] container_grid
docker update --restart [on-failure|always|unless-stopped] container_python
The Selenium Grid container restarts correctly, but the Python container keeps restarting in a loop.
I suppose it cannot, for some reason, establish a connection to the Selenium container, exits with code 1, and keeps restarting.
How can I avoid this? Maybe there is a solution that adds a delay or sets the order in which containers restart after the system is turned on? Or should I simply add a delay in the Python code, because there is no simple solution to this?
I am not a software developer but an automation engineer, so could somebody help me with a solution? Maybe it would be Docker Compose or something else.
Thanks in advance.
Solved this problem via crontab.
Selenium container starts in accordance with "--restart on-failure" option.
My Python container starts with a delay, in accordance with this crontab entry (note it must be @reboot, not #reboot, which cron would treat as a comment):
@reboot sleep 20 && docker start [python_container]
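Since the question mentions Docker Compose: an alternative to a fixed sleep is to let Compose order the startup with a health check, so the Python container only starts once the Grid actually responds. A minimal sketch (service names, images, and the health-check command are assumptions, not taken from the question; the curl check requires curl inside the Grid image):

```yaml
# docker-compose.yml sketch (hypothetical names and images)
services:
  selenium-grid:
    image: selenium/standalone-chrome
    restart: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4444/wd/hub/status"]
      interval: 10s
      retries: 6
  python-worker:
    build: .
    restart: on-failure
    depends_on:
      selenium-grid:
        condition: service_healthy
```

With restart policies set in the Compose file, both services also come back after a reboot (once the Docker daemon starts), without a crontab entry.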
I would like to stop my running docker container after a specific time, let's say 2 hrs after startup. So far my research has led to the following solutions. I just wanted to know if there were better ways to do it.
Use a cron job to stop the container by calling the docker stop command.
Use an entry point like sleep 5000, but this does not suit my use case.
Using --stop-timeout in the docker run command; I believe this is just the maximum time given for the container to gracefully shut down. Am I missing something here?
You can use the timeout command that's part of the coreutils package, which is already installed in the Debian images (and probably many others).
This will run the container for 30 seconds and then stop it:
docker run debian timeout 30 tail -f /dev/null
Basically, add timeout 7200 in front of the command you want to run in the container, and it'll be killed after 2 hours.
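The behavior of timeout can be checked outside Docker as well; a minimal sketch (plain coreutils, no container involved) showing that timeout kills the command and reports it via its exit status:

```shell
# Run a 5-second sleep but kill it after 1 second.
timeout 1 sleep 5
# GNU timeout exits with status 124 when it had to kill the command.
echo "exit status: $?"
```

Inside the container the effect is the same: the wrapped process receives SIGTERM after the given duration, the process tree ends, and the container stops.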
I have a container called sqlcontainer1.
The image is "microsoft/mssql-server-linux:2017-latest".
I restored a .bak file to it and now I can use it for local development.
And I can even see it from SSMS (SQL Server Management Studio). Great!
The problem is that after a reboot the container status says "Exited".
The only way I can see to restart is to type:
docker start -ai sqlcontainer1
Then no command prompt is ever returned, so I have to open another command prompt and type:
docker ps -a
to see that the status is now "Up 7 minutes".
OK, I'm glad it's up, and I can now connect back with SSMS and work from there (although I am wondering why it says 7 minutes when I've only had it up for seconds).
Good.
But there has to be a better way.
I just want two commands like this:
docker start containerName
docker stop containerName
Is there anything like this?
If I can get that far then I would like to look into a proper restart policy.
You can set a container to restart=always when you create it, or afterwards you can update it with:
docker update --restart=always <container>
Then the container will always run on startup of your computer.
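If you later move this setup into Docker Compose, the same restart policy can live in configuration instead of a one-off command. A minimal sketch (the service name and environment values are placeholders, not taken from the question; ACCEPT_EULA and SA_PASSWORD are required by the SQL Server image):

```yaml
# docker-compose.yml sketch
services:
  sqlcontainer1:
    image: microsoft/mssql-server-linux:2017-latest
    restart: always
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "yourStrong(!)Password"   # placeholder, use your own
    ports:
      - "1433:1433"
```

Then docker compose up -d starts it detached (prompt returned immediately), and docker compose stop stops it.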
Related to
docker container started in Detached mode stopped after process execution
https://serverfault.com/questions/661909/the-right-way-to-keep-docker-container-started-when-it-used-for-periodic-tasks
I do understand the difference between docker run and create + start, but I don't understand how the containers created in these two ways actually differ.
Say I create and run a container with
docker run -dit debian:testing-slim
and then stop it. The created container can later be started with
docker start silly_docker_name
and it'll run in the background, because the entry command for the image is bash.
But when a container is first created
docker create --name silly_name debian:testing-slim
and then started with
docker start silly_name
then it exits immediately. Why isn't bash started, or why does it exit in this case?
The difference for a container process that is a shell (like bash in your Debian example) is that a shell started without a terminal and interactive mode exits without doing anything.
You can test this by changing the command of a created container to something that doesn't require a terminal:
$ docker create --name thedate debian date
Now each time I run the thedate container, it outputs the date (in its logs) and exits. docker logs thedate will show this; one entry for each run.
To be explicit, your docker run command has flags -dit: detached, interactive (connect STDIN), and tty are all enabled.
If you want a similar approach with create & start, then you need to allocate a tty for the created container:
$ docker create -it --name ashell debian
Now if I start it, asking to attach interactively, I get the same behavior as run:
$ docker start -ai ashell
root@6e44e2ae8817:/#
NOTE: [25 Jan 2018] Edited to add the -i flag on create, as a commenter noted that, as originally written, this did not work because the container metadata did not have stdin connected at the create stage.
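The underlying shell behavior can be reproduced without Docker at all; a minimal sketch showing that a non-interactive bash whose stdin is at end-of-file simply exits, which is the same reason a container whose command is bash needs -i (keep stdin open) and -t (allocate a tty) to stay alive:

```shell
# bash reads EOF immediately from /dev/null and exits cleanly,
# just like bash in a container created without -i.
bash < /dev/null
echo "bash exited with status $?"
```

With -i, stdin stays open, so bash waits for input instead of hitting EOF and exiting.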
I am new to docker and I tried to run the linuxconfig/lemp-php7 image. Everything worked fine and I could access the nginx web server installed on the container. To run this image I used this command:
sudo docker run linuxconfig/lemp-php7
When I tried to run the image with the following command to gain access to the container through bash, I couldn't connect to nginx and got a connection refused error. Command: sudo docker run -ti linuxconfig/lemp-php7 bash
I tried this several times so I'm pretty sure it's not any kind of coincidence.
Why does this happen? Is this a problem specific to this particular image, or is it a general problem? And how can I gain access to the shell of the container and access the web server at the same time?
I'd really like to understand this behavior to improve my general understanding of docker.
docker run runs the specified command instead of what that container would normally run. In your case, that appears to be supervisord, which presumably in turn runs the web server, so you're preventing any of that from happening.
My preferred method (except when I'm debugging a container that won't even start properly) is to run the container normally and then do the following:
docker exec -i -t $CONTAINER_ID /bin/bash