I'm new to Docker, but I know that a Docker container should ideally run only one process. Is it still possible to run a script inside a Docker container repeatedly, like with a cron job?
For example, I have a Python script that manipulates my database. This process should run every hour. For that I have created a container based on a Dockerfile like this:
FROM python:slim
COPY ac.py ac.py
RUN pip install pymongo
CMD [ "python", "./ac.py" ]
If I pull this image from my repository and start it in any environment, the process runs only once.
Is there any possibility to run it like a cron job (without using an Ubuntu image inside my Docker container)?
By the way, I want to deploy this container on Google Cloud. Is there any cloud provider that offers functionality like that?
You could leverage Docker Swarm and create a service with the restart condition set to any and the delay between restarts set to 1h:
docker service create --restart-condition any --restart-delay 1h myPythonImage:latest
See docker service create reference: https://docs.docker.com/engine/reference/commandline/service_create/#options
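Alternatively, if Swarm is not an option, a simple variant is to let the container itself loop and sleep between runs. A minimal sketch based on the Dockerfile from the question (the one-hour sleep loop is an assumption, not the original CMD):

```dockerfile
FROM python:slim
COPY ac.py ac.py
RUN pip install pymongo
# Re-run the script every hour instead of exiting after a single run
CMD ["sh", "-c", "while true; do python ./ac.py; sleep 3600; done"]
```

The trade-off is that the container stays up permanently, but it needs no external scheduler.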
Related
I am considering implementing Airflow and have no prior experience with it.
I have a VM with Docker installed, and two containers running on it:
a container with a Python environment where cron jobs currently run
a container with an Airflow installation
Is it possible to use Airflow to run a task in the Python container? I am not sure, because:
If I use the BashOperator with a command like docker exec mycontainer python main.py, I assume it will mark the task as a success even if the Python script fails (it successfully ran the command, but its responsibility ends there).
I see there is a DockerOperator, but it seems to take an image and then create and run a new container, whereas I want to run a task on a container that is already running.
The closest answer I found is using kubernetes here, which is overkill for my needs.
The BashOperator runs the bash command on:
the scheduler container if you use the LocalExecutor
one of the executor containers if you use the CeleryExecutor
a new separate pod if you use the KubernetesExecutor
The DockerOperator, on the other hand, is designed to create a new Docker container on a Docker server (local or remote), not to manage an existing container.
To run a task (command) on an existing container (or any other host), you can set up an SSH server within the Python docker container, then use the SSHOperator to run your command on the remote SSH server (the Python container in your case).
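As a rough sketch, the Python container's image could add an SSH daemon like this (the package names and key handling are assumptions; you would still need to install authorized keys and create an Airflow SSH connection pointing at this container):

```dockerfile
FROM python:slim
# Hypothetical: install an SSH server so Airflow's SSHOperator can reach in
RUN apt-get update && apt-get install -y --no-install-recommends openssh-server \
    && mkdir -p /var/run/sshd
COPY main.py /app/main.py
EXPOSE 22
# Run sshd in the foreground as the container's main process
CMD ["/usr/sbin/sshd", "-D"]
```

The SSHOperator then runs something like python /app/main.py over that connection, and the task's success or failure follows the remote command's exit code.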
In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One of the services is MySQL, and when I launch the container I don't see the MySQL service start. When I try to start it manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos:systemd image and that doesn't help either. How do I launch the container with the services started, using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. SystemD) doesn’t get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try to change the command you use to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
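Concretely, that could look something like this (using the image name from the question; --privileged is needed for systemd, as noted elsewhere in this thread):

```
# Start the container with the init system as PID 1 instead of bash
docker run -d --privileged --name mariadb pbellamk/mariadb /sbin/init

# Then open a shell inside it to check that the services came up
docker exec -it mariadb sh
```

Note that passing /sbin/init overrides the CMD from the Dockerfile, so start.sh will not run in this mode; services should be enabled in the image instead.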
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container, and in no way stops you from doing that, you will eventually run into areas where multiple services in one container don't quite map to what Docker or external tools expect. Features like service scaling, or Docker Swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
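For example, a minimal docker-compose.yml combining prebuilt httpd and mariadb images might look like this (the service names and root password are placeholders):

```yaml
version: "3"
services:
  web:
    image: httpd:2.4
    ports:
      - "80:80"
  db:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: changeme
```

Each service runs as its own container with its own PID 1, so no init system is needed inside either image.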
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
The s6-overlay project provides a more Docker-friendly process management system using s6.
It's fairly rare that you actually need SSH access into a container, but if that's a hard requirement then you are going to be stuck building your own containers and using a process manager.
You can avoid running a systemd daemon inside a docker container altogether. You can even avoid writing a special start.sh script; that is another benefit of using the docker-systemctl-replacement script.
The systemctl.py script can parse the normal *.service files to know how to start and stop services. If you register it as the CMD of an image, it will look for all systemctl-enabled services; those will be started and stopped in the correct order.
The current test suite includes test cases for the LAMP stack, including CentOS, so it should run fine in your setup specifically.
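A sketch of what that could look like for the CentOS/MariaDB Dockerfile from the question (the idea follows the project's README; check the current version of the project for the exact way to obtain systemctl.py):

```dockerfile
FROM centos:7
RUN yum -y install mariadb mariadb-server
# Replace the normal systemctl with the docker-systemctl-replacement script
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable mariadb
# Running systemctl.py as CMD starts all enabled services in order
CMD ["/usr/bin/systemctl"]
```

This keeps the container runnable without --privileged and without a real systemd daemon.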
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image, based on the stock Ubuntu image, with systemd and multi-user mode.
My use case is the first one mentioned in its README. I use it to test the installer script of my application, which is installed as a systemd service. The installer creates a systemd service, then enables and starts it. I need CI tests for the installer: the test should build the installer, install the application on Ubuntu, and connect to the service from outside.
Without systemd the installer would fail, and it would be much more difficult to write the test with Vagrant. So there are valid use cases for systemd in Docker.
I got stuck and need help. I have set up multiple stacks on Docker Cloud. The stacks are running multiple containers: data, mysql, web, elasticsearch, etc.
Now I need to run commands on the web containers. Before Docker I did this with a cron job, e.g.:
*/10 * * * * php /var/www/public/index.php run my job
But my web Dockerfile ends with
CMD ["apache2-foreground"]
As I understand the Docker concept, running two commands in one container would be bad practice. But how would I schedule a job like the cron job above?
Should I start cron in the CMD too, something like:
CMD ["cron", "apache2-foreground"] (cron should exit with 0 before Apache starts)
Should I make a start up script running both commands?
In my opinion the smartest solution would be to create another service, like the dockercloud haproxy one, to which the other services are linked.
Then the cron service would exec commands that are defined in the Stackfile of the linked containers/stacks.
Thanks for your help
With Docker in general, I see three options:
run your cron process in the same container
run your cron process in a different container
run cron on the host, outside of docker
For running cron in the same container you can look into https://github.com/phusion/baseimage-docker
Or you can create a separate container whose only running process is the cron daemon. I don't have a link handy for this, but such images are out there. Then you use the cron invocations to connect to the other containers and call what you want to run. With an Apache container that should be easy enough: just expose some minimal HTTP API endpoint that does what you want when it's called (make sure it's not vulnerable to any injections, i.e. don't pass any arguments; keep it simple, stupid).
If you have control of the host as well, then you can (ab)use the cron daemon running there (I currently do this with my containers). I don't know Docker Cloud, but something tells me this might not be an option for you.
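For the separate-container variant, the crontab could simply call an HTTP endpoint on the web container, as described above. The URL, hostname, and schedule below are placeholders:

```
# crontab in a minimal cron-only container
# every 10 minutes, ask the linked web container to run the job
*/10 * * * * curl -fsS http://web/run-my-job > /dev/null 2>&1
```

This keeps Apache and cron in separate containers, each with a single responsibility.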
I have two Docker images, let's say:
Image 1: a public API built with Python Flask
Image 2: some functional tests written in Python
I am looking for an option where, when the API in the Image 1 container is posted with a specific param, the Image 1 container triggers a docker run of Image 2.
Is it possible to trigger a docker run from inside a Docker container?
Thanks
You are talking about using Docker in Docker
Check out this blog post for more info about how it works:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
In short, you need to mount the Docker socket as a volume (and, as of Docker 1.10, its dependencies as well); then you can run Docker in Docker.
But it seems like what you are trying to do does not necessarily require that. You should rather look into making your 'worker' an actual HTTP API with an endpoint you can call to trigger the parametrized work. That way you run one container that waits for work requests and runs them, instead of starting a new container each time you need a task done.
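A minimal sketch of such a worker API using only the Python standard library (the endpoint path and JSON shape are made up for illustration; a real service might use Flask and actually start the functional tests inside the handler):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class WorkerHandler(BaseHTTPRequestHandler):
    """Accepts a POST with parameters and 'triggers' the work in-process."""

    def do_POST(self):
        if self.path != "/run-tests":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(length) or b"{}")
        # A real worker would start the functional tests here with these params,
        # instead of launching a whole new container for each request.
        result = {"status": "accepted", "suite": params.get("suite", "default")}
        body = json.dumps(result).encode()
        self.send_response(202)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep the server quiet; remove this override to get access logs.
        pass


def make_server(port=0):
    """Bind the worker API on localhost; port=0 picks a free port."""
    return HTTPServer(("127.0.0.1", port), WorkerHandler)
```

The Flask API from Image 1 would then just POST to this long-running worker container instead of shelling out to docker run.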
My goal is to use Docker to create a mail setup running Postfix + Dovecot, fully configured and ready to go (on Ubuntu 14.04), so I could easily deploy it on several servers. As far as I understand Docker, the process to do this is:
Spin up a new container (docker run -it ubuntu bash).
Install and configure postfix and dovecot.
If I need to shut down and take a break, I can exit the shell and return to the container via docker start <id> followed by docker attach <id>.
(here's where things get fuzzy for me)
At this point, is it better to export the image to a file, import on another server, and run it? How do I make sure the container will automatically start postfix, dovecot, and other services upon running it? I also don't quite understand the difference between using a Dockerfile to automate installations vs just installing it manually and exporting the image.
Configure multiple docker images using Dockerfiles
Each Docker container should run only one service, so one container for Postfix, one for Dovecot, and so on. Your running containers can communicate with each other.
Build those images
Push those images to a registry so that you can easily pull them on different servers and have the same setup.
Pull those images on your different servers.
You can pass ENV variables when you start a container to configure it.
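Sketched as commands, those steps might look like this (the registry, image names, and ENV variable are placeholders):

```
# Build one image per service from its own Dockerfile
docker build -t registry.example.com/mail/postfix:1.0 ./postfix
docker build -t registry.example.com/mail/dovecot:1.0 ./dovecot

# Push them so every server can pull the identical setup
docker push registry.example.com/mail/postfix:1.0
docker push registry.example.com/mail/dovecot:1.0

# On each target server: pull and start, configuring via ENV variables
docker pull registry.example.com/mail/postfix:1.0
docker run -d -e MYHOSTNAME=mail.example.com registry.example.com/mail/postfix:1.0
```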
You should not install something directly inside a running container; this defeats the purpose of having a reproducible setup with Docker.
Your step #2 should instead be RUN entries inside a Dockerfile, which is then used with docker build to create an image.
This image could then be used to start and stop running containers as needed.
See the Dockerfile RUN entry documentation. This is usually used with apt-get install to install needed components.
The ENTRYPOINT in the Dockerfile should be set to start your services.
In general, it is recommended to have just one process per container.
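Putting the RUN and ENTRYPOINT advice together, a Dockerfile for a single-service Postfix image might be sketched like this (the package, the config file, and the foreground start command are illustrative and may differ for your Postfix version):

```dockerfile
FROM ubuntu:14.04
# Step #2 from the question becomes RUN entries baked into the image
RUN apt-get update && apt-get install -y postfix
COPY main.cf /etc/postfix/main.cf
# The ENTRYPOINT starts the one service this image is responsible for
ENTRYPOINT ["postfix", "start-fg"]
```

Dovecot would get its own Dockerfile of the same shape, and the two containers would be linked or networked together.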