How to stop DockerOperator after another one is completed Airflow - docker

So, basically I have an Airflow DAG that is as follows:
Operator T1 executes a container that listens on a port forever
Operator T2 executes a container that uses the container from T1 inside a Python script
Operator T3 executes a Python script that doesn't need the other two containers, but needs to run after T2
The problem is: how can I stop the container from Operator T1 after Operator T2 has finished its task (failed or succeeded)?
Basically I have the following graph:
[T1, T2]
T2 >> T3
One solution would be to add a fourth operator that kills the first container, e.g. using
docker stop container_name
But I don't know how to do this.
The problem is that, since T1 runs forever, the DAG won't finish.
PS: I can't set a time limit; I don't know how long T2 will take.

Why don't you run your docker stop container_name inside a BashOperator?
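A minimal sketch of that idea, assuming the T1 container is started with a fixed, known name (t1_listener, the image names and the script path are all placeholders) and that your DockerOperator version supports the container_name argument; the cleanup task uses trigger_rule="all_done" so it runs whether T2 succeeds or fails. Exact import paths depend on your Airflow version:
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.docker.operators.docker import DockerOperator

with DAG("t1_t2_t3", start_date=datetime(2021, 1, 1), schedule_interval=None, catchup=False) as dag:
    # T1 has no dependencies, it simply starts with the DAG run, matching [T1, T2] above;
    # it is started with a known container name so it can be stopped later
    t1 = DockerOperator(task_id="t1", image="listener:latest", container_name="t1_listener")
    t2 = DockerOperator(task_id="t2", image="worker:latest")
    t3 = BashOperator(task_id="t3", bash_command="python /path/to/script.py")
    # stop_t1 runs as soon as T2 is done, regardless of success or failure
    stop_t1 = BashOperator(
        task_id="stop_t1",
        bash_command="docker stop t1_listener",
        trigger_rule="all_done",
    )

    t2 >> t3
    t2 >> stop_t1
This assumes the Airflow worker running stop_t1 can reach the Docker daemon that hosts the T1 container; once that container is stopped, the T1 task itself also ends (typically marked failed because of the non-zero exit code), so the DAG run can finish.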

Related

Swarm | Start a service to execute just one time then be deleted

I have a Swarm cluster and a service that just creates a container, executes one script and then deletes the container. Currently it works fine: the container executes the script and then deletes itself after the execution. But after a few seconds the service restarts the container and executes the script again.
But I would like to delete the service after the first execution, so that the service doesn't do the job several times.
My only idea is that, in the first script that launches the service, I put a timer of about 4 seconds and after that delete the service, like this:
#!/bin/bash
docker stack deploy -c docker-compose.yaml database-prepare
sleep 4
docker service rm database-prepare
But what if one day the service takes longer to execute its script and I cut it off while it is still running? Or what if the service executes once very quickly and then, after 3 seconds, restarts itself for a second execution? I don't want to cut it off...
To resolve this issue, I just adjusted some things in my scripts. First, in the script that creates the temporary container, I sleep 10 seconds and then destroy the container. (10 seconds is quite long, but I want to be sure the container has the time to do the job.)
Secondly, when the container finishes executing its entrypoint script it deletes itself and restarts (because Swarm restarts it), but I don't want my service to go up, down and up again and get cut during the second execution, destroying the job. So I added a sleep to the script that my temporary container runs, so it doesn't keep going down and up and down and up... It executes its job and then waits.
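A rough sketch of the two adjusted scripts (do-the-job.sh and the 10/60 second values are just illustrative placeholders):
# deploy script: give the one-shot container a generous window, then remove the service
docker stack deploy -c docker-compose.yaml database-prepare
sleep 10
docker service rm database-prepare

# entrypoint script inside the temporary container: do the work, then wait instead of exiting
./do-the-job.sh
sleep 60   # long enough for the deploy script above to remove the service first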

Force docker to wait for a specific container during docker-compose restart [duplicate]

This question was closed as a duplicate of Docker Compose wait for container X before starting Y.
I have containers A and B. When I do docker-compose restart I want it to start container A first and then container B. I specified the depends_on directive, but it seems to be ignored (I see it starting container B first).
I am running version 3.4 of the YML file, which has 2 services (A, B).
Thank you
depends_on only waits until a container has reached the running state, not until it is ready. See the official documentation on how to wait until a container is "ready".
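A common workaround is to give A a healthcheck and have B wait until A reports healthy. This is only a sketch: the image names and the healthcheck command are placeholders, and the condition form of depends_on requires a Compose release based on the newer Compose Specification (it was not accepted by the legacy version-3 file format):
services:
  a:
    image: service-a-image
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 5s
      timeout: 3s
      retries: 10
  b:
    image: service-b-image
    depends_on:
      a:
        condition: service_healthy
If that is not available, a wait-for-it style script in B's entrypoint that polls A before starting the real process achieves the same effect.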

What all happens underneath the command Service Update?

When we run the following command, which of the following events does not occur?
$ docker service update --replicas=5 --detach=true nginx1
a) The state of the service is updated to 5 replicas, which is stored in the swarm's internal storage. -- I believe this is true.
b) Docker Swarm recognizes that the number of replicas that is scheduled now does not match the declared state of 5. -- Not sure; ultimately it will check, but how does this happen on the timeline: immediately? periodically?
c) This command checks aggregated logs on the updated replicas. -- I don't think this is true, but I'm not sure.
d) Docker Swarm schedules 5 more tasks (containers) in an attempt to meet the declared state for the service. -- I believe this is true.
D) Docker Swarm recognizes that the number of replicas that is scheduled now does not match the declared state of 5.

start multiple processes in docker container from Dockerfile

I want to start multiple processes p1, p2 ... pn when I start the Docker container. I can achieve that for one process with:
CMD p1
But I want to do that for multiple processes, and I want to run them all in the background. Is there any way to do that?
You could have a start script that executes the processes, e.g.:
Dockerfile
CMD ./start.sh
start.sh
#!/bin/sh
./process-1.sh &
./process-2.sh &
./process-3.sh &
# keep this parent script alive, otherwise Docker stops the container and the background processes with it
wait
It's important to keep the parent process running, otherwise Docker will kill all processes and the container will stop running (that's tripped me up before).
You could alternatively use supervisor or something to that effect.
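If you go the supervisor route, a minimal supervisord.conf sketch could look like this (program names and paths are placeholders); the Dockerfile would then end with something like CMD ["supervisord", "-n"]:
[supervisord]
nodaemon=true

[program:process-1]
command=/app/process-1.sh
autorestart=true

[program:process-2]
command=/app/process-2.sh
autorestart=true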

Running a cronjob or task inside a docker cloud container

I got stuck and need help. I have set up multiple stacks on Docker Cloud. The stacks are running multiple containers like data, mysql, web, elasticsearch, etc.
Now I need to run commands on the web containers. Before Docker I did this with a cronjob, e.g.:
*/10 * * * * php /var/www/public/index.php run my job
But my web Dockerfile ends with
CMD ["apache2-foreground"]
As I understand the Docker concept, running two commands in one container would be bad practice. But how would I schedule a job like the cronjob above?
Should I start cron in the CMD too, something like this?
CMD ["cron", "apache2-foreground"] ( should exit with 0 before apache starts)
Should I make a startup script that runs both commands?
In my opinion the smartest solution would be to create another service, like the dockercloud haproxy one, to which the other services are linked.
Then the cron service would execute commands that are defined in the Stackfile of the linked containers/stacks.
Thanks for your help
With docker in general I see 3 options:
run your cron process in the same container
run your cron process in a different container
run cron on the host, outside of docker
For running cron in the same container you can look into https://github.com/phusion/baseimage-docker
Or you can create a separate container where the only running process inside is the cron daemon. I don't have a link handy for this, but they are out there. Then you use the cron invocations to connect to the other containers and call what you want to run. With an apache container that should be easy enough: just expose some minimal HTTP API endpoint that will do what you want done when it's called (make sure it's not vulnerable to any injections, i.e. don't pass any arguments, keep it simple, stupid).
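The crontab in that cron-only container could then be as simple as this (a sketch; web and /internal/run-my-job are made-up names for the linked web service and whatever endpoint you expose):
*/10 * * * * curl -fsS http://web/internal/run-my-job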
If you have control of the host as well then you can (ab)use the cron daemon running there (I currently do this with my containers). I don't know docker cloud, but something tells me that this might not be an option for you.

Resources