Docker mechanism to run a script on the host? - docker

Does anyone know if it is possible, via a docker-compose file (or any other mechanism), to have docker-compose or docker stack (Swarm) start/execute a shell script that runs on the host docker machine as part of the normal command?
Healthcheck is the closest thing I could find, but that executes within the container, not on the host.
Obviously, one could wrap the docker command in a shell script, but that option is not viable for me.

Related

Restart a docker container from another running container

I am using docker-compose for deployment.
I want to restart my "centos-1" container from the "centos-2" container. Both containers are running on the same host.
How can I achieve this in the simplest, most automated way?
I followed How to run shell script on host from docker container? and tried to run a script on the host from the "centos-2" container, but the script executes inside the container, not on the host.
Script:
#!/bin/bash
sudo docker container restart centos-1
Error:
line 2: docker: command not found
(Docker isn't installed inside the centos-2 container)
You need to:
Install the docker CLI (command line interface) in the second container. Do not confuse this with a full-scale installation: you don't need the docker daemon, only the command-line tool (the docker executable).
Share your host's docker daemon (service) to make it accessible in the second container. That is achieved by simply sharing /var/run/docker.sock when launching the second container, for example:
docker run ... -v "/var/run/docker.sock:/var/run/docker.sock" container2 ...
Now you can execute any docker command, such as docker stop, from the second container, and these commands are happily passed to your main (and only) docker daemon.
There is an approach from the CI context, called Docker-out-of-Docker (DooD), for controlling the host system's Docker daemon from a running container:
You have to install docker inside your container.
Map the docker installation from your system into your container using volumes:
-v /var/run/docker.sock:/var/run/docker.sock
Now each docker command inside your container is executed against the system's docker installation. E.g., if you type docker image list inside your container, you should see the same list as when you type the command on your system.
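As a sketch, the DooD setup above can also be expressed in a docker-compose file (the image and service names here are placeholders, not from the original posts):

```yaml
version: "3"
services:
  ci-agent:
    image: my-ci-agent   # hypothetical image that has the docker CLI installed
    volumes:
      # Share the host's docker daemon socket with the container,
      # so the docker CLI inside talks to the host daemon.
      - /var/run/docker.sock:/var/run/docker.sock
```

With this in place, a `docker ps` run inside `ci-agent` lists the containers of the host daemon.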

How to take automated database dump of rethinkdb inside container?

I am running rethinkdb inside a docker container with a bind mount (bound to the host).
When running rethinkdb without a container, we can take a database dump with a shell script very easily.
But now we are running rethinkdb in a docker container, and I want to take the dump with a shell script.
Since rethinkdb runs inside the container, all rethinkdb commands run inside the container (not on the host system).
So how do I set up automated dumps of rethinkdb running inside the container?
You can use the same script that you use normally when the database is not containerized. You can achieve that by copying the script into the container and then executing it inside.
docker cp dump.sh <container-name>:<script-container-path>
docker exec -it <container-name> <script-container-path>
The above commands copy the dump script into the container and execute it there.
The resulting dump will be inside the container; you can copy it back to the host with docker cp <container-name>:<dump-path> .
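A minimal sketch of such a dump script, assuming the default rethinkdb driver port and example paths (the container name, schedule, and `/data` path are placeholders you would adapt):

```shell
#!/bin/bash
# dump.sh - run inside the rethinkdb container via docker exec
set -e

# timestamped archive name; rethinkdb dump writes a .tar.gz archive
DUMP_FILE="/data/dump_$(date +%Y%m%d_%H%M%S).tar.gz"

# connect to the server on the default driver port and dump all databases
rethinkdb dump -c localhost:28015 -f "$DUMP_FILE"
```

To automate it, a host-side cron entry could run it on a schedule, e.g. `0 2 * * * docker exec my-rethinkdb /scripts/dump.sh` (names illustrative); since `/data` is your bind mount, the dump lands on the host without a separate docker cp.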

How to access a path of a container from `docker-machine`

How can I access a path of a container from docker-machine? I have the docker-machine IP and I want to connect remotely to a docker image, e.g.:
When I connect via ssh docker@5.5.5.5, all the files are the docker-machine's, but I want to connect to a docker image via ssh.
When I use the command docker exec -u 0 -it test bash, all the files from the image are there, but I want to access them with ssh using docker-machine.
How can I do it?
This is tricky, as Docker is designed to run a single process in the foreground, and containers die when that process completes. This means Docker containers don't run anything other than what you define in the Dockerfile or docker-compose.yml.
What you can try, using a docker-compose.yml file, is to expose port 22 to the outside world (this can also be done on the command line or in the Dockerfile). This is NOT guaranteed to work, as it requires the image to run an SSH daemon, and in most cases an image runs a single process.
If you're looking to persist files used by containers, so that a re-deployed container starts where it left off, you can mount a folder from the host machine into the container as a volume.
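Rather than running SSH inside the container, you can usually get the same effect by going through docker-machine itself. A sketch, assuming a machine named `default` and the container `test` from the question:

```shell
# Run a command inside a container on the docker-machine host,
# without the container running its own SSH daemon:
docker-machine ssh default "docker exec -i test ls /app"

# Or point the local docker CLI at the remote daemon, then exec directly:
eval "$(docker-machine env default)"
docker exec -u 0 -it test bash
```

The second form is usually more convenient, since every subsequent docker command in that shell targets the remote machine.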

Execution commands between two dockers containers

I wonder if it's possible to execute commands between two containers (docker exec -it)?
I have a container running Jenkins and another with my web application; after the build I want the Jenkins container to send commands directly to the project container. I would like to avoid using ssh. Is it possible?
You need to mount the docker socket from host to the container you want to run command with. See https://forums.docker.com/t/how-can-i-run-docker-command-inside-a-docker-container/337
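A sketch of that setup, assuming the official Jenkins image and a web container named `my-webapp` (both names are placeholders):

```shell
# Start Jenkins with the host's docker socket mounted, so the docker CLI
# inside the Jenkins container talks to the host daemon:
docker run -d --name jenkins \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts

# Then, from a Jenkins build step (docker CLI must be installed in the image):
docker exec my-webapp ./deploy.sh
```

Note that the Jenkins image needs the docker CLI available inside it; bind-mounting the host's docker binary sometimes works but can fail on library mismatches, so installing the CLI in a derived image is more robust.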

Consul and Tomcat in the same docker container

This is a two-part question.
First part:
What is the best approach to run Consul and Tomcat in the same docker container?
I've built my own image, installing both Tomcat and Consul correctly, but I am not sure how to start them. I tried putting both calls as CMDs in the Dockerfile, with no success. I tried putting Consul as the ENTRYPOINT (Dockerfile) and calling Tomcat in the docker run command, or vice versa, but I have a feeling that neither is a good approach.
The containers will run on one AWS instance. Each docker container would run Consul as a server, registering itself with another AWS instance. Consul and consul-template will be integrated for proper load balancing. This way, my HAProxy instance will be able to correctly forward requests as I plug or unplug containers.
Second part:
In individual tests, the docker container was able to reach my main Consul server (leader), but it failed to register itself as an "alive" node.
Reading the logs on the Consul server, I think it is a matter of which ports I am exposing and publishing. In AWS, I already allow communication on all TCP and UDP ports between the instances in this particular Security Group.
Do you know which ports I should expose and publish to allow proper communication between a standalone Consul (AWS instance) and Consul servers (running inside docker containers on another AWS instance)? What is the command to run the docker container: docker run -p 8300:8300 .........
Thank you.
I would use ENTRYPOINT to kick off a script on docker run.
Something like:
ENTRYPOINT myamazingbashscript.sh
The syntax might be off, but you get the idea.
The script should start both services and finally tail -f the Tomcat logs (or any logs).
tail -f will prevent the container from exiting, since the tail -f command never exits, and it also helps you see what Tomcat is doing.
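A minimal sketch of such an entrypoint script; the Consul flags, Tomcat install path, and log path are assumptions you would adapt to your image:

```shell
#!/bin/bash
# myamazingbashscript.sh - start both services, then block on the logs

# start the Consul agent in the background (config dir is illustrative)
consul agent -config-dir=/etc/consul.d &

# start Tomcat in the background
/usr/local/tomcat/bin/catalina.sh start

# keep the container alive and surface Tomcat's log for `docker logs -f`
tail -f /usr/local/tomcat/logs/catalina.out
```

One caveat: with this pattern, if Consul or Tomcat dies the container keeps running (only tail is PID-watched), so you lose Docker's automatic restart behavior for the individual services.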
Run docker logs -f to watch the logs after a docker run.
Note that because the container doesn't exit, you can exec into it with docker exec -it containerName bash.
This lets you have a sniff around inside the container.
It is generally not the best approach to have two services in one container, because it destroys the separation of concerns and reusability, but you may have valid reasons.
To build, use docker build, then run with docker run as you stated.
If you decide to go for a two-container solution, you will need to expose ports between the containers to allow them to talk to each other. You could share files between containers using volumes_from.
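On the ports question: Consul uses several well-known default ports, and gossip needs both TCP and UDP. A sketch of a docker run publishing them (verify the list against your Consul version's documentation; the image name is a placeholder):

```shell
# Consul's default ports:
#   8300 tcp      server RPC
#   8301 tcp/udp  Serf LAN gossip
#   8302 tcp/udp  Serf WAN gossip
#   8500 tcp      HTTP API
#   8600 tcp/udp  DNS
docker run -d \
  -p 8300:8300 \
  -p 8301:8301 -p 8301:8301/udp \
  -p 8302:8302 -p 8302:8302/udp \
  -p 8500:8500 \
  -p 8600:8600 -p 8600:8600/udp \
  my-consul-tomcat-image
```

Failing to publish the UDP side of 8301 is a common cause of nodes reaching the leader but never showing as "alive", since Serf health checks ride on the gossip ports.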
