Executing commands between two Docker containers - docker

I wonder if it's possible to exec commands between two containers (docker exec -it)?
I have a container running Jenkins and another one with my web application. After the build, I want the Jenkins container to send commands directly to the project container. I would like to avoid using SSH. Is it possible?

You need to mount the Docker socket from the host into the container you want to run commands from. See https://forums.docker.com/t/how-can-i-run-docker-command-inside-a-docker-container/337
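A minimal sketch of that setup (the container names, the jenkins/jenkins:lts image, and deploy.sh are illustrative, and the Docker CLI must also be installed inside the Jenkins container):

    # Start Jenkins with the host's Docker socket mounted in, so the Docker
    # CLI inside the container talks to the host daemon ("sibling" containers).
    docker run -d --name jenkins \
      -v /var/run/docker.sock:/var/run/docker.sock \
      jenkins/jenkins:lts

    # From inside the Jenkins container, this runs a command in the sibling
    # "webapp" container through the host daemon -- no SSH needed:
    docker exec webapp ./deploy.sh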

Related

Why would it be necessary to give a docker container access to the docker socket?

I am reading a docker run command that maps /var/run/docker.sock,
like:
docker run -it --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock theimage /bin/bash
Why would the container need access to the socket? (This article says it is a very bad idea.)
What would be one case where the container needs access to the socket?
It is not necessary unless the container itself needs to invoke the Docker daemon, for example, in order to create and run an inner container.
For example, in my CI chain, Jenkins builds a docker image to run the build and test process. Inside it, we need to create an image to test and then submit it to K8s. In that situation, when Jenkins builds the pipeline container, it passes the Docker socket to it, allowing the container to create other containers using the host server's Docker daemon.
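As a rough sketch of that pattern (the image names and build commands are assumptions, not the poster's actual pipeline):

    # The CI agent container receives the host's socket, so docker commands
    # run inside it are served by the host daemon:
    docker run --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      my-ci-agent \
      sh -c 'docker build -t myapp:test . && docker run --rm myapp:test ./run-tests.sh'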

Can I run a docker container inside of a docker container?

E.g., my local setup is to run a server (listening on :1111) and a docker container (with -p 5555:5555). The purpose is that the server can send requests to the docker container as well (to :5555).
I think the typical way to deploy a server is to wrap it in a docker image and run that image in the cloud. How can I do the same thing but run my custom docker container inside the server automatically (e.g., add a docker run command to a Dockerfile)?

How to access a path of a container from `docker-machine`

How can I access a path of a container from docker-machine? I have the docker-machine IP and I want to connect remotely to a docker image, e.g.:
When I connect with ssh docker@5.5.5.5, all the files are the docker-machine's, but I want to connect to a docker image via SSH.
When I use the command docker exec -u 0 -it test bash, all the files from the image are there, but I want to access them over SSH using docker-machine.
How can I do it?
This is tricky, as Docker is designed to run a single process in the foreground, and the container dies when that process exits. This means Docker containers don't run anything additional beyond what you define in the Dockerfile or docker-compose.yml.
What you can try is using a docker-compose.yml file to expose port 22 to the outside world (this can also be done on the command line or in the Dockerfile). This is NOT guaranteed to work, as it requires the image to run an SSH daemon, and in most cases an image runs only one process.
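A minimal sketch of that compose approach, assuming an image that actually runs an SSH daemon as its main process (the image name and host port are illustrative):

    # docker-compose.yml
    services:
      test:
        image: myimage-with-sshd   # must start sshd as its main process
        ports:
          - "2222:22"              # then: ssh -p 2222 user@<docker-machine-ip>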
If you're looking to persist files that are used by containers, so that a re-deployed container starts where it left off, you can mount a folder from the host machine into the container as a volume.
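For example (the host and container paths are illustrative):

    # Files written to /var/lib/app/data inside the container land in
    # /srv/appdata on the host and survive container re-deployment:
    docker run -d -v /srv/appdata:/var/lib/app/data myimage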

Docker mechanism to run a script on the host?

Does anyone know if it is possible, via a docker-compose file (or any other mechanism), to have docker-compose or docker stack (Swarm) start/execute a shell script that runs on the host docker machine, as part of the normal command?
Healthcheck is the closest thing I could find, but that executes within the container, not on the host.
Obviously, one could wrap the docker command in a shell script, but that option is not viable for me.
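For readers who can accept that workaround, a minimal sketch of the wrapper (the host-side script name is hypothetical):

    #!/bin/sh
    # Runs on the host, not in a container:
    ./prepare-host.sh          # hypothetical host-side step
    docker-compose up -d       # then start the stack as usual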

Consul and Tomcat in the same docker container

This is a two-part question.
First part:
What is the best approach to running Consul and Tomcat in the same docker container?
I've built my own image, installing both Tomcat and Consul correctly, but I am not sure how to start them. I tried putting both calls as CMD in the Dockerfile, with no success. I tried putting Consul as an ENTRYPOINT (in the Dockerfile) and having Tomcat called in the docker run command. It could be vice versa, but I have a feeling that is no good way either.
The containers will run in one AWS instance. Each docker container would run Consul as a server, to register itself with another AWS instance. Consul and Consul-template will be integrated for proper load balancing. This way, my HAProxy instance will be able to correctly forward requests as I plug or unplug containers.
Second part:
In individual tests I did, the docker container was able to reach my main Consul server (the leader), but it failed to register itself as an "alive" node.
Reading the logs at the Consul server, I think it is a matter of which ports I am exposing and publishing. In AWS, I have already allowed communication on all TCP and UDP ports between the instances in this particular Security Group.
Do you know which ports I should be exposing and publishing to allow proper communication between a standalone Consul (AWS instance) and the Consul servers (running inside docker containers on an AWS instance)? What is the command to run the docker container: docker run -p 8300:8300 .........
Thank you.
I would use ENTRYPOINT to kick off a script on docker run.
Something like:
ENTRYPOINT ["/myamazingbashscript.sh"]
(the exec form; make sure the script is copied into the image and marked executable)
The script should start both services and finally tail -f the Tomcat logs (or any logs).
tail -f will keep the container alive, since the tail -f command never exits, and it also helps you see what Tomcat is doing.
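A minimal sketch of such a script, assuming stock Consul and Tomcat install paths (the Consul flags and log path are assumptions; adjust them to your image):

    #!/bin/bash
    # myamazingbashscript.sh
    # Start the Consul agent in the background
    consul agent -config-dir=/etc/consul.d &
    # Start Tomcat (catalina.sh start returns after launching it)
    /usr/local/tomcat/bin/catalina.sh start
    # Keep the container alive and stream Tomcat's log to stdout
    tail -f /usr/local/tomcat/logs/catalina.out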
Run docker logs -f containerName to watch the logs after a docker run.
Note that because the container doesn't exit, you can get inside it with docker exec -it containerName bash.
This lets you have a sniff around inside the container.
It's generally not the best approach to have two services in one container, because it destroys the separation of concerns and reusability, but you may have valid reasons.
To build, use docker build, then run with docker run as you stated.
If you decide to go for a two-container solution, then you will need to expose ports between the containers to allow them to talk to each other. You could share files between containers using volumes_from.
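On the ports question: Consul's documented defaults are 8300 (server RPC), 8301 (LAN gossip, TCP and UDP), 8302 (WAN gossip, TCP and UDP), 8500 (HTTP API) and 8600 (DNS), so a run command publishing them might look like this (8080 for Tomcat; adjust to what you actually use):

    docker run -d --name consul-tomcat \
      -p 8300:8300 \
      -p 8301:8301 -p 8301:8301/udp \
      -p 8302:8302 -p 8302:8302/udp \
      -p 8500:8500 \
      -p 8600:8600 -p 8600:8600/udp \
      -p 8080:8080 \
      myimage

The LAN gossip ports (8301 TCP and UDP) are the ones an agent needs reachable in both directions to show up as "alive" to the other servers.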
