How to access a path of a container from docker-machine?

How do I access a path inside a container from docker-machine? I have the docker-machine IP and I want to connect remotely to a Docker container, e.g.:
when I connect with ssh docker@5.5.5.5, all the files belong to the docker-machine VM, but I want to connect to a Docker container via SSH.
When I use the command docker exec -u 0 -it test bash, all the files from the image are there, but I want to access them over SSH using docker-machine.
How can I do it?

This is tricky, as Docker is designed to run a single process in the foreground, and a container dies when that process completes. This means Docker containers don't run anything beyond what you define in the Dockerfile or docker-compose.yml.
What you can try is exposing port 22 to the outside world via a docker-compose.yml file (this can also be done on the command line or in the Dockerfile). This is NOT guaranteed to work, as it requires the image to run an SSH daemon, and in most cases an image runs only its one main process.
If you're looking to persist files used by containers, so that a re-deployed container picks up where it left off, you can mount a folder from the host machine into the container as a volume.
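For example, assuming you have an image that is set up to run an SSH daemon (my-ssh-image below is a hypothetical name), you could publish port 22 and mount a host folder as a volume roughly like this:
docker run -d --name test -p 2222:22 -v /home/docker/data:/data my-ssh-image
ssh root@<docker-machine-ip> -p 2222
Here 2222 and /home/docker/data are arbitrary choices for the host port and host folder, and the SSH login only works if the image really starts sshd and has credentials configured.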

Related

Create docker container from within a container

I have docker on my host machine with a container running. I was wondering if it's possible, and what the best approach would be, to "trigger" a container creation from the running container.
Let's say my machine is host and I have a container called app (with id 123456789) running on host.
root@host $ docker container ls
123456789 app_mage .... app
I would like to create a container on host from within app
root@123456789 $ docker run --name app2 ...
root@host $ docker container ls
123456789 app_mage .... app
12345678A app_mage .... app2
What I need is for my app to be running on Docker and to run arbitrary applications in an isolated environment (but I'd rather avoid docker-in-docker).
A majority of the Docker community will veer away from these types of designs; however, it is very doable.
Similar to Starting and stopping docker container from other container, you can simply mount the docker.sock file from the host machine into the container, giving it the privileges to access the Docker daemon.
To make things more automated, you could use the docker-py SDK to start containers from inside a container, which in turn accesses the Docker daemon on the host machine hosting the container that you are spawning more containers from.
For example:
docker run --name test1 -v /var/run/docker.sock:/var/run/docker.sock image1
----
import docker

def create_container():
    docker.from_env().containers.run("image2", name="test2")
This example starts container test1, and runs that method inside the newly created container, which in turn creates a new container test2 running on the same host as test1.
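Note that for the Python snippet to work inside test1, the image1 image needs the Docker SDK for Python installed; because of the mounted socket, docker.from_env() ends up talking to the host's daemon. For example:
pip install docker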

How to disown a docker container running inside SSH session

I have accessed a remote machine (call it RM) through SSH from my host, and I am running a Docker image inside RM via that SSH session. Both are Ubuntu 16.04 based.
There are some processes running inside this Docker container, so I can't exit the container.
So, how do I detach this SSH session from my host, so that the processes inside the Docker container keep running unaffected?
I am doing this because I have to restart my host machine for some purpose.
PS:
In this link, Correct way to detach from a container without stopping it, the Docker container is not being run via an SSH session, so the two scenarios are different.
First, you have to start your Docker container in detached (non-interactive) mode, using the -d argument and dropping -it. Don't forget to name your container for further use with the --name foo option.
After the container is started, you can control it using docker exec -it foo sh (or whatever shell/command you need). If your SSH session terminates, the container will continue running; however, your docker exec session will be over.
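A minimal sketch (foo and my-image are placeholder names):
docker run -d --name foo my-image     # detached: keeps running after your SSH session ends
docker exec -it foo sh                # interactive shell whenever you need one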

Downloading/uploading a file/folder directly from/to a docker container running on a web host to/from a local machine using SCP

So far, I have always copied files from the Docker container to my VM (the web host) first, and later run scp from my local machine to download them from the VM. A similar scenario applies for uploading files/folders. Is there a direct way to do that using scp?
In order to copy directly from your container, you need sshd installed in the container and a port for SSH exposed to the public when you run the container.
Bear in mind that if you do this, you have to make sure SSH is properly configured and secured.
Example:
(This assumes you already have SSH configured in the container.)
docker run -d -p 8000:22 --name <container-name> <image>
scp -P 8000 username@myserver.com:/root/file.txt ~/file.txt
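Uploading works the same way in the other direction, under the same assumptions, e.g. copying a local folder into the container:
scp -P 8000 -r ~/myfolder username@myserver.com:/root/myfolder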

Access host docker-machine from within container

I have an image that I'm using to run my CI/CD builds (using GitLab CE). I'd like to deploy my app doing something like this from within the container:
eval "$(docker-machine env manager)"
sudo docker stack deploy --compose-file docker-stack.yml web
However, I'd like the docker-machine to access machines defined on the host system since the container will be destroyed and I don't want to include access details in the image.
I've tried a few things
Accessing the Remote Host via docker-machine
Create the docker-machine on the host and mount the MACHINE_STORAGE_PATH so that it is available to the container
Connect to the remote docker-machine manually from within the container and setting the MACHINE_STORAGE_PATH equal to a mounted volume
Mounting the docker socket
In both cases, I can see that the machine storage is persisted, but whenever I create a new container and run docker-machine ls, none of the machines are listed.
Accessing the Remote Host via DOCKER_HOST
Forward the remote machine's Docker port to the host's Docker port: docker-machine ssh manager-1 -N -L 2376:localhost:2376
export DOCKER_HOST=:2376
Tell Docker to use the same certs that are used by docker-machine: export DOCKER_TLS_VERIFY=1 and export DOCKER_CERT_PATH=/Users/me/.docker/machine/machines/manager-1
Test with docker info
This gives me error during connect: Get https://localhost:2376/v1.26/info: x509: certificate signed by unknown authority
Any ideas on how I can perform a remote deployment from within a container?
Thanks
EDIT
Here is a diagram to try and help better communicate the scenario.
Don't use docker-machine for this.
Docker-machine stores files in $HOME/.docker/machine, so when you restart with a fresh copy of this folder, all previously defined machines will be removed. You could store this folder as a volume, but there's a much easier way for your purposes.
The solution is to mount the docker socket, and either as root or from a user with the same gid as the docker socket (note that group names themselves inside and outside the container may not match, so gid is important), run your docker ... commands as normal. You can skip the docker-machine eval completely since you are running the commands against the local docker socket.
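One way to handle the gid part on a Linux host is to add the container's user to the socket's group by numeric id when you start the container (my-ci-image is a placeholder):
docker run -v /var/run/docker.sock:/var/run/docker.sock --group-add $(stat -c '%g' /var/run/docker.sock) my-ci-image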
If you need to run commands remotely, I find it easier to define the DOCKER_HOST and DOCKER_TLS_VERIFY variables manually rather than using docker-machine.
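A minimal sketch of that manual setup, assuming the daemon's TLS certs have been copied or mounted into the container at /certs (a placeholder path) and <manager-ip> is the address of the remote engine:
export DOCKER_HOST=tcp://<manager-ip>:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/certs
docker info    # should now report the remote engine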
In case you want to communicate from your CI container to the Docker host you can simply mount the Docker socket when starting the CI container:
docker run -v /var/run/docker.sock:/var/run/docker.sock <gitlab-image>
Now you can run docker commands on the host from within the CI container.
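For example, assuming the CI image ships the docker CLI, docker-stack.yml sits in the current directory on the host, and the host is the swarm manager you want to deploy to:
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(pwd):/build -w /build \
  <gitlab-image> \
  docker stack deploy --compose-file docker-stack.yml web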

Executing commands between two Docker containers

I wonder if it's possible to exec commands between two containers (docker exec -it)?
I have a container running Jenkins and another one with my web application. After the build, I want the Jenkins container to send commands directly to the project container. I would like to avoid using SSH. Is this possible?
You need to mount the Docker socket from the host into the container you want to run commands from. See https://forums.docker.com/t/how-can-i-run-docker-command-inside-a-docker-container/337
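A sketch of that approach, assuming the Jenkins image has the docker CLI installed and the web application container is named webapp (my-jenkins-image, webapp and deploy.sh are all hypothetical names):
docker run -d --name jenkins -v /var/run/docker.sock:/var/run/docker.sock my-jenkins-image
docker exec webapp ./deploy.sh    # run from a Jenkins build step; executes inside the project container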

Resources