I have an image that I'm using to run my CI/CD builds (using GitLab CE). I'd like to deploy my app doing something like this from within the container:
eval "$(docker-machine env manager)"
sudo docker stack deploy --compose-file docker-stack.yml web
However, I'd like docker-machine to access the machines defined on the host system, since the container will be destroyed and I don't want to bake the access details into the image.
I've tried a few things:
Accessing the Remote Host via docker-machine
Create the docker-machine on the host and mount the MACHINE_STORAGE_PATH so that it is available to the container
Connect to the remote docker-machine manually from within the container and set MACHINE_STORAGE_PATH to a mounted volume
Mount the docker socket
In both cases, I can see that the machine storage is persisted, but whenever I create a new container and run docker-machine ls, none of the machines are listed.
Accessing the Remote Host via DOCKER_HOST
Forward the remote machine's docker port to the host docker port: docker-machine ssh manager-1 -N -L 2376:localhost:2376
export DOCKER_HOST=:2376
Tell docker to use the same certs that are used by docker-machine: export DOCKER_TLS_VERIFY=1 and export DOCKER_CERT_PATH=/Users/me/.docker/machine/machines/manager-1
Test with docker info
This gives me error during connect: Get https://localhost:2376/v1.26/info: x509: certificate signed by unknown authority
Any ideas on how I can perform a remote deployment from within a container?
Thanks
EDIT
Here is a diagram to try and help better communicate the scenario.
Don't use docker-machine for this.
Docker-machine stores files in $HOME/.docker/machine, so when you restart with a fresh copy of this folder, all previously defined machines will be removed. You could store this folder as a volume, but there's a much easier way for your purposes.
The solution is to mount the docker socket and run your docker ... commands as normal, either as root or as a user with the same gid as the docker socket (note that group names inside and outside the container may not match, so the gid is important). You can skip the docker-machine eval completely, since you are running the commands against the local docker socket.
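A minimal sketch of the gid matching (the image name and gid here are placeholders):

# find the gid that owns the docker socket on the host
stat -c '%g' /var/run/docker.sock
# run the CI container with the socket mounted and that gid added
docker run -v /var/run/docker.sock:/var/run/docker.sock --group-add <gid> <ci-image> docker ps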
If you need to run commands remotely, I find it easier to define the DOCKER_HOST and DOCKER_TLS_VERIFY variables manually rather than using docker-machine.
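For example, something along these lines (the host name is a placeholder; the cert path mirrors the one from the question):

export DOCKER_HOST=tcp://<remote-host>:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/Users/me/.docker/machine/machines/manager-1
docker info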
In case you want to communicate from your CI container to the Docker host, you can simply mount the Docker socket when starting the CI container:
docker run -v /var/run/docker.sock:/var/run/docker.sock <gitlab-image>
Now you can run docker commands on the host from within the CI container.
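For example, inside the CI container:

# this talks to the host's daemon, so it lists the host's containers
docker ps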
Related
When I use docker-machine in a Windows environment (installed with docker-toolbox), every docker run command uses that docker-machine as the docker daemon.
However, when I use docker-machine in a Linux environment, which has a native docker daemon installed alongside docker-machine, the docker run command uses the native docker daemon even if there is a running docker-machine instance.
Questions are:
How does docker run command decide which daemon to use?
Is there any method to list the running containers on a docker-machine instance?
For the second one, I know I can SSH into the docker-machine instance and run docker ps there, but I want to check it from outside the instance.
Thanks in advance.
The Docker Machine stack works by firing up a VM, and then setting the DOCKER_HOST environment variable to point at it. In particular, it also does the required setup to TLS-encrypt the connection and to set up a TLS client certificate to authenticate the caller. (Without this setup, a remote DOCKER_HOST is extremely dangerous.)
So: docker run and every other Docker command uses the DOCKER_HOST environment variable to decide where to run things. If DOCKER_HOST points at a Docker Machine VM, docker ps will list the containers there; you won’t usually need to docker-machine ssh (though it’s a useful tool when you really need it).
On a native Linux host it’s far easier to just directly use a local Docker daemon. If you do have both a local daemon and a docker-machine VM, you can
# switch to the Docker Machine VM
eval $(docker-machine env default)
# switch back to the host Docker
eval $(docker-machine env -u)
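You can also target the VM for a single command without switching your environment, e.g. (assuming a machine named default):

# list the containers running on the "default" machine
docker $(docker-machine config default) ps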
How can I access a path inside a container from docker-machine? I have the docker-machine IP and I want to connect remotely to a docker image, e.g.:
when I connect with ssh docker@5.5.5.5, all the files belong to the docker-machine VM, but I want to connect to a docker image via ssh.
When I use the command docker exec -u 0 -it test bash, all the files from the image are there, but I want to access them via ssh using docker-machine.
How can I do it?
This is tricky, as Docker is designed to run a single process in the foreground, and containers die when that process completes. This means Docker containers don't run anything beyond what you define in the Dockerfile or docker-compose.yml.
What you can try is using a docker-compose.yml file to expose port 22 to the outside world (this can also be done on the command line, or in the Dockerfile). This is NOT guaranteed to work, as it requires the image to run an SSH daemon, and in most cases an image runs a single process.
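If your image does run an SSH daemon, a minimal sketch would be (the image name and host port are placeholders):

# publish container port 22 on host port 2222
docker run -d -p 2222:22 <ssh-enabled-image>
# then connect through the docker host's address
ssh -p 2222 root@<docker-host-ip>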
If you're looking to persist files that are used by containers, so that when a container is re-deployed it starts where it left off, you can mount a folder from the host machine into the container as a volume.
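For example (both paths are placeholders):

# files written to /data inside the container persist in /srv/appdata on the host
docker run -v /srv/appdata:/data <image>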
Using this docker image from Docker Hub, I'm trying to run an ansible playbook that would configure the machine on which the container is running.
As an example, I run this:
docker run --net="host" -v <path_inventory>:/inventory -v <path_playbook>:/playbook.yml williamyeh/ansible:ubuntu16.04 ansible-playbook -vvvv -i /inventory /playbook.yml
With these options, I can ping localhost, and the inventory and playbook are both accessible.
The inventory is configured to use a local connection:
[executors]
127.0.0.1
[executors:vars]
ansible_connection=local
ansible_user=<my_user_in_docker_host>
ansible_become=True
The group executors is the one referenced from the playbook.
I see that the playbook is trying to connect as root (which is what I get by default when I attach to the container). Specifying -u when running the container doesn't seem to get along with Ansible.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
... followed by errors complaining that the commands are not available, after a successful local connection. That makes no sense to me, given that both root and non-root users can execute those commands.
Any idea?
This image is designed to serve as a base for other images, and to take advantage of Ansible as a way of provisioning the requirements of the image, rather than using the Dockerfile alone.
This is stated in the documentation of the docker image:
Used mostly as a base image for configuring other software stack on
some specified Linux distribution(s).
Think of it as a base image for performing CI tasks in a lighter way than other options (VMs, Vagrant...).
Take into account that the good thing about Docker is that it isolates the host from the containers, so you cannot reach the host's files from the containers (except for whatever volumes you bind). Otherwise, it would be a security problem. See here.
Regards
I was able to use ansible to configure the host from within a docker container. However, I didn't use a docker host network, but a docker bridge network.
When you start an ansible playbook in a container, then localhost will be the localhost of the container itself. This is just fine, because local_action(s) in ansible run in the container itself and remote actions on the host.
This is the modified version of your docker run example:
docker run -v <path_inventory>:/inventory -v <path_playbook>:/playbook.yml williamyeh/ansible:ubuntu16.04 ansible-playbook -vvvv -i /inventory /playbook.yml
You shouldn't configure the inventory to use localhost or a local connection, but the host (machine) itself, connecting via ssh. This is an example:
[executors]
<my_host_ip>
[executors:vars]
ansible_connection=ssh
ansible_user=<my_host_user>
ansible_become=True
Assuming your docker container is running on the default bridge, you can find <my_host_ip> with the following command:
ip addr show docker0
The container will connect with ssh to the docker interface on the host.
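On a default installation this address is typically 172.17.0.1; to extract just the IPv4 address (assuming the default bridge setup):

ip -4 addr show docker0 | awk '/inet / {print $2}' | cut -d/ -f1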
Some additional hints:
ssh needs to listen on the docker0 interface
iptables/nftables needs to provide ssh access from the ansible container to the docker0 interface
Ansible uses keys to connect via ssh by default. By using the -k and/or the -K parameters of the ansible-playbook command, you can provide a password instead (see the sketch below).
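Putting it together, a sketch with password prompts (note the -it, so the prompts get a terminal; this assumes sshpass is available in the image, which Ansible needs for -k over ssh):

docker run -it -v <path_inventory>:/inventory -v <path_playbook>:/playbook.yml williamyeh/ansible:ubuntu16.04 ansible-playbook -k -K -i /inventory /playbook.yml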
I've been searching on Google but I cannot find any answer.
Is it possible to connect to a VirtualBox docker container that I just started? I have the IP of the virtual machine, but if I try to connect via SSH it of course asks me for a password.
Regards.
See https://github.com/BITPlan/docker-stackoverflowanswers/tree/master/33232371 to repeat the steps.
On my Mac OS X machine
docker-machine env default
shows
export DOCKER_HOST="tcp://192.168.99.100:2376"
So I added an entry
192.168.99.100 docker
to my /etc/hosts
so that ping docker works.
As a Dockerfile I am using:
# Ubuntu image
FROM ubuntu:14.04
which I am building with
docker build -t bitplan/sshtest:0.0.1 .
and testing with
docker run -it bitplan/sshtest:0.0.1 /bin/bash
Now ssh docker will react with
The authenticity of host 'docker (192.168.99.100)' can't be established.
ECDSA key fingerprint is SHA256:osRuE6B8bCIGiL18uBBrtySH5+iGPkiHHiq5PZNfDmc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'docker,192.168.99.100' (ECDSA) to the list of known hosts.
wf@docker's password:
But here you are connecting to the docker machine, not your image!
The ssh port is at port 22. You need to redirect it to another port and configure your image to support ssh to root or a valid user.
See e.g. https://docs.docker.com/examples/running_ssh_service/
Are you trying to connect to a running container or trying to connect to the virtualbox image running the docker daemon?
If the first, you cannot just SSH into a running container unless that container is running an ssh daemon. The easiest way to get a shell into a running container is with docker exec -ti <container name/id> /bin/sh. Do a docker ps to see running containers.
If the second, if your host was created with docker-machine then you can ssh into it with docker-machine ssh <machine name>. You can see all of your running machines with docker-machine ls.
If this doesn't help, can you clarify your question a little and provide details around how you're creating your image and starting the container.
You can use SSH keys for passwordless access.
Here's an introduction:
https://wiki.archlinux.org/index.php/SSH_keys
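A minimal sketch of that approach (the user and IP are taken from a typical docker-machine setup; adjust to yours):

# generate a key pair if you don't already have one
ssh-keygen -t ed25519
# install the public key on the VM; after this, ssh stops asking for a password
ssh-copy-id docker@192.168.99.100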
I have installed docker on a CentOS machine. Now I am trying to run a MapR sandbox on it. After starting I get this:
Starting MapR Services.................
To manage this node go to: https://172.17.0.13:8443
But I am not able to access this URL from the windows machine in the same network as the CentOS machine.
This is an internal docker network inaccessible outside of the box. In order to access this container you need:
an EXPOSE instruction in the container image (most likely it is already there)
run the container with the -p option
If you just specify the container port (-p 8443), the host port will be chosen at random; you can find it with the docker inspect command. Or you can use a fixed port with -p hostIp:externalPort:8443, where hostIp is the address of your docker host.
After that, you can access the container from the network at https://hostIp:externalPort.
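For example, a sketch with a fixed port (the image name is a placeholder for your MapR sandbox image):

# pin the web UI to port 8443 on the docker host
docker run -d -p 8443:8443 <mapr-sandbox-image>
# now reachable from other machines on the network at https://hostIp:8443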