I want to create a network of containers in which one central container can SSH into all the other containers. Over SSH, the central container will change the configuration of the other containers using Ansible. I know it is not advised to SSH from one container to another, and that volumes can be used for data sharing, but that does not fit my use case. I am able to SSH from the host into a container, but I am not able to SSH from one container to another.
The Dockerfile I am using is:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y netcat ssh iputils-ping
EXPOSE 22
The image built from this Dockerfile is tagged ubuntu:v2.
Then, using the commands below, I created two containers, u1 and u2:
docker run -p 22 --rm -ti --name u1 ubuntu:v2 bash
docker run -p 22 --rm -ti --name u2 ubuntu:v2 bash
Inside each container I run the commands below to start sshd and create a user: u1 in the u1 container and u2 in the u2 container.
root@d0b0e44f7517:/# mkdir /var/run/sshd
root@d0b0e44f7517:/# chmod 0755 /var/run/sshd
root@d0b0e44f7517:/# /usr/sbin/sshd
root@d0b0e44f7517:/#
root@d0b0e44f7517:/# useradd --create-home --shell /bin/bash --groups sudo u2
root@d0b0e44f7517:/# passwd u2
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
root@d0b0e44f7517:/#
I made two containers; both are the same, except one has user u1 and the other has user u2, as shown above. After this, I tried to SSH from the host to a container using ssh -X u2@localhost -p 32773 (32773 is the host port mapped to the container's port 22). So SSH works from the host to a container, but I am not able to SSH from one container to another. Can you help me SSH from one container to the other containers?
Use Docker's service discovery, and then you can SSH from one container to another. You get service discovery here by connecting all the containers to the same user-defined network.
docker network create -d bridge test
docker run -p 22 --rm -ti --name u1 --network test ubuntu:v2 bash
docker run -p 22 --rm -ti --name u2 --network test ubuntu:v2 bash
Now, from u1, you can SSH into u2 with ssh user@u2.
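For example, once sshd is set up inside u2 (see the steps below), a quick check from inside u1 could look like this (a sketch; "user" stands for whatever account you create inside u2):
# inside u1: Docker's embedded DNS resolves the container name on the "test" network
ping -c 1 u2
# then connect by name instead of IP ("user" is the account created inside u2)
ssh user@u2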
Log in to the Docker containers:
docker exec -it u1 /bin/bash
docker exec -it u2 /bin/bash
After logging in to a container, run the commands below to install the tools required for SSH:
passwd   # change the root password; it will be asked for during ssh
apt-get update
apt-get install vim
apt-get install openssh-client openssh-server
vi /etc/ssh/sshd_config
Change the PermitRootLogin line to "PermitRootLogin yes"
service ssh restart
Now you can SSH from any container into any other container with ssh root@<container_ip>.
Note: to get a container's IP address you can run the command below:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name>
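Since the end goal in the question is to push configuration with Ansible over SSH, a minimal connectivity test from the central container might look like this (a sketch; it assumes ansible and sshpass are installed in u1, and that user u2 exists in container u2):
# ad-hoc Ansible ping from u1 to u2 over SSH, using the container name as the host
# (the trailing comma tells ansible this is an inline host list, not an inventory file)
ansible all -i 'u2,' -u u2 --ask-pass -m ping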
I'm using NVIDIA Docker on a Linux machine (Ubuntu 20.04). I've created a container named user1 from the nvidia/cuda:11.0-base image as follows:
docker run --gpus all --name user1 -dit nvidia/cuda:11.0-base /bin/bash
And, here is what I see if I run docker ps -a:
admin#my_desktop:~$ docker ps -a
CONTAINER ID   IMAGE                   COMMAND       CREATED         STATUS         PORTS     NAMES
a365362840de   nvidia/cuda:11.0-base   "/bin/bash"   3 seconds ago   Up 2 seconds             user1
I want to access that container via SSH, using its own unique IP address, from a totally different machine (other than my_desktop, which is the host). First of all, is it possible to give each container a unique IP address? If so, how can I do it? Thanks in advance.
If you want to access your container over SSH from an external VM, you need to do the following:
Install the ssh daemon for your container
Run the container and expose its ssh port
I would propose the following Dockerfile, which builds from nvidia/cuda:11.0-base and creates an image with the ssh daemon inside
Dockerfile
# Instruction for Dockerfile to create a new image on top of the base image (nvidia/cuda:11.0-base)
FROM nvidia/cuda:11.0-base
ARG root_password
RUN apt-get update || echo "OK" && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo "root:${root_password}" | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image from the Dockerfile
docker image build --build-arg root_password=password --tag nvidia/cuda:11.0-base-ssh .
Create the container
docker container run -d -P --name ssh nvidia/cuda:11.0-base-ssh
Run docker ps to see which host port was assigned to the container's port 22
Finally, access the container
ssh -p 49157 root@<VM_IP>
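Alternatively, instead of scanning the docker ps output, docker port prints the mapping directly (the container is named ssh in the run command above; the host port will differ on your machine):
# show which host port was published for container port 22
$ docker port ssh 22
0.0.0.0:49157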
EDIT: As David Maze correctly pointed out, you should be aware that the root password will be visible in the image history. Also, this approach replaces the original container process with sshd.
If this process is to be adopted for production use, it needs to be modified accordingly. It serves as a starting point for someone who wishes to add SSH to their container.
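One way to keep the password out of the image entirely is key-based login; a rough sketch of the Dockerfile (it assumes your public key, id_rsa.pub, has been copied into the build context) could be:
FROM nvidia/cuda:11.0-base
RUN apt-get update || echo "OK" && apt-get install -y openssh-server && mkdir /var/run/sshd
# install your public key instead of setting a root password;
# the default "PermitRootLogin prohibit-password" already allows key-based root login
COPY id_rsa.pub /root/.ssh/authorized_keys
RUN chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
You can then log in with ssh -i ~/.ssh/id_rsa -p <port> root@<VM_IP> without any secret being baked into the image.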
I have two machines:
Ubuntu workstation running docker
Macbook with Mac OS
I want to be able to run docker commands from MacOS through ssh on my Ubuntu workstation.
Docker works fine when running commands on Ubuntu.
SSH works fine (key-based, with the identity saved).
I've tried creating a context:
docker context create ubuntu --docker "host=ssh://myuser@192.168.1.100"
docker context use ubuntu
docker run -it alpine sh
and I get:
docker: Cannot connect to the Docker daemon at http://docker. Is the docker daemon running?.
which is the same error I get when trying:
docker -H ssh://myuser@192.168.1.100 run -it alpine sh
Nothing from the solutions I've found seems to be helping.
PS: 192.168.1.100 is only for the question; when running the commands I use the real IP, which is correct and not colliding with anything. Direct SSH works perfectly.
For your case you can use docker-machine:
Install:
base=https://github.com/docker/machine/releases/download/v0.16.0 &&
curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine &&
sudo mv /tmp/docker-machine /usr/local/bin/docker-machine &&
chmod +x /usr/local/bin/docker-machine
Run/create:
docker-machine create \
--driver generic \
--generic-ip-address=put_here_ip_of_remote_docker \
--generic-ssh-key ~/.ssh/id_rsa \
vm_123
Check:
docker-machine ls
docker-machine ip vm_123
docker-machine inspect vm_123
Use:
docker-machine ssh vm_123
docker run -it alpine sh
exit
exit
eval $(docker-machine env -u)
Extra tips:
You can also make vm_123 the active docker machine with this command:
eval $(docker-machine env vm_123)
docker run -it alpine sh
exit
eval $(docker-machine env -u)
and unset vm_123 as the active machine with this command:
eval $(docker-machine env -u)
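For reference, docker-machine env just prints the environment variables that point your local docker client at the remote daemon; the output looks roughly like this (IP and paths will differ on your machine):
$ docker-machine env vm_123
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://put_here_ip_of_remote_docker:2376"
export DOCKER_CERT_PATH="/home/youruser/.docker/machine/machines/vm_123"
export DOCKER_MACHINE_NAME="vm_123"
# Run this command to configure your shell:
# eval $(docker-machine env vm_123)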
https://docs.docker.com/machine/drivers/generic/
https://docs.docker.com/machine/examples/aws/
https://docs.docker.com/machine/install-machine/
https://docs.docker.com/machine/reference/ssh/
Are you sure that the IP of your Ubuntu machine is 192.168.1.1?
Because I think that is your router's IP :)
Can you post the output of ip a from your Ubuntu machine, please?
I would like to access iptables, ufw and reboot running on host OS (Snappy Ubuntu Core 18.04) from Docker container (running on the same host).
What volumes or Docker container parameters are required to make this possible? Container can be run with root user and privileged access.
I'm totally aware of the security implications here, but security is not a concern in this context.
Using SSH
You can run the container with the --net=host option; then it is possible to connect to the host from the container using SSH.
In host networking mode, the container can reach port 22 on the host directly.
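A minimal sketch, assuming sshd is already running on the host and hostuser is a real account there:
# share the host's network namespace with the container
docker run --net=host -it ubuntu:18.04 bash
# inside the container: install an SSH client and connect to the host's sshd
apt-get update && apt-get install -y openssh-client
ssh hostuser@localhost    # localhost here is the host's loopback because of --net=host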
Without SSH
If you don't want to use SSH, one way is explained in this post. You need to run the container with --privileged and --pid=host and then use the nsenter command. With this command you get an interactive shell from the host; you can also run only the desired command.
$ sudo docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh
$ sudo docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i iptables -S
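The other commands mentioned in the question (ufw and reboot) follow the same pattern; a sketch (the reboot one really does reboot the host, so use with care):
# check ufw status on the host
$ sudo docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i ufw status
# reboot the host
$ sudo docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i reboot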
Note that if you are using macOS or Windows, Docker runs inside a VM, so with this approach you would end up in the shell of that VM, not of your actual machine.
I'd like to create the following infrastructure flow:
How can that be achieved using Docker?
First, you need to install an SSH server in the images you wish to SSH into. You can use a base image with the SSH server installed for all your containers.
Then you only have to run each container, mapping the SSH port (22 by default) to one of the host's ports (the Remote Server in your diagram), using -p <hostPort>:<containerPort>, i.e.:
docker run -p 52022:22 container1
docker run -p 53022:22 container2
Then, if the host's ports 52022 and 53022 are accessible from outside, you can SSH directly into the containers using the IP of the host (Remote Server), specifying the port with -p <port>, i.e.:
ssh -p 52022 myuser@RemoteServer --> SSH to container1
ssh -p 53022 myuser@RemoteServer --> SSH to container2
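If you connect to these often, entries in ~/.ssh/config on the client save retyping the ports; a sketch (the host aliases and the RemoteServer address are placeholders):
Host container1
    HostName RemoteServer      # or the remote server's IP
    Port 52022
    User myuser

Host container2
    HostName RemoteServer
    Port 53022
    User myuser
After that, plain ssh container1 and ssh container2 are enough.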
Notice: this answer promotes a tool I've written.
The selected answer here suggests installing an SSH server in every image. Conceptually this is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/).
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container. The only requirement is that the container has bash.
The following example would start an SSH server exposed on port 2222 of the local machine.
$ docker run -d -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock \
-e CONTAINER=my-container -e AUTH_MECHANISM=noAuth \
jeroenpeeters/docker-ssh
$ ssh -p 2222 localhost
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
Not only does this defeat the idea of one process per container, it is also a cumbersome approach when using images from Docker Hub, since they often don't (and shouldn't) contain an SSH server.
These files will set up sshd and run it under supervisord so you can SSH in locally. (You are using Cyberduck, aren't you?)
Dockerfile
FROM swiftdocker/swift
MAINTAINER Nobody
RUN apt-get update && apt-get -y install openssh-server supervisor
RUN mkdir /var/run/sshd
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
supervisord.conf
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
To build, run (starting the daemon), and jump into a shell:
docker build -t swift3-ssh .
docker run -p 2222:22 -i -t swift3-ssh
docker ps # find container id
docker exec -i -t <containerid> /bin/bash
I guess it is possible. You just need to install an SSH server in each container and expose a port on the host. The main annoyance would be maintaining/remembering the mapping of ports to containers.
However, I have to question why you'd want to do this. SSH'ing into containers should be rare enough that it's not a hassle to SSH to the host and then use docker exec to get into the container.
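That two-step route is short in practice; a sketch (user, docker-host and mycontainer are placeholders):
# SSH to the Docker host first...
ssh user@docker-host
# ...then open a shell inside the container you care about
docker exec -it mycontainer /bin/bash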
Create a Docker image with openssh-server preinstalled:
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image using:
$ docker build -t eg_sshd .
Run a test_sshd container:
$ docker run -d -P --name test_sshd eg_sshd
$ docker port test_sshd 22
0.0.0.0:49154
Ssh to your container:
$ ssh root@192.168.1.2 -p 49154
# The password is ``screencast``.
root@f38c87f2a42d:/#
Source: https://docs.docker.com/engine/examples/running_ssh_service/#build-an-eg_sshd-image
This is a quick way, but it is not permanent.
First create a container:
docker run ..... -p 22022:2222 .....
Port 22022 on your host machine will map to port 2222 in the container (we change the SSH port inside the container to 2222 later).
Then execute the following commands inside the container:
apt update && apt install openssh-server # install ssh server
passwd #change root password
In the file /etc/ssh/sshd_config change these:
uncomment Port and change it to 2222
Port 2222
uncomment PermitRootLogin and set it to
PermitRootLogin yes
and finally start the SSH server:
/etc/init.d/ssh start
You can log in to your container now:
ssh -p 22022 root@HostIP
Remember: if you restart the container, you need to start the SSH server again.
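Since the change is not persisted, after a container restart you can start sshd again from the host without attaching a shell first; a sketch (the container name is a placeholder):
# start sshd inside the already-running container after a restart
docker exec <container_name> /etc/init.d/ssh start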
I am trying to connect to a web app running on tomcat8 in a docker container.
I am able to access it from within the container by running lynx http://localhost:8080/myapp, but when I try to access it from the host I only get "HTTP request sent; waiting for response."
I am exposing port 8080 in the Dockerfile, and I am using sudo docker inspect mycontainer | grep IPAddress to get the IP address of the container.
The command I am using to run the docker container is this:
sudo docker run -ti --name myapp --link mysql1:mysql1 --link rabbitmq1:rabbitmq1 -e "MYSQL_HOST=mysql1" -e "MYSQL_USER=myuser" -e "MYSQL_PASSWORD=mysqlpassword" -e "MYSQL_USERNAME=mysqlusername" -e "MYSQL_ROOT_PASSWORD=rootpassword" -e "RABBITMQ_SERVER_ADDRESS=rabbitmq1" -e "MY_WEB_ENVIRONMENT_ID=qa" -e "MY_WEB_TENANT_ID=tenant1" -p "8080:8080" -d localhost:5000/myapp:latest
My Dockerfile:
FROM localhost:5000/web_base:latest
MAINTAINER "Me" <me#my_company.com>
#Install mysql client
RUN yum -y install mysql
#Add Run shell script
ADD run.sh /home/ec2-user/run.sh
RUN chmod +x /home/ec2-user/run.sh
EXPOSE 8080
ENTRYPOINT ["/bin/bash"]
CMD ["/home/ec2-user/run.sh"]
My run.sh:
sudo tomcat8 start && sudo tail -f /var/log/tomcat8/catalina.out
Any ideas why I can access it from within the container but not from the host?
Thanks
What does your docker run command look like? You still need to pass -p 8080:8080. EXPOSE in the Dockerfile only exposes the port to linked containers, not to the host VM.
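A quick way to confirm that the port is actually published to the host is docker port; a sketch using the container name from the question:
# if this prints nothing, the port was only EXPOSEd and not published with -p
$ docker port myapp
8080/tcp -> 0.0.0.0:8080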
I am able to access the tomcat8 server from the host now. The problem was here:
sudo tomcat8 start && sudo tail -f /var/log/tomcat8/catalina.out
Tomcat8 must be started as a service instead:
sudo service tomcat8 start && sudo tail -f /var/log/tomcat8/catalina.out
Run the command below to find the IP address of your docker-machine:
$ docker-machine ls
The output will look like this:
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.10.3
Now open your application from the host machine at http://192.168.99.100:8080/myapp.
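You can also sanity-check from the host's terminal before trying a browser (assuming curl is available):
# an HTTP status line back means the app is reachable through the docker-machine VM
$ curl -I http://192.168.99.100:8080/myapp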