I am experimenting with Docker containers and trying to SSH into a container from the host machine.
I create my Docker image using the Dockerfile below:
FROM ubuntu:18.04
LABEL maintainer="Sagar Shroff" version="1.0" type="ubuntu-with-ssh"
RUN apt-get update -y && \
apt-get install -y openssh-server
RUN service ssh restart
EXPOSE 22
USER root
WORKDIR /root
CMD service ssh restart && \
echo "Enter root's password: " && passwd root && \
/bin/bash
and I run my docker container using the command
docker run --rm -it -p 1022:22 ssh-ubuntu-example
After entering the root password, I send the container to the background by pressing Ctrl+P, Ctrl+Q, and then SSH from my host machine using the command:
ssh root@127.0.0.1 -p 1022
But I am unable to connect
ssh root@127.0.0.1 -p 1022
root@127.0.0.1's password:
Permission denied, please try again.
root@127.0.0.1's password:
Permission denied, please try again.
root@127.0.0.1's password:
root@127.0.0.1: Permission denied (publickey,password).
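For reference, Ubuntu's stock sshd_config defaults to PermitRootLogin prohibit-password, which rejects password logins for root even when the password is correct. Assuming that is the cause here (an assumption, not confirmed above), a minimal sketch of a Dockerfile line that would allow it:
# Allow root password login over SSH (assumed fix, not part of the original Dockerfile)
RUN sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config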
I want to use SSH to access the Docker container on my Mac (Docker is also installed on the Mac).
I don't know how to solve this problem; if you have a way, I will sincerely appreciate it.
ssh: connect to host 172.17.0.2 port 9999: Operation timed out
I have a Docker image of Ubuntu 18.04, and I tried the following:
1. docker run -itd -p 192.168.31.151:9999:22 slamcabbage/221212_ubuntu1804 /bin/bash
2. docker exec -it 95a3f4c876b00 /bin/bash
After entering the container, I tried the following:
apt-get update
apt-get install passwd openssl openssh-server openssh-client
passwd
echo "Port 22" >> /etc/ssh/sshd_config
echo "PasswordAuthentication yes" >> /etc/ssh/sshd_config
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
Finally:
service ssh restart
After that, I try to SSH from the macOS terminal:
ssh root@172.17.0.2 -p 9999
ssh: connect to host 172.17.0.2 port 9999: Operation timed out
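A hedged note on the likely cause: with Docker Desktop for Mac, containers run inside a VM, so the bridge address 172.17.0.2 is not reachable from the macOS host. Connecting through the published host port is the usual route, for example:
ssh root@192.168.31.151 -p 9999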
I want to create a network of containers in which one central container can SSH into all the others. Over SSH, the central container would change the configuration of the other containers using Ansible. I know it's not advised to SSH from one container to another, and that volumes can be used for data sharing, but that doesn't fit my use case. I am able to SSH from the host to a container, but I am not able to SSH from one container to another.
The Dockerfile I am using is:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y netcat ssh iputils-ping
EXPOSE 22
The image created from this Dockerfile is tagged ubuntu:v2.
Then, using the commands below, I created two containers, u1 and u2:
docker run -p 22 --rm -ti --name u1 ubuntu:v2 bash
docker run -p 22 --rm -ti --name u2 ubuntu:v2 bash
Inside each container I run the commands below to create a user: u1 in the u1 container and u2 in the u2 container.
root@d0b0e44f7517:/# mkdir /var/run/sshd
root@d0b0e44f7517:/# chmod 0755 /var/run/sshd
root@d0b0e44f7517:/# /usr/sbin/sshd
root@d0b0e44f7517:/#
root@d0b0e44f7517:/# useradd --create-home --shell /bin/bash --groups sudo u2
root@d0b0e44f7517:/# passwd u2
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
root@d0b0e44f7517:/#
root@d0b0e44f7517:/#
I made two containers that are identical except that one has user u1 and the other has user u2, as shown above. After this, I tried to SSH from the host to a container using the command ssh -X u2@localhost -p 32773 (32773 is the port mapped to the container's port 22). So SSH works from the host to a container, but I am not able to SSH from one container to another. Can you help me SSH from one container to the other containers?
Use Docker's service discovery, and then you can SSH from one container to another. You get service discovery by connecting all the containers to the same user-defined network:
docker network create -d bridge test
docker run -p 22 --rm -ti --name u1 --network test ubuntu:v2 bash
docker run -p 22 --rm -ti --name u2 --network test ubuntu:v2 bash
Now, from u1, you can SSH into u2 with ssh user@u2.
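On a user-defined network, Docker's embedded DNS resolves container names, which you can check before SSHing (u1 and u2 are the containers started above; ping is available because the Dockerfile installs iputils-ping):
docker exec -it u1 ping -c 1 u2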
Log in to the Docker containers:
docker exec -it u1 /bin/bash
docker exec -it u2 /bin/bash
After logging in to a container, run the commands below to install the tools required for SSH:
passwd  # change the container's password; it will be asked for during SSH
apt-get update
apt-get install vim
apt-get install openssh-client openssh-server
vi /etc/ssh/sshd_config
Change the line to PermitRootLogin yes.
service ssh restart
Now you can SSH to any container as root@container_ip.
Note: to get a container's IP, you can run the command below:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name>
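As a convenience, the lookup and the SSH call can be combined into one line (a sketch, assuming the u2 container from above is running):
ssh root@$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' u2)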
I am running Jenkins in a Docker container. When spinning up a node in another Docker container, I receive this message:
[11/18/16 20:46:21] [SSH] Opening SSH connection to 192.168.99.100:32826.
ERROR: Server rejected the 1 private key(s) for Jenkins (credentialId:528bbe19-eb26-4c9f-bae3-82cd1247d50a/method:publickey)
[11/18/16 20:46:22] [SSH] Authentication failed.
hudson.AbortException: Authentication failed.
at hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:1217)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:711)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:706)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[11/18/16 20:46:22] Launch failed - cleaning up connection
[11/18/16 20:46:22] [SSH] Connection closed.
Using the docker exec -i -t slave_name /bin/bash command I am able to get into the home/jenkins/.ssh directory to confirm the ssh key is where it is expected to be.
Under the CLOUD heading on my configure page, Test Connection returns:
Version = 1.12.3, API Version = 1.24
I am running macOS Sierra and attempting to follow the Riot Games Jenkins-Docker tutorial: http://engineering.riotgames.com/news/building-jenkins-inside-ephemeral-docker-container.
Jenkins master Dockerfile:
FROM debian:jessie
# Create the jenkins user
RUN useradd -d "/var/jenkins_home" -u 1000 -m -s /bin/bash jenkins
# Create the folders and volume mount points
RUN mkdir -p /var/log/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
VOLUME ["/var/log/jenkins", "/var/jenkins_home"]
USER jenkins
CMD ["echo", "Data container for Jenkins"]
Jenkins slave Dockerfile:
FROM centos:7
# Install Essentials
RUN yum update -y && yum clean all
# Install Packages
RUN yum install -y git \
&& yum install -y wget \
&& yum install -y openssh-server \
&& yum install -y java-1.8.0-openjdk \
&& yum install -y sudo \
&& yum clean all
# gen dummy keys, centos doesn't autogen them.
RUN /usr/bin/ssh-keygen -A
# Set SSH Configuration to allow remote logins without /proc write access
RUN sed -ri 's/^session\s+required\s+pam_loginuid.so$/session optional \
pam_loginuid.so/' /etc/pam.d/sshd
# Create Jenkins User
RUN useradd jenkins -m -s /bin/bash
# Add public key for Jenkins login
RUN mkdir /home/jenkins/.ssh
COPY /files/authorized_keys /home/jenkins/.ssh/authorized_keys
RUN chown -R jenkins /home/jenkins
RUN chgrp -R jenkins /home/jenkins
RUN chmod 600 /home/jenkins/.ssh/authorized_keys
RUN chmod 700 /home/jenkins/.ssh
# Add the jenkins user to sudoers
RUN echo "jenkins ALL=(ALL) ALL" >> etc/sudoers
# Set Name Servers to avoid Docker containers struggling to route or resolve DNS names.
COPY /files/resolv.conf /etc/resolv.conf
# Expose SSH port and run SSHD
EXPOSE 22
CMD ["/usr/sbin/sshd","-D"]
I've been working with another individual doing the same tutorial on a Linux box who is stuck at the same place. Any help would be appreciated.
The problem you are running into probably has to do with interactive host-key verification. Try adding the following command to your slave's Dockerfile:
RUN ssh-keyscan -H 192.168.99.100 >> /home/jenkins/.ssh/known_hosts
Be sure to add it after you created the jenkins user, preferably after
USER jenkins
to avoid wrong ownership of the file.
Also make sure to do this while the master host is online, otherwise it will tell you the host is unreachable. If you can't, run the scan manually, take the resulting known_hosts file from the slave, and copy it into your slave image.
You can verify this: if you attach a console to the Docker slave and SSH to the master, it will ask you to trust the server and add it to known_hosts.
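A minimal sketch of that manual-copy fallback, mirroring the authorized_keys COPY in the slave Dockerfile above (the /files/known_hosts path is an assumption):
COPY /files/known_hosts /home/jenkins/.ssh/known_hosts
RUN chown jenkins:jenkins /home/jenkins/.ssh/known_hosts && chmod 644 /home/jenkins/.ssh/known_hosts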
I'd like to create the following infrastructure flow:
How can that be achieved using Docker?
First, you need to install an SSH server in the images you wish to SSH into. You can use one base image, with the SSH server installed, for all your containers.
Then you only have to run each container, mapping the SSH port (default 22) to one of the host's ports (the Remote Server in your diagram), using -p <hostPort>:<containerPort>. For example:
docker run -p 52022:22 container1
docker run -p 53022:22 container2
Then, if the host's ports 52022 and 53022 are accessible from outside, you can SSH directly to the containers using the host's (Remote Server's) IP, specifying the port with -p <port>. For example:
ssh -p 52022 myuser@RemoteServer --> SSH to container1
ssh -p 53022 myuser@RemoteServer --> SSH to container2
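A minimal sketch of such a base image (the myuser name and the changeme password are placeholders):
FROM ubuntu:16.04
# Install the SSH server and create its runtime directory
RUN apt-get update && apt-get install -y openssh-server && mkdir /var/run/sshd
# Placeholder login user; change the name and password for real use
RUN useradd -m -s /bin/bash myuser && echo 'myuser:changeme' | chpasswd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]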
Notice: this answer promotes a tool I've written.
The selected answer here suggests installing an SSH server in every image. Conceptually this is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/).
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container. The only requirement is that the container has bash.
The following example would start an SSH server exposed on port 2222 of the local machine.
$ docker run -d -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock \
-e CONTAINER=my-container -e AUTH_MECHANISM=noAuth \
jeroenpeeters/docker-ssh
$ ssh -p 2222 localhost
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
Not only does this defeat the idea of one process per container, it is also a cumbersome approach when using images from the Docker Hub since they often don't (and shouldn't) contain an SSH server.
These files will successfully start sshd and run the service so you can SSH in locally. (You are using Cyberduck, aren't you?)
Dockerfile
FROM swiftdocker/swift
MAINTAINER Nobody
RUN apt-get update && apt-get -y install openssh-server supervisor
RUN mkdir /var/run/sshd
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
supervisord.conf
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
To build, run (starting the daemon), and jump into a shell:
docker build -t swift3-ssh .
docker run -p 2222:22 -i -t swift3-ssh
docker ps # find container id
docker exec -i -t <containerid> /bin/bash
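With the container running, SSH should then work from the host using the root password set in the Dockerfile above (password):
ssh root@localhost -p 2222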
I guess it is possible. You just need to install an SSH server in each container and expose a port on the host. The main annoyance would be maintaining/remembering the mapping of ports to containers.
However, I have to question why you'd want to do this. SSHing into containers should be rare enough that it's not a hassle to SSH to the host and then use docker exec to get into the container.
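For example (the host address and container name here are placeholders):
ssh user@docker-host
docker exec -it container1 /bin/bash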
Create a Docker image with openssh-server preinstalled:
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image using:
$ docker build -t eg_sshd .
Run a test_sshd container:
$ docker run -d -P --name test_sshd eg_sshd
$ docker port test_sshd 22
0.0.0.0:49154
SSH to your container:
$ ssh root@192.168.1.2 -p 49154
# The password is ``screencast``.
root@f38c87f2a42d:/#
Source: https://docs.docker.com/engine/examples/running_ssh_service/#build-an-eg_sshd-image
This is a quick way, but it is not permanent.
First, create a container:
docker run ..... -p 22022:2222 .....
Port 22022 on your host machine will map to port 2222 in the container (we change the SSH port inside the container later).
Then, in your container, execute the following commands:
apt update && apt install openssh-server # install ssh server
passwd #change root password
In /etc/ssh/sshd_config, change these settings:
Uncomment Port and change it to 2222:
Port 2222
Uncomment PermitRootLogin and set it to:
PermitRootLogin yes
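The same edits can be scripted instead of editing the file by hand (a sketch, assuming the stock sshd_config where these directives are commented out):
# Uncomment/replace the Port and PermitRootLogin directives in place
sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config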
Finally, start the SSH server:
/etc/init.d/ssh start
You can now log in to your container:
ssh -p 22022 root@HostIP
Remember: if you restart the container, you need to start the SSH server again.
I am having a weird problem.
I am not able to SSH to a Docker container with IP address 172.17.0.61.
I am getting the following error:
$ ssh 172.17.0.61
ssh: connect to host 172.17.0.61 port 22: Connection refused
My Dockerfile does contain an openssh-server installation step:
RUN apt-get -y install curl runit openssh-server
And also a step to start SSH:
RUN service ssh start
What could be the issue?
When I enter the container using nsenter and start the SSH service, I am able to SSH in. But the SSH server doesn't seem to start when the container is created.
What should I do?
Building a Dockerfile creates an image, and you can't create an image with an already-running SSH daemon or any other running service: a RUN instruction only changes the image's filesystem, it does not leave a process running. Only once you create a running container from the image can you start services inside it, e.g. by appending the start instruction to the docker run command:
sudo docker run -d mysshserver service ssh start
You can define a default command for your docker image with CMD. Here is an example Dockerfile:
FROM ubuntu:14.04.1
MAINTAINER Thomas Steinbach
EXPOSE 22
RUN apt-get update && apt-get install -y openssh-server
CMD service ssh start && while true; do sleep 3000; done
You can build and run this image with the following two commands:
sudo docker build -t sshtest .
sudo docker run -d -P --name ssht sshtest
Now you can connect to this container via SSH. Note that in the example Dockerfile no user and no login were created; the image is just an example, so you can open an SSH connection to it but not log in.
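Since -P publishes port 22 on a random host port, you can look the port up and then connect; replace <mapped_port> with the port reported (ssht is the container name from the run command above, and, as noted, authentication will still fail because no login was created):
sudo docker port ssht 22
ssh root@localhost -p <mapped_port>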
In my opinion there is a better approach:
Dockerfile
FROM ubuntu:14.04.1
EXPOSE 22
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN apt-get update && apt-get install -y openssh-server
ENTRYPOINT ["sh", "/docker-entrypoint.sh"]
# THIS PART WILL BE REPLACED IF YOU PASS SOME OTHER COMMAND TO docker RUN
CMD while true; do echo "default arg" && sleep 1; done
docker-entrypoint.sh
#!/bin/bash
service ssh restart
exec "$#"
Build command
docker build -t sshtest .
The benefit of this approach is that your SSH daemon will always start when you use docker run, but you can also pass an optional command, e.g.:
docker run sshtest will print "default arg" every second,
whereas docker run sshtest sh -c 'while true; do echo "passed arg" && sleep 3; done' will print "passed arg" every 3 seconds.
I had the same problem.
Luckily I could solve it by checking kenorb's answer and adapting it to my Dockerfile:
https://stackoverflow.com/a/61738823/4058295
It's worth a try :)