Can't ssh localhost within docker

I built a Docker image with SSH enabled from the following Dockerfile, using docker build -t debian-ssh:v00 .
FROM debian
WORKDIR /
RUN apt update && apt install -y openssh-server sudo
RUN sed -i "s/UsePAM yes/UsePAM no/g" /etc/ssh/sshd_config
RUN echo "root:123456" | chpasswd
RUN echo "root ALL=(ALL) ALL" >> /etc/sudoers
# RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
# RUN ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key
# RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key
RUN mkdir /run/sshd
# RUN mkdir /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
After building, I start a container with docker run -d --name ssh00 debian-ssh:v00. Then docker exec -it ssh00 bash followed by ssh localhost gives me this message:
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:sF5hbx2GTw/Fq3QhQyRJ2+YNwBFPy/Iu5c8PtgpU/ok.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
root@localhost's password:
Permission denied, please try again.
root@localhost's password:
Permission denied, please try again.
root@localhost's password:
root@localhost: Permission denied (publickey,password).
I typed the password 123456 above. Why did this happen?
I use Docker for Windows at the latest version (Docker Engine v20.10.2), still using the Hyper-V backend.
Update:
There used to be an official tutorial, "Dockerize an SSH service" (as of 2020), but that approach is now discouraged.

First, once in your Docker bash session, try changing the root password (again) with the passwd command: it will ask you for your old password (the one you put in the Dockerfile).
That way, you can double-check that the default container account (here root) does indeed have the password '123456'.
Second, try the same ssh command in verbose mode, to see if any clues are apparent:
ssh -vv localhost
If the password for root is correct, then check your /etc/ssh/sshd_config: if it has PermitRootLogin no, it will disallow any root session.
If that is the cause, you would need to modify your Dockerfile in order to amend /etc/ssh/sshd_config.
The OP Spaceship222 confirms in the discussion:
RUN echo "PermitRootLogin yes" >> /etc/ssh/sshd_config will make debian-based container work

This is purely an sshd configuration issue. By default, for security reasons, password authentication for the root account is disabled, so you have two options:
Change the configuration of the SSH daemon and allow password authentication for the root account (note: there is a reason root access is not allowed by default, so I would suggest you leave it this way)
Set up a public/private key pair and an authorized_keys file for the root account. I'm not sure how you want to use this container, but in general you simply add your public key to the /root/.ssh/authorized_keys file and you'll be fine.
For your particular case, if you really want to solve the problem with
ssh localhost
you can add one line to your Dockerfile that generates a public/private key pair and adds it to root's authorized_keys, OR you can run the same command after you first log in using docker exec.
Your altered Dockerfile (public/private key version)
FROM debian
WORKDIR /
RUN apt update && apt install -y openssh-server sudo
RUN sed -i "s/UsePAM yes/UsePAM no/g" /etc/ssh/sshd_config
RUN echo "root:123456" | chpasswd
RUN echo "root ALL=(ALL) ALL" >> /etc/sudoers
# RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
# RUN ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key
# RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key
RUN mkdir -p /root/.ssh && ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N "" && cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys
RUN mkdir /run/sshd
# RUN mkdir /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
OR simply run the same command in the container after you exec into bash:
mkdir -p /root/.ssh && ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N "" && cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys
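Once the key pair is in place, ssh from inside the container should no longer ask for a password. A quick check (StrictHostKeyChecking=accept-new needs OpenSSH 7.6 or later and only skips the interactive host-key question):
ssh -o StrictHostKeyChecking=accept-new root@localhost 'echo key-based login works'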
UPDATE:
You are using sed; if sed is not available in your base image, you first need to install it with apt. And if you want to build this container with PermitRootLogin yes, you need sed to change the /etc/ssh/sshd_config file.
Your altered Dockerfile (root password login allowed)
FROM debian
WORKDIR /
RUN apt update && apt install -y openssh-server sudo sed
RUN sed -i "s/UsePAM yes/UsePAM no/g" /etc/ssh/sshd_config && sed -i "s/#PermitRootLogin prohibit-password/PermitRootLogin yes/g" /etc/ssh/sshd_config
RUN echo "root:123456" | chpasswd
RUN echo "root ALL=(ALL) ALL" >> /etc/sudoers
# RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
# RUN ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key
# RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key
RUN mkdir /run/sshd
# RUN mkdir /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
I hope this solves your problem fully.

Related

Docker entrypoint user switch

I am creating a Docker image to be used as a base for other applications. The requirements are:
the application must run as a non-root user
optionally, certificates must be loaded before executing the application
I created the following Dockerfile
FROM node:14.15.1-alpine3.11
# Specify node/npm related envs
ENV NPM_CONFIG_LOGLEVEL=warn \
NO_UPDATE_NOTIFIER=1
# Change cwd for next commands
WORKDIR /home/node/code
# Set local registry
RUN echo "registry=http://192.168.100.175:4873" > /home/node/.npmrc && \
chown -R node:node /home/node && \
apk add --update --no-cache tzdata=2021a-r0 ca-certificates=20191127-r2 su-exec=0.2-r1
# Need root to update CA certificates in entrypoint.sh and then switch back to restricted user
USER root
COPY entrypoint.sh entrypoint.sh
ENTRYPOINT [ "./entrypoint.sh" ]
# Execute the service entrypoint
CMD ["sh"]
and entrypoint.sh
#!/bin/sh
DIR_CRT="/home/node/certificates"
if [ "$(ls -A ${DIR_CRT})" ]; then
cp -r "${DIR_CRT}/." /usr/local/share/ca-certificates/
update-ca-certificates
echo "******* Updated CA certificates *******"
fi
exec su-exec node "$#"
This seems to cover the requirements, but I noticed that if I open a shell inside the image it always runs as the node user, even if I specify a different one via parameters:
$ docker run --rm -it -u root docker.repo.asts.com/scc-2.0/app-tg:1.6.0-beta50 whoami
node
Is it possible to keep both requirements and still be able to execute a direct command as the requested user?
docker run -u xxx works only if you did not use exec in entrypoint to change PID1. E.g.
$ docker run --rm -it node:14.15.1-alpine3.11 whoami
root
$ docker run --rm -u node -it node:14.15.1-alpine3.11 whoami
node
After you use exec su-exec node "$@" to change the user to node, you won't have a way to use -u xxx again. The only solution is to override the entrypoint, as shown next, but I don't see much point in doing that here:
docker run --rm -u root --entrypoint=/bin/sh xxx
But you could still use docker exec -u root or docker exec -u node to get a shell for that user in an existing container.
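If you also want docker run to be able to choose the user, one possible workaround (a hypothetical sketch, not part of the original image; RUN_AS is an invented variable) is to make the entrypoint read the target user from an environment variable instead of hard-coding node:
#!/bin/sh
# Hypothetical variant of entrypoint.sh: pick the runtime user from RUN_AS,
# defaulting to node, since exec su-exec fixes the user before CMD runs
# and docker run -u is ignored afterwards.
TARGET_USER="${RUN_AS:-node}"
if [ "$TARGET_USER" = "root" ]; then
    exec "$@"
fi
exec su-exec "$TARGET_USER" "$@"
With this entrypoint, docker run --rm -e RUN_AS=root <image> whoami would print root, while the default still drops to node.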

SSH Permission denied (publickey,password) - container docker ubuntu 18.04

I've installed Docker on my Windows 10 machine and I'm using WSL1 to create the Dockerfile and to build and run containers, but I cannot connect via SSH; I get Permission denied (publickey,password).
My dockerfile is:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
My docker ps is :
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b41411ef7a8a eg_sshd "/usr/sbin/sshd -D" 4 minutes ago Up 4 minutes 0.0.0.0:32768->22/tcp test_sshd
The ssh port is this :
➜ root$ docker port test_sshd 22
0.0.0.0:32768
When I try to connect via SSH I get "Permission denied":
➜ root$ ssh root@0.0.0.0 -p 32768
root@0.0.0.0: Permission denied (publickey,password).
The ssh service is up
➜ root$ docker exec b41411ef7a8a service ssh status
* sshd is running
What am I doing wrong? I have no idea.
The problem is in this line:
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
because the original line is:
#PermitRootLogin prohibit-password
So sed runs, but the option remains commented out. No doubt you know how to fix this, but just in case: the solution is to include the # in the matching part:
RUN sed -Ei 's/#(PermitRootLogin).+/\1 yes/' /etc/ssh/sshd_config
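To confirm the substitution really took effect in the built image, one quick check (reusing the eg_sshd tag from the question) could be:
docker run --rm eg_sshd grep -E '^#?PermitRootLogin' /etc/ssh/sshd_config
# expected output after the fixed sed: PermitRootLogin yes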
By the way, you usually do not need an SSH server in a container just to get inside it. You can open a shell inside a container with docker exec -it <container> sh or (for Kubernetes) kubectl exec -it <pod_name> sh.

Enabling ssh at docker build time

Docker version 17.11.0-ce, build 1caf76c
I need to run Ansible at docker build time to build and deploy some Java projects to WildFly, so that when I run the image everything is already set up. However, Ansible needs SSH to localhost. So far I have been unable to make it work. I've tried different Docker images and ended up with phusion (https://github.com/phusion/baseimage-docker#login_ssh). What I have at the moment:
FROM phusion/baseimage
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd -d &
RUN ssh -tt root@127.0.0.1
CMD ["/bin/bash"]
But I still get
Step 11/12 : RUN ssh -tt root@127.0.0.1
---> Running in cf83f9906e55
ssh: connect to host 127.0.0.1 port 22: Connection refused
The command '/bin/sh -c ssh -tt root@127.0.0.1' returned a non-zero code: 255
Any suggestions what could be wrong? Is it even possible to achieve that?
RUN /usr/sbin/sshd -d &
That will run a process in the background using a shell. As soon as the shell that started the process returns from running the background command, it exits with no more input, and the container used for that RUN command terminates. The only thing saved from a RUN is the change to the filesystem. You do not save running processes, environment variables, or shell state.
Something like this may work, but you may also need a sleep command to give sshd time to finish starting.
RUN /usr/sbin/sshd -d & \
ssh -tt root@127.0.0.1
I'd personally look for another way to do this without sshd during the build. This feels very kludgy and error prone.
There are multiple problems in that Dockerfile
First of all, you can't start a background process in one RUN statement and expect it to still be running in another RUN. Each statement of a Dockerfile runs in a different container, so processes don't persist between them.
Another issue was that 127.0.0.1 is not in known_hosts.
And finally, you must give sshd some time to start.
Here is a working Dockerfile:
FROM phusion/baseimage
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN printf "Host 127.0.0.1\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd & sleep 5 && ssh -tt root@127.0.0.1 'ls -al'
CMD ["/bin/bash"]
Anyway, I would rather find another solution than provisioning your image with Ansible in the Dockerfile. Check out ansible-container.
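If the only reason for sshd at build time is Ansible, another option worth a look (a sketch, not tied to the phusion image; the debian base and playbook.yml are assumed names) is to run Ansible with its local connection plugin during the build, which needs no SSH at all:
FROM debian
RUN apt-get update && apt-get install -y ansible
# copy your playbook into the image (playbook.yml is an assumed file name)
COPY playbook.yml /tmp/playbook.yml
# "-i localhost," is a literal one-host inventory; "-c local" runs the tasks
# directly in the build container instead of over SSH
RUN ansible-playbook -i localhost, -c local /tmp/playbook.yml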

Unable to find user root: no matching entries in passwd file in Docker

I have containers for multiple Atlassian products: JIRA, Bitbucket and Confluence. When I try to access the running containers I usually use:
docker exec -it -u root ${DOCKER_CONTAINER} bash
This normally works, but after running a script to extract and compress log files, I can no longer access one particular container.
Excerpt from the 'clean up script'
This is the first point of failure, and the script is running once each week (scheduled by Jenkins).
docker cp ${CLEAN_UP_SCRIPT} ${DOCKER_CONTAINER}:/tmp/${CLEAN_UP_SCRIPT}
if [ $? -eq 0 ]; then
docker exec -it -u root ${DOCKER_CONTAINER} bash -c "cd ${LOG_DIR} && /tmp/compressOldLogs.sh ${ARCHIVE_FILE}"
fi
When the script executes these two lines towards the Bitbucket container the result is:
unable to find user root: no matching entries in passwd file
It fails on the docker cp command, but only for the Bitbucket container. After the script has run, the container is inaccessible with both the 'bitbucket' user (defined in the Dockerfile) and 'root'.
I was able to copy /etc/passwd out of the container, and it contains all of the users as expected. When trying to access by uid, I get the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: process_linux.go:75: starting setns process caused "fork/exec /proc/self/exe: no such file or directory"
Dockerfile for Bitbucket image:
FROM java:openjdk-8-jre
ENV BITBUCKET_HOME /var/atlassian/application-data/bitbucket
ENV BITBUCKET_INSTALL_DIR /opt/atlassian/bitbucket
ENV BITBUCKET_VERSION 4.12.0
ENV DOWNLOAD_URL https://downloads.atlassian.com/software/stash/downloads/atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz
ARG user=bitbucket
ARG group=bitbucket
ARG uid=1000
ARG gid=1000
RUN mkdir -p $(dirname $BITBUCKET_HOME) \
&& groupadd -g ${gid} ${group} \
&& useradd -d "$BITBUCKET_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN mkdir -p ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_HOME}/shared \
&& chmod -R 700 ${BITBUCKET_HOME} \
&& chown -R ${user}:${group} ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_INSTALL_DIR}/conf/Catalina \
&& curl -L --silent ${DOWNLOAD_URL} | tar -xz --strip=1 -C "$BITBUCKET_INSTALL_DIR" \
&& chmod -R 700 ${BITBUCKET_INSTALL_DIR}/ \
&& chown -R ${user}:${group} ${BITBUCKET_INSTALL_DIR}/
${BITBUCKET_INSTALL_DIR}/bin/setenv.sh
USER ${user}:${group}
EXPOSE 7990
EXPOSE 7999
WORKDIR $BITBUCKET_INSTALL_DIR
CMD ["bin/start-bitbucket.sh", "-fg"]
Additional info:
Docker version 1.12.0, build 8eab29e
docker-compose version 1.8.0, build f3628c7
All containers are running at all times; even Bitbucket works as usual after the issue occurs
The issue disappears after a restart of the container
You can use this command to access the container as the root user:
docker exec -u 0 -i -t {container_name_or_hash} /bin/bash
Try debugging with that. I think the script may have removed or disabled the root user.
This issue is caused by a Docker Engine bug which is tracked privately; Docker is asking users to restart the engine!
It seems the bug is likely more than two years old!
https://success.docker.com/article/ucp-health-checks-fail-unable-to-find-user-nobody-no-matching-entries-in-passwd-file-observed
https://forums.docker.com/t/unable-to-find-user-root-no-matching-entries-in-passwd-file/26545/7
... what can I say, someone is doing his best to get more funding.
It's a long-standing issue; I have replicated it from my old version 1.10.3 up to at least 1.17.
As mentioned by @sorin, the Docker forum says that running docker stop and then docker start fixes the problem, but that is hardly a long-term solution...
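In script form, that stop/start workaround is simply (reusing the ${DOCKER_CONTAINER} variable from the clean-up script above):
docker stop ${DOCKER_CONTAINER} && docker start ${DOCKER_CONTAINER}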
The docker exec -u 0 -i -t {container_name_or_hash} /bin/bash solution from the same forum post, mentioned here by @ObranZoltan, might work for you, but it does not work for many. See my output below:
$ sudo docker exec -u 0 -it berserk_nobel /bin/bash
exec: "/bin/bash": stat /bin/bash: input/output error

How to SSH into Docker?

I'd like to create the following infrastructure flow:
How can that be achieved using Docker?
First, you need to install an SSH server in the images you wish to SSH into. You can use a single base image, with the SSH server installed, for all your containers.
Then you only have to run each container mapping the SSH port (default 22) to one of the host's ports (the Remote Server in your diagram), using -p <hostPort>:<containerPort>. E.g.:
docker run -p 52022:22 container1
docker run -p 53022:22 container2
Then, if ports 52022 and 53022 of the host are accessible from outside, you can SSH directly to the containers using the IP of the host (Remote Server), specifying the port to ssh with -p <port>. E.g.:
ssh -p 52022 myuser@RemoteServer --> SSH to container1
ssh -p 53022 myuser@RemoteServer --> SSH to container2
Notice: this answer promotes a tool I've written.
The selected answer here suggests installing an SSH server into every image. Conceptually this is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/).
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container. The only requirement is that the container has bash.
The following example would start an SSH server exposed on port 2222 of the local machine.
$ docker run -d -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock \
-e CONTAINER=my-container -e AUTH_MECHANISM=noAuth \
jeroenpeeters/docker-ssh
$ ssh -p 2222 localhost
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
Not only does this defeat the idea of one process per container, it is also a cumbersome approach when using images from the Docker Hub since they often don't (and shouldn't) contain an SSH server.
These files will successfully set up sshd and run the service so you can SSH in locally. (You are using Cyberduck, aren't you?)
Dockerfile
FROM swiftdocker/swift
MAINTAINER Nobody
RUN apt-get update && apt-get -y install openssh-server supervisor
RUN mkdir /var/run/sshd
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
supervisord.conf
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
To build, run (starting the daemon), and jump into a shell:
docker build -t swift3-ssh .
docker run -p 2222:22 -i -t swift3-ssh
docker ps # find container id
docker exec -i -t <containerid> /bin/bash
I guess it is possible. You just need to install an SSH server in each container and expose a port on the host. The main annoyance would be maintaining/remembering the mapping of port to container.
However, I have to question why you'd want to do this. SSH'ing into containers should be rare enough that it's not a hassle to SSH to the host and then use docker exec to get into the container.
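In practice that workflow is just two steps, for example with the host and container names used in the first answer:
ssh myuser@RemoteServer
docker exec -it container1 bash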
Create a Docker image with openssh-server preinstalled:
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image using:
$ docker build -t eg_sshd .
Run a test_sshd container:
$ docker run -d -P --name test_sshd eg_sshd
$ docker port test_sshd 22
0.0.0.0:49154
Ssh to your container:
$ ssh root@192.168.1.2 -p 49154
# The password is ``screencast``.
root@f38c87f2a42d:/#
Source: https://docs.docker.com/engine/examples/running_ssh_service/#build-an-eg_sshd-image
This is a quick way, but it is not permanent.
First, create a container:
docker run ..... -p 22022:2222 .....
Port 22022 on your host machine will map to port 2222; we change the SSH port in the container later.
Then, inside the container, execute the following commands:
apt update && apt install openssh-server # install ssh server
passwd #change root password
In /etc/ssh/sshd_config, change these settings:
uncomment Port and change it to 2222
Port 2222
uncomment PermitRootLogin and set it to
PermitRootLogin yes
and finally start the SSH server:
/etc/init.d/ssh start
you can login to your container now
ssh -p 22022 root@HostIP
Remember: if you restart the container, you need to start the SSH server again.
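A small sketch of how that could be scripted (the container name here is illustrative):
docker start mycontainer
docker exec mycontainer /etc/init.d/ssh start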
