Set up a Docker container with an SSH server? - docker

I want to set up a very minimalistic Alpine Linux Docker container with the following capabilities:
It runs an SSH server
It copies over an SSH public key of my choice, with which I can then authenticate
I looked into various options and in the end decided to write my own small Dockerfile.
However, I ran into some problems.
Dockerfile
FROM alpine:latest
RUN apk update
RUN apk upgrade
RUN apk add openssh-server
RUN mkdir -p /var/run/sshd
RUN mkdir -p /root/.ssh
ADD authorized_keys /root/.ssh/authorized_keys
ADD entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
EXPOSE 22
CMD ["/entrypoint.sh"]
entrypoint.sh
#!/bin/sh
/usr/sbin/sshd -D
authorized_keys
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP8cIHIPgV9QoAaSsYNGHktiP/QnWfKPOeyzjujUXgQMQLBw3jJ1EBe04Lk3FXTMxwrKk3Dxq0VhJ+Od6UwzPDg=
Starting the container gives an error: sshd: no hostkeys available -- exiting.
How can I fix my Dockerfile to ensure the SSH server runs correctly and the authorized_keys file is in place? What am I missing?

In order to start, the SSH daemon does need host keys.
Those are not the keys you will use to connect to your container; they are the keys that identify this specific host.
A host key is a cryptographic key used for authenticating computers in the SSH protocol.
Source: https://www.ssh.com/ssh/host-key
So you have to generate some keys for the host; you can then safely ignore them if you do not actually intend to use them.
Generating those keys can be done via
ssh-keygen -A
So in your image, just adding a
RUN ssh-keygen -A
should do.
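Applied to the Dockerfile from the question, a minimal corrected version could look like this (a sketch; the chmod lines are an extra precaution so sshd's StrictModes checks pass):
FROM alpine:latest
# ssh-keygen -A generates the missing host keys so sshd can start
RUN apk update \
 && apk upgrade \
 && apk add openssh-server \
 && ssh-keygen -A
RUN mkdir -p /var/run/sshd /root/.ssh \
 && chmod 700 /root/.ssh
ADD authorized_keys /root/.ssh/authorized_keys
RUN chmod 600 /root/.ssh/authorized_keys
ADD entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
EXPOSE 22
CMD ["/entrypoint.sh"]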
For the record, here is my own sshd Alpine image:
FROM alpine
RUN apk add --no-cache \
        openssh \
        openssl \
    && ssh-keygen -A \
    && mkdir /root/.ssh \
    && chmod 0700 /root/.ssh \
    && echo "root:$(openssl rand 96 | openssl enc -A -base64)" | chpasswd \
    && ln -s /etc/ssh/ssh_host_ed25519_key.pub /root/.ssh/authorized_keys
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]
Extra notes:
I am reusing the SSH host keys generated by ssh-keygen -A and exposing them in a volume, which is the reason for the command:
ln -s /etc/ssh/ssh_host_ed25519_key.pub /root/.ssh/authorized_keys
Because this is just an Ansible node cluster lab, I SSH into this machine as the root user, which is why I need the quite insecure:
echo "root:$(openssl rand 96 | openssl enc -A -base64)" | chpasswd

Related

SSH into Azure web-app container running with non root user

I am running an Elastic and Kibana service within a container using an Azure Web App for Containers service. I was keen on checking the SSH connectivity for this container using Azure's Web SSH console feature. I followed the Microsoft documentation for SSH into custom containers https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#enable-ssh which shows the example of running the container as the default root user.
My issue is that the Elasticsearch process does not run as the root user, so I had to make the sshd process run as the elastic user. I was able to get the sshd process running and it accepts the SSH connection from my host; however, the credentials I am setting in the Dockerfile (elasticsearch:Docker!) are throwing an Access Denied error. Any idea where I am going wrong here?
Dockerfile
FROM openjdk:jre-alpine
ARG ek_version=6.5.4
RUN apk add --quiet --no-progress --no-cache nodejs wget \
    && adduser -D elasticsearch \
    && apk add openssh \
    && echo "elasticsearch:Docker!" | chpasswd
COPY startup.sh /home/elasticsearch/
RUN chmod +x /home/elasticsearch/startup.sh && \
    chown elasticsearch /home/elasticsearch/startup.sh
# Copy the sshd_config file into the elasticsearch user's home directory
COPY sshd_config /home/elasticsearch/
USER elasticsearch
WORKDIR /home/elasticsearch
ENV ES_TMPDIR=/home/elasticsearch/elasticsearch.tmp ES_DATADIR=/home/elasticsearch/elasticsearch/data
RUN wget -q -O - https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-${ek_version}.tar.gz \
        | tar -zx \
    && mv elasticsearch-${ek_version} elasticsearch \
    && mkdir -p ${ES_TMPDIR} ${ES_DATADIR} \
    && wget -q -O - https://artifacts.elastic.co/downloads/kibana/kibana-oss-${ek_version}-linux-x86_64.tar.gz \
        | tar -zx \
    && mv kibana-${ek_version}-linux-x86_64 kibana \
    && rm -f kibana/node/bin/node kibana/node/bin/npm \
    && ln -s $(which node) kibana/node/bin/node \
    && ln -s $(which npm) kibana/node/bin/npm
EXPOSE 9200 5601 2222
ENTRYPOINT ["/home/elasticsearch/startup.sh"]
startup.sh script
#!/bin/sh
# Generating hostkey
ssh-keygen -f /home/elasticsearch/ssh_host_rsa_key -N '' -t rsa
# starting sshd process
echo "Starting SSHD"
/usr/sbin/sshd -f sshd_config
# Starting the ES stack
echo "Starting ES"
sh elasticsearch/bin/elasticsearch -E http.host=0.0.0.0 & kibana/bin/kibana --host 0.0.0.0
sshd_config file
Port 2222
HostKey /home/elasticsearch/ssh_host_rsa_key
PidFile /home/elasticsearch/sshd.pid
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
The error I am getting is the Access Denied error described above.
Please check and verify that your Docker image supports SSH. It would appear that you have done everything correctly, so one of the final troubleshooting steps left at this point is to verify that your image supports SSH to begin with.

How do I configure umask in alpine based docker container

I have a Java application that runs in Docker, based on the cut-down Alpine distribution. I want umask to be set to 0000 so that all files created by the application in the configured volume /music are accessible to all users.
The last thing the Dockerfile does is run a script that starts the application
CMD /opt/songkong/songkongremote.sh
This file contains the following
umask 0000
java -XX:MaxRAMPercentage=60 \
    -Dcom.mchange.v2.log.MLog=com.mchange.v2.log.jdk14logging.Jdk14MLog \
    -Dorg.jboss.logging.provider=jdk \
    -Djava.util.logging.config.class=com.jthink.songkong.logging.StandardLogging \
    --add-opens java.base/java.lang=ALL-UNNAMED \
    -jar lib/songkong-6.9.jar -r
The application runs, but in the docker container logs I see the following is output to stdout
/opt/songkong/songkongremote.sh: umask: line 1: illegal mode: 0000
indicating the umask command did not work, which I do not understand since that is a valid value for umask. (I also tried umask 000 and that failed with the same error.)
I also tried adding
#!/bin/sh
as the first line to the file, but then Docker complained it could not find /bin/sh.
Full Dockerfile is:
FROM adoptopenjdk/openjdk11:alpine-jre
RUN apk --no-cache add \
        ca-certificates \
        curl \
        fontconfig \
        msttcorefonts-installer \
        tini \
    && update-ms-fonts \
    && fc-cache -f
RUN mkdir -p /opt \
    && curl http://www.jthink.net/songkong/downloads/build1114/songkong-linux-docker.tgz?val=121 | tar -C /opt -xzf - \
    && find /opt/songkong -perm /u+x -type f -print0 | xargs -0 chmod a+x
EXPOSE 4567
ENTRYPOINT ["/sbin/tini"]
# Config, License, Logs, Reports and Internal Database
VOLUME /songkong
# Music folder should be mounted here
VOLUME /music
WORKDIR /opt/songkong
CMD /opt/songkong/songkongremote.sh
Your /opt/songkong/songkongremote.sh script has what look like non-Linux newlines (Windows?).
You can view it by running:
$ docker run --rm -it your-image-name vi /opt/songkong/songkongremote.sh
That is also the reason the #!/bin/sh line did not work: it probably looked like #!/bin/sh^M as well.
You have carriage return characters in your script file:
umask 0000^M
java -XX:MaxRAMPercentage=60 -Dcom.mchange.v2.log.MLog=com.mchange.v2.log.jdk14logging.Jdk14MLog -Dorg.jboss.logging.provider=jdk -Djava.util.logging.config.class=com.jthink.songkong.logging.StandardLogging --add-opens java.base/java.lang=ALL-UNNAMED -jar lib/songkong-6.9.jar -r^M
^M
You can add RUN sed -i -e 's/\r//g' /opt/songkong/songkongremote.sh to the Dockerfile or, better, recreate the script with Unix line endings.
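Alternatively, a dedicated tool can do the conversion; a sketch, assuming the dos2unix package is available for your Alpine base:
RUN apk add --no-cache dos2unix \
 && dos2unix /opt/songkong/songkongremote.sh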

How to access docker daemon from container with other user than root

I'm trying to run a Jenkins container that builds Docker images. I started with Docker last week and I'm a bit confused by the use of volumes from the host and by how users are handled.
I've been searching on the internet and found a git issue where someone posted a solution to get access to the Docker daemon from the container. Basically, the idea is to mount inside the Jenkins container the volumes that contain the Docker bin folder and the docker.sock from the host, like this:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /usr/local/bin/docker:/usr/local/bin/docker
I've done that and it works, but only if I'm root. When I started to learn Docker, I followed the example in a blog where, instead of directly using a Jenkins image, the author copied the Dockerfiles from the Jenkins image itself and its dependencies to explain the process. As part of the process, a jenkins user is created, and it is the one used when starting the container.
My problem now is that I cannot make the jenkins user have access to the mounted docker.sock, as it belongs to root and the group docker on the host. I tried adding the docker group in the Dockerfile, but I still get a permission denied error from a Jenkins job when accessing docker.sock. If I inspect the mounted /var/run/docker.sock inside the container, I can see that docker.sock belongs to the group users instead of docker, so I don't know exactly what's going on when the directory is mounted. I haven't worked much with Linux, so my guess is that the docker group doesn't exist when the directory is mounted and a default group is used instead, but I may be completely wrong.
Another thing I still don't get: if I create a container specifically to be used as a Jenkins container and nothing else is supposed to run there, what is the purpose of creating a specific jenkins user? Is there any reason why I cannot directly use the root user?
This is the Dockerfile I use. Thanks.
FROM centos:7
# Yum workaround to stalled mirror
RUN sed -i -e 's/enabled=1/enabled=0/g' /etc/yum/pluginconf.d/fastestmirror.conf
RUN rm -f /var/lib/rpm/__*
RUN rpm --rebuilddb -v -v
RUN yum clean all
# see https://bugs.debian.org/775775
# and https://github.com/docker-library/java/issues/19#issuecomment-70546872
ENV CA_CERTIFICATES_JAVA_VERSION 20140324
RUN yum -v install -y \
        wget \
        zip \
        which \
        openssh-client \
        unzip \
        java-1.8.0-openjdk-devel \
        git \
    && yum clean all
#RUN /var/lib/dpkg/info/ca-certificates-java.postinst configure
# Install Tini
ENV TINI_VERSION 0.9.0
ENV TINI_SHA fa23d1e20732501c3bb8eeeca423c89ac80ed452
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static -o /bin/tini && chmod +x /bin/tini \
    && echo "$TINI_SHA /bin/tini" | sha1sum -c -
# SET Jenkins Environment Variables
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
ENV JENKINS_VERSION 2.22
ENV JENKINS_SHA 5b89b6967e7af8119c52c7e86223b47665417a22
ENV JENKINS_UC https://updates.jenkins-ci.org
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
# SET Java variables
ENV JAVA_HOME /usr/lib/jvm/java/jre
ENV PATH /usr/lib/jvm/java/bin:$PATH
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN useradd -d "$JENKINS_HOME" -u 1000 -m -s /bin/bash jenkins
#Not working. Folder not yet mounted?
#RUN DOCKER_GID=$(stat -c '%g' /var/run/docker.sock) && \
#Using gid from host
RUN groupadd -for -g 50 docker && \
    usermod -aG docker jenkins
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
# Install Jenkins
RUN curl -fL http://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war -o /usr/share/jenkins/jenkins.war \
    && echo "$JENKINS_SHA /usr/share/jenkins/jenkins.war" | sha1sum -c -
ENV JAVA_OPTS="-Xmx8192m"
ENV JENKINS_OPTS="--logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"
# Prep Jenkins Directories
RUN chown -R jenkins "$JENKINS_HOME" /usr/share/jenkins/ref
RUN mkdir /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
RUN chown -R jenkins:jenkins /var/cache/jenkins
# Expose Ports for web and slave agents
EXPOSE 8080
EXPOSE 50000
# Copy in local config files
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
COPY jenkins.sh /usr/local/bin/jenkins.sh
COPY plugins.sh /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/jenkins.sh
# Install default plugins
COPY plugins.txt /tmp/plugins.txt
RUN /usr/local/bin/plugins.sh /tmp/plugins.txt
# Add ssh key
RUN eval "$(ssh-agent -s)"
RUN mkdir /usr/share/jenkins/ref/.ssh && \
    chmod 700 /usr/share/jenkins/ref/.ssh && \
    ssh-keyscan github.com > /usr/share/jenkins/ref/.ssh/known_hosts
COPY id_rsa /usr/share/jenkins/ref/.ssh/id_rsa
COPY id_rsa.pub /usr/share/jenkins/ref/.ssh/id_rsa.pub
COPY hudson.tasks.Maven.xml /usr/share/jenkins/ref/hudson.tasks.Maven.xml
RUN chown -R jenkins:jenkins /usr/share/jenkins/ref && \
    chmod 600 /usr/share/jenkins/ref/.ssh/id_rsa && \
    chmod 600 /usr/share/jenkins/ref/.ssh/id_rsa.pub && \
    chmod 600 /usr/share/jenkins/ref/hudson.tasks.Maven.xml
COPY id_rsa /root/.ssh/id_rsa
COPY id_rsa.pub /root/.ssh/id_rsa.pub
# ssh keys for root. To use root as the user
RUN chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub && \
    ssh-keyscan github.com > /root/.ssh/known_hosts
# Switch to the jenkins user
USER jenkins
# Tini as the entry point to manage zombie processes
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
Apparently the issue was the gid. For some reason I thought the gid of the docker group on the host was 50, but it was actually 100. When I changed it to 100, the Jenkins job started to work.
I still don't know why docker.sock shows it belongs to the group users instead of docker in the container, though; presumably it is because users and docker both have gid 100 inside the container, and ls simply reports the first name in /etc/group that matches the gid. If I do cat /etc/group in the container I see
root:x:0:
...
users:x:100:
...
jenkins:x:1000:
docker:x:100:jenkins
and in the host
root:x:0:
lp:x:7:lp
nogroup:x:65534:
staff:x:50:docker
docker:x:100:docker
dockremap:x:101:dockremap
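For what it's worth, the commented-out stat approach from the Dockerfile can work if it runs at container start instead of build time, since the socket is only mounted at run time. A rough sketch of a root entrypoint that aligns the gid before dropping to the jenkins user (this requires removing the USER jenkins line; the script itself and the su-based privilege drop are my assumptions, untested):
#!/bin/bash
# Runs as root: read the gid of the mounted socket, make sure a
# group with that gid exists, add jenkins to it, then drop privileges.
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
getent group "$DOCKER_GID" >/dev/null || groupadd -g "$DOCKER_GID" docker
usermod -aG "$DOCKER_GID" jenkins
exec su -s /bin/bash -c /usr/local/bin/jenkins.sh jenkins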

docker container can't use `service sshd restart`

I am trying to build a hadoop Dockerfile.
In the build process, I added:
&& apt install -y openssh-client \
&& apt install -y openssh-server \
&& ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa \
&& cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys \
&& chmod 0600 ~/.ssh/authorized_keys \
&& sed -i '/\#AuthorizedKeysFile/ d' /etc/ssh/sshd_config \
&& echo "AuthorizedKeysFile ~/.ssh/authorized_keys" >> /etc/ssh/sshd_config \
&& /etc/init.d/ssh restart
I assumed that when I ran this container:
docker run -it --rm hadoop/tag bash
I would be able to:
ssh localhost
But I got an error:
ssh: connect to host localhost port 22: Connection refused
If I run this manually inside the container:
/etc/init.d/ssh restart
# or this
service ssh restart
Then I can connect, so I am thinking the sshd restart during the build didn't take effect.
I am using FROM java in the Dockerfile.
The build process only builds an image. Processes that are run at that time (using RUN) are no longer running after the build, and are not started again when a container is launched using the image.
What you need to do is get sshd to start at container runtime. The simplest way to do that is using an entrypoint script.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["whatever", "your", "command", "is"]
entrypoint.sh:
#!/bin/sh
# Start the ssh server
/etc/init.d/ssh restart
# Execute the CMD
exec "$#"
Rebuild the image using the above, and when you use it to start a container, it should start sshd before running your CMD.
You can also change the base image you start from to something like Phusion baseimage if you prefer. It makes it easy to start services like syslogd and sshd that you may want running in the container.
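To verify the change, rebuild and check from an interactive shell (image tag as in the question); roughly:
docker build -t hadoop/tag .
docker run -it --rm hadoop/tag bash
# inside the container, sshd should now already be running:
service ssh status
ssh localhost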

docker image - centos 7 > ssh service not found

I installed the centos 7 Docker image on my Ubuntu machine, but the ssh service is not found, so I can't run the service.
[root@990e92224a82 /]# yum install openssh-server openssh-clients
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
* base: mirror.dhakacom.com
* extras: mirror.dhakacom.com
* updates: mirror.dhakacom.com
Package openssh-server-6.6.1p1-31.el7.x86_64 already installed and latest version
Package openssh-clients-6.6.1p1-31.el7.x86_64 already installed and latest version
Nothing to do
[root@990e92224a82 /]# ss
ssh ssh-agent ssh-keygen sshd ssltap
ssh-add ssh-copy-id ssh-keyscan sshd-keygen
How can I remotely log in to the Docker container?
You have to add the following instructions to your Dockerfile.
RUN yum install -y sudo wget telnet openssh-server vim git ncurses-term
RUN useradd your_account
RUN mkdir -p /home/your_account/.ssh && chown -R your_account /home/your_account/.ssh/
# Create known_hosts
RUN touch /home/your_account/.ssh/known_hosts
COPY files/authorized_keys /home/your_account/.ssh/
COPY files/config /home/your_account/.ssh/
COPY files/pam.d/sshd /etc/pam.d/sshd
RUN touch /home/your_account/.ssh/environment
RUN chown -R your_account /home/your_account/.ssh
RUN chmod 400 -R /home/your_account/.ssh/*
RUN chmod 700 -R /home/your_account/.ssh/known_hosts
RUN chmod 700 /home/your_account/.ssh/environment
# Enable sshd
COPY files/sshd_config /etc/ssh/
RUN ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
# Add a account into sudoers and this account doesn't need to type his password
COPY files/sudoers /etc/
COPY files/start.sh /root/
I had to remove "pam_nologin.so" from the file /etc/pam.d/sshd, because after upgrading openssh-server to openssh-server-6.6.1p1-31.el7, pam_nologin.so disallows remote login for any user even when the file /etc/nologin does not exist.
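If you would rather script that removal than ship a patched file, a sed line in the Dockerfile should work (the pattern assumes the stock CentOS 7 /etc/pam.d/sshd):
RUN sed -i '/pam_nologin\.so/d' /etc/pam.d/sshd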
start.sh
#!/bin/bash
/usr/sbin/sshd -E /tmp/sshd.log
Start the centos container
docker run -d -t -p $(sshPort):22 --name $(containerName) $(imageName) /bin/bash
docker exec -d $(containerName) bash -c "sh /root/start.sh"
Log in to the container
ssh -p $(sshPort) your_account@$(dockerIp)
To extend @puritys' answer:
You could do this in the Dockerfile instead
Last in the file:
ENTRYPOINT /usr/sbin/sshd -E /tmp/sshd.log && /bin/bash
Then you will only need to run:
docker run -d -t -p $(sshPort):22 --name $(containerName) $(imageName) /bin/bash
