unable to run docker as non-root user? - jenkins

I have tried this post and it did NOT help.
I have created a jenkins user and added it to the docker group.
I have also switched the user in the Dockerfile (see below).
I started the container as follows:
docker run -u jenkins -d -t -p 8080:8080 -v /var/jenkins:/jenkins -P docker-registry:5000/bar/helloworld:001
The container starts fine, but when I look at the processes on the host, this is what I have:
root 13575 1 1 09:34 ? 00:05:56 /usr/bin/docker daemon -H fd://
root 28409 13575 0 16:13 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8080 -container-ip 172.17.0.2 -container-port 8080
The first one is the daemon, so I guess it is OK for it to be root.
But the second one shows as root, even though I started the container as the jenkins user (after switching to it with sudo su jenkins). Why does this process belong to root?
Here is my Dockerfile:
#copy jenkins war file to the container
ADD http://mirrors.jenkins-ci.org/war/1.643/jenkins.war /opt/jenkins.war
RUN chmod 644 /opt/jenkins.war
ENV JENKINS_HOME /jenkins
RUN useradd -d /home/jenkins -m -s /bin/bash jenkins
USER jenkins
ENV HOME /home/jenkins
WORKDIR /home/jenkins
# Maven settings
RUN mkdir .m2
ADD settings.xml .m2/settings.xml
ENTRYPOINT ["java", "-jar", "/opt/jenkins.war"]
EXPOSE 8080
CMD [""]
EDIT2
I am certain the container is running; I can attach to it.
I can also browse the Jenkins web UI, which is only possible if the container started without errors (Jenkins runs inside the container).
Here are the commands I ran inside the container:
ps -ef | grep java
jenkins 1 0 7 19:29 ? 00:00:28 java -jar /opt/jenkins.war
ls -l /jenkins
drwxr-xr-x 2 jenkins jenkins 4096 Jan 11 18:54 jobs
But from the host file system, I see that the newly created "jobs" directory shows as owned by user "admin":
ls -l /var/jenkins/
drwxr-xr-x 2 admin admin 4096 Jan 11 10:54 jobs
Inside the container, the Jenkins process (the war) is started by the "jenkins" user, yet once Jenkins starts, everything it writes to the host file system shows up under the "admin" user.
Here is my entire Dockerfile (NOTE: I don't use the one from here):
FROM centos:7
RUN yum install -y sudo
RUN yum install -y -q unzip
RUN yum install -y -q telnet
RUN yum install -y -q wget
RUN yum install -y -q git
ENV mvn_version 3.2.2
# get maven
RUN wget --no-verbose -O /tmp/apache-maven-$mvn_version.tar.gz http://archive.apache.org/dist/maven/maven-3/$mvn_version/binaries/apache-maven-$mvn_version-bin.tar.gz
# verify checksum
RUN echo "87e5cc81bc4ab9b83986b3e77e6b3095 /tmp/apache-maven-$mvn_version.tar.gz" | md5sum -c
# install maven
RUN tar xzf /tmp/apache-maven-$mvn_version.tar.gz -C /opt/
RUN ln -s /opt/apache-maven-$mvn_version /opt/maven
RUN ln -s /opt/maven/bin/mvn /usr/local/bin
RUN rm -f /tmp/apache-maven-$mvn_version.tar.gz
ENV MAVEN_HOME /opt/maven
# set shell variables for java installation
ENV java_version 1.8.0_11
ENV filename jdk-8u11-linux-x64.tar.gz
ENV downloadlink http://download.oracle.com/otn-pub/java/jdk/8u11-b12/$filename
# download java, accepting the license agreement
RUN wget --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" -O /tmp/$filename $downloadlink
# unpack java
RUN mkdir /opt/java-oracle && tar -zxf /tmp/$filename -C /opt/java-oracle/
ENV JAVA_HOME /opt/java-oracle/jdk$java_version
ENV PATH $JAVA_HOME/bin:$PATH
# configure symbolic links for the java and javac executables
RUN update-alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 20000 && update-alternatives --install /usr/bin/javac javac $JAVA_HOME/bin/javac 20000
# copy jenkins war file to the container
ADD http://mirrors.jenkins-ci.org/war/1.643/jenkins.war /opt/jenkins.war
RUN chmod 644 /opt/jenkins.war
ENV JENKINS_HOME /jenkins
#RUN useradd jenkins
#RUN chown -R jenkins:jenkins /home/jenkins
#RUN chmod -R 700 /home/jenkins
#USER jenkins
RUN useradd -d /home/jenkins -m -s /bin/bash jenkins
#RUN chown -R jenkins:jenkins /home/jenkins
USER jenkins
ENV HOME /home/jenkins
WORKDIR /home/jenkins
# Maven settings
RUN mkdir .m2
ADD settings.xml .m2/settings.xml
USER root
RUN chown -R jenkins:jenkins .m2
USER jenkins
ENTRYPOINT ["java", "-jar", "/opt/jenkins.war"]
EXPOSE 8080
CMD [""]

The second process
root 28409 13575 0 16:13 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8080 -container-ip 172.17.0.2 -container-port 8080
is NOT the process for your jenkins container but an internal process of the Docker engine to manage the network.
If, using the ps command, you cannot find the process which is supposed to run in your docker container, that means your docker container isn't running.
To ease figuring this out, start your container with the following command (adding --name test):
docker run --name test -u jenkins -d -t -p 8080:8080 -v /var/foo:/foo -P docker-registry:5000/bar/helloworld:001
Then type docker ps; you should see your container running. If not, type docker ps -a and you should see the exit code with which it crashed.
If you need to know why it crashed, display its logs with docker logs test.
To look for the Jenkins process that runs from the official Jenkins docker image, use the following command:
ps aux | grep java
EDIT
Why do the files seem to be owned by admin from the docker host's point of view?
In your docker image, the jenkins user has UID 1000. You can easily verify this with the following command: docker run --rm -u jenkins --entrypoint /bin/id docker-registry:5000/bar/helloworld:001
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
On your docker host, UID 1000 is for the admin user. You can verify this with id admin which in your case shows:
uid=1000(admin) gid=1000(admin) groups=1000(admin),10(wheel)
The users available in a Docker container are not the ones from the docker host. However, it can happen by coincidence that they share the same UID. This is why the ls -l command run on the docker host tells you the files are owned by the admin user.
In fact the files are owned by the user of UID 1000 which happens to be named admin on the docker host and jenkins on your docker image.
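If you want the ownership to line up predictably rather than by coincidence, one option is to create the jenkins user in the image with an explicitly chosen UID matching the host-side user that should own /var/jenkins. A minimal sketch (the UID/GID of 1000 here are only examples):
RUN groupadd -g 1000 jenkins \
 && useradd -d /home/jenkins -m -s /bin/bash -u 1000 -g 1000 jenkins
Files written to the bind-mounted /jenkins volume will then carry that UID on the host as well.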

Related

How to run a crontab job on elastic search docker image

I want to run a crontab job on an Elasticsearch Docker image, and here is my Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.6.0
ENV PATH=$PATH:/usr/share/elasticsearch/bin
RUN yum -y update
RUN yum -y install crontabs
RUN echo -e "root\nelasticsearch" > /etc/cron.allow
RUN echo "" >> /etc/cron.allow
RUN chmod -R 644 /etc/cron.d
RUN cat /etc/cron.allow
RUN chown -R elasticsearch /etc/cron.d
RUN chmod -R 755 /etc/cron.d
RUN chown -R elasticsearch /var/spool/cron
RUN chmod -R 744 /var/spool/cron
RUN chown -R elasticsearch /etc/crontab
RUN chmod -R 744 /etc/crontab
RUN chown -R elasticsearch /etc/cron.d
RUN chmod -R 744 /etc/cron.d
COPY ./purge.sh /usr/share/elasticsearch
RUN ls -l /etc/crontab
RUN ls -l /etc/cron.d
RUN touch /usr/share/elasticsearch/cron.log
ADD ./cron /etc/cron.d/cron_test
RUN chmod 0644 /etc/cron.d/cron_test
RUN cd /etc/cron.d && cat cron_test
RUN chown -R elasticsearch /etc/cron.d/cron_test
RUN ls -l /etc/cron.d/cron_test
RUN crontab /etc/cron.d/cron_test
RUN crontab -l
RUN cd /var/spool/cron && ls
USER elasticsearch
ENTRYPOINT elasticsearch
CMD crond start && pgrep cron && tail -f && tail -f /usr/share/elasticsearch/cron.log
EXPOSE 9200 9300
After building this Dockerfile and running the container, I get the output shown in the screenshot (omitted here).
In this step in the Dockerfile
RUN cd /var/spool/cron && ls
it shows only root, but how can I get the elasticsearch user in it?
My cron file, present locally:
*/1 * * * * echo "Hello world" >> /usr/share/elasticsearch/cron.log
*/1 * * * * elasticsearch /usr/share/elasticsearch/purge.sh
My purge.sh file:
curl -XPOST "http://localhost:9200/hydro_dashboard_index/_delete_by_query" -H 'Content-Type: application/json' -d'
{
"query": {
"range" : {
"query_service_entry_time" : {
"lt" : "now-14d"
}
}
}
}'
It's usually considered a better practice to run only one process in a container. Since the thing you're trying to run in cron is just making an HTTP request to elasticsearch, there's nothing about it that needs to run in the same container, or even in Docker at all.
If your host is running a standard Linux distribution with a standard cron daemon, the absolute easiest thing to do is just to stash this purge script somewhere on your host and run it via the host's cron service. If you know cron and elasticsearch are on the same host and you start the container with a -p 9200:9200 option to publish the standard elasticsearch HTTP port, the script should work unmodified.
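For example, a host-side crontab entry might look like this (the script path and log file here are hypothetical; adjust them to wherever you keep the script):
*/1 * * * * /opt/elasticsearch/purge.sh >> /var/log/es-purge.log 2>&1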
If absolutely everything must run in Docker, you might search Docker Hub for a prebuilt cron image (there are a couple, though none look especially actively maintained). You also might be able to use the minimal set of tools in the busybox image, though its documentation can be a little light. Still, the basic approach you'd need to take looks like this (a rough sketch follows the list):
Find or build a Docker image that contains only cron and curl – no Elasticsearch, no actual crontabs, just the programs themselves.
If you're manually docker running containers, docker network create some_network (with any name and default options), and run both the Elasticsearch and cron containers with --net some_network.
In the curl commands, use the docker run --name of the Elasticsearch container, or the name of its service in a Docker Compose services: block, as the hostname; localhost will always mean "this container".
Put the crontabs and support scripts in some directory on your host, and inject them into the cron container with the docker run -v option (that is, treat them as configuration).
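As a rough illustration of those steps, assuming a network named es_net, an Elasticsearch container named elasticsearch, and a hypothetical cron-plus-curl image called my-cron-curl:
docker network create es_net
docker run -d --name elasticsearch --net es_net -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:6.6.0
docker run -d --name purge-cron --net es_net -v /opt/cron:/etc/cron.d my-cron-curl
The crontab mounted into the cron container would then address Elasticsearch by container name instead of localhost:
*/1 * * * * curl -XPOST "http://elasticsearch:9200/hydro_dashboard_index/_delete_by_query" -H 'Content-Type: application/json' -d '{"query":{"range":{"query_service_entry_time":{"lt":"now-14d"}}}}'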

amc, aerospike are not recognized inside docker container

I have a Docker Ubuntu 16.04 image and I'm running an Aerospike server in it.
$ docker run -d -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 -p 8081:8081 --name aerospike aerospike/aerospike-server
The docker container is running successfully.
$ docker ps
CONTAINER ID   IMAGE                        COMMAND                CREATED          STATUS          PORTS                                                      NAMES
b0b4c63d7e22   aerospike/aerospike-server   "/entrypoint.sh asd"   36 seconds ago   Up 35 seconds   0.0.0.0:3000-3003->3000-3003/tcp, 0.0.0.0:8081->8081/tcp   aerospike
I've logged into the docker container
$ docker exec -it b0b4c63d7e22 bash
root@b0b4c63d7e22:/#
I have listed the directories -
root@b0b4c63d7e22:/# ls
bin boot core dev entrypoint.sh etc home lib lib64 media mnt opt
proc root run sbin srv sys tmp usr var
root@b0b4c63d7e22:/#
I changed the directory to bin folder and listed the commands
root@b0b4c63d7e22:/# cd bin
root@b0b4c63d7e22:/bin# ls
bash    dnsdomainname  ip          mount          readlink    systemctl                       touch         zegrep
cat     domainname     journalctl  mountpoint     rm          systemd                         true          zfgrep
chgrp   echo           kill        mv             rmdir       systemd-ask-password            umount        zforce
chmod   egrep          ln          netstat        run-parts   systemd-escape                  uname         zgrep
chown   false          login       networkctl     sed         systemd-inhibit                 uncompress    zless
cp      fgrep          loginctl    nisdomainname  sh          systemd-machine-id-setup        vdir          zmore
dash    findmnt        ls          pidof          sh.distrib  systemd-notify                  wdctl         znew
date    grep           lsblk       ping           sleep       systemd-tmpfiles                which
dd      gunzip         mkdir       ping6          ss          systemd-tty-ask-password-agent  ypdomainname
df      gzexe          mknod       ps             stty        tailf                           zcat
dir     gzip           mktemp      pwd            su          tar                             zcmp
dmesg   hostname       more        rbash          sync        tempfile                        zdiff
Then I want to check the service -
root@b0b4c63d7e22:/bin# service amc status
amc: unrecognized service
Aerospike's official Docker container does not run Aerospike Server as a daemon, but as a foreground process. You can see this in the official Dockerfile on GitHub.
AMC is not part of Aerospike's Docker Image. It is up to you to run AMC from the environment of your choosing.
Finally, since you have not created a custom aerospike.conf file, Aerospike Server will only respond to clients on the Docker internal network. The -p parameters are not sufficient in themselves to expose Aerospike's ports to clients; you'd also need to configure access-address if you want client access from outside of the Docker environment. Read more about Aerospike's networking at: https://www.aerospike.com/docs/operations/configure/network/general
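A rough sketch of the relevant service stanza in a custom aerospike.conf (the IP is a placeholder; check the exact directives against the documentation linked above):
network {
    service {
        address any
        port 3000
        access-address 192.168.1.100    # address external clients should connect to
    }
}
You would then mount this file into the container, e.g. with -v /path/to/aerospike.conf:/etc/aerospike/aerospike.conf.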
You can build your own Docker container for amc to connect to aerospike running on containers.
Here is a sample Dockerfile for AMC.
cat Dockerfile
FROM ubuntu:xenial
ENV AMC_VERSION 4.0.13
# Install AMC server
RUN \
apt-get update -y \
&& apt-get install -y wget python python-argparse python-bcrypt python-openssl logrotate net-tools iproute2 iputils-ping \
&& wget "https://www.aerospike.com/artifacts/aerospike-amc-community/${AMC_VERSION}/aerospike-amc-community-${AMC_VERSION}_amd64.deb" -O aerospike-amc.deb \
&& dpkg -i aerospike-amc.deb \
&& apt-get purge -y
# Expose Aerospike ports
#
# 8081 – amc port
#
EXPOSE 8081
# Execute the run script in foreground mode
ENTRYPOINT ["/opt/amc/amc"]
CMD [" -config-file=/etc/amc/amc.conf -config-dir=/etc/amc"]
#/opt/amc/amc -config-file=/etc/amc/amc.conf -config-dir=/etc/amc
# Docker build sample:
# docker build -t amctest .
# Docker run sample for running amc on port 8081
# docker run -tid --name amc -p 8081:8081 amctest
# and access through http://127.0.0.1:8081
Then you can build the image:
docker build -t amctest .
Sending build context to Docker daemon 50.69kB
Step 1/6 : FROM ubuntu:xenial
---> 2fa927b5cdd3
Step 2/6 : ENV AMC_VERSION 4.0.13
---> Using cache
---> edd6bddfe7ad
Step 3/6 : RUN apt-get update -y && apt-get install -y wget python python-argparse python-bcrypt python-openssl logrotate net-tools iproute2 iputils-ping && wget "https://www.aerospike.com/artifacts/aerospike-amc-community/${AMC_VERSION}/aerospike-amc-community-${AMC_VERSION}_amd64.deb" -O aerospike-amc.deb && dpkg -i aerospike-amc.deb && apt-get purge -y
---> Using cache
---> f916199044d8
Step 4/6 : EXPOSE 8081
---> Using cache
---> 06f7888c1721
Step 5/6 : ENTRYPOINT /opt/amc/amc
---> Using cache
---> bc39346cd94f
Step 6/6 : CMD -config-file=/etc/amc/amc.conf -config-dir=/etc/amc
---> Using cache
---> 8ae4300e7c7c
Successfully built 8ae4300e7c7c
Successfully tagged amctest:latest
and finally run it with port forwarding to port 8081:
docker run -tid --name amc -p 8081:8081 amctest
a07cdd8bf8cec6ba41ce068c01544920136a6905e7a05e9a2c315605f62edfce

Unable to find user root: no matching entries in passwd file in Docker

I have containers for multiple Atlassian products: JIRA, Bitbucket and Confluence. When I'm trying to access the running containers I usually use:
docker exec -it -u root ${DOCKER_CONTAINER} bash
With this command I'm able to access as usual, but after running a script to extract and compress log files, I can't access that one container anymore.
Excerpt from the 'clean up script'
This is the first point of failure, and the script is running once each week (scheduled by Jenkins).
docker cp ${CLEAN_UP_SCRIPT} ${DOCKER_CONTAINER}:/tmp/${CLEAN_UP_SCRIPT}
if [ $? -eq 0 ]; then
docker exec -it -u root ${DOCKER_CONTAINER} bash -c "cd ${LOG_DIR} && /tmp/compressOldLogs.sh ${ARCHIVE_FILE}"
fi
When the script executes these two lines towards the Bitbucket container the result is:
unable to find user root: no matching entries in passwd file
It's failing on the 'docker cp' command, but only towards the Bitbucket container. After the script has run, the container is inaccessible with both the 'bitbucket' user (defined in the Dockerfile) and 'root'.
I was able to copy /etc/passwd out of the container, and it contains all of the users as expected. When trying to access by uid, I get the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: process_linux.go:75: starting setns process caused "fork/exec /proc/self/exe: no such file or directory"
Dockerfile for Bitbucket image:
FROM java:openjdk-8-jre
ENV BITBUCKET_HOME /var/atlassian/application-data/bitbucket
ENV BITBUCKET_INSTALL_DIR /opt/atlassian/bitbucket
ENV BITBUCKET_VERSION 4.12.0
ENV DOWNLOAD_URL https://downloads.atlassian.com/software/stash/downloads/atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz
ARG user=bitbucket
ARG group=bitbucket
ARG uid=1000
ARG gid=1000
RUN mkdir -p $(dirname $BITBUCKET_HOME) \
&& groupadd -g ${gid} ${group} \
&& useradd -d "$BITBUCKET_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN mkdir -p ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_HOME}/shared \
&& chmod -R 700 ${BITBUCKET_HOME} \
&& chown -R ${user}:${group} ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_INSTALL_DIR}/conf/Catalina \
&& curl -L --silent ${DOWNLOAD_URL} | tar -xz --strip=1 -C "$BITBUCKET_INSTALL_DIR" \
&& chmod -R 700 ${BITBUCKET_INSTALL_DIR}/ \
&& chown -R ${user}:${group} ${BITBUCKET_INSTALL_DIR}/
${BITBUCKET_INSTALL_DIR}/bin/setenv.sh
USER ${user}:${group}
EXPOSE 7990
EXPOSE 7999
WORKDIR $BITBUCKET_INSTALL_DIR
CMD ["bin/start-bitbucket.sh", "-fg"]
Additional info:
Docker version 1.12.0, build 8eab29e
docker-compose version 1.8.0, build f3628c7
All containers are running at all times; even Bitbucket works as usual after the issue occurs
The issue disappears after a restart of the container
You can use this command to access the container as the root user:
docker exec -u 0 -i -t {container_name_or_hash} /bin/bash
Try debugging with that. I think the script may have removed or disabled the root user.
This issue is caused by a Docker engine bug which is tracked privately; Docker is asking users to restart the engine!
It seems that the bug is likely to be older than two years!
https://success.docker.com/article/ucp-health-checks-fail-unable-to-find-user-nobody-no-matching-entries-in-passwd-file-observed
https://forums.docker.com/t/unable-to-find-user-root-no-matching-entries-in-passwd-file/26545/7
... what can I say, someone is doing his best to get more funding.
It's a long-standing issue, replicated on my old version 1.10.3 up to at least 1.17.
As mentioned by @sorin, the Docker forum says running docker stop and then docker start fixes the problem, but that is hardly a long-term solution...
The docker exec -u 0 -i -t {container_name_or_hash} /bin/bash solution from the same forum post, mentioned here by @ObranZoltan, might work for you, but does not work for many. See my output below:
$ sudo docker exec -u 0 -it berserk_nobel /bin/bash
exec: "/bin/bash": stat /bin/bash: input/output error

Keeping Docker container alive running Java application

I'm having a recurring issue while trying to set up a Docker container so that it stays running.
Here is a sample of the Dockerfile that I am wanting to use:
RUN wget -O /usr/local/nexus-2.11.3-01-bundle.tar.gz http://www.sonatype.org/downloads/nexus-2.11.3-01-bundle.tar.gz
WORKDIR /usr/local
RUN tar xvzf /usr/local/nexus-2.11.3-01-bundle.tar.gz
RUN ln -s nexus-2.11.3-01 nexus
ENV NEXUS_HOME /usr/local/nexus
ENV RUN_AS_USER root
CMD ["/usr/local/nexus/bin/nexus", "start"]
EXPOSE 8081
Basically, when I build this and then run it, the container just dies, and a docker ps command shows that there are no running containers.
As far as I know (please correct me if I'm wrong...), the docker container should stay running so long as there's a process with a PID of 1. Would the previous commands use PID 1, and if so, how can I force the nexus start command to use it? Or just keep the container alive...
The output of docker logs nexus is:
****************************************
WARNING - NOT RECOMMENDED TO RUN AS ROOT
****************************************
Starting Nexus OSS...
Started Nexus OSS.
It seems to suggest that Nexus has started, but then again when I do a docker ps, I don't see it running.
If the process running with PID 1 exits, then the container is automatically stopped. You can look at the sonatype/nexus repository here, which uses the concept of a Launcher.
Here is how they are avoiding the container to exit:
...
RUN mkdir -p /opt/sonatype/nexus \
&& curl --fail --silent --location --retry 3 \
https://download.sonatype.com/nexus/professional-bundle/nexus-professional-${NEXUS_VERSION}-bundle.tar.gz \
| gunzip \
| tar x -C /tmp nexus-professional-${NEXUS_VERSION} \
&& mv /tmp/nexus-professional-${NEXUS_VERSION}/* /opt/sonatype/nexus/ \
&& rm -rf /tmp/nexus-professional-${NEXUS_VERSION}
RUN useradd -r -u 200 -m -c "nexus role account" -d ${SONATYPE_WORK} -s /bin/false nexus
...
EXPOSE 8081
WORKDIR /opt/sonatype/nexus
USER nexus
ENV CONTEXT_PATH /
ENV MAX_HEAP 768m
ENV MIN_HEAP 256m
ENV JAVA_OPTS -server -XX:MaxPermSize=192m -Djava.net.preferIPv4Stack=true
ENV LAUNCHER_CONF ./conf/jetty.xml ./conf/jetty-requestlog.xml
CMD java \
-Dnexus-work=${SONATYPE_WORK} -Dnexus-webapp-context-path=${CONTEXT_PATH} \
-Xms${MIN_HEAP} -Xmx${MAX_HEAP} \
-cp 'conf/:lib/*' \
${JAVA_OPTS} \
org.sonatype.nexus.bootstrap.Launcher ${LAUNCHER_CONF}
Since it is an open repository, you can directly refer to their repo, if you like.
A quick guess from the logs is that running /usr/local/nexus/bin/nexus start would start it as a daemon.
That would cause another process to spawn and the one that started the daemon would exit, terminating the container.
One solution is to start the process not as a daemon, but I couldn't find an option to do this in your nexus case.
Another is to use something like supervisord as the Docker CMD, and have it start your process.
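A minimal sketch of the supervisord approach, assuming a Debian/Ubuntu base image and assuming the bundled wrapper supports a foreground "console" mode (verify the exact command for your Nexus version):
# supervisord.conf
[supervisord]
nodaemon=true
[program:nexus]
command=/usr/local/nexus/bin/nexus console
# additions to the Dockerfile
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
Since supervisord itself stays in the foreground (nodaemon=true), it keeps PID 1 occupied and the container stays up.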

How to SSH into Docker?

I'd like to create the following infrastructure flow:
How can that be achieved using Docker?
Firstly, you need to install an SSH server in the images you wish to ssh into. You can use a base image with the SSH server installed for all your containers.
Then you only have to run each container, mapping the ssh port (default 22) to one of the host's ports (Remote Server in your image), using -p <hostPort>:<containerPort>. I.e.:
docker run -p 52022:22 container1
docker run -p 53022:22 container2
Then, if ports 52022 and 53022 of the host are accessible from outside, you can directly ssh to the containers using the IP of the host (Remote Server), specifying the port in ssh with -p <port>. I.e.:
ssh -p 52022 myuser@RemoteServer --> SSH to container1
ssh -p 53022 myuser@RemoteServer --> SSH to container2
Notice: this answer promotes a tool I've written.
The selected answer here suggests installing an SSH server into every image. Conceptually this is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/).
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container. The only requirement is that the container has bash.
The following example would start an SSH server exposed on port 2222 of the local machine.
$ docker run -d -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock \
-e CONTAINER=my-container -e AUTH_MECHANISM=noAuth \
jeroenpeeters/docker-ssh
$ ssh -p 2222 localhost
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
Not only does this defeat the idea of one process per container, it is also a cumbersome approach when using images from the Docker Hub since they often don't (and shouldn't) contain an SSH server.
These files will successfully open sshd and run the service so you can ssh in locally. (You are using Cyberduck, aren't you?)
Dockerfile
FROM swiftdocker/swift
MAINTAINER Nobody
RUN apt-get update && apt-get -y install openssh-server supervisor
RUN mkdir /var/run/sshd
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
supervisord.conf
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
To build, run (starting the daemon), and jump into a shell:
docker build -t swift3-ssh .
docker run -p 2222:22 -i -t swift3-ssh
docker ps # find container id
docker exec -i -t <containerid> /bin/bash
I guess it is possible. You just need to install an SSH server in each container and expose a port on the host. The main annoyance would be maintaining/remembering the mapping of port to container.
However, I have to question why you'd want to do this. SSH'ing into containers should be rare enough that it's not a hassle to ssh to the host and then use docker exec to get into the container.
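For instance (the host and container names here are placeholders):
ssh myuser@dockerhost
docker exec -it mycontainer /bin/bash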
Create a docker image with openssh-server preinstalled:
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image using:
$ docker build -t eg_sshd .
Run a test_sshd container:
$ docker run -d -P --name test_sshd eg_sshd
$ docker port test_sshd 22
0.0.0.0:49154
SSH to your container:
$ ssh root@192.168.1.2 -p 49154
# The password is ``screencast``.
root@f38c87f2a42d:/#
Source: https://docs.docker.com/engine/examples/running_ssh_service/#build-an-eg_sshd-image
This is a quick way, but not permanent.
First create a container:
docker run ..... -p 22022:2222 .....
Port 22022 on your host machine will map to 2222 in the container (we change the ssh port inside the container later).
Then, in your container, execute the following commands:
apt update && apt install openssh-server # install ssh server
passwd #change root password
In the file /etc/ssh/sshd_config change these:
uncomment Port and change it to 2222
Port 2222
uncomment PermitRootLogin and change it to
PermitRootLogin yes
and finally start the ssh server:
/etc/init.d/ssh start
You can log in to your container now:
ssh -p 22022 root@HostIP
Remember: if you restart the container, you need to start the ssh server again.
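For example, you can start it again from the host without logging in over SSH (the container name here is a placeholder):
docker exec -it mycontainer /etc/init.d/ssh start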
