Dockerfile to add ssh key of container and host - docker

I want to make a container ssh into the host without being asked for a password. For this, I need to save the ssh key. I have the following Dockerfile:
FROM easypi/alpine-arm
RUN apk update && apk upgrade
RUN apk add openssh
RUN ssh-keygen -f /root/.ssh/id_rsa
RUN ssh-copy-id -i /root/.ssh/id_rsa user@<ip address of host>
But the problem is that the IP address is not constant, so if I use the same image on some other machine, it won't work there. How can I resolve this issue?
Thanks

None of these things should be in your Dockerfile. Putting an ssh private key in your Dockerfile is especially dangerous, since anyone who has your image can almost trivially get the key out.
Also consider that it's unusual to make either inbound or outbound ssh connections from a Docker container at all; they are usually self-contained, all of the long-term state should be described by the Dockerfile and the source control repository in which it lives, and they conventionally run a single server-type process which isn't sshd.
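To see just how trivially, anyone who can pull the image can read the key or the build steps back out of it (a quick sketch; myimage stands in for your image name):
# print the baked-in private key straight out of the image
docker run --rm myimage cat /root/.ssh/id_rsa
# or inspect the layer metadata, which records every RUN line
docker history --no-trunc myimage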
That all having been said: if you really want to do this, the right way is to build an image that expects the .ssh directory to be injected from the host and takes the outbound IP address as a parameter of some sort. One way is to write a shell script that's "the single thing the container does":
#!/bin/sh
usage() {
  echo "Usage: docker run --rm -it -v ...:$HOME/.ssh myimage $0 10.20.30.40"
}
if [ -z "$1" ]; then
  usage >&2
  exit 1
fi
if [ ! -d "$HOME/.ssh" ]; then
  usage >&2
  exit 1
fi
exec ssh "user@$1"
Then build this into your Dockerfile:
FROM easypi/alpine-arm
RUN apk update \
 && apk upgrade \
 && apk add openssh
COPY ssh_user.sh /usr/bin
CMD ["/usr/bin/ssh_user.sh"]
Now on the host generate the ssh key pair
mkdir some_ssh
ssh-keygen -f some_ssh/id_rsa
ssh-copy-id -i some_ssh/id_rsa user@10.20.30.40
sudo chown root some_ssh
And then inject that into the Docker container at runtime
sudo docker run --rm -it \
-v $PWD/some_ssh:/root/.ssh \
my_image \
ssh_user.sh 10.20.30.40
(I'm pretty sure the outbound ssh connection will complain if the bind-mounted .ssh directory isn't owned by the same numeric user ID that's running the process; hence the chown root above. Also note that you're setting up a system where you have to have root permissions on the host to make a simple outbound ssh connection, which feels a little odd from a security perspective. [Consider that you could put any directory into that -v option and run an interactive shell.])
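A small variation on the same idea (my sketch, not part of the original answer) is to pass the target host as an environment variable instead of a positional argument; TARGET_HOST is a made-up name here:
# same volume injection, but the address comes in via the environment
docker run --rm -it \
    -v $PWD/some_ssh:/root/.ssh \
    -e TARGET_HOST=10.20.30.40 \
    my_image \
    sh -c 'exec ssh "user@${TARGET_HOST:?TARGET_HOST is required}"'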

Related

Emacs in Docker container can't connect to MELPA

I'm currently facing the problem of Emacs not being able to connect to MELPA. When researching this problem, I found out that it is discussed in many topics. However, none of the suggested solutions I tried worked for me; besides that, the way I've set up my Emacs instance makes me think the root of the problem might be somewhere else (not Emacs-related itself).
So let me explain my setup:
I have a Windows 10 host, and I'm trying to get GUI Emacs for Linux running on that host. I've tried various methods (VMs, Emacs for Windows, ...). Currently I'm trying to run Emacs inside a Docker container with X11 forwarding with XMing on my host.
Docker version on host: Docker version 19.03.2, build 6a30dfc
I'm using this Docker container as a base. However, I modified the Dockerfile a bit to fit my needs (I should mention that I've never worked with Docker before):
FROM alpine:edge
MAINTAINER Daniel Guerra <daniel.guerra69@gmail.com>
ARG authorizedKeys=authorized_keys
RUN apk add --update openssh util-linux dbus ttf-freefont xauth xf86-input-keyboard emacs-x11 bash git sudo \
  && rm -rf /tmp/* /var/cache/apk/*
RUN addgroup alpine \
  && adduser -G alpine -s /bin/bash -D alpine \
  && echo "alpine:alpine" | /usr/sbin/chpasswd \
  && echo "alpine ALL=(ALL) ALL" >> /etc/sudoers
RUN cp -r /etc/ssh /ssh_orig
RUN rm -rf /etc/ssh/*
ADD etc /etc
ADD docker-entrypoint.sh /usr/local/bin
VOLUME ["/etc/ssh"]
RUN mkdir -p /home/alpine/.ssh
ADD $authorizedKeys /home/alpine/.ssh/authorized_keys
RUN mkdir -p /home/alpine/.config
RUN git clone https://github.com/minikN/dotemacs.git /home/alpine/.config/emacs/
RUN mkdir -p /home/alpine/.emacs.d
RUN ln -s /home/alpine/.config/emacs/init.el /home/alpine/.emacs.d/init.el
RUN ln -s /home/alpine/.config/emacs/config.el /home/alpine/.emacs.d/config.el
RUN ln -s /home/alpine/.config/emacs/config.org /home/alpine/.emacs.d/config.org
RUN chown -R alpine:alpine /home/alpine/.emacs.d
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["/usr/sbin/sshd","-D"]
I basically copy my host's authorized_keys to the container. I do this because I'm using PuTTY and its ssh-agent to connect to the machine via SSH.
After that I just copy my emacs config from my GitHub to the container.
You can find the emacs config here.
If I now start the container, connect to it via SSH, and start Emacs, it opens in XMing. However, it just hangs (in fact the whole container hangs); I just see a white screen. After restarting the container and running emacs --daemon, I can see that it hangs at
alpine-sshdx:~$ emacs --daemon
Contacting host: melpa.org:443
By the way, I've checked several mirrors; melpa.org:443 is just the latest attempt. I also tried both HTTP and HTTPS.
However, ping 8.8.8.8 works just fine.
I have absolutely no idea what the cause of this is, but I believe it's something in the container.
Would appreciate any help.

how to correctly use system user in docker container

I'm starting containers from my docker image like this:
$ docker run -it --rm --user=999:998 my-image:latest bash
where the uid and gid are for a system user called sdp:
$ id sdp
uid=999(sdp) gid=998(sdp) groups=998(sdp),999(docker)
but: container says "no"...
groups: cannot find name for group ID 998
I have no name!@75490c598f4c:/home/myfolder$ whoami
whoami: cannot find name for user ID 999
what am I doing wrong?
Note that I need to run containers based on this image on multiple systems and cannot guarantee that the uid:gid of the user will be the same across systems, which is why I need to specify it on the command line rather than in the Dockerfile.
Thanks in advance.
This sort of error will happen when the uid/gid does not exist in the /etc/passwd or /etc/group file inside the container. There are various ways to work around that. One is to directly map these files from your host into the container with something like:
$ docker run -it --rm --user=999:998 \
-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro \
my-image:latest bash
I'm not a fan of that solution since files inside the container filesystem may now have the wrong ownership, leading to potential security holes and errors.
Typically, the reason people want to change the uid/gid inside the container is because they are mounting files from the host into the container as a host volume and want permissions to be seamless across the two. In that case, my solution is to start the container as root and use an entrypoint that calls a script like:
if [ -n "$opt_u" ]; then
OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
NEW_UID=$(stat -c "%u" "$1")
if [ "$OLD_UID" != "$NEW_UID" ]; then
echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
usermod -u "$NEW_UID" -o "$opt_u"
if [ -n "$opt_r" ]; then
find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
fi
fi
fi
The above is from a fix-perms script that I include in my base image. What's happening there is that the uid of the user inside the container is compared to the uid of the file or directory that is mounted into the container (as a volume). When those IDs do not match, the user inside the container is modified to have the same uid as the volume, and any files inside the container with the old uid are updated. The last step of my entrypoint is to call something like:
exec gosu app_user "$@"
Which is a bit like an su command to run the "CMD" value as the app_user, but with some exec logic that replaces pid 1 with the "CMD" process to better handle signals. I then run it with a command like:
$ docker run -it --rm --user=0:0 -v /host/vol:/container/vol \
-e RUN_AS=app_user --entrypoint /entrypoint.sh \
my-image:latest bash
Have a look at the base image repo I've linked to, including the example with nginx that shows how these pieces fit together, and avoids the need to run containers in production as root (assuming production has known uid/gid's that can be baked into the image, or that you do not mount host volumes in production).
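For reference, the glue between the fix-perms script and gosu is roughly an entrypoint along these lines (a simplified sketch of the idea, not the exact script from the linked repo; it assumes fix-perms and gosu are installed in the image, RUN_AS names the user to drop to, and /container/vol is the mounted volume from the example above):
#!/bin/sh
set -e
if [ -n "$RUN_AS" ]; then
    # still running as root here: align the container user's uid with whatever owns the volume
    fix-perms -r -u "$RUN_AS" /container/vol
    # drop privileges and hand pid 1 to the real command
    exec gosu "$RUN_AS" "$@"
fi
exec "$@"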
It's strange to me that there's no built-in command-line option to simply run a container with the "same" user as the host so that file permissions don't get messed up in the mounted directories. As mentioned by OP, the -u $(id -u):$(id -g) approach gives a "cannot find name for group ID" error.
I'm a docker newb, but here's the approach I've been using in case it helps others:
# See edit below before using this.
docker run --rm -it -v /foo:/bar ubuntu:20.04 sh -c "useradd -m -s /bin/bash $USER && usermod -a -G sudo $USER && su - $USER"
I.e. add a user (useradd) with a matching name, make it sudo (usermod), then open a terminal with that user (su -).
Edit: I've just found that this causes a E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied) error when trying to use apt. Using sudo gives the error -su: sudo: command not found because sudo isn't installed by default on the image I'm using. So the command becomes even more hacky and requires running an apt update and apt install sudo at launch:
docker run --rm -it -v /foo:/bar ubuntu:20.04 sh -c "useradd -m -s /bin/bash $USER && usermod -a -G sudo $USER && apt update && apt install sudo && passwd -d $USER && su - $USER"
Not ideal! I'd have hoped there was a much more simple way of doing this (using command-line options, not creating a new image), but I haven't found one.
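A related option, if you can afford to build the image on each machine, is to bake the uid/gid in at build time with build arguments instead of creating the user at run time (a sketch; USER_ID, GROUP_ID and appuser are arbitrary names, not something the answer above uses):
FROM ubuntu:20.04
ARG USER_ID=1000
ARG GROUP_ID=1000
# create a user whose uid/gid match the host user passed in at build time
RUN groupadd -g "$GROUP_ID" appuser \
 && useradd -m -u "$USER_ID" -g "$GROUP_ID" -s /bin/bash appuser
USER appuser
Build it per machine and run it without any useradd at launch:
docker build --build-arg USER_ID=$(id -u) --build-arg GROUP_ID=$(id -g) -t my-image .
docker run --rm -it -v /foo:/bar my-image bash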
1) Make sure that user 999 has the right privileges on the working directory; try something like this in your Dockerfile:
FROM
RUN mkdir /home/999-user-dir && \
chown -R 999:998 /home/999-user-dir
WORKDIR /home/999-user-dir
USER 999
Try to spin up the container using this image without the user argument and see if that works.
2) Another possible reason is a permission issue on the files below; make sure your group 998 has read permission on them:
-rw-r--r-- 1 root root 690 Jan 2 06:27 /etc/passwd
-rw-r--r-- 1 root root 372 Jan 2 06:27 /etc/group
Thanks
So, on your host you probably see your user and group:
$ cat /etc/passwd
sdp:x:999:998::...
But inside the container, you will not see them in /etc/passwd.
This is the expected behavior: the host and the container are completely separate, as long as you don't mount the /etc/passwd file into the container (and you shouldn't, from a security perspective).
Now, if you specified a default user inside your Dockerfile, the --user option overrides the USER instruction, so you are left without a username inside your container. But note that specifying the uid:gid option means the container has the permissions of the user with the same uid on the host.
As for your requirement not to specify a user in the Dockerfile: that shouldn't be a problem. You can set it at runtime as you did, as long as that uid matches an existing user uid on the host.
If you have to run some of your containers in privileged mode, please consider using user namespaces.
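For completeness, user namespaces are enabled on the Docker daemon rather than per container; a minimal sketch, assuming the stock dockremap setup, is to add this to /etc/docker/daemon.json and restart the daemon:
{
  "userns-remap": "default"
}
sudo systemctl restart docker
With that in place, root inside a container maps to an unprivileged subordinate uid range on the host.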

Enabling ssh at docker build time

Docker version 17.11.0-ce, build 1caf76c
I need to run Ansible during the Docker build to build and deploy some Java projects to WildFly, so that when I run the Docker image I have everything set up. However, Ansible needs SSH access to localhost. So far I have been unable to make it work. I've tried different Docker images and now I ended up with phusion (https://github.com/phusion/baseimage-docker#login_ssh). What I have at the moment:
FROM phusion/baseimage
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd -d &
RUN ssh -tt root@127.0.0.1
CMD ["/bin/bash"]
But I still get
Step 11/12 : RUN ssh -tt root@127.0.0.1
---> Running in cf83f9906e55
ssh: connect to host 127.0.0.1 port 22: Connection refused
The command '/bin/sh -c ssh -tt root@127.0.0.1' returned a non-zero code: 255
Any suggestions what could be wrong? Is it even possible to achieve that?
RUN /usr/sbin/sshd -d &
That will run a process in the background using a shell. As soon as the shell that started the process returns from running the background command, it exits with no more input, and the container used for that RUN command terminates. The only thing saved from a RUN is the change to the filesystem. You do not save running processes, environment variables, or shell state.
Something like this may work, but you may also need a sleep command to give sshd time to finish starting.
RUN /usr/sbin/sshd -d & \
    ssh -tt root@127.0.0.1
I'd personally look for another way to do this without sshd during the build. This feels very kludgy and error prone.
There are multiple problems in that Dockerfile
First of all, you can't run a background process in one RUN statement and expect that process to still exist in another RUN. Each statement of a Dockerfile runs in a different container, so processes don't persist between them.
Another issue is that 127.0.0.1 is not in known_hosts.
And finally, you must give sshd some time to start.
Here is a working Dockerfile:
FROM phusion/baseimage
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN printf "Host 127.0.0.1\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd & sleep 5 && ssh -tt root@127.0.0.1 'ls -al'
CMD ["/bin/bash"]
Anyway, I would rather find another solution than provisioning your image with Ansible in the Dockerfile. Check out ansible-container.

SSH agent forwarding during docker build

While building a Docker image through a Dockerfile, I have to clone a GitHub repo. I added my public SSH key to my GitHub account and I am able to clone the repo from my Docker host. I see that I can use the Docker host's SSH agent by mapping the $SSH_AUTH_SOCK env variable at docker run time, like
docker run --rm -it --name container_name \
-v $(dirname $SSH_AUTH_SOCK):$(dirname $SSH_AUTH_SOCK) \
-e SSH_AUTH_SOCK=$SSH_AUTH_SOCK my_image
How can I do the same during a docker build?
For Docker 18.09 and newer
You can use new features of Docker to forward your existing SSH agent connection or a key to the builder. This enables you, for example, to clone your private repositories during the build.
Steps:
First, set an environment variable to use the new BuildKit:
export DOCKER_BUILDKIT=1
Then create a Dockerfile using the new (experimental) syntax:
# syntax=docker/dockerfile:experimental
FROM alpine
# install ssh client and git
RUN apk add --no-cache openssh-client git
# download public key for github.com
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# clone our private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject
And build the image with:
docker build --ssh default .
Read more about it here: https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
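If you are not running an ssh-agent, BuildKit also accepts a path to a key file for the --ssh flag (the path below is just an example; as far as I know this only works for keys without a passphrase):
docker build --ssh default=$HOME/.ssh/id_rsa .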
Unfortunately, you cannot forward your ssh socket to the build container since build time volume mounts are currently not supported in Docker.
This has been a topic of discussion for quite a while now, see the following issues on GitHub for reference:
https://github.com/moby/moby/issues/6396
https://github.com/moby/moby/issues/14080
As you can see this feature has been requested multiple times for different use cases. So far the maintainers have been hesitant to address this issue because they feel that volume mounts during build would break portability:
the result of a build should be independent of the underlying host
As outlined in this discussion.
This may be solved using an alternative build script. For example, you may create a bash script and put it in /usr/local/bin/docker-compose or your favourite location:
#!/bin/bash
trap 'kill $(jobs -p)' EXIT
socat TCP-LISTEN:56789,reuseaddr,fork UNIX-CLIENT:${SSH_AUTH_SOCK} &
/usr/bin/docker-compose "$@"
Then in your Dockerfile you would use your existing ssh socket:
...
ENV SSH_AUTH_SOCK /tmp/auth.sock
...
&& apk add --no-cache socat openssh \
&& /bin/sh -c "socat -v UNIX-LISTEN:${SSH_AUTH_SOCK},unlink-early,mode=777,fork TCP:172.22.1.11:56789 &> /dev/null &" \
&& bundle install \
...
or any other ssh command will work.
Now you can call our custom docker-compose build. It will call the actual docker-compose with a shared SSH socket.
This one is also interesting:
https://github.com/docker/for-mac/issues/483#issuecomment-344901087
It looks like:
On the host
mkfifo myfifo
nc -lk 12345 <myfifo | nc -U $SSH_AUTH_SOCK >myfifo
In the dockerfile
RUN mkfifo myfifo
RUN while true; do \
      nc 172.17.0.1 12345 <myfifo | nc -Ul /tmp/ssh-agent.sock >myfifo; \
    done &
RUN export SSH_AUTH_SOCK=/tmp/ssh-agent.sock
RUN ssh ...

how to make docker image ssh enabled

We have Docker running on one machine and a workstation running on another machine. I want to bootstrap the Docker container from the workstation, so our image should be SSH-enabled.
How do I make a Docker image SSH-enabled?
Before you add ssh you should see if docker exec will be sufficient for what you need. (doc link)
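For example, to get a shell inside an already-running container with no SSH involved at all (assuming the container is named my_container and has bash):
docker exec -it my_container bash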
If you do need SSH, the following Dockerfile should help (copied from Docker docs):
# sshd
#
# VERSION 0.0.2
FROM ubuntu:14.04
MAINTAINER Sven Dowideit <SvenDowideit@docker.com>
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Using the CMD instruction in your Dockerfile will indeed enable SSH:
CMD ["/usr/sbin/sshd", "-D"]
But there is a huge downside. If you already have a CMD instruction (one that starts MySQL, for example), then you are facing a problem that is not easily resolved in Docker: you can use only one CMD per Dockerfile. There is a workaround, though, using Supervisor. First, tell the Dockerfile to install Supervisor:
RUN apt-get install -y openssh-server supervisor
Using Supervisor, you can start as many processes as you want on container startup. These processes are defined in a supervisor.conf file (the name is arbitrary) located in the directory with your Dockerfile. In your Dockerfile, you tell Docker to copy this file during the build:
ADD supervisor-base.conf /etc/supervisor.conf
Then you tell Docker to start Supervisor when the container starts (when Supervisor starts, it will also start all processes listed in the supervisor.conf file mentioned above).
CMD ["supervisord", "-c", "/etc/supervisor.conf"]
Your supervisor.conf file may look like this:
[supervisord]
nodaemon=true
[program:sshd]
directory=/usr/local/
command=/usr/sbin/sshd -D
autostart=true
autorestart=true
redirect_stderr=true
There is one issue to be careful about: Supervisor needs to start as root, otherwise it will throw errors. So if your Dockerfile defines a user to start the container with (e.g. USER jboss), then you should put USER root at the end of your Dockerfile, so that Supervisor starts as root. In your supervisor.conf file you simply define a user for each process:
[program:wildfly]
user=jboss
command=/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0
[program:chef]
user=chef
command=/bin/bash -c chef-2.1/bin/start.sh
Of course, these users need to be pre-defined in your Dockerfile, e.g.:
RUN groupadd -r -f jboss -g 2000 && useradd -u 2000 -r -g jboss -m -d /opt/jboss -s /sbin/nologin -c "JBoss user" jboss
You can learn more about Supervisor+Docker+SSH in more detail in this article.
Notice: this answer promotes a tool I've written.
Some answers here suggest to place an SSH server inside your container. Conceptually running multiple processes in one container is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/). A more favorable solution is one that involves multiple containers each running their own process/service. Linking them together would result in a coherent application.
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container, without that container even knowing about ssh. The only requirement is that the container has bash.
The following example would start an SSH server attached to a container with name 'sshd-web-server1'.
docker run -ti --name sshd-web-server1 -e CONTAINER=web-server1 -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker \
jeroenpeeters/docker-ssh
You connect to the SSH server with your ssh client of choice, just as you normally would.
Be advised: Docker-SSH is currently still under development, but it does work! Please let me know what you think.
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
You can find prebuilt images with SSH installed, for instance CentOS (tutum/centos) and Debian (tutum/debian).
And here are the Dockerfiles used to build them:
https://github.com/tutumcloud/tutum-centos/blob/master/Dockerfile
https://github.com/tutumcloud/tutum-debian/blob/master/Dockerfile
