Docker version 17.11.0-ce, build 1caf76c
I need to run Ansible during the Docker build to build and deploy some Java projects to WildFly, so that when I run the image everything is already set up. However, Ansible needs SSH access to localhost. So far I have been unable to make it work. I've tried different Docker images and have now ended up with phusion (https://github.com/phusion/baseimage-docker#login_ssh). What I have at the moment:
FROM phusion/baseimage
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd -d &
RUN ssh -tt root@127.0.0.1
CMD ["/bin/bash"]
But I still get
Step 11/12 : RUN ssh -tt root@127.0.0.1
---> Running in cf83f9906e55
ssh: connect to host 127.0.0.1 port 22: Connection refused
The command '/bin/sh -c ssh -tt root@127.0.0.1' returned a non-zero code: 255
Any suggestions what could be wrong? Is it even possible to achieve that?
RUN /usr/sbin/sshd -d &
That runs a process in the background using a shell. As soon as the shell that launched the background command returns, it exits, and the container used for that RUN instruction terminates. The only thing a RUN instruction preserves is the change to the filesystem; running processes, environment variables, and shell state are not saved.
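A minimal illustration of the effect (the base image and the sleep are arbitrary, chosen only to demonstrate the point; the second step fails because the first step's background process is gone):
FROM ubuntu:16.04
# The sleep lives only for the duration of this RUN's temporary container.
RUN sleep 300 &
# This runs in a fresh container, so grep finds nothing and the step fails.
RUN ps aux | grep '[s]leep'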
Something like this may work, but you may also need a sleep command to give sshd time to finish starting.
RUN /usr/sbin/sshd -d & \
ssh -tt root@127.0.0.1
I'd personally look for another way to do this without sshd during the build. This feels very kludgy and error prone.
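One such alternative is to point Ansible at localhost with its local connection plugin, which needs no sshd during the build at all. A rough sketch (the playbook name and the install method are assumptions, not taken from the question):
FROM phusion/baseimage
# Ansible is assumed to be installable from the distribution repositories;
# add a PPA or pip install it first if it is not.
RUN apt-get update && apt-get install -y ansible && apt-get clean
# playbook.yml is a placeholder name for the build/deploy playbook.
COPY playbook.yml /tmp/playbook.yml
# "-c local" runs the tasks directly in the build container: no sshd, no keys.
RUN ansible-playbook -i localhost, -c local /tmp/playbook.yml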
There are multiple problems in that Dockerfile
First of all, you can't start a background process in one RUN statement and expect it to still be running in another RUN. Each statement of a Dockerfile is run in a different container, so processes don't persist between them.
Another issue is that 127.0.0.1 is not in known_hosts.
And finally, you must give sshd some time to start.
Here is a working Dockerfile:
FROM phusion/baseimage
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN printf "Host 127.0.0.1\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd & sleep 5 && ssh -tt root@127.0.0.1 'ls -al'
CMD ["/bin/bash"]
Anyway, I would rather find another solution than provisioning your image with Ansible in the Dockerfile. Check out ansible-container.
Related
I'm trying to get Jenkins to run a Docker container that runs systemd.
So far I've been able to run systemd inside Docker locally, without Jenkins. Here are the steps:
# pull unop/fedora-systemd and create and run the container for it
sudo docker run --cap-add=SYS_ADMIN -e container=docker --tmpfs /run --tmpfs /tmp -v /sys/fs/cgroup:/sys/fs/cgroup:ro -t -i unop/fedora-systemd
# on a different terminal window, I can:
# get the container id of the "unop/fedora-systemd" image
sudo docker ps
# then exec bash on it
sudo docker container exec -t -i a98aa2bcd19e bash # where a98aa2bcd19e is the container id found above
# once inside the container, I can run systemd without any problems. examples:
systemctl status
systemctl start dbus.service
systemctl status dbus.service
The above works locally and I am able to run systemd inside the docker container.
The problem comes when I try the same thing inside Jenkins.
I've tried tweaking the Jenkinsfile several times, but none of my previous attempts worked. Under Jenkins I always get an error similar to this:
+ systemctl status
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
This is my latest Jenkinsfile that I've tried
pipeline {
    agent {
        docker {
            image 'unop/fedora-systemd'
            args '--cap-add=SYS_ADMIN -e container=docker --tmpfs /run --tmpfs /tmp -v /sys/fs/cgroup:/sys/fs/cgroup:ro -t -i'
        }
    }
    stages {
        stage('test') {
            steps {
                sh "echo hello world"
                sh "systemctl status"
                sh "systemctl start dbus.service"
                sh "systemctl status dbus.service"
            }
        }
    }
}
In previous iterations of the Jenkinsfile I tried replacing --cap-add=SYS_ADMIN -e container=docker with --privileged, but that didn't help; I still got the same errors.
Does anyone have an idea how I can get this to work? Why does the above work locally but not on Jenkins? What am I missing here?
Note: the Jenkins version is 2.150.2, and this is the Dockerfile used by unop/fedora-systemd:
FROM fedora:rawhide
MAINTAINER http://fedoraproject.org/wiki/Cloud
ENV container docker
RUN dnf -y update && dnf clean all
RUN dnf -y install systemd && dnf clean all && \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup", "/tmp", "/run" ]
CMD ["/usr/sbin/init"]
PS: I've seen a related question, but what they were asking is different
I did not know about the related question. Let me point out again that you do not need to run a systemd daemon inside a container if it is just about running multiple services in it. Simply overwrite /usr/bin/systemctl with the docker-systemctl-replacement script, then register it as the init process of the container with CMD ["/usr/bin/systemctl"].
That's it. Now you can start any systemctl-managed service in the container. It works to the extent that even provisioning with Ansible/Puppet scripts has no problem at all. Specifically, I am using this to provision Jenkins images with whatever operating system the developers like to have as a basis. No privileged mode required.
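A rough sketch of what that looks like for the Fedora image in the question (the script path inside a docker-systemctl-replacement checkout is an assumption; check that project's README):
FROM fedora:rawhide
RUN dnf -y install python3 && dnf clean all
# Overwrite systemctl with the docker-systemctl-replacement script
# (the path below assumes the repository has been cloned into the build context).
COPY files/docker/systemctl3.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl
# Register the replacement as the container's init process.
CMD ["/usr/bin/systemctl"]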
You may try an image that has Fedora with systemd already active, using this command:
docker run -d --name systemd-fedora --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-fedora
Then you just need to run:
docker exec -it systemd-fedora /bin/bash
and there you can just install, start and restart any service you need.
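For example, inside that shell (httpd is just an arbitrary service used for illustration):
dnf -y install httpd
systemctl start httpd
systemctl status httpd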
I want a container to SSH into the host without being asked for a password. For this, I need to save the SSH key. I have the following Dockerfile:
FROM easypi/alpine-arm
RUN apk update && apk upgrade
RUN apk add openssh
RUN ssh-keygen -f /root/.ssh/id_rsa
RUN ssh-copy-id -i /root/.ssh/id_rsa user@<ip address of host>
But the problem is that the IP address is not constant, so if I use the same image on some other machine, it won't work there. How can I resolve this issue?
Thanks
None of these things should be in your Dockerfile. Putting an ssh private key in your Dockerfile is especially dangerous, since anyone who has your image can almost trivially get the key out.
Also consider that it's unusual to make either inbound or outbound ssh connections from a Docker container at all; they are usually self-contained, all of the long-term state should be described by the Dockerfile and the source control repository in which it lives, and they conventionally run a single server-type process which isn't sshd.
That all having been said: if you really want to do this, the right way is to build an image that expects the .ssh directory to be injected from the host and takes the outbound IP address as a parameter of some sort. One way is to write a shell script that's "the single thing the container does":
#!/bin/sh
usage() {
  echo "Usage: docker run --rm -it -v ...:$HOME/.ssh myimage $0 10.20.30.40"
}
# Require the target IP address as the first argument.
if [ -z "$1" ]; then
  usage >&2
  exit 1
fi
# Require the injected .ssh directory to be present.
if [ ! -d "$HOME/.ssh" ]; then
  usage >&2
  exit 1
fi
exec ssh "user@$1"
Then build this into your Dockerfile:
FROM easypi/alpine-arm
RUN apk update \
&& apk upgrade \
&& apk add openssh
COPY ssh_user.sh /usr/bin
CMD ["/usr/bin/ssh_user.sh"]
Now on the host generate the ssh key pair
mkdir some_ssh
ssh-keygen -f some_ssh/id_rsa
ssh-copy-id -i some_ssh/id_rsa user@10.20.30.40
sudo chown root some_ssh
And then inject that into the Docker container at runtime
sudo docker run --rm -it \
-v $PWD/some_ssh:/root/.ssh \
my_image \
ssh_user.sh 10.20.30.40
(I'm pretty sure the outbound ssh connection will complain if the bind-mounted .ssh directory isn't owned by the same numeric user ID that's running the process; hence the chown root above. Also note that you're setting up a system where you have to have root permissions on the host to make a simple outbound ssh connection, which feels a little odd from a security perspective. [Consider that you could put any directory into that -v option and run an interactive shell.])
I'm having a recurring issue while trying to set up a Docker container so that it stays running.
Here is a sample of the Dockerfile that I am wanting to use:
RUN wget -O /usr/local/nexus-2.11.3-01-bundle.tar.gz http://www.sonatype.org/downloads/nexus-2.11.3-01-bundle.tar.gz
WORKDIR /usr/local
RUN tar xvzf /usr/local/nexus-2.11.3-01-bundle.tar.gz
RUN ln -s nexus-2.11.3-01 nexus
ENV NEXUS_HOME /usr/local/nexus
ENV RUN_AS_USER root
CMD ["/usr/local/nexus/bin/nexus", "start"]
EXPOSE 8081
Basically, when I build this and then run it, the container just dies, and docker ps shows no running containers.
As far as I know (please correct me if I'm wrong), a Docker container should stay running as long as there is a process with PID 1. Would the commands above run as PID 1, and if so, how can I force the nexus start command to use it? Or just keep the container alive?
The output of docker logs nexus is:
****************************************
WARNING - NOT RECOMMENDED TO RUN AS ROOT
****************************************
Starting Nexus OSS...
Started Nexus OSS.
It seems to suggest that Nexus has started, but then again when I do a docker ps, I don't see it running.
If the process running with PID 1 exits, then the container is automatically stopped. You can check the sonatype/nexus repository, which uses the concept of a Launcher.
Here is how they avoid having the container exit:
...
RUN mkdir -p /opt/sonatype/nexus \
&& curl --fail --silent --location --retry 3 \
https://download.sonatype.com/nexus/professional-bundle/nexus-professional-${NEXUS_VERSION}-bundle.tar.gz \
| gunzip \
| tar x -C /tmp nexus-professional-${NEXUS_VERSION} \
&& mv /tmp/nexus-professional-${NEXUS_VERSION}/* /opt/sonatype/nexus/ \
&& rm -rf /tmp/nexus-professional-${NEXUS_VERSION}
RUN useradd -r -u 200 -m -c "nexus role account" -d ${SONATYPE_WORK} -s /bin/false nexus
...
EXPOSE 8081
WORKDIR /opt/sonatype/nexus
USER nexus
ENV CONTEXT_PATH /
ENV MAX_HEAP 768m
ENV MIN_HEAP 256m
ENV JAVA_OPTS -server -XX:MaxPermSize=192m -Djava.net.preferIPv4Stack=true
ENV LAUNCHER_CONF ./conf/jetty.xml ./conf/jetty-requestlog.xml
CMD java \
-Dnexus-work=${SONATYPE_WORK} -Dnexus-webapp-context-path=${CONTEXT_PATH} \
-Xms${MIN_HEAP} -Xmx${MAX_HEAP} \
-cp 'conf/:lib/*' \
${JAVA_OPTS} \
org.sonatype.nexus.bootstrap.Launcher ${LAUNCHER_CONF}
Since it is an open repository, you can directly refer to their repo, if you like.
A quick guess from the logs is that running /usr/local/nexus/bin/nexus start would start it as a daemon.
That would cause another process to spawn and the one that started the daemon would exit, terminating the container.
One solution is to start the process not as a daemon, but I couldn't find an option to do this in your nexus case.
Another is to use something like supervisord as the container's CMD and have it start your process.
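A rough sketch of that supervisord route, reusing the paths from the Dockerfile in the question and assuming a Debian/Ubuntu base; the wrapper's console mode is also an assumption, the idea being that it keeps Nexus in the foreground so supervisord can manage it:
supervisord.conf:
[supervisord]
nodaemon=true
[program:nexus]
; "console" rather than "start" is assumed to keep the wrapper in the foreground
command=/usr/local/nexus/bin/nexus console
autorestart=true
And in the Dockerfile:
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisord.conf
CMD ["supervisord", "-c", "/etc/supervisord.conf"]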
We have Docker running on one machine and a workstation running on another machine.
I want to bootstrap the Docker container from the workstation, so our image should be SSH-enabled.
How do I make a Docker image SSH-enabled?
Before you add ssh you should see if docker exec will be sufficient for what you need. (doc link)
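For example, to get a shell inside an already running container (the container name here is assumed):
docker exec -it my_container /bin/bash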
If you do need SSH, the following Dockerfile should help (copied from Docker docs):
# sshd
#
# VERSION 0.0.2
FROM ubuntu:14.04
MAINTAINER Sven Dowideit <SvenDowideit@docker.com>
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Using the following CMD instruction in your Dockerfile will indeed enable SSH:
CMD ["/usr/sbin/sshd", "-D"]
But there is a huge downside. If you already have a CMD instruction (one that starts MySQL, for example), then you are facing a problem that is not easily resolved in Docker: you can use only one CMD in a Dockerfile. There is a workaround, though, using Supervisor. What you do is tell the Dockerfile to install Supervisor:
RUN apt-get install -y openssh-server supervisor
Using Supervisor, you can start as many processes as you want on container startup. These processes are defined in a supervisor.conf file (the name is arbitrary) located in the directory with your Dockerfile. In your Dockerfile you tell Docker to copy this file during the build:
ADD supervisor-base.conf /etc/supervisor.conf
Then you tell Docker to start Supervisor when the container starts (when Supervisor starts, it will also start all processes listed in the supervisor.conf file mentioned above).
CMD ["supervisord", "-c", "/etc/supervisor.conf"]
Your supervisor.conf file may look like this:
[supervisord]
nodaemon=true
[program:sshd]
directory=/usr/local/
command=/usr/sbin/sshd -D
autostart=true
autorestart=true
redirect_stderr=true
There is one issue to be careful about. Supervisor needs to start as root, otherwise it will throw errors. So if your Dockerfile defines a user to start the container with (e.g. USER jboss), then you should put USER root at the end of your Dockerfile, so that Supervisor starts as root. In your supervisor.conf file you simply define a user for each process:
[program:wildfly]
user=jboss
command=/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0
[program:chef]
user=chef
command=/bin/bash -c chef-2.1/bin/start.sh
Of course, these users need to be pre-defined in your Dockerfile, e.g.:
RUN groupadd -r -f jboss -g 2000 && useradd -u 2000 -r -g jboss -m -d /opt/jboss -s /sbin/nologin -c "JBoss user" jboss
You can learn more about Supervisor + Docker + SSH in more detail in this article.
Notice: this answer promotes a tool I've written.
Some answers here suggest placing an SSH server inside your container. Conceptually, running multiple processes in one container is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/). A more favorable solution is one that involves multiple containers, each running its own process/service. Linking them together results in a coherent application.
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container, without that container even knowing about ssh. The only requirement is that the container has bash.
The following example would start an SSH server attached to a container with name 'sshd-web-server1'.
docker run -ti --name sshd-web-server1 -e CONTAINER=web-server1 -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker \
jeroenpeeters/docker-ssh
You connect to the SSH server with your ssh client of choice, just as you normally would.
Be advised: Docker-SSH is currently still under development, but it does work! Please let me know what you think.
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
You can find prebuilt images with SSH installed, for instance CentOS tutum/centos and Debian tutum/debian
And the Dockerfiles used to build them
https://github.com/tutumcloud/tutum-centos/blob/master/Dockerfile
https://github.com/tutumcloud/tutum-debian/blob/master/Dockerfile
I'd like to create the following infrastructure flow:
How can that be achieved using Docker?
Firstly, you need to install an SSH server in the images you wish to SSH into. You can use a base image with the SSH server installed for all your containers.
Then you only have to run each container mapping the SSH port (default 22) to one of the host's ports (the Remote Server in your diagram), using -p <hostPort>:<containerPort>. I.e.:
docker run -p 52022:22 container1
docker run -p 53022:22 container2
Then, if ports 52022 and 53022 of the host are accessible from outside, you can SSH directly to the containers using the IP of the host (Remote Server), specifying the port with -p <port>. I.e.:
ssh -p 52022 myuser@RemoteServer --> SSH to container1
ssh -p 53022 myuser@RemoteServer --> SSH to container2
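If you forget which host port a container's port 22 was mapped to, docker port will show it (the container name or ID is a placeholder):
docker port <container-name-or-id> 22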
Notice: this answer promotes a tool I've written.
The selected answer here suggests installing an SSH server into every image. Conceptually this is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/).
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container. The only requirement is that the container has bash.
The following example would start an SSH server exposed on port 2222 of the local machine.
$ docker run -d -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock \
-e CONTAINER=my-container -e AUTH_MECHANISM=noAuth \
jeroenpeeters/docker-ssh
$ ssh -p 2222 localhost
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
Not only does this defeat the idea of one process per container, it is also a cumbersome approach when using images from the Docker Hub since they often don't (and shouldn't) contain an SSH server.
These files will successfully set up sshd and run the service so you can SSH in locally. (You are using Cyberduck, aren't you?)
Dockerfile
FROM swiftdocker/swift
MAINTAINER Nobody
RUN apt-get update && apt-get -y install openssh-server supervisor
RUN mkdir /var/run/sshd
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
supervisord.conf
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
To build, run (starting the daemon), and jump into a shell:
docker build -t swift3-ssh .
docker run -p 2222:22 -i -t swift3-ssh
docker ps # find container id
docker exec -i -t <containerid> /bin/bash
I guess it is possible. You just need to install an SSH server in each container and expose a port on the host. The main annoyance would be maintaining/remembering the mapping of port to container.
However, I have to question why you'd want to do this. SSH'ing into containers should be rare enough that it's not a hassle to SSH to the host and then use docker exec to get into the container.
Create a Docker image with openssh-server preinstalled:
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image using:
$ docker build -t eg_sshd .
Run a test_sshd container:
$ docker run -d -P --name test_sshd eg_sshd
$ docker port test_sshd 22
0.0.0.0:49154
SSH to your container:
$ ssh root@192.168.1.2 -p 49154
# The password is ``screencast``.
root@f38c87f2a42d:/#
Source: https://docs.docker.com/engine/examples/running_ssh_service/#build-an-eg_sshd-image
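To clean up afterwards, stop and remove the test container:
$ docker stop test_sshd
$ docker rm test_sshd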
This is a quick way, but it is not permanent.
First, create a container:
docker run ..... -p 22022:2222 .....
Port 22022 on your host machine will map to port 2222 in the container; we will change the SSH port in the container later.
Then, in your container, execute the following commands:
apt update && apt install openssh-server # install ssh server
passwd #change root password
In the file /etc/ssh/sshd_config change these:
Uncomment Port and change it to 2222:
Port 2222
Uncomment PermitRootLogin and set it to:
PermitRootLogin yes
And finally start the SSH server:
/etc/init.d/ssh start
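The two sshd_config edits above can also be scripted rather than made by hand; this is a sketch that assumes the stock Debian/Ubuntu sshd_config layout:
sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
/etc/init.d/ssh restart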
You can log in to your container now:
ssh -p 22022 root@HostIP
Remember: if you restart the container, you need to start the SSH server again.