Jenkins slave in docker is denying SSH keys

I am running Jenkins in a docker container. When spinning up a node in another docker container I receive the message:
[11/18/16 20:46:21] [SSH] Opening SSH connection to 192.168.99.100:32826.
ERROR: Server rejected the 1 private key(s) for Jenkins (credentialId:528bbe19-eb26-4c9f-bae3-82cd1247d50a/method:publickey)
[11/18/16 20:46:22] [SSH] Authentication failed.
hudson.AbortException: Authentication failed.
at hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:1217)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:711)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:706)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[11/18/16 20:46:22] Launch failed - cleaning up connection
[11/18/16 20:46:22] [SSH] Connection closed.
Using the docker exec -i -t slave_name /bin/bash command I am able to get into the /home/jenkins/.ssh directory to confirm the SSH key is where it is expected to be.
Under the CLOUD heading on my configure page, Test Connection returns
Version = 1.12.3, API Version = 1.24
I am running macOS Sierra and attempting to follow the RIOT Games Jenkins-Docker tutorial http://engineering.riotgames.com/news/building-jenkins-inside-ephemeral-docker-container.
Jenkins master Dockerfile:
FROM debian:jessie
# Create the jenkins user
RUN useradd -d "/var/jenkins_home" -u 1000 -m -s /bin/bash jenkins
# Create the folders and volume mount points
RUN mkdir -p /var/log/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
VOLUME ["/var/log/jenkins", "/var/jenkins_home"]
USER jenkins
CMD ["echo", "Data container for Jenkins"]
Jenkins slave Dockerfile:
FROM centos:7
# Install Essentials
RUN yum update -y && yum clean all
# Install Packages
RUN yum install -y git \
&& yum install -y wget \
&& yum install -y openssh-server \
&& yum install -y java-1.8.0-openjdk \
&& yum install -y sudo \
&& yum clean all
# Generate SSH host keys; CentOS doesn't auto-generate them.
RUN /usr/bin/ssh-keygen -A
# Set SSH Configuration to allow remote logins without /proc write access
RUN sed -ri 's/^session\s+required\s+pam_loginuid.so$/session optional pam_loginuid.so/' /etc/pam.d/sshd
# Create Jenkins User
RUN useradd jenkins -m -s /bin/bash
# Add public key for Jenkins login
RUN mkdir /home/jenkins/.ssh
COPY /files/authorized_keys /home/jenkins/.ssh/authorized_keys
RUN chown -R jenkins /home/jenkins
RUN chgrp -R jenkins /home/jenkins
RUN chmod 600 /home/jenkins/.ssh/authorized_keys
RUN chmod 700 /home/jenkins/.ssh
# Add the jenkins user to sudoers
RUN echo "jenkins ALL=(ALL) ALL" >> etc/sudoers
# Set Name Servers to avoid Docker containers struggling to route or resolve DNS names.
COPY /files/resolv.conf /etc/resolv.conf
# Expose SSH port and run SSHD
EXPOSE 22
CMD ["/usr/sbin/sshd","-D"]
I've been working with another individual who is doing the same tutorial on a Linux box and is stuck at the same place. Any help would be appreciated.

The problem you are running into probably has to do with interactive host key verification: the first SSH connection to an unknown host asks you to confirm its key. Try adding the following command to your slave's Dockerfile:
RUN ssh-keyscan -H 192.168.99.100 >> /home/jenkins/.ssh/known_hosts
Be sure to add it after you have created the jenkins user, preferably after
USER jenkins
to avoid wrong ownership of the file.
Also make sure the master host is online when you build, otherwise ssh-keyscan will report the host as unreachable. If that isn't possible, generate the known_hosts file manually inside a running slave and copy it into your slave image instead.
You can verify this: attach a console to the docker slave and ssh to the master; it will ask you to trust the server and add it to known_hosts.
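To see the interactive prompt this works around, a minimal check (a sketch; slave_name and the master address come from the question above, and it assumes the master is reachable over SSH):
docker exec -i -t slave_name /bin/bash
su - jenkins
ssh jenkins@192.168.99.100
# On an untrusted host you get the prompt that breaks non-interactive logins:
# The authenticity of host '192.168.99.100' can't be established.
# Are you sure you want to continue connecting (yes/no)?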

Related

Rootless VS Code (dockerized)?

Is there any method to install VS Code in a docker container as a web-based editor that can be run in a rootless mode (no sudo in container entrypoint scripts etc.)?
E.g. to run it in this scenario:
docker run -u 12345 --cap-drop=all repo/rootless-vscode
Here is an example of how it can be done with code-server.
Note that it needs root permissions to install the server, but runs it as newuser.
FROM ubuntu:22.04
RUN apt update
RUN apt install -y sudo curl
RUN curl -fsSL https://code-server.dev/install.sh | sh
RUN useradd -ms /bin/bash newuser
USER newuser
CMD [ "code-server", "--bind-addr", "0.0.0.0:8080" ]
For a more complete example, check out their code-server CI release Dockerfile.
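To try the image in roughly the scenario from the question (a sketch; here it runs as the baked-in newuser, since an arbitrary -u 12345 would also need a writable home directory for code-server's config):
docker build -t repo/rootless-vscode .
docker run --cap-drop=all -p 8080:8080 repo/rootless-vscode
# code-server is now reachable on http://localhost:8080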

Facing issue with ssh to docker container

I am playing around with docker containers and trying to SSH into them from the host machine.
So I am creating my docker container using the below docker file.
FROM ubuntu:18.04
LABEL maintainer="Sagar Shroff" version="1.0" type="ubuntu-with-ssh"
RUN apt-get update -y && \
apt-get install -y openssh-server
RUN service ssh restart
EXPOSE 22
USER root
WORKDIR /root
CMD service ssh restart && \
echo "Enter root's password: " && passwd root && \
/bin/bash
and I run my docker container using the command
docker run --rm -it -p 1022:22 ssh-ubuntu-example
After entering the root password I put the container into background mode by pressing Ctrl+P, Ctrl+Q, and then SSH from my host machine using the command
ssh root@127.0.0.1 -p 1022
But I am unable to connect:
ssh root@127.0.0.1 -p 1022
root@127.0.0.1's password:
Permission denied, please try again.
root@127.0.0.1's password:
Permission denied, please try again.
root@127.0.0.1's password:
root@127.0.0.1: Permission denied (publickey,password).
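One likely cause, for what it is worth: Ubuntu's stock sshd_config ships with PermitRootLogin prohibit-password, so sshd refuses root password logins even when the password is correct. A minimal sketch of the usual workaround, added to the Dockerfile before EXPOSE 22 (for throwaway test containers only; enabling root password SSH is unsafe anywhere else):
# allow root password logins (test/demo containers only)
RUN sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config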

Run Omnet++ inside docker with x11 forwarding on windows. SSH not working

Cannot SSH into a container running on a Windows host machine.
For a university project I built a docker image containing OMNeT++ to provide a consistent development environment.
The image uses phusion's baseimage and sets up X11 forwarding via SSH the way rogaha did in his docker-desktop image.
The image works perfectly fine on a Linux host system, but on Windows and OS X I was unable to SSH into the container from the host machine.
I reckon this is due to the different implementation of Docker on Windows and OS X. As explained in this article by Microsoft, Docker uses a NAT network for containers by default to separate the container networks from the host.
My problem is that I don't know how to reach the running container via SSH.
I already tried the following:
Changing the container network to a transparent network as described in the Microsoft article. The following error occurs on both Windows and OS X:
docker network create -d transparent MyTransparentNetwork
Error response from daemon: legacy plugin: plugin not found
Running Docker in VirtualBox instead of Hyper-V on Windows.
Explicitly exposing port 22 like this:
docker run -p 52022:22 containerName
ssh -p 52022 root@ContainerIP
Dockerfile
FROM phusion/baseimage:latest
MAINTAINER Robin Finkbeiner
LABEL Description="Docker image for Nesting Stupro University of Stuttgart containing full omnet 5.1.1"
# Install dependencies
RUN apt-get update && apt-get install -y \
    xpra \
    rox-filer \
    openssh-server \
    pwgen \
    xserver-xephyr \
    xdm \
    fluxbox \
    sudo \
    git \
    xvfb \
    wget \
    build-essential \
    gcc \
    g++ \
    bison \
    flex \
    perl \
    qt5-default \
    tcl-dev \
    tk-dev \
    libxml2-dev \
    zlib1g-dev \
    default-jre \
    doxygen \
    graphviz \
    libwebkitgtk-3.0-0 \
    libqt4-opengl-dev \
    openscenegraph-plugin-osgearth \
    libosgearth-dev \
    openmpi-bin \
    libopenmpi-dev
# Set the env variable DEBIAN_FRONTEND to noninteractive
ENV DEBIAN_FRONTEND noninteractive
#Enabling SSH -- from phusion baseimage documentation
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
# Copied command from https://github.com/rogaha/docker-desktop/blob/master/Dockerfile
# Configuring xdm to allow connections from any IP address and ssh to allow X11 Forwarding.
RUN sed -i 's/DisplayManager.requestPort/!DisplayManager.requestPort/g' /etc/X11/xdm/xdm-config
RUN sed -i '/#any host/c\*' /etc/X11/xdm/Xaccess
RUN ln -s /usr/bin/Xorg
RUN echo X11Forwarding yes >> /etc/ssh/ssh_config
# OMnet++ 5.1.1
# Create working directory
RUN mkdir -p /usr/omnetpp
WORKDIR /usr/omnetpp
# Fetch Omnet++ source
RUN wget https:******omnetpp-5.1.1-src-linux.tgz
RUN tar -xf omnetpp-5.1.1-src-linux.tgz
# Path
ENV PATH $PATH:/usr/omnetpp/omnetpp-5.1.1/bin
# Configure and compile
RUN cd omnetpp-5.1.1 && \
xvfb-run ./configure && \
make
# Cleanup
RUN apt-get clean && \
rm -rf /var/lib/apt && \
rm /usr/omnetpp/omnetpp-5.1.1-src-linux.tgz
Solution that worked for me
First of all, the linked Microsoft article is only valid for Windows containers.
This article explains very well how Docker networks work.
To simplify the explanation I drew a simple diagram (simple SSH into a Docker bridge network).
To reach a container on a bridged network you have to expose the necessary ports explicitly.
Expose Port
docker run -p 22 {$imageName}
Find the port mapping on the host machine with docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a2ec2bd2b53b renderfehler/omnet_ide_baseimage "/sbin/my_init" 17 hours ago Up 17 hours 0.0.0.0:32773->22/tcp tender_newton
SSH into the container using the mapped port
ssh -p 32773 root@0.0.0.0
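The mapped port can also be read directly instead of scanning the docker ps output (tender_newton is the container name from the listing above):
docker port tender_newton 22
# 0.0.0.0:32773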

Is it possible to access Hbase installed inside docker container to be accessed using java client on mac OSX?

I created a docker container that has HBase installed in standalone mode. I used --net=host mode to run the docker container. I can see the UI for the master and regionserver, but when I try to connect to HBase from my Java program, after the connection with ZooKeeper is established it says This server is in the failed servers list: boot2docker:60020. I am using Mac OS X and boot2docker. Please give suggestions. Here is my Dockerfile.
FROM centos:6
# Install required libraries.
RUN yum install -y tar
# Install java.
RUN curl -LO \
'http://download.oracle.com/otn-pub/java/jdk/7u71-b14/jdk-7u71-linux-x64.rpm'\
-H 'Cookie: oraclelicense=accept-securebackup-cookie'
RUN rpm -i jdk-7u71-linux-x64.rpm
RUN rm -f jdk-7u71-linux-x64.rpm
# Export JAVA_HOME.
ENV JAVA_HOME /usr
# Copy hbase code to docker container.
COPY hbase-*.tar.gz /
RUN tar -xzvf hbase-*.tar.gz
RUN rm hbase-*.tar.gz
RUN mv hbase-* hbase
# Copy hbase-site.xml.
ADD hbase-config-files/hbase-site.xml /hbase/conf/hbase-site.xml
# Start Hbase.
CMD ["./hbase/bin/hbase", "master", "start"]`
To run this container I used docker run --net=host -t docker_image
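A common cause of the failed servers list error in this kind of setup: the HBase master registers itself in ZooKeeper under its own hostname (here boot2docker), and the Java client on the Mac cannot resolve that name. A hedged sketch of the usual workaround, assuming the default boot2docker VM:
# on the OS X host: map the advertised hostname to the VM's IP
echo "$(boot2docker ip) boot2docker" | sudo tee -a /etc/hosts
# afterwards the client can resolve boot2docker:60020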

Jenkins in docker with access to host docker

I have a workflow as follows for publishing webapps to my dev server. The server has a single docker host and I'm using docker-compose for managing containers.
1. Push changes in my app to a private GitLab (running in docker). The app includes a Dockerfile and docker-compose.yml.
2. GitLab triggers a Jenkins build (Jenkins is also running in docker), which does the normal build steps (e.g. running tests).
3. Jenkins then needs to build a new docker image and deploy it using docker-compose.
The problem I have is in step 3. The way I have it set up, the Jenkins container has access to the host docker, so running any docker command in the build script is essentially the same as running it on the host. This is done using the following Dockerfile for Jenkins:
FROM jenkins
USER root
# Give jenkins access to docker
RUN groupadd -g 997 docker
RUN gpasswd -a jenkins docker
# Install docker-compose
RUN curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
USER jenkins
and mapping the following volumes to the jenkins container:
-v /var/run/docker.sock:/var/run/docker.sock
-v /usr/bin/docker:/usr/bin/docker
A typical build script in jenkins looks something like this:
docker-compose build
docker-compose up
This works ok, but there are two problems:
It really feels like a hack. The only other option I've found is to use the docker plugin for Jenkins, publish to a registry, and then have some way of letting the host know it needs to restart. That is a lot more moving parts, and the docker-jenkins plugin requires the docker host to be on an open port, which I don't really want to expose.
The Jenkins Dockerfile includes groupadd -g 997 docker, which is needed to give the jenkins user access to docker. However, the GID (997) is the GID on the host machine and is therefore not portable.
I'm not really sure what solution I'm looking for. I can't see any practical way to get around this approach, but it would be nice if there was a way to allow running docker commands inside the jenkins container without having to hard code the GID in the DockerFile. Does anyone have any suggestions about this?
My previous answer was more generic, describing how you can modify the GID inside the container at runtime. Now, by coincidence, one of my close colleagues asked for a Jenkins instance that can do docker development, so I created this:
FROM bdruemen/jenkins-uid-from-volume
RUN apt-get -yqq update && apt-get -yqq install docker.io && usermod -g docker jenkins
VOLUME /var/run/docker.sock
ENTRYPOINT groupmod -g $(stat -c "%g" /var/run/docker.sock) docker && usermod -u $(stat -c "%u" /var/jenkins_home) jenkins && gosu jenkins /bin/tini -- /usr/local/bin/jenkins.sh
(The parent Dockerfile is the same one I have described in my answer to: Changing the user's uid in a pre-build docker container (jenkins))
To use it, mount both, jenkins_home and docker.sock.
docker run -d -v /home/jenkins:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock <IMAGE>
The jenkins process in the container will have the same UID as the mounted host directory. Assuming the docker socket is accessible to the docker group on the host, there is a group created in the container, also named docker, with the same GID.
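You can check that the remapping worked once the container is up (a quick sketch; <CONTAINER> is whatever name docker assigned):
docker exec <CONTAINER> id jenkins
# the uid should match the owner of /home/jenkins on the host; compare the
# docker group's gid with: stat -c '%g' /var/run/docker.sock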
I ran into the same issues. I ended up giving Jenkins passwordless sudo privileges because of the GID problem. I wrote more about this here: https://blog.container-solutions.com/running-docker-in-jenkins-in-docker
This doesn't really affect security as having docker privileges is effectively equivalent to sudo rights.
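For reference, the passwordless-sudo route boils down to a single Dockerfile line (a sketch of the approach described in the linked post, not a verbatim copy):
# grant the jenkins user passwordless sudo (run as root during the image build)
RUN echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers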
Please take a look at this Dockerfile I just posted:
https://github.com/bdruemen/jenkins-docker-uid-from-volume/blob/master/gid-from-volume/Dockerfile
Here the GID is extracted from a mounted volume (host directory) with
stat -c '%g' <VOLUME-PATH>
Then the GID of the group of the container user is changed to the same value with
groupmod -g <GID>
This has to be done as root, but then root privileges are dropped with
gosu <USERNAME> <COMMAND>
Everything is done in the ENTRYPOINT, so the real GID is unknown until you run
docker run -d -v <HOST-DIRECTORY>:<VOLUME-PATH> ...
Note that after changing the GID, there might be other files in the container that are no longer accessible to the process, so you might need a
chgrp -R <GROUPNAME> <SOME-PATH>
before the gosu command.
You can also change the UID; see my answer here: Changing the user's uid in a pre-build docker container (jenkins).
You may want to change both to increase security.
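Put together, the entrypoint logic amounts to something like this (a condensed sketch of the steps above; the group name, user name, and volume path are illustrative):
#!/bin/sh
# entrypoint.sh: align the container group's GID with the mounted volume, then drop root
GID=$(stat -c '%g' /var/jenkins_home)        # GID of the mounted host directory
groupmod -g "$GID" jenkins                   # retarget the container group
chgrp -R jenkins /var/jenkins_home           # plus any other paths the old GID owned
exec gosu jenkins /usr/local/bin/jenkins.sh  # drop root and start Jenkins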
I solved a similar problem in the following way.
Docker is installed on the host. Jenkins is deployed in a docker container on the host. Jenkins must build and run containers with web applications on the host.
The Jenkins master connects to the docker host using the REST API, so we need to enable the remote API for our docker host.
Log in to the host and open the docker service file /lib/systemd/system/docker.service. Search for ExecStart and replace that line with the following.
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
Reload and restart docker service
sudo systemctl daemon-reload
sudo service docker restart
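You can confirm the remote API is reachable before touching Jenkins (assuming port 4243 as configured above):
curl http://localhost:4243/version
# returns a JSON document whose Version and ApiVersion fields describe the daemon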
Dockerfile for Jenkins
FROM jenkins/jenkins:lts
USER root
# Install the latest Docker CE binaries and add user `jenkins` to the docker group
RUN apt-get update
RUN apt-get -y --no-install-recommends install apt-transport-https \
apt-utils ca-certificates curl gnupg2 software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable"
RUN apt-get update && apt-get install -y docker-ce-cli docker-ce && \
apt-get clean && \
usermod -aG docker jenkins
USER jenkins
RUN jenkins-plugin-cli --plugins "blueocean:1.25.6 docker-workflow:1.29 ansicolor"
Build jenkins docker image
docker build -t you-jenkins-name .
Run Jenkins
docker run --name you-jenkins-name --restart=on-failure --detach \
--network jenkins \
--env DOCKER_HOST=tcp://172.17.0.1:4243 \
--publish 8080:8080 --publish 50000:50000 \
--volume jenkins-data:/var/jenkins_home \
--volume jenkins-docker-certs:/certs/client:ro \
you-jenkins-name
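From inside the running Jenkins container you can verify that the client reaches the daemon through DOCKER_HOST (a quick sanity check):
docker exec -it you-jenkins-name docker version
# the client inherits DOCKER_HOST=tcp://172.17.0.1:4243 and should print both
# Client and Server sections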
Your web application's repository has a Jenkinsfile and a Dockerfile at its root.
Jenkinsfile for web app:
pipeline {
    agent any
    environment {
        PRODUCT = 'web-app'
        HTTP_PORT = 8082
        DEVICE_CONF_HOST_PATH = '/var/web-app'
    }
    options {
        ansiColor('xterm')
        skipDefaultCheckout()
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    //BRANCH_NAME = env.CHANGE_BRANCH ? env.CHANGE_BRANCH : env.BRANCH_NAME
                    deleteDir()
                    //git url: "git@<host>:<org>/${env.PRODUCT}.git", branch: BRANCH_NAME
                }
                checkout scm
            }
        }
        stage('Stop and remove old') {
            steps {
                script {
                    try {
                        sh "docker stop ${env.PRODUCT}"
                    } catch (Exception e) {}
                    try {
                        sh "docker rm ${env.PRODUCT}"
                    } catch (Exception e) {}
                    try {
                        sh "docker image rm ${env.PRODUCT}"
                    } catch (Exception e) {}
                }
            }
        }
        stage('Build') {
            steps {
                sh "docker build . -t ${env.PRODUCT}"
            }
        }
        // Run the app using the built docker image
        stage('Run new') {
            steps {
                script {
                    sh """docker run \
                        --detach \
                        --name ${env.PRODUCT} \
                        --publish ${env.HTTP_PORT}:8080 \
                        --volume ${env.DEVICE_CONF_HOST_PATH}:/var/web-app \
                        ${env.PRODUCT}"""
                }
            }
        }
    }
}
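As a design note, the three try/catch blocks in the Stop and remove old stage can be collapsed by letting the shell swallow the failures instead (a sketch; the behavior is the same when the container does not exist yet):
stage('Stop and remove old') {
    steps {
        sh "docker stop ${env.PRODUCT} || true"
        sh "docker rm ${env.PRODUCT} || true"
        sh "docker image rm ${env.PRODUCT} || true"
    }
}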