I'm working through Docker in Practice (Manning). The recipe I'm on is about configuring a Jenkins slave that runs as a Docker container.
Below is the Dockerfile for the jenkins_slave image:
FROM ubuntu:latest
ENV DEBIAN_FRONTEND noninteractive
# Create the jenkins_slave user and group with a known UID/GID
RUN groupadd -g 1000 jenkins_slave
RUN useradd -d /home/jenkins_slave -s /bin/bash \
    -m jenkins_slave -u 1000 -g jenkins_slave
RUN echo jenkins_slave:jpass | chpasswd
# Install sshd, a JRE for the Jenkins agent, and iproute2 for the gateway lookup
RUN apt-get update && \
    apt-get install -y openssh-server openjdk-8-jre wget iproute2
RUN mkdir -p /var/run/sshd
# Print the default gateway (the Docker bridge/host IP), then run sshd in the foreground
CMD ip route | grep "default via" \
    | awk '{print $3}' && /usr/sbin/sshd -D
I built the Docker image using the command:
docker build -t jenkins_slave .
Then I ran the image as a container using the command below (the 172.17.0.1 printed afterwards is the gateway IP from the CMD):
$ docker run --name jenkins_slave -it -p 2222:22 jenkins_slave
172.17.0.1
Then I started the Jenkins server using the Docker command below:
$ docker run --name jenkins_server -p 8080:8080 -p 50000:50000 dockerinpractice/jenkins:server
Below are the node configuration details.
Then I get an error message saying "This agent is offline because Jenkins failed to launch the agent process on it".
Below is the error stack trace:
[12/07/17 08:50:00] [SSH] Opening SSH connection to 172.17.0.1:2222.
[SSH] No Known Hosts file was found at /var/jenkins_home/.ssh/known_hosts. Please ensure one is created at this path and that Jenkins can read it.
Key exchange was not finished, connection is closed.
java.io.IOException: There was a problem while connecting to 172.17.0.1:2222
at com.trilead.ssh2.Connection.connect(Connection.java:834)
at com.trilead.ssh2.Connection.connect(Connection.java:703)
at com.trilead.ssh2.Connection.connect(Connection.java:617)
at hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:1284)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:804)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:793)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Key exchange was not finished, connection is closed.
at com.trilead.ssh2.transport.KexManager.getOrWaitForConnectionInfo(KexManager.java:95)
at com.trilead.ssh2.transport.TransportManager.getConnectionInfo(TransportManager.java:237)
at com.trilead.ssh2.Connection.connect(Connection.java:786)
... 9 more
Caused by: java.io.IOException: The server hostkey was not accepted by the verifier callback
at com.trilead.ssh2.transport.KexManager.handleMessage(KexManager.java:548)
at com.trilead.ssh2.transport.TransportManager.receiveLoop(TransportManager.java:790)
at com.trilead.ssh2.transport.TransportManager$1.run(TransportManager.java:502)
... 1 more
I have a simple build configuration called test but the build is not running since the slave is offline.
Any idea why the Jenkins master is not identifying the slave server?
Just change the Host Key Verification Strategy to Non verifying Verification Strategy in the node configuration.
You are using a host key verification strategy that checks the known_hosts file (usually ~/.ssh/known_hosts). However, the Jenkins server, running in a Docker container, is checking /var/jenkins_home/.ssh/known_hosts, which is probably empty or missing right now.
You can instead use the Manually provided key Verification Strategy and paste the slave's public host key there, or use one of the other methods described in this document.
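If you'd rather keep the known-hosts-based strategy, here is a hedged sketch of pre-populating the file from inside the Jenkins server container (it assumes the container is named jenkins_server, the Jenkins process runs as the jenkins user, ssh-keyscan is available in the server image, and the slave answers on 172.17.0.1:2222 as above):
docker exec -u jenkins jenkins_server bash -c \
  'mkdir -p /var/jenkins_home/.ssh && ssh-keyscan -p 2222 172.17.0.1 >> /var/jenkins_home/.ssh/known_hosts'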
I installed a brand new GitLab CE 13.9.1 on an Ubuntu Server 20.04.2.0.
This is the pipeline:
image: node:latest

before_script:
  - apt-get update -qq

stages:
  - install

install:
  stage: install
  script:
    - npm install --verbose
To run it, I configure my GitLab Runner using the same procedure as on my previous GitLab CE 12.
I pull the latest GitLab Runner image:
docker pull gitlab/gitlab-runner:latest
First try:
Start GitLab Runner container mounting on local volume
docker run -d \
--name gitlab-runner \
--restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
And register the runner:
docker run --rm -t -i \
-v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register
When registering the runner, I pick shell as the executor.
Finally, when I push to GitLab, the pipeline shows this error:
$ apt-get update -qq
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
ERROR: Job failed: exit status 1
Second try:
Start GitLab Runner container mounting on Docker volume
Create volume
docker volume create gitlab-runner-config
Start GitLab Runner container
docker run -d \
--name gitlab-runner \
--restart always \
-v gitlab-runner-config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
Register runner (picking shell again as executor)
docker run \
--rm -t -i \
-v gitlab-runner-config:/etc/gitlab-runner gitlab/gitlab-runner register
Same results.
$ apt-get update -qq
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
ERROR: Job failed: exit status 1
Third try:
Granting permissions to gitlab-runner
I ended up reading "In gitlab CI the gitlab runner choose wrong executor" and https://docs.gitlab.com/runner/executors/shell.html#running-as-unprivileged-user, which suggest these solutions:
move to docker
grant the gitlab-runner user the permissions it needs to run the specified commands. gitlab-runner may run apt-get without sudo; it will also need permissions for npm install and npm run.
grant passwordless sudo to the gitlab-runner user. Add gitlab-runner ALL=(ALL) NOPASSWD: ALL (or similar) to /etc/sudoers on the machine where gitlab-runner is installed, and change apt-get update to sudo apt-get update, which will execute it as a privileged user (root).
I need to use shell
I already did that with sudo usermod -aG docker gitlab-runner
I also tried sudo nano /etc/sudoers, adding gitlab-runner ALL=(ALL) NOPASSWD: ALL, and using sudo apt-get update -qq in the pipeline, which results in bash: line 106: sudo: command not found
I'm pretty lost here now. Any idea will be welcome.
IMHO, using the shell executor on a Dockerized runner that already has the Docker socket mounted is not a good idea. You'd be better off using the docker executor, which takes care of everything and is probably how it's meant to be run.
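For example, here is a hedged sketch of registering the runner with the docker executor instead (the URL, token and default image are placeholders to adjust to your setup):
docker run --rm -t -i \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner register \
  --non-interactive \
  --url https://your.gitlab.host/ \
  --registration-token <token> \
  --executor docker \
  --docker-image node:latest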
Edit
Alternatively, you can use a customized Docker image to allow using the shell executor with root permissions. First, you'll need to create a Dockerfile:
FROM gitlab/gitlab-runner:latest
# Change user to root
USER root
Then, you'll have to build the image (here, I tagged it as custom-gitlab-runner):
$ docker build -t custom-gitlab-runner .
Finally, you'll need to use this image:
docker run -d \
--name gitlab-runner \
--restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
custom-gitlab-runner:latest
I had a similar issue trying to use a locally installed gitlab-runner on Ubuntu with the shell executor (I had other issues with the docker executor not being able to communicate between stages):
$ docker build -t myapp .
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=myapp&target=&ulimits=null&version=1": dial unix /var/run/docker.sock: connect: permission denied
ERROR: Job failed: exit status 1
I then printed which user was running the docker command from within the gitlab-ci.yml file; it was gitlab-runner:
...
build:
  script:
    - echo $USER
    - docker build -t myapp .
...
I then added gitlab-runner to the docker group using
sudo usermod -aG docker gitlab-runner
which fixed my issue. No more docker permission errors.
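If the error persists after that, a hedged pair of checks (assuming a locally installed runner managed as a service):
# verify the gitlab-runner user can now reach the Docker daemon
sudo -u gitlab-runner -H docker info
# restart the runner so the new group membership is picked up
sudo gitlab-runner restart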
After setting up gitlab-runner as a Docker container with the docker executor, I fail to run any builds. The displayed log reads as follows:
Running with gitlab-runner 11.4.2 (cf91d5e1)
on <hostname> 9f1c1a0d
Using Docker executor with image docker:stable-git ...
Starting service docker:stable-dind ...
Pulling docker image docker:stable-dind ...
Using docker image sha256:acfec978837639b4230111b35a775a67ccbc2b08b442c1ae2cca4e95c3e6d08a for docker:stable-dind ...
Waiting for services to be up and running...
Pulling docker image docker:stable-git ...
Using docker image sha256:a8a2d0da40bc37344c35ab723d4081a5ef6122d466bf0a0409f742ffc09c43b9 for docker:stable-git ...
Running on runner-9f1c1a0d-project-1-concurrent-0 via a7b6a57c58f8...
Fetching changes...
HEAD is now at 5430a3d <Commit message>
Checking out 5430a3d8 as master...
Skipping Git submodules setup
$ # Auto DevOps variables and functions # collapsed multi-line command
$ setup_docker
$ build
Logging to GitLab Container Registry with CI credentials...
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized: incorrect username or password
ERROR: Job failed: exit code 1
Note the attempt to log in to Docker Hub (I guess) and the credentials error. But I neither want nor configured a username/password to access Docker Hub. Any suggestion as to what is wrong here or how to go about debugging this?
The runner was registered with the following command (which also dictates the contents of the configuration file):
docker run --rm -ti \
-v <config-volume>:/etc/gitlab-runner \
-v $(pwd)/self-signed-server.crt:/etc/ssl/certs/server.crt \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner register \
--tls-ca-file /etc/ssl/certs/server.crt \
--url https://my.server.url/gitlab/ --registration-token <token> \
--name myserver --tag-list "" \
--executor docker --docker-privileged --docker-image debian \
--non-interactive
I used --docker-privileged because I originally had the same problem discussed here (thanks, wendellmva). I just can't run the gitlab-runner container itself as privileged, but I don't see the link-failure problem even though I don't.
To get past this point, one needs to override the CI_REGISTRY_USER variable in the project's Settings -> CI / CD -> Variables section. Assigning it an empty value is enough.
Background: by exporting the project and then parsing the JSON settings with jq, one can get the preconfigured list of commands that run:
jq -r '.pipelines[0].stages[0].statuses[0].commands' project.json
# ...
function registry_login() {
  if [[ -n "$CI_REGISTRY_USER" ]]; then
    echo "Logging to GitLab Container Registry with CI credentials..."
    docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    echo ""
  fi
}
# ...
So there is apparently some non-empty string preloaded to CI_REGISTRY_USER, but with an invalid CI_REGISTRY_PASSWORD.
What I haven't found yet is where to make such settings globally for all projects or how to edit the AutoDevOps pipeline.
I'm a beginner with Docker.
I have pulled a CentOS 7 image from Docker Hub and run it.
I need to SSH into the Docker container (CentOS 7) from my host.
I got the Docker container's IP using docker inspect container-id.
I have installed the following packages:
initscripts
systemd.x86_64
systemd-libs.x86_64
open-ssh
firewalld
net-tools
When I tried to start the firewall to open the port for SSH (22):
[root@a6f3e3eb095c ~]# systemctl start firewall
Failed to get D-Bus connection: Operation not permitted
Also tried,
[root@a6f3e3eb095c ~]# /usr/lib/systemd/systemd --system &
[1] 353
[root@a6f3e3eb095c ~]#
systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization xen.
Detected architecture x86-64.
Welcome to CentOS Linux 7 (Core)!
Set hostname to <a6f3e3eb095c>.
Cannot determine cgroup we are running in: No such file or directory
Failed to allocate manager object: No such file or directory
[1]+ Exit 1 /usr/lib/systemd/systemd --system
How do I start the firewall/SSH inside the Docker container?
Inside the Docker container, run the following commands:
yum update -y glibc-common
yum install -y sudo passwd openssh-server openssh-clients tar screen crontabs strace telnet perl libpcap bc patch ntp dnsmasq unzip pax which
# the EPEL 6 rpm does not work on CentOS 7; install the EPEL 7 release package from the extras repo instead
yum install -y epel-release
yum install -y hiera lsyncd sshpass rng-tools
service sshd start;
sed -i 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config;
sed -i 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config;
sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config;
sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/CentOS-Base.repo
mkdir -p /root/.ssh/;
rm -f /var/lib/rpm/.rpm.lock;
echo "StrictHostKeyChecking=no" > /root/.ssh/config;
echo "UserKnownHostsFile=/dev/null" >> /root/.ssh/config
echo "root:password" | chpasswd
Or:
You can simply pull a CentOS image with SSH preinstalled from Docker Hub:
https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=centos+ssh&starCount=0
https://hub.docker.com/r/kinogmt/centos-ssh/
https://hub.docker.com/r/jdeathe/centos-ssh/
You can avoid the "Failed to get D-Bus connection: Operation not permitted" error (i.e., the need to run systemd inside Docker) by using https://github.com/gdraheim/docker-systemctl-replacement. After that, docker exec should be all you need to do things inside a container.
If you really do need an SSH or SFTP container, then you can use my Docker image as a source image for your own or run it directly.
If you are using the official CentOS-7 image and require systemd, there are instructions on how to enable it under the section "Systemd integration".
However, based on the following:
I need to ssh in to the docker container(CentOS 7) from my host.
You can use docker exec to run commands in a running (backgrounded) container. For images that have bash available, you can get an interactive tty and run bash as follows from your host, where <container> can be either the name or the id:
docker exec --tty --interactive <container> bash
OR
docker exec -ti <container> bash
Finally, it's unlikely to be necessary to install the firewall package in your image: the operator decides which of the exposed ports to publish, and you can use Docker networking to expose only the necessary public-facing services.
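For example, a hedged sketch that publishes only the SSH port when starting the container (the image name is a placeholder):
docker run -d --name centos7-ssh -p 2222:22 <your-centos-ssh-image>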
If you are using the Docker CLI, then you can get into the Docker container using the following command
docker exec -it containerId bash
I am not sure how to SSH into the Docker container, but if you want to do basic operations inside it, you can make use of the above command.
I am running Jenkins in a Docker container. When spinning up a node in another Docker container, I receive the message:
[11/18/16 20:46:21] [SSH] Opening SSH connection to 192.168.99.100:32826.
ERROR: Server rejected the 1 private key(s) for Jenkins (credentialId:528bbe19-eb26-4c9f-bae3-82cd1247d50a/method:publickey)
[11/18/16 20:46:22] [SSH] Authentication failed.
hudson.AbortException: Authentication failed.
at hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:1217)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:711)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:706)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[11/18/16 20:46:22] Launch failed - cleaning up connection
[11/18/16 20:46:22] [SSH] Connection closed.
Using the docker exec -i -t slave_name /bin/bash command, I am able to get into the /home/jenkins/.ssh directory and confirm the SSH key is where it is expected to be.
Under the CLOUD heading on my configure page, Test Connection returns
Version = 1.12.3, API Version = 1.24.
I am running OSX Sierra and attempting to follow the RIOT Games Jenkins-Docker tutorial http://engineering.riotgames.com/news/building-jenkins-inside-ephemeral-docker-container.
Jenkins Master Dockerfile:
FROM debian:jessie
# Create the jenkins user
RUN useradd -d "/var/jenkins_home" -u 1000 -m -s /bin/bash jenkins
# Create the folders and volume mount points
RUN mkdir -p /var/log/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
VOLUME ["/var/log/jenkins", "/var/jenkins_home"]
USER jenkins
CMD ["echo", "Data container for Jenkins"]
Jenkins Slave Dockerfile
FROM centos:7
# Install Essentials
RUN yum update -y && yum clean all
# Install Packages
RUN yum install -y git \
&& yum install -y wget \
&& yum install -y openssh-server \
&& yum install -y java-1.8.0-openjdk \
&& yum install -y sudo \
&& yum clean all
# gen dummy keys, centos doesn't autogen them.
RUN /usr/bin/ssh-keygen -A
# Set SSH Configuration to allow remote logins without /proc write access
RUN sed -ri 's/^session\s+required\s+pam_loginuid.so$/session optional \
pam_loginuid.so/' /etc/pam.d/sshd
# Create Jenkins User
RUN useradd jenkins -m -s /bin/bash
# Add public key for Jenkins login
RUN mkdir /home/jenkins/.ssh
COPY /files/authorized_keys /home/jenkins/.ssh/authorized_keys
RUN chown -R jenkins /home/jenkins
RUN chgrp -R jenkins /home/jenkins
RUN chmod 600 /home/jenkins/.ssh/authorized_keys
RUN chmod 700 /home/jenkins/.ssh
# Add the jenkins user to sudoers
RUN echo "jenkins ALL=(ALL) ALL" >> etc/sudoers
# Set Name Servers to avoid Docker containers struggling to route or resolve DNS names.
COPY /files/resolv.conf /etc/resolv.conf
# Expose SSH port and run SSHD
EXPOSE 22
CMD ["/usr/sbin/sshd","-D"]
I've been working with another individual doing the same tutorial on a Linux box who is stuck at the same place. Any help would be appreciated.
The problem you are running into probably has to do with the interactive confirmation of the host key. Try adding the following command to your slave's Dockerfile:
RUN ssh-keyscan -H 192.168.99.100 >> /home/jenkins/.ssh/known_hosts
Be sure to add it after you created the jenkins user, preferably after
USER jenkins
to avoid wrong ownership of the file.
Also make sure to do this while the master host is online, or it will tell you the host is unreachable. If you can't, generate the known_hosts file manually (for example by running the ssh-keyscan once somewhere the master is reachable) and copy it into your slave.
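A hedged sketch of that manual copy, assuming the running slave container is named jenkins_slave:
docker cp known_hosts jenkins_slave:/home/jenkins/.ssh/known_hosts
docker exec jenkins_slave chown jenkins:jenkins /home/jenkins/.ssh/known_hosts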
You can verify this: if you attach a console to the Docker slave and ssh to the master, it will ask you to trust the server and add it to known_hosts.
I'd like to create the following infrastructure flow:
How can that be achieved using Docker?
First, you need to install an SSH server in the images you wish to SSH into. You can use a single base image with the SSH server installed for all your containers.
Then you only have to run each container mapping the SSH port (default 22) to one of the host's ports (the Remote Server in your diagram), using -p <hostPort>:<containerPort>. E.g.:
docker run -p 52022:22 container1
docker run -p 53022:22 container2
Then, if ports 52022 and 53022 of the host are accessible from outside, you can SSH directly to the containers using the IP of the host (Remote Server), specifying the port with -p <port>. E.g.:
ssh -p 52022 myuser@RemoteServer --> SSH to container1
ssh -p 53022 myuser@RemoteServer --> SSH to container2
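Optionally, a hedged sketch of client-side ~/.ssh/config entries so the port mapping hides behind a name (host and user names as in the example above):
Host container1
    HostName RemoteServer
    Port 52022
    User myuser
Host container2
    HostName RemoteServer
    Port 53022
    User myuser
With that in place, ssh container1 is enough.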
Notice: this answer promotes a tool I've written.
The selected answer here suggests installing an SSH server into every image. Conceptually this is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/).
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container. The only requirement is that the container has bash.
The following example would start an SSH server exposed on port 2222 of the local machine.
$ docker run -d -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock \
-e CONTAINER=my-container -e AUTH_MECHANISM=noAuth \
jeroenpeeters/docker-ssh
$ ssh -p 2222 localhost
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
Not only does this defeat the idea of one process per container, it is also a cumbersome approach when using images from the Docker Hub since they often don't (and shouldn't) contain an SSH server.
These files will successfully set up sshd and run the service so you can SSH in locally. (You are using Cyberduck, aren't you?)
Dockerfile
FROM swiftdocker/swift
MAINTAINER Nobody
RUN apt-get update && apt-get -y install openssh-server supervisor
RUN mkdir /var/run/sshd
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
supervisord.conf
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
To build, run (starting the daemon), and jump into a shell:
docker build -t swift3-ssh .
docker run -p 2222:22 -i -t swift3-ssh
docker ps # find container id
docker exec -i -t <containerid> /bin/bash
I guess it is possible. You just need to install an SSH server in each container and expose a port on the host. The main annoyance would be maintaining/remembering the mapping of port to container.
However, I have to question why you'd want to do this. SSHing into containers should be rare enough that it's not a hassle to ssh to the host and then use docker exec to get into the container.
Create a Docker image with openssh-server preinstalled:
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image using:
$ docker build -t eg_sshd .
Run a test_sshd container:
$ docker run -d -P --name test_sshd eg_sshd
$ docker port test_sshd 22
0.0.0.0:49154
SSH to your container:
$ ssh root@192.168.1.2 -p 49154
# The password is "screencast".
root@f38c87f2a42d:/#
Source: https://docs.docker.com/engine/examples/running_ssh_service/#build-an-eg_sshd-image
This is a quick way, but it is not permanent.
First, create a container:
docker run ..... -p 22022:2222 .....
Port 22022 on your host machine will map to 2222; we change the SSH port in the container later.
Then, in your container, execute the following commands:
apt update && apt install openssh-server # install ssh server
passwd #change root password
In the file /etc/ssh/sshd_config, change these:
uncomment Port and change it to 2222
Port 2222
uncomment PermitRootLogin and set it to
PermitRootLogin yes
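If you prefer to script those two edits instead of editing the file by hand, a hedged sed sketch (it assumes the stock Ubuntu/Debian sshd_config, where both directives may start out commented):
sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config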
and finally start the SSH server:
/etc/init.d/ssh start
You can log in to your container now:
ssh -p 22022 root#HostIP
Remember: if you restart the container, you need to start the SSH server again.