How to deploy to a (local) Kubernetes cluster using Jenkins

This question is related to one of my previous questions, which gives a clearer idea of what I am trying to achieve. This question is about an issue I ran into while attempting that task.
I am trying to test whether kubectl works from within the Jenkins container. I start the Jenkins container with the following command:
# $(which kubectl): bind the docker host's kubectl binary to the container's binary path
# ~/.kube: the host kubeconfig directory, mounted to $HOME/.kube inside the container
docker run \
  -v /home/student/Desktop/jenkins_home:/var/jenkins_home \
  -v $(which kubectl):/usr/local/bin/kubectl \
  -v ~/.kube:/home/jenkins/.kube \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/bin/docker \
  -v ~/.kube:/home/root/.kube \
  --group-add 998 \
  -p 8080:8080 -p 50000:50000 \
  -d --name jenkins jenkins/jenkins:lts
The container starts up and I can log in, create jobs, and run pipeline scripts with no issue.
I created a pipeline script just to check if I can access my cluster like this:
pipeline {
    agent any
    stages {
        stage('Kubernetes test') {
            steps {
                sh "kubectl cluster-info"
            }
        }
    }
}
When running this job, it fails with the following error:
+ kubectl cluster-info // this is the step
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"
Thanks!

First, I'm not clear on why you have both:
-v $(which kubectl):/usr/local/bin/kubectl -v ~/.kube:/home/jenkins/.kube
/usr/local/bin/kubectl is the kubectl binary, and /home/jenkins/.kube should be the location where kubectl looks for the cluster context file, i.e. the kubeconfig.
First, make sure the kubeconfig is mounted into the container at /home/jenkins/.kube and is readable by the kubectl binary. With the volume mounts in place, you can verify by opening a session in the Jenkins container with docker container exec -it jenkins /bin/bash and testing with kubectl get svc. Make sure the KUBECONFIG env var is set in the session with:
export KUBECONFIG=/home/jenkins/.kube/kubeconfig
before you run the verification test. Then use
withEnv(["KUBECONFIG=$HOME/.kube/kubeconfig"]) {
    // Your stuff here
}
in your pipeline code. If it works in the exec session, it should work in the pipeline as well.
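Putting the two together, a minimal test pipeline might look like this (a sketch; the file name kubeconfig follows the export above, so adjust it to whatever file actually sits in your mounted ~/.kube directory, often just config):
pipeline {
    agent any
    stages {
        stage('Kubernetes test') {
            steps {
                // KUBECONFIG points at the kubeconfig mounted in from the host
                withEnv(["KUBECONFIG=/home/jenkins/.kube/kubeconfig"]) {
                    sh "kubectl cluster-info"
                }
            }
        }
    }
}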
I would personally recommend creating a custom Docker image for Jenkins that contains the kubectl binary and any other utilities needed for working with a Kubernetes cluster (such as aws-iam-authenticator for AWS EKS IAM-based authentication). This isolates your Jenkins binaries from your host system binaries.
Below is the Dockerfile I'm using, which contains Helm, kubectl, and aws-iam-authenticator.
# This Dockerfile adds Helm, a client-only Docker install, aws-iam-authenticator, and kubectl to Jenkins LTS.
FROM jenkins/jenkins:lts
USER root
ENV VERSION v2.9.1
ENV FILENAME helm-${VERSION}-linux-amd64.tar.gz
ENV HELM_URL https://storage.googleapis.com/kubernetes-helm/${FILENAME}
ENV KUBE_LATEST_VERSION="v1.11.0"
# Install the latest Docker CE binaries, then Helm, kubectl, and aws-iam-authenticator
RUN apt-get update && \
    apt-get -y install apt-transport-https \
      ca-certificates \
      curl \
      gnupg2 \
      software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
      $(lsb_release -cs) \
      stable" && \
    apt-get update && \
    apt-get -y install docker-ce && \
    curl -o /tmp/${FILENAME} ${HELM_URL} && \
    tar -zxvf /tmp/${FILENAME} -C /tmp && \
    mv /tmp/linux-amd64/helm /bin/helm && \
    rm -rf /tmp/linux-amd64 && \
    curl -L https://storage.googleapis.com/kubernetes-release/release/${KUBE_LATEST_VERSION}/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl && \
    chmod +x /usr/local/bin/kubectl && \
    curl -L https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/linux/amd64/aws-iam-authenticator -o /usr/local/bin/aws-iam-authenticator && \
    chmod +x /usr/local/bin/aws-iam-authenticator
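To use it, build the image and run Jenkins from it instead of the stock image. A sketch, assuming the tag my-jenkins-k8s (the kubectl and docker binary mounts are no longer needed, since both are baked into the image):
docker build -t my-jenkins-k8s .
docker run -d --name jenkins \
  -v /home/student/Desktop/jenkins_home:/var/jenkins_home \
  -v ~/.kube:/home/jenkins/.kube \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add 998 \
  -p 8080:8080 -p 50000:50000 \
  my-jenkins-k8s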


Related

Jenkins on docker not finding xml pytest report

I have a rootless Docker host, Jenkins running in Docker, and a FastAPI app inside a container as well.
Jenkins Dockerfile:
FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt-get update && \
    apt-get -y install apt-transport-https \
      ca-certificates \
      curl \
      gnupg2 \
      software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
      $(lsb_release -cs) \
      stable" && \
    apt-get update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
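The image is built with (assuming the Dockerfile sits in the current directory; the tag matches the run command below):
docker build -t jenkins-docker-image .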
This is the docker run command:
docker run -d --name jenkins-docker --restart=on-failure \
  -v jenkins_home:/var/jenkins_home \
  -v /run/user/1000/docker.sock:/var/run/docker.sock \
  -p 8080:8080 -p 5000:5000 \
  jenkins-docker-image
Where -v /run/user/1000/docker.sock:/var/run/docker.sock is used so jenkins-docker can use the host's docker engine.
Then, for the tests I have a docker compose file:
services:
app:
volumes:
- /home/jap/.local/share/docker/volumes/jenkins_home/_data/workspace/vlep-pipeline_main/test-result:/usr/src
depends_on:
- testdb
...
testdb:
image: postgres:14-alpine
...
volumes:
test-result:
Here I am using the volume created on the host when I ran jenkins-docker-image. After running the Jenkins 'test' stage, I can see that a report.xml file was created in both the host and jenkins-docker volumes.
Inside jenkins-docker:
root@89b37f219be1:/var/jenkins_home/workspace/vlep-pipeline_main/test-result# ls
report.xml
Inside the host:
jap@jap:~/.local/share/docker/volumes/jenkins_home/_data/workspace/vlep-pipeline_main/test-result $ ls
report.xml
I then have the following steps on my jenkinsfile:
steps {
sh 'docker compose -p testing -f docker/testing.yml up -d'
junit "/var/jenkins_home/workspace/vlep-pipeline_main/test-result/report.xml"
}
I also tried using the host path for the junit step, but either way the Jenkins logs show:
Recording test results
No test report files were found. Configuration error?
What am I doing wrong?

Running Docker commands inside Jenkins pipeline

Is there a proper way to run Docker commands through a Jenkins containerized service?
I see there are many plugins to support Docker commands in the Jenkins ecosystem, although all of them raise errors because Docker isn't installed in the Jenkins container.
I have a Dockerfile that provides a Jenkins image with a working Docker installation, but to work I have to mount the host's Docker socket:
FROM jenkins/jenkins:lts
USER root
RUN apt-get -y update && \
    apt-get -y install sudo \
      apt-transport-https \
      ca-certificates \
      curl \
      gnupg-agent \
      software-properties-common
RUN add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/debian \
    $(lsb_release -cs) \
    stable"
RUN apt-get -y update && \
    apt-get -y install --allow-unauthenticated \
      docker-ce \
      docker-ce-cli \
      containerd.io
RUN echo "jenkins:jenkins" | chpasswd && adduser jenkins sudo
RUN echo jenkins ALL= NOPASSWD: ALL >> /etc/sudoers
USER jenkins
It can be run like this:
docker run -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock <my-jenkins-image>
This makes it possible to run Docker commands inside the Jenkins container. However, I am concerned about security: this way the Jenkins container can access all the containers running on the host machine, and the Jenkins user effectively has root privileges, which I wouldn't want in production.
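For illustration, once the socket is mounted, any pipeline step can drive the host's Docker daemon directly; a minimal sketch:
pipeline {
    agent any
    stages {
        stage('Docker check') {
            steps {
                // talks to the host daemon through the mounted /var/run/docker.sock
                sh 'docker version'
            }
        }
    }
}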
I seek to run a Jenkins instance within a Kubernetes cluster to support CI and CD pipelines within that cluster, therefore I'm guessing Jenkins must be containerized.
Am I missing something?

Connecting Rstudio and SFTP docker containers directly

I'm trying to run RStudio Server in a Docker container. Users will connect to this container and use RStudio via the internet.
The built-in mechanism for uploading and downloading files in RStudio is very slow, so I'd also like to run an SFTP server in a separate container.
I'm trying to link the two containers using Docker volumes, but I'm having some trouble. Here is how I'm trying to run the two images.
I'm running the SFTP server with:
docker run -p 2222:22 -v /home/rstudio --name ftpserver -d atmoz/sftp rstudio:foo:1001
Then I'm trying to connect to the same directory in RStudio by doing:
docker run -d -p 8787:8787 -e PASSWORD=foo --volumes-from ftpserver --name rstudio r-studio-bio:Dockerfile
This causes RStudio to give an error:
RStudio Initialization Error. Unable to connect to service.
Likewise, I'm unable to upload to the SFTP server because it says I lack the proper permissions.
The FTP server image is here : https://hub.docker.com/r/atmoz/sftp/
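(As an aside, the same sharing can be expressed with an explicit named volume instead of --volumes-from; a sketch, assuming the same image names and credentials as above:)
docker volume create rstudio-data
# mount the shared volume at the SFTP user's home directory
docker run -p 2222:22 -v rstudio-data:/home/rstudio --name ftpserver -d atmoz/sftp rstudio:foo:1001
# mount the same volume into the RStudio container
docker run -d -p 8787:8787 -e PASSWORD=foo -v rstudio-data:/home/rstudio --name rstudio r-studio-bio:Dockerfile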
The RStudio-Server Dockerfile is:
# See the following for more info:
# https://hub.docker.com/r/pgensler/sandboxr/
# https://www.rocker-project.org/images/
# https://hub.docker.com/r/rocker/rstudio
FROM rocker/tidyverse
LABEL maintainer="Alex"
#
RUN mkdir -p $HOME/.R
RUN mkdir $HOME/Rlibs
ENV R_LIBS $HOME/Rlibs
# COPY R/Makevars /root/.R/Makevars
RUN apt-get update -qq \
&& apt-get -y --no-install-recommends install \
curl \
clang \
ccache \
default-jdk \
default-jre \
wget \
systemd \
# openssh-server \
&& R CMD javareconf \
# && systemctl ssh start \
# && systemctl enable ssh \
&& rm -rf /var/lib/apt/lists/*
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
# # Install additional R packages
RUN Rscript -e "BiocManager::install()"
RUN Rscript -e "BiocManager::install('multtest')"
RUN Rscript -e "install.packages('Seurat')"

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock:

Use Case:
The base instance runs Ubuntu 16.04.
Docker is installed and works fine, and I'm able to pull Docker images.
I deployed an instance of the Jenkins Docker container:
docker run -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --name jenkins \
  jenkins/jenkins:lts
This mounts the host machine's Docker socket into the Jenkins container, as described in this article:
https://getintodevops.com/blog/the-simple-way-to-run-docker-in-docker-for-ci
Then I installed the Docker binaries in the Jenkins container:
apt-get update && \
  apt-get -y install apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common && \
  curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
  add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
    $(lsb_release -cs) \
    stable" && \
  apt-get update && \
  apt-get -y install docker-ce
Running docker ps from the Jenkins container listed the available containers.
But triggering a job from Jenkins fails with the error below:
+ docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
I tried the solution provided at the link below (adding the user to the group), but it still fails:
https://techoverflow.net/2017/03/01/solving-docker-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket/
Any help is greatly appreciated.
Thank you
I got this to work: add the user to the docker group, then stop and restart the container.
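A concrete sketch of those two steps, assuming the container is named jenkins and the host socket's group id is 998:
# find the group id that owns the Docker socket on the host
stat -c '%g' /var/run/docker.sock
# create a matching group inside the container and add the jenkins user to it
docker exec -u root jenkins bash -c "groupadd -f -g 998 docker && usermod -aG docker jenkins"
# restart the container so the new group membership takes effect
docker restart jenkins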

How to add JDBC driver to Kafka Connect on DC/OS?

I'm running Kafka Connect 4.1.1 on DC/OS using the Confluent community package. How can we upload or add our JDBC driver to the remote cluster?
Update: it's a package installed from the DC/OS catalog, which is a Mesos framework running Docker images.
Update
Script borrowed from here (thanks to @rmoff)
It's an example of overriding the Docker CMD with a bash script to download and extract the REST API source connector.
bash -c 'echo Installing unzip… && \
curl -so unzip.deb http://ftp.br.debian.org/debian/pool/main/u/unzip/unzip_6.0-16+deb8u3_amd64.deb && \
dpkg -i unzip.deb && \
echo Downloading connector… && \
curl -so kafka-connect-rest.zip https://storage.googleapis.com/rmoff-connectors/kafka-connect-rest.zip && \
mkdir -p /u01/connectors/ && \
unzip -j kafka-connect-rest.zip -d /u01/connectors/kafka-connect-rest && \
echo Launching Connect… && \
/etc/confluent/docker/run'
You'll need to build your own Docker images and publish them to a resolvable Docker Registry for your Mesos cluster, and then edit the Mesos Service to pull these images instead of the Confluent one.
For example, in your Dockerfiles, you would have
ADD http://somepath.com/someJDBC-driver.jar /usr/share/java/kafka-connect-jdbc
Or curl rather than ADD, as shown in the Confluent docs (because it needs to extract that .tar.gz file).
FROM confluentinc/cp-kafka-connect
ENV MYSQL_DRIVER_VERSION 5.1.39
RUN curl -k -SL "https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-${MYSQL_DRIVER_VERSION}.tar.gz" \
| tar -xzf - -C /usr/share/java/kafka-connect-jdbc/ --strip-components=1 mysql-connector-java-${MYSQL_DRIVER_VERSION}/mysql-connector-java-${MYSQL_DRIVER_VERSION}-bin.jar
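Once built, push the image to a registry your Mesos agents can pull from, then point the service at it; a sketch, with the registry host as an assumption:
docker build -t my-registry.example.com/cp-kafka-connect-jdbc:latest .
docker push my-registry.example.com/cp-kafka-connect-jdbc:latest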
You can also use confluent-hub install to add other connectors that aren't plain JDBC driver JARs.
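For example, a sketch of installing a Confluent Hub connector in the image (the connector shown is just an illustration):
FROM confluentinc/cp-kafka-connect
# install a connector from Confluent Hub non-interactively
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-datagen:latest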
