Kubernetes - How to run local image of Jenkins

I have a config file like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: aaa-aaa/jenkins.war.LTS.2.89.4
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}
In the same directory as this config file, I have an image of Jenkins: jenkins.war.LTS.2.89.4
How can I deploy using this image?

You cannot run a Jenkins war file directly on Kubernetes. You need to build a Docker image from that war file to be able to run it on Kubernetes.
Follow this guide to create a Docker image from the war file.
Once you have a Docker image, you can push it to a registry (remote or local, private or public) and reference its URL in the image field of the Kubernetes deployment YAML.
I would also suggest using the Jenkins Helm chart to deploy Jenkins on Kubernetes.
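For illustration, a minimal Dockerfile for this could look like the one below. This is a sketch: the openjdk:8-jre base image and the my-registry.example.com registry name are assumptions, so adjust both for your environment.

# Dockerfile - package the war file into a runnable image (base image is an assumption)
FROM openjdk:8-jre
COPY jenkins.war.LTS.2.89.4 /usr/share/jenkins/jenkins.war
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/usr/share/jenkins/jenkins.war"]

Build and push it, then set the image field in the deployment to the pushed tag instead of aaa-aaa/jenkins.war.LTS.2.89.4:

$ docker build -t my-registry.example.com/jenkins:2.89.4 .
$ docker push my-registry.example.com/jenkins:2.89.4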

Related

Docker build failing inside Azure DevOps Self-Hosted Agent

Background
I have set up some self-hosted Azure DevOps build agents inside my AKS cluster, following this documentation: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops
The agents have been created successfully; I can see them in my Agent Pools and can target them from my pipeline.
One of the first things my pipeline does is build and push some Docker images. This is a problem inside a self-hosted agent. The documentation includes the below warning and link:
In order to use Docker from within a Docker container, you bind-mount the Docker socket.
If you're sure you want to do this, see the bind mount documentation on Docker.com.
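For reference, the plain docker run form of that bind mount looks like this (my-agent-image is a placeholder for the agent image name):

$ docker run -v /var/run/docker.sock:/var/run/docker.sock my-agent-image

The hostPath volume in the deployment below is the Kubernetes equivalent of that flag.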
Files
Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 # here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
      - name: kubepodcreation
        image: AKRTestcase.azurecr.io/kubepodcreation:5306
        env:
        - name: AZP_URL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_URL
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_TOKEN
        - name: AZP_POOL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_POOL
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-volume
      volumes:
      - name: docker-volume
        hostPath:
          path: /var/run/docker.sock
Error
Attempting to run the pipeline gives me the following error:
##[error]Unhandled: Unable to locate executable file: 'docker'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
Questions
Is it possible (and safe) to build and push Docker images from an Azure DevOps build agent running in a Docker container?
How can I modify the Kubernetes deployment file to bind-mount the Docker socket?
Any help will be greatly appreciated.

How to add "-v /var/run/docker.sock:/var/run/docker.sock" when running container from kubernetes deployment yaml

I'm setting up a Kubernetes deployment with an image that will execute Docker commands (docker ps etc.).
My YAML looks like the following:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: discovery
  namespace: kube-system
  labels:
    discovery-app: kubernetes-discovery
spec:
  selector:
    matchLabels:
      discovery-app: kubernetes-discovery
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        discovery-app: kubernetes-discovery
    spec:
      containers:
      - image: docker:dind
        name: discover
        ports:
        - containerPort: 8080
          name: my-awesome-port
      imagePullSecrets:
      - name: regcred3
      volumes:
      - name: some-volume
        emptyDir: {}
      serviceAccountName: kubernetes-discovery
Normally I would run the Docker container as follows:
docker run -v /var/run/docker.sock:/var/run/docker.sock docker:dind
Now, Kubernetes YAML supports command and args, but for some reason does not support options.
What is the right thing to do?
Perhaps I should configure a volume, but then, is it a volumeMount or just a volume?
I am new to Kubernetes, so it is important for me to do it the right way.
Thank you
You want to add the volume to the container.
spec:
  containers:
  - name: discover
    image: docker:dind
    volumeMounts:
    - name: dockersock
      mountPath: "/var/run/docker.sock"
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
It seems like a bad idea to interact directly with containers on any node in Kubernetes. The whole point of Kubernetes is to orchestrate: if you start containers outside of the Pod construct, Kubernetes will not be aware of the processes running on the nodes, which will affect resource allocation.
It also needs to be said that working with containers directly bypasses Kubernetes' security model.

Run private docker image on minikube k8s

I want to run a private Docker image on my minikube k8s cluster,
but the pod is never able to pull my image from Docker Hub.
How can I pull a private image in k8s and use it?
This is my YAML for the pod:
apiVersion: v1
kind: Pod
metadata:
  name: privaterepo
spec:
  containers:
  - name: private-reg-container
    image: raveena1/test
  imagePullSecrets:
  - name: regsecret
The log is:
container "private-reg-container" in pod "privaterepo" is waiting to start: trying and failing to pull image
You need to create a secret and use it in your YAML/JSON deployment file.
Create the secret (this example targets the Docker Hub registry; change --docker-server for another registry):
$ kubectl create secret docker-registry regsecret --docker-server=https://index.docker.io/v1/ --docker-username=$USERNM --docker-password=$PASSWD --docker-email=vivekyad4v@gmail.com
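You can verify the secret exists before wiring it into the deployment (kubectl prints it with the credentials base64-encoded):

$ kubectl get secret regsecret --output=yaml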
deployment.yaml (uses regsecret):
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: local-simple-python
spec:
  replicas: 2
  selector:
    matchLabels:
      app: local-simple-python
  template:
    metadata:
      labels:
        app: local-simple-python
    spec:
      containers:
      - name: python
        image: vivekyad4v/local-simple-python:latest
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regsecret
Deploy:
$ kubectl create -f deployment.yaml
Your pods should now be able to fetch Docker images from the private registry.
You can find more info at:
https://github.com/vivekyad4v/kubernetes/tree/master/kubernetes-for-beginners
Official doc - https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
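If you are already logged in via docker login on your machine, an alternative shown in the official doc is to build the secret from your existing Docker config file (the path below is the default location; adjust if yours differs):

$ kubectl create secret generic regsecret \
    --from-file=.dockerconfigjson=$HOME/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson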

Volume mounting in Jenkins on Kubernetes

I'm trying to set up Jenkins to run in a container on Kubernetes, but I'm having trouble persisting the volume for the Jenkins home directory.
Here's my deployment.yml file. The image is based on jenkins/jenkins:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}
However, if I then push a new container to my image repository and update the pods using the commands below, Jenkins comes back online but asks me to start from scratch (enter the admin password; none of my Jenkins jobs are there, no plugins, etc.):
kubectl apply -f kubernetes (where my manifests are stored)
kubectl set image deployment/jenkins-deployment jenkins=1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins:$VERSION
Am I misunderstanding how this volume mount is meant to work?
As an aside, I also have backup and restore scripts which back up the Jenkins home directory to S3 and download it again, but that's somewhat outside the scope of this issue.
You should use a PersistentVolume (along with a StatefulSet instead of a Deployment resource) if you want your data to survive re-deployments or restarts of your pod.
You have specified the volume type emptyDir. This essentially mounts an empty directory on the kube node that runs your pod. Every time the deployment recreates the pod, it may be scheduled on a different node, and in any case the new pod starts with a fresh empty directory, so your data does not persist across restarts.
I see you're pulling your image from an ECR repository, so I'm assuming you're running k8s in AWS.
You'll need a StorageClass configured for AWS. If you've provisioned k8s using something like kops, this will already be configured. You can confirm this by running kubectl get storageclass - the provisioner should be EBS:
NAME            PROVISIONER
gp2 (default)   kubernetes.io/aws-ebs
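If no StorageClass shows up, a minimal EBS-backed one looks like the following (a sketch; the gp2 volume type is an assumption, pick whatever fits your cluster):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2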
Then you need to create a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2 # must match your storageclass from above
  resources:
    requests:
      storage: 30Gi
You can now use the PV claim in your deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-data # must match the claim name from above
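Once both manifests are applied, you can check that the claim bound to a volume and that the pod picked it up (resource names as defined above):

$ kubectl get pvc jenkins-data
$ kubectl describe pod -l app=jenkins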

How to centralize log files from a Docker container?

How can I centralize log files from a Docker container?
These log files are not under /var/lib/docker/containers/*/.
I mean files like catalina.out, or other log files inside the container
(which may or may not be written to stdout/stderr).
Most solutions only cover stdout/stderr (/var/lib/docker/containers/*),
but I want to centralize log files inside the container so I can use ELK or Fluentd.
Help me please.
You could use a forwarder container inside your pod and share a volume for the log directory, as follows:
kind: ReplicationController
apiVersion: v1
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat
        volumeMounts:
        - name: tomcat-logs
          mountPath: /tomcat/log
          readOnly: false
      - name: logstash-forwarder
        image: apopelo/logstash-forwarder
        volumeMounts:
        - name: tomcat-logs
          mountPath: /var/log/tomcat
          readOnly: true
      volumes:
      - name: tomcat-logs
        emptyDir: {}
The tomcat container runs the app, while the logstash-forwarder container forwards the Tomcat logs.
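To sanity-check that both containers see the same files through the shared emptyDir, you can list the mount from the forwarder side (replace <pod-name> with the actual name from kubectl get pods):

$ kubectl exec <pod-name> -c logstash-forwarder -- ls /var/log/tomcat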
