Kubernetes Jenkins - Permission denied

I have installed a Kubernetes cluster, and I have this deployment file for Jenkins:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-vol
              mountPath: /var/jenkins_vol
      volumes:
        - name: jenkins-vol
          emptyDir: {}
The only thing I need is to install the Kubernetes client (kubectl) inside the pod via a curl request.
The problem is that when I exec into the pod and run the curl request, it returns Permission denied:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubectl
Warning: Failed to create the file kubectl: Permission denied

Try adding a securityContext (at the pod level, under template.spec) to your deployment so the container runs as root:
spec:
  securityContext:
    runAsUser: 0
If this doesn't work (your Jenkins deployment is failing, or there is some other issue), then exec into the pod (kubectl exec) and check which user you are running as with id or whoami.
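If you cannot run as root, a workaround sketch (assuming the official image's non-root jenkins user) is to download kubectl into a directory that user can write to, such as /tmp:
kubectl exec -it deploy/jenkins -- bash
# inside the pod:
id          # usually uid=1000(jenkins) in the official image
cd /tmp     # a directory the jenkins user can write to
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubectl
chmod +x kubectl
./kubectl version --client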

Related

Jenkins, docker and kubernetes (minikube)

I'm using Jenkins to deploy a pipeline, so the first step I did was deploying Jenkins to minikube, and it works at first. But each time I run minikube stop and restart it, Jenkins starts again from the beginning (unlock Jenkins). I just followed this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-install-jenkins-on-kubernetes
and this is what Jenkins shows every time I start minikube: the "Unlock Jenkins" screen asking for the initial admin password.
Hope someone has an answer for me! Thank you.
It looks like the secret is not mounted for the deployment. You can do it as follows.
Create the secret using:
kubectl create secret generic jenkins --from-literal=jenkins_password="ADD YOUR SECRET TOKEN which you will find in the Jenkins pod logs"
and mount it like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          env:
            - name: JENKINS_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: jenkins_password
                  name: jenkins
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-data
Next time it will not ask you for the token. I would also highly recommend using a PVC so the data persists: if you install plugins or configure jobs, they will otherwise be gone the next time you restart Jenkins.
So for the PVC you can use something like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
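For reference, the token mentioned above can be read from the running pod; a sketch assuming the stock jenkins/jenkins image, which stores it under /var/jenkins_home:
# print the generated token from the pod logs
kubectl logs deploy/jenkins
# or read it directly from the Jenkins home directory
kubectl exec deploy/jenkins -- cat /var/jenkins_home/secrets/initialAdminPassword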

Volume mounting in Jenkins on Kubernetes

I'm trying to set up Jenkins to run in a container on Kubernetes, but I'm having trouble persisting the volume for the Jenkins home directory.
Here's my deployment.yml file. The image is based on jenkins/jenkins:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
          imagePullPolicy: "Always"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}
However, if I then push a new container to my image repository and update the pods using the commands below, Jenkins comes back online but asks me to start from scratch (enter the admin password; none of my Jenkins jobs are there, no plugins, etc.):
kubectl apply -f kubernetes (where my manifests are stored)
kubectl set image deployment/jenkins-deployment jenkins=1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins:$VERSION
Am I misunderstanding how this volume mount is meant to work?
As an aside, I also have backup and restore scripts which backup the Jenkins home directory to s3, and download it again, but that's somewhat outside the scope of this issue.
You should use PersistentVolumes, along with a StatefulSet instead of a Deployment resource, if you wish your data to survive re-deployments/restarts of your pod.
You have specified the volume type emptyDir. This essentially mounts an empty directory on the kube node that runs your pod. Every time you restart your deployment, the pod can move between kube hosts and the empty dir isn't carried over, so your data doesn't persist across restarts.
I see you're pulling your image from an ECR repository, so I'm assuming you're running k8s in AWS.
You'll need to configure a StorageClass for AWS. If you've provisioned k8s using something like kops, this will already be configured. You can confirm this by doing kubectl get storageclass - the provisioner should be configured as EBS:
NAME            PROVISIONER
gp2 (default)   kubernetes.io/aws-ebs
Then, you need to specify a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2 # must match your storageclass from above
  resources:
    requests:
      storage: 30Gi
You can now use the PV claim in your deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
          imagePullPolicy: "Always"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-data # must match the claim name from above
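Before rolling the deployment, it's worth confirming the claim actually binds. A quick check (the manifest file names here are just examples):
kubectl apply -f pvc.yaml          # example file name for the claim above
kubectl get pvc jenkins-data       # STATUS should show Bound
kubectl apply -f deployment.yaml   # example file name for the deployment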

Cronjob in Kubernetes to restart (delete) the pod in a deployment

I am using Kubernetes to run a Docker service. This is a defective service that requires a restart every day. For multiple reasons we can't solve the problem programmatically, and simply restarting the container every day will do.
When I migrated to Kubernetes I noticed I can't do docker restart [mydocker], but since the service is a Deployment with the Recreate strategy, I just need to delete the pod to have Kubernetes create a new one.
Can I automate this task of deleting the pod (or an alternative that restarts it) using a CronJob in Kubernetes?
Thanks for any directions/examples.
Edit: My current deployment yml:
apiVersion: v1
kind: Service
metadata:
  name: et-rest
  labels:
    app: et-rest
spec:
  ports:
    - port: 9080
      targetPort: 9080
      nodePort: 30181
  selector:
    app: et-rest
    tier: frontend
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: et-rest
  labels:
    app: et-rest
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: et-rest
        tier: frontend
    spec:
      containers:
        - image: et-rest-image:1.0.21
          name: et-rest
          ports:
            - containerPort: 9080
              name: et-rest
          volumeMounts:
            - name: tz-config
              mountPath: /etc/localtime
      volumes:
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Europe/Madrid
You can use a scheduled job pod:
A scheduled job pod has built-in cron behavior, making it possible to restart jobs; combined with the timeout behavior, this gives you the required behavior of restarting your app every X hours.
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: app-with-timeout
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 86400 # 24 hours
      template:
        spec:
          containers:
            - name: yourapp
              image: yourimage
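Note that ScheduledJob (batch/v2alpha1) has long been removed; on current clusters the same idea is usually expressed as a batch/v1 CronJob that triggers a rollout restart. A sketch, where the service account deploy-restarter and its RBAC permission to patch deployments are assumptions you would need to create separately:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-et-rest
spec:
  schedule: "0 4 * * *" # every day at 04:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deploy-restarter # assumed SA allowed to patch deployments
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest # assumed image that ships kubectl
              command: ["kubectl", "rollout", "restart", "deployment/et-rest"]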

How to pass docker container flags via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a mailhog container.
But I need to run it with my own docker run parameters. If I ran it in Docker directly, I would use the command:
docker run mailhog/mailhog -auth-file=./auth.file
But I need to run it via a Kubernetes pod. My pod looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      containers:
        - name: mailhog
          image: us.gcr.io/com/mailhog:1.0.0
          ports:
            - containerPort: 8025
How can I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.
I tried adding, under containers:
command: ["-auth-file", "/data/mailhog/auth.file"]
but then I get
Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
Thanks to @lang2, here is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      volumes:
        - name: secrets-volume
          secret:
            secretName: mailhog-login
      containers:
        - name: mailhog
          image: us.gcr.io/com/mailhog:1.0.0
          resources:
            limits:
              cpu: 70m
              memory: 30Mi
            requests:
              cpu: 50m
              memory: 20Mi
          volumeMounts:
            - name: secrets-volume
              mountPath: /data/mailhog
              readOnly: true
          ports:
            - containerPort: 8025
            - containerPort: 1025
          args:
            - "-auth-file=/data/mailhog/auth.file"
In Kubernetes, command is the equivalent of ENTRYPOINT. In your case, args should be used instead.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core
You are on the right track. It's just that you also need to include the name of the binary in the command array as the first element. You can find that out by looking in the respective Dockerfile (CMD and/or ENTRYPOINT).
In this case:
command: ["Mailhog", "-auth-file", "/data/mailhog/auth.file"]
I needed a similar thing (my aim was passing the application profile to the app), and what I did was the following:
I set an environment variable in the Deployment section of the Kubernetes YAML file:
env:
  - name: PROFILE
    value: "dev"
Then I used this environment variable in the Dockerfile as a command-line argument:
CMD java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar
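Note that the shell form of CMD is what makes the ${PROFILE} (and the *.jar glob) expansion work here; the exec (JSON array) form does not go through a shell, so it would need to invoke one explicitly. An equivalent sketch:
CMD ["sh", "-c", "java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar"]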

How to run a script at the start of a container in Google Container Engine with Kubernetes

I am trying to run a shell script at the start of a Docker container running on Google Container Engine with Kubernetes. The structure of my app directory is something like this. I'd like to run the prod_start.sh script at the start of the container (I don't want to put it in the Dockerfile, though). The current setup fails to start the container with Command not found file ./prod_start.sh does not exist. Any idea how to fix this?
app/
...
Dockerfile
prod_start.sh
web-controller.yaml
Gemfile
...
Dockerfile
FROM ruby
RUN mkdir /backend
WORKDIR /backend
ADD Gemfile /backend/Gemfile
ADD Gemfile.lock /backend/Gemfile.lock
RUN bundle install
web-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
  labels:
    app: myapp
    tier: backend
spec:
  replicas: 1
  selector:
    app: myapp
    tier: backend
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      volumes:
        - name: secrets
          secret:
            secretName: secrets
      containers:
        - name: my-backend
          command: ['./prod_start.sh']
          image: gcr.io/myapp-id/myapp-backend:v1
          volumeMounts:
            - name: secrets
              mountPath: /etc/secrets
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
              name: http-server
After a lot of experimentation, I found that adding the script to the Dockerfile:
ADD prod_start.sh /backend/prod_start.sh
and then calling the command like this in the YAML controller file:
command: ['/bin/sh', './prod_start.sh']
Fixed it.
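This likely works because /bin/sh ./prod_start.sh does not require the script to carry the executable bit. An alternative sketch is to set the bit at build time, after which the original command works unchanged:
ADD prod_start.sh /backend/prod_start.sh
RUN chmod +x /backend/prod_start.sh
# then command: ['./prod_start.sh'] resolves, since WORKDIR is /backend
# (provided the script starts with a shebang line)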
You can add a ConfigMap to your YAML instead of adding the script to your Dockerfile.
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
  labels:
    app: myapp
    tier: backend
spec:
  replicas: 1
  selector:
    app: myapp
    tier: backend
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      volumes:
        - name: secrets
          secret:
            secretName: secrets
        - name: prod-start-config
          configMap:
            name: prod-start-config-script
            defaultMode: 0744
      containers:
        - name: my-backend
          command: ['./prod_start.sh']
          image: gcr.io/myapp-id/myapp-backend:v1
          volumeMounts:
            - name: secrets
              mountPath: /etc/secrets
              readOnly: true
            - name: prod-start-config
              mountPath: /backend/
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
              name: http-server
Then create another YAML file for your script:
script.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prod-start-config-script
data:
  prod_start.sh: |
    apt-get update
When deployed, the script will be in the mounted /backend directory.
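One caveat worth noting: mounting the ConfigMap volume at /backend/ shadows everything the image put in that directory (the Gemfile, the installed bundle, etc.). A sketch of a safer variant using subPath, which mounts only the script file:
volumeMounts:
  - name: prod-start-config
    mountPath: /backend/prod_start.sh
    subPath: prod_start.sh # mount just the script; the rest of /backend stays intact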
