I have a Jenkins pipeline using the kubernetes plugin to run a docker in docker container and build images:
pipeline {
  agent {
    kubernetes {
      label 'kind'
      defaultContainer 'jnlp'
      yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: dind
...
I also have a pool of persistent volumes in the jenkins namespace each labelled app=dind. I want one of these volumes to be picked for each pipeline run and used as /var/lib/docker in my dind container in order to cache any image pulls on each run. I want to have a pool and caches, not just a single one, as I want multiple pipeline runs to be able to happen at the same time. How can I configure this?
This can be achieved natively in kubernetes by creating a persistent volume claim as follows:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dind
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      app: dind
and mounting it into the Pod, but I'm not sure how to configure the pipeline to create and clean up such a persistent volume claim.
First of all, I don't think the approach you describe as native Kubernetes would work. You would either have to re-use the same PVC, which would make build pods access the same PV concurrently, or, if you want a PV per build, your PVs would be stuck in the Released status and not automatically made Available for new PVCs.
There are more details and discussion here: https://issues.jenkins.io/browse/JENKINS-42422.
It so happens that I wrote two simple controllers - an automatic PV releaser (which finds Released PVs and makes them Available again for new PVCs) and a dynamic PVC provisioner (for the Jenkins Kubernetes plugin specifically - so you can define a PVC as an annotation on a Pod). Check it out here: https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers. There is a full Jenkinsfile example here: https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers/tree/main/examples/jenkins-kubernetes-plugin-with-build-cache.
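Regardless of how the claim gets created and released, the mount itself is plain Kubernetes. A minimal sketch of the pod YAML you would embed in the kubernetes {} block, assuming a claim named dind-cache already exists (the claim name, image tag, and container name are placeholders, not something defined by the controllers above):

apiVersion: v1
kind: Pod
metadata:
  labels:
    name: dind
spec:
  containers:
    - name: dind
      image: docker:dind              # placeholder docker-in-docker image
      securityContext:
        privileged: true              # required for docker-in-docker
      volumeMounts:
        - name: docker-cache
          mountPath: /var/lib/docker  # the image layer cache lives here
  volumes:
    - name: docker-cache
      persistentVolumeClaim:
        claimName: dind-cache         # hypothetical claim; a provisioner/releaser pair can manage one per build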
I am trying to avoid having to create three different images for separate deployment environments.
Some context on our current ci/cd pipeline:
For the CI portion, we build our app into a docker container and then submit that container to a security scan. Once the security scan is successful, the container gets put into a private container repository.
For the CD portion, using helm charts, we pull the container from the repository and then deploy to a company managed Kubernetes cluster.
There was a request, and the solution was to use a piece of software in the container. For some reason (I'm the DevOps person, not the software engineer) the software needs environment variables, specific to the deployment environment, passed to it when it starts. How would we be able to start this software and pass environment variables to it at deployment?
I could just create three different images with the environment variables but I feel like that is an anti-pattern. It takes away from the flexibility of having one image that can be deployed to different environments.
Can anyone point me to resources on starting an application with specific environment variables using Helm? I've looked but did not find a solution or anything that pointed me in the right direction. As a plan B, I'll just create three different images, but I want to make sure there isn't a better way.
Depending on the container orchestration, you can pass the environment variables in different ways:
Plain docker:
docker run -e MY_VAR=MY_VAL <image>
Docker compose:
version: '3'
services:
  app:
    image: '<image>'
    environment:
      - MY_VAR=my-value
See the docker-compose docs.
Kubernetes:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
    - name: app
      image: <image>
      env:
        - name: MY_VAR
          value: "my value"
See the Kubernetes docs.
Helm:
Add the values in your values.yaml:
myKey: myValue
Then reference it in your helm template:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
    - name: app
      image: <image>
      env:
        - name: MY_VAR
          value: {{ .Values.myKey }}
Check out the helm docs.
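To keep a single image across environments, a common pattern (the file names below are just an example, not something required by Helm) is to keep one values file per environment and pick it at deploy time, e.g. helm upgrade --install myapp ./chart -f values-prod.yaml:

# values-dev.yaml
myKey: dev-value

# values-prod.yaml
myKey: prod-value

The chart template stays the same; only the values file passed at deploy time changes.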
I have a Dockerfile which I've written for a React application. This app takes a .json config file that it uses at run time. The file doesn't contain any secrets.
So I've built the image without the config file, and now I'm unsure how to transfer the JSON file when I run it up.
I'm looking at deploying this in production using a CI/CD process which would entail:
gitlab (actions) building the image
pushing this to a docker repository
Kubernetes picking this up and running/starting the container
I think it's at the last point that I want to add the JSON configuration.
My question is: how do I add the config file to the application when k8's starts it up?
If I understand correctly, k8s doesn't have any local storage to create a volume from and copy the file in? Can I point docker run at a separate git repo where I keep the config files?
You should take a look at ConfigMaps.
From the k8s ConfigMap documentation:
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
In your case, you want to mount it as a volume containing a file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-app
data:
  config.json: |  # your file name
    <file-content>
A ConfigMap can be created manually or generated from a file using:
Directly in the cluster: kubectl create configmap <name> --from-file <path-to-file>.
In a yaml file: kubectl create configmap <name> --from-file <path-to-file> --dry-run=client -o yaml > <file-name>.yaml.
Once you have your ConfigMap, you must modify your Deployment/Pod to add a volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <your-name>
spec:
  ...
  template:
    metadata:
      ...
    spec:
      ...
      containers:
        - name: <container-name>
          ...
          volumeMounts:
            - mountPath: '<path>/config.json'
              name: config-volume
              readOnly: true
              subPath: config.json
      volumes:
        - name: config-volume
          configMap:
            name: <name-of-configmap>
To deploy to your cluster, you can use plain YAML, or I suggest you take a look at Kustomize or Helm charts.
They are both popular systems for deploying applications. If you go with Kustomize, its configMapGenerator feature fits your case; see the sketch below.
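As an illustration, a minimal kustomization.yaml using configMapGenerator, assuming the Deployment above is saved as deployment.yaml and your file lives at config/config.json (both paths are placeholders):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
configMapGenerator:
  - name: your-app                        # must match the configMap name referenced by the volume
    files:
      - config.json=config/config.json    # key in the ConfigMap = local file

By default the generator appends a content hash to the ConfigMap name and rewrites the reference in deployment.yaml, so pods roll automatically when the file changes.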
Good luck :)
I am new to Kubernetes but familiar with docker.
Docker Use Case
Usually, when I want to persist data I just create a named volume and attach it to the container, and even when I stop the container and start another one from the same image I can see the data persisting.
So this is what I used to do in Docker:
docker volume create nginx-storage
docker run -it --rm -v nginx-storage:/usr/share/nginx/html -p 80:80 nginx:1.14.2
then I:
Create a new html file in /usr/share/nginx/html
Stop container
Run the same docker run command again (will create another container with same volume)
html file exists (which means data persisted in that volume)
Kubernetes Use Case
Usually, when I work with Kubernetes volumes I specify a PVC (PersistentVolumeClaim) and a PV (PersistentVolume) using hostPath, which bind mounts a directory or a file from the host machine into the container.
What I want to do is reproduce the behavior shown in the previous example (Docker use case), so how can I do that? Is the process of creating volumes in Kubernetes different from Docker's? If possible, a YAML file would help me understand.
To a first approximation, you can't (portably) do this. Build your content into the image instead.
There are two big practical problems, especially if you're running a production-oriented system on a cloud-hosted Kubernetes:
If you look at the list of PersistentVolume types, very few of them can be used in ReadWriteMany mode. It's very easy to get, say, an AWSElasticBlockStore volume that can only be used on one node at a time, and something like this will probably be the default cluster setup. That means you'll have trouble running multiple pod replicas serving the same (static) data.
Once you do get a volume, it's very hard to edit its contents. Consider the aforementioned EBS volume: you can't edit it without being logged into the node on which it's mounted, which means finding the node, convincing your security team that you can have root access over your entire cluster, enabling remote logins, and then editing the file. That's not something that's actually possible in most non-developer Kubernetes setups.
The thing you should do instead is build your static content into a custom image. An image registry of some sort is all but required to run Kubernetes and you can push this static content server into the same registry as your application code.
FROM nginx:1.14.2
COPY . /usr/share/nginx/html
# Base image has a working CMD, no need to repeat it
Then in your deployment spec, set image: registry.example.com/nginx-frontend:20220209 or whatever you've chosen to name this build of this image, and do not use volumes at all. You'd deploy this the same way you deploy other parts of your application; you could use Helm or Kustomize to simplify the update process.
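A minimal Deployment sketch along those lines (reusing the illustrative image name from above; adjust names and replica count to taste):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-frontend
spec:
  replicas: 3                      # multiple replicas are fine since there is no shared volume
  selector:
    matchLabels:
      app: nginx-frontend
  template:
    metadata:
      labels:
        app: nginx-frontend
    spec:
      containers:
        - name: nginx
          image: registry.example.com/nginx-frontend:20220209  # content is baked into the image
          ports:
            - containerPort: 80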
Correspondingly, in the plain-Docker case, I'd avoid volumes here. You don't discuss how files get into the nginx-storage named volume; if you're using imperative commands like docker cp or debugging tools like docker exec, those approaches are hard to script and are intrinsically local to the system they're running on. It's not easy to copy a Docker volume from one place to another. Images, though, can be pushed and pulled through a registry.
I managed to do that by creating just a PVC; this is how I did it (with an Nginx image):
nginx-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
nginx-deployment.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: nginx-data
      volumes:
        - name: nginx-data
          persistentVolumeClaim:
            claimName: nginx-data
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      nodePort: 30080
  type: NodePort
Once I ran kubectl apply on the PVC and then on the deployment, going to localhost:30080 showed a 404 Not Found page, which means all data in /usr/share/nginx/html was deleted once the container started. That's because a directory from the k8s cluster node is bind mounted into that container as a volume:
/usr/share/nginx/html <-- dir in volume
/var/lib/k8s-pvs/nginx2-data/pvc-9ba811b0-e6b6-4564-b6c9-4a32d04b974f <-- dir from node (was automatically created)
I tried adding a new index.html file into that container's html dir, then deleted the container; a new container was created by the pod, and checking localhost:30080 showed the newly created home page.
I also tried deleting the deployment and reapplying it (without deleting the PVC); checking localhost:30080, everything still persists.
An alternative solution was suggested in the comments by larsks: kubernetes.io/docs/tasks/configure-pod-container/…
I'm now using Kubernetes to run Docker containers. I just create the container and use SSH to connect to my pods.
I need to make some system config changes, so I need to reboot the container, but when I reboot the container it loses all the data in the pod; Kubernetes runs a new pod that looks just like the original Docker image.
So how can I reboot the pod and keep the data in it?
The Kubernetes cluster is offered by Bluemix.
You need to learn more about containers as your question suggests that you are not fully grasping the concepts.
Running SSH in a container is an anti-pattern; a container is not a virtual machine. So remove the SSH server from it.
The fact that you run SSH indicates that you may be running more than one process per container. This is usually bad practice, so remove that supervisor and call your main process directly in your entrypoint.
Set up your container image's main process to use environment variables or configuration files for its configuration at runtime.
The last item means that you can define environment variables in your Pod manifest or use Kubernetes ConfigMaps to store configuration files. Your Pod will read those and the process in your container will be configured properly. If not, your Pod will die or your process will not run properly, and you can simply edit the environment variable or the ConfigMap. A sketch of both approaches follows.
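For illustration only (the pod, image, and ConfigMap names below are placeholders), a Pod spec can set variables directly or pull them all from a ConfigMap:

apiVersion: v1
kind: Pod
metadata:
  name: myapplication
spec:
  containers:
    - name: myapplication
      image: registry.example.com/myimage:latest  # placeholder image
      env:
        - name: LOG_LEVEL                # set directly in the manifest
          value: "info"
      envFrom:
        - configMapRef:
            name: myapplication-config   # every key in this ConfigMap becomes an env var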
My main suggestion here is to not use Kubernetes until you have your Docker image properly written and your configuration thought through; you should not have to exec into the container to get your process running.
Finally, more generally, you should not keep state inside a container.
For you to store your data you need to set up persistent storage, if you're using for example Google Cloud as your platform, you would need to create a disk to store your data on and define the use of this disk in your manifest.
With Bluemix it looks like you just have to create the volumes and use them.
bx ic volume-create myapplication_volume ext4
bx ic run --volume myapplication_volume:/data --name myapplication registry.eu-gb.bluemix.net/<my_namespace>/my_image
Bluemix - Persistent storage documentation
I don't use Bluemix myself, so I'll proceed with an example manifest using Google's persistent disks.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapplication
  namespace: default
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: myapplication
  template:
    metadata:
      labels:
        app: myapplication
    spec:
      containers:
        - name: myapplication
          image: eu.gcr.io/myproject/myimage:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
            - containerPort: 443
          volumeMounts:
            - mountPath: /data
              name: myapplication-volume
      volumes:
        - name: myapplication-volume
          gcePersistentDisk:
            pdName: mydisk-1
            fsType: ext4
Here the disk mydisk-1 is mapped to the /data mountpoint.
The only data that will persist after reboots will be inside that folder.
If you want to store your logs for example you could symlink the logs folder.
/var/log/someapplication -> /data/log/someapplication
It works, but this is NOT recommended!
It's not clear to me if you're SSHing to the nodes or using some tool to execute a shell inside the containers. Even though running multiple processes per container is bad practice, it seems to work very well if you keep tabs on memory and CPU use.
Running an SSH server and cronjobs in the same container, for example, will absolutely work, though it's not the best of solutions.
We've been using supervisor with multiple (2-5) processes in production for over a year now and it's working surprisingly well.
For more information about persistent volumes on a variety of platforms, see:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
I have multi node kubernetes setup. I am trying to allocate a Persistent volume dynamically using storage classes with NFS volume plugin.
I found storage class examples for glusterfs, aws-ebs, etc., but I didn't find any example for NFS.
If I create only a PV and a PVC, NFS works very well (without a storage class).
I tried to write a storage class file for NFS by referring to the other plugins; please see it below:
nfs-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: my-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/nfs
parameters:
  path: /nfsfileshare
  server: <nfs-server-ip>
nfs-pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: my-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
It didn't work. So my question is: can we write a storage class for NFS? Does it support dynamic provisioning?
As of August 2020, here's how things look for NFS persistence on Kubernetes:
You can
Put an NFS volume on a Pod directly:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      nfs:
        path: /foo/bar
        server: wherever.dns
Manually create a Persistent Volume backed by NFS, and mount it with a Persistent Volume Claim (PV spec shown below):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
Use the (now deprecated) NFS PV provisioner from external-storage. This was last updated two years ago, and has been officially EOL'd, so good luck. With this route, you can make a Storage Class such as the one below to fulfill PVCs from your NFS server.
Update: There is a new incarnation of this provisioner as kubernetes-sigs/nfs-subdir-external-provisioner! It seems to work in a similar way to the old nfs-client provisioner, but is much more up-to-date. Huzzah!
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-nfs
provisioner: example.com/nfs
mountOptions:
  - vers=4.1
Evidently, CSI is the future, and there is an NFS CSI driver. However, it doesn't support dynamic provisioning yet, so it's not really terribly useful.
Update (December 2020): Dynamic provisioning is apparently in the works (on master, but not released) for the CSI driver.
You might be able to replace external-storage's NFS provisioner with something from the community, or something you write. In researching this problem, I stumbled on a provisioner written by someone on GitHub, for example. Whether such provisioners perform well, are secure, or work at all is beyond me, but they do exist.
I'm looking into doing the same thing. I found https://github.com/kubernetes-incubator/external-storage/tree/master/nfs, which I think you based your provisioner on?
I think an nfs provider would need to create a unique directory under the path defined. I'm not really sure how this could be done.
Maybe this is better off as a GitHub issue on the Kubernetes repo.
Dynamic storage provisioning using NFS doesn't work; better to use GlusterFS. There's a good tutorial with fixes for common problems encountered while setting it up.
http://blog.lwolf.org/post/how-i-deployed-glusterfs-cluster-to-kubernetes/
I also tried to enable the NFS provisioner on my kubernetes cluster and at first it didn't work, because the quickstart guide does not mention that you need to apply the rbac.yaml as well (I opened a PR to fix this).
The nfs provisioner works fine for me if I follow these steps on my cluster:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs#quickstart
$ kubectl create -f deploy/kubernetes/deployment.yaml
$ kubectl create -f deploy/kubernetes/rbac.yaml
$ kubectl create -f deploy/kubernetes/class.yaml
Then you should be able to create PVCs like this:
$ kubectl create -f deploy/kubernetes/claim.yaml
You might want to change the folders used for the volume mounts in deployment.yaml to match your cluster.
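For reference, such a claim is just a regular PVC that requests the storage class created by class.yaml. A sketch (the class name example-nfs is an assumption based on the repository's example; use whatever your class.yaml actually defines):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim              # hypothetical name
spec:
  storageClassName: example-nfs     # must match the StorageClass from class.yaml
  accessModes:
    - ReadWriteMany                 # NFS supports shared read-write access
  resources:
    requests:
      storage: 1Mi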
The purpose of a StorageClass is to create storage, e.g. from a cloud provider (a "provisioner", as the Kubernetes docs call it). In the case of NFS you only want access to existing storage and there is no creation involved, so you don't need a StorageClass: a static PV/PVC pair is enough (see the sketch below). Please refer to this blog.
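A minimal sketch of that static setup, assuming an NFS export at 192.168.1.10:/exports/data (server, path, and names are placeholders): the PV points at the export and the PVC binds to it explicitly, with no StorageClass involved.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.10            # placeholder NFS server
    path: /exports/data             # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data
spec:
  storageClassName: ""              # empty string disables dynamic provisioning
  volumeName: nfs-data              # bind explicitly to the PV above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi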
If you are using AWS, I believe you can use this image to create an NFS server:
https://hub.docker.com/r/alphayax/docker-volume-nfs