Add a ConfigMap to a DeploymentConfig at a specific path on OpenShift 4

I've recently started using OpenShift 4 and I'm a bit lost.
I have a running pod and I created a ConfigMap for it, but I can't find a way to connect the two.
I've been told to add the ConfigMap to the pod's DeploymentConfig at a specific path.
I tried editing the pod's YAML to add the file as a volume, but got an error when I tried to save the changes.
Does anyone have an idea how I can add the ConfigMap file so I can access it at a specific path in the pod?

An example of adding a ConfigMap as a volume to a pod is given in the official Kubernetes documentation.
Below is a sample:
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: special-config
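Since the question is about a DeploymentConfig on OpenShift, here is a minimal sketch of the same volume wired into a DeploymentConfig; the dc name my-app, the ConfigMap name special-config, and the path /etc/config are placeholders:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-app                     # placeholder
spec:
  template:
    spec:
      containers:
      - name: my-app               # your container
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config   # the path where the ConfigMap's files appear
      volumes:
      - name: config-volume
        configMap:
          name: special-config     # your ConfigMap
Rather than editing the pod (whose spec is owned by the DeploymentConfig and mostly immutable), edit the DeploymentConfig itself, or let oc do it, something along the lines of oc set volume dc/my-app --add --name=config-volume --type=configmap --configmap-name=special-config --mount-path=/etc/config.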

Related

How to Transition Ephemeral Volumes to Persistent Volume Claims

Currently, some of my pods are using ephemeral volumes (those created by Docker at the container level). I want some form of persistent data stored in specific directories on the host (like bind mounts in Docker) such that my pods can restart without losing data. Persistent Volume Claims seem to be the best way to do this. How should I go about transitioning my existing pods to use PVCs that copy across data from the existing ephemeral volumes?
On the host, create the directory that will hold your data, e.g. mkdir /local_data.
Copy the data into that directory, e.g. kubectl cp <namespace>/<pod>:/path/in/the/container /local_data.
Check, and check again, that all your data is intact in /local_data.
Create a new pod with the following spec.
Example:
kind: Pod
...
spec:
  ...
  nodeName: <name> # <-- if you have multiple nodes, this ensures your pod always runs on the host that holds the data
  containers:
  - name: ...
    ...
    volumeMounts:
    - name: local-data
      mountPath: /path/in/the/container
    ...
  volumes:
  - name: local-data
    hostPath:
      path: /local_data
      type: Directory
Apply it and check that your pod runs as expected.
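If you specifically want the data surfaced through a PersistentVolumeClaim rather than a raw hostPath volume in the pod, a minimal sketch (names and the 1Gi size are placeholders) is a hostPath-backed PersistentVolume plus a claim bound to it:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-data-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: ""        # keep it out of dynamic provisioning
  hostPath:
    path: /local_data
    type: Directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ""        # bind statically to the PV above
  resources:
    requests:
      storage: 1Gi
In the pod, the hostPath volume above would then be replaced with persistentVolumeClaim: {claimName: local-data-pvc}; the nodeName pin is still needed, because hostPath data only exists on that one node.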

How to import a config file into a container from k8s

I have a Dockerfile which I've written for a React application. This app takes a .json config file that it uses at run time. The file doesn't contain any secrets.
So I've built the image without the config file, and now I'm unsure how to supply the JSON file when I run it.
I'm looking at deploying this in production using a CI/CD process which would entail:
GitLab (actions) building the image
pushing this to a Docker repository
Kubernetes picking this up and running/starting the container
I think it's at the last point that I want to add the JSON configuration.
My question is: how do I add the config file to the application when Kubernetes starts it up?
If I understand correctly, k8s doesn't have any local storage from which to create a volume and copy the file in? Can I point docker run at a separate git repo where I keep the config files?
You should take a look at ConfigMaps.
From the Kubernetes documentation on ConfigMaps:
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
In your case, you want it as a configuration file in a volume.
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-app
data:
  config.json: |  # your file name
    <file-content>
A ConfigMap can be created manually or generated from a file using:
Directly in the cluster: kubectl create configmap <name> --from-file <path-to-file>.
As a YAML file: kubectl create configmap <name> --from-file <path-to-file> --dry-run=client -o yaml > <file-name>.yaml.
Once you have your ConfigMap, you must modify your Deployment/Pod to add a volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <your-name>
spec:
  ...
  template:
    metadata:
      ...
    spec:
      ...
      containers:
      - name: <container-name>
        ...
        volumeMounts:
        - mountPath: '<path>/config.json'
          name: config-volume
          readOnly: true
          subPath: config.json
      volumes:
      - name: config-volume
        configMap:
          name: <name-of-configmap>
To deploy to your cluster, you can use plain YAML, or I suggest you take a look at Kustomize or Helm charts.
They are both popular systems for deploying applications. If you go with Kustomize, its ConfigMap generator feature fits your case; see the sketch below.
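For example, a minimal Kustomize sketch (the file names here are assumptions) that generates the ConfigMap from config.json and deploys the Deployment alongside it:
# kustomization.yaml
resources:
- deployment.yaml          # the Deployment shown above
configMapGenerator:
- name: your-app           # generated ConfigMap name (gets a content-hash suffix)
  files:
  - config.json            # the file to embed
Run kubectl apply -k . in that directory; Kustomize appends a hash to the generated ConfigMap's name and rewrites references to it, so pods roll when the file changes.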
Good luck :)

How to attach a volume to a Kubernetes pod container like in Docker?

I am new to Kubernetes but familiar with docker.
Docker Use Case
Usually, when I want to persist data I just create a named volume and attach it to the container; even when I stop it and start another one from the same image, I can see the data persisting.
So this is what I used to do in Docker:
docker volume create nginx-storage
docker run -it --rm -v nginx-storage:/usr/share/nginx/html -p 80:80 nginx:1.14.2
Then I:
Create a new html file in /usr/share/nginx/html
Stop container
Run the same docker run command again (will create another container with same volume)
The HTML file still exists (which means the data persisted in that volume)
Kubernetes Use Case
Usually, when I work with Kubernetes volumes I specify a PVC (PersistentVolumeClaim) and PV (PersistentVolume) using hostPath, which bind-mounts a directory or a file from the host machine into the container.
What I want to do is reproduce the same behavior as in the previous example (Docker Use Case); how can I do that? Is the volume-creation process in Kubernetes different from Docker's? If possible, a YAML file would help me understand.
To a first approximation, you can't (portably) do this. Build your content into the image instead.
There are two big practical problems, especially if you're running a production-oriented system on a cloud-hosted Kubernetes:
If you look at the list of PersistentVolume types, very few of them can be used in ReadWriteMany mode. It's very easy to get, say, an AWSElasticBlockStore volume that can only be used on one node at a time, and something like this will probably be the default cluster setup. That means you'll have trouble running multiple pod replicas serving the same (static) data.
Once you do get a volume, it's very hard to edit its contents. Consider the aforementioned EBS volume: you can't edit it without being logged into the node on which it's mounted, which means finding the node, convincing your security team that you can have root access over your entire cluster, enabling remote logins, and then editing the file. That's not something that's actually possible in most non-developer Kubernetes setups.
The thing you should do instead is build your static content into a custom image. An image registry of some sort is all but required to run Kubernetes and you can push this static content server into the same registry as your application code.
FROM nginx:1.14.2
COPY . /usr/share/nginx/html
# Base image has a working CMD, no need to repeat it
Then in your deployment spec, set image: registry.example.com/nginx-frontend:20220209 or whatever you've chosen to name this build of this image, and do not use volumes at all. You'd deploy this the same way you deploy other parts of your application; you could use Helm or Kustomize to simplify the update process.
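For illustration, a minimal sketch of such a Deployment (the image tag is the example from above; everything else is a placeholder name), serving the baked-in content with no volumes at all:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-frontend
  template:
    metadata:
      labels:
        app: nginx-frontend
    spec:
      containers:
      - name: nginx
        image: registry.example.com/nginx-frontend:20220209
        ports:
        - containerPort: 80
        # no volumeMounts: the static content is already in the image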
Correspondingly, in the plain-Docker case, I'd avoid volumes here. You don't discuss how files get into the nginx-storage named volume; if you're using imperative commands like docker cp or debugging tools like docker exec, those approaches are hard to script and are intrinsically local to the system they're running on. It's not easy to copy a Docker volume from one place to another. Images, though, can be pushed and pulled through a registry.
I managed to do that by creating only a PVC; this is how I did it (with an Nginx image):
nginx-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
nginx-deployment.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-data
      volumes:
      - name: nginx-data
        persistentVolumeClaim:
          claimName: nginx-data
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    nodePort: 30080
  type: NodePort
Once I ran kubectl apply on the PVC and then on the deployment, going to localhost:30080 showed a 404 Not Found page, which means all the data in /usr/share/nginx/html was gone once the container started; that's because a directory from the k8s cluster node is bind-mounted into the container as a volume:
/usr/share/nginx/html <-- dir in volume
/var/lib/k8s-pvs/nginx2-data/pvc-9ba811b0-e6b6-4564-b6c9-4a32d04b974f <-- dir from node (was automatically created)
I tried adding a new index.html file into the container's html dir, then deleted the container; a new container was created by the pod, and localhost:30080 served the newly created home page.
I tried deleting the deployment and reapplying it (without deleting the PVC), checked localhost:30080, and everything still persisted.
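As a usage sketch of that experiment (the label selector, file names, and echoed page are placeholders matching the manifests above):
kubectl apply -f nginx-pvc.yaml
kubectl apply -f nginx-deployment.yaml
# the volume starts out empty, so write a page into it
POD=$(kubectl get pod -l app=nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- sh -c 'echo "<h1>hello</h1>" > /usr/share/nginx/html/index.html'
# delete the pod; its replacement mounts the same PVC and still serves the page
kubectl delete pod "$POD"
curl http://localhost:30080/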
An alternative solution is mentioned in the comments by larsks: kubernetes.io/docs/tasks/configure-pod-container/…

Kubernetes multiple identical app and database deployments with different config

The dilemma: Deploy multiple app and database container pairs with identical docker image and code, but different config (different clients using subdomains).
What are some logical ways to approach this? It doesn't seem that Kubernetes has built-in support for this kind of setup.
Possible Approaches
Use a single app Service for all app Deployments and a single database Service for all database Deployments. Have a single Nginx static-file Service and Deployment running that serves static files from a volume shared between the app Deployments (they all use the same set of static files). Whenever a new deployment is needed, have a bash script copy the app and database .yaml deployment files, sed-replace the client's name, point them at the correct ConfigMap (which is written manually, of course), and kubectl apply them. A main Nginx ingress handles the incoming traffic and routes to the correct pod through the app Deployment's Service.
Similar to the above, except use a StatefulSet instead of separate Deployments, and an init container to copy different configs to mounted volumes. (The drawbacks are that you cannot delete an item in the middle of a StatefulSet, which would be the case if you no longer needed a specific container for a client, and that this feels like a very hacky approach.)
Ideally, if a StatefulSet could use the downward API to dynamically choose a ConfigMap name based on the ordinal index of the StatefulSet, that would resolve the issue (you would basically write your config files manually with the index in the name, and the right one would be selected automatically). Something like:
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
envFrom:
- configMapRef:
    name: $(POD_NAME)-config
However, that functionality isn't available in Kubernetes.
A templating engine like Helm can help with this. (I believe Kustomize, which ships with current Kubernetes, can do this too, but I'm much more familiar with Helm.) The basic idea is that you have a chart that contains the Kubernetes YAML files but can use a templating language (the Go text/template library) to dynamically fill in content.
In this setup generally you'd have Helm create both the ConfigMap and the matching Deployment; in the setup you describe you'd install it separately (a Helm release) for each tenant. Say the Nginx configurations were different enough that you wanted to store them in external files; the core parts of your chart would include
values.yaml (overridable configuration, helm install --set nginxConfig=bar.conf):
# nginxConfig specifies the name of the Nginx configuration
# file to embed.
nginxConfig: foo.conf
templates/configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-config
data:
  nginx.conf: |-
{{ .Files.Get .Values.nginxConfig | indent 4 }}
templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-nginx
spec:
  ...
  template:
    spec:
      ...
      volumes:
      - name: nginx-config
        configMap:
          name: {{ .Release.Name }}-{{ .Chart.Name }}-config
The {{ .Release.Name }}-{{ .Chart.Name }} is a typical convention that allows installing multiple copies of the chart in the same namespace; the first part is a name you give the helm install command and the second part is the name of the chart itself. You can also directly specify the ConfigMap content, referring to other .Values... settings from the values.yaml file, use the ConfigMap as environment variables instead of files, and so on.
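As a usage sketch (the release names, chart path, and config file names below are assumptions), each client then becomes its own release of the same chart:
helm install client-a ./my-chart --set nginxConfig=client-a.conf
helm install client-b ./my-chart --set nginxConfig=client-b.conf
Updating a single client is then just helm upgrade client-a ./my-chart without touching the others, and each release gets its own ConfigMap and Deployment named after the release.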
While dynamic structural replacement isn't possible (plus or minus; see below for the whole story), I believe you were in the right ballpark with your initContainer: thought; you can use the serviceAccount to fetch the ConfigMap from the API in an initContainer: and then have the main container source that environment on startup:
initContainers:
- command:
  - /bin/bash
  - -ec
  - |
    curl -o /whatever/env.sh \
      -H "Authorization: Bearer $(cat /var/run/secret/etc/etc)" \
      https://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${POD_NS}/configmaps/${POD_NAME}-config
  volumeMounts:
  - name: cfg # etc etc
containers:
- command:
  - /bin/bash
  - -ec
  - "source /whatever/env.sh; exec /usr/bin/my-program"
  volumeMounts:
  - name: cfg # etc etc
volumes:
- name: cfg
  emptyDir: {}
Here the ConfigMap fetching is inline in the PodSpec, but if you had a container image specialized for fetching ConfigMaps and serializing them into a format your main containers could consume, I wouldn't expect the actual solution to be nearly that verbose.
A separate, and a lot more complicated (but perhaps more elegant), approach is a Mutating Admission Webhook; it looks like they have even recently formalized your very use case with Pod Presets, but it wasn't super clear from the documentation in which version that functionality first appeared, nor whether there are any apiserver flags one must twiddle to take advantage of it.
PodPreset has been removed since v1.20. A more elegant solution to this problem, based on a Mutating Admission Webhook, is now available: https://github.com/spoditor/spoditor
Essentially, it uses a custom annotation on the PodSpec template, like:
annotations:
  spoditor.io/mount-volume: |
    {
      "volumes": [
        {
          "name": "my-volume",
          "secret": {
            "secretName": "my-secret"
          }
        }
      ],
      "containers": [
        {
          "name": "nginx",
          "volumeMounts": [
            {
              "name": "my-volume",
              "mountPath": "/etc/secrets/my-volume"
            }
          ]
        }
      ]
    }
Now, the nginx container in each Pod of the StatefulSet will try to mount its own dedicated Secret, following the pattern my-secret-{pod ordinal}.
You just need to make sure my-secret-0, my-secret-1, and so on exist in the same namespace as the StatefulSet.
There is more advanced usage of the annotation in the project's documentation.

How to modify the memory limit of a running pod

I am using YAML to create a pod and have specified the resource request and limit. Now I am not sure how to modify the resource limits of a running pod. For example:
memory-demo.yml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
Then I run the command oc create -f memory-demo.yml and a pod named memory-demo gets created.
My question is: what should I do to modify the memory limit from 200Mi to 600Mi? Do I need to delete the existing pod and recreate it from the modified YAML file?
I am a total newbie. Need help.
First and foremost, it is very unlikely that you really want to be (re)creating Pods directly. Dig into what a Deployment is and how it works. Then you can simply apply the change to the pod template in the Deployment spec, and Kubernetes will upgrade all the pods in that Deployment to match the new spec in a hands-free rolling update; see the sketch below.
A live change of the memory limit for a running container is certainly possible, but not by means of Kubernetes (and will not be reflected in kube state if you do so). Look at docker update.
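A minimal sketch of that workflow, assuming the container above is moved into a Deployment named memory-demo (that name and the file name are assumptions):
# edit limits.memory in the Deployment's pod template from 200Mi to 600Mi, then:
oc apply -f memory-demo-deployment.yml
# or patch it in place (oc set resources works like kubectl set resources):
oc set resources deployment memory-demo --limits=memory=600Mi
Either way, the Deployment replaces the running pods with new ones that carry the new limit.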
You can use the replace command to modify an existing object based on the contents of the specified configuration file (documentation):
oc replace -f memory-demo.yml
EDIT: However, some spec fields cannot be updated. In that case the only way is to delete and re-create the pod.
