how to modify pod memory limit of a running pod - docker

I am using a YAML file to create a pod and have specified the resource request and limit in it. Now I am not sure how to modify the resource limits of a running pod. For example:
memory-demo.yml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
Then I run the command oc create -f memory-demo.yml and a pod named memory-demo gets created.
My question is: what should I do to modify the memory limit from 200Mi to 600Mi? Do I need to delete the existing pod and recreate it using the modified YAML file?
I am a total newbie. Need help.

First and foremost, it is very unlikely that you really want to be (re)creating Pods directly. Dig into what a Deployment is and how it works. Then you can simply apply the change to the pod template in the Deployment spec, and Kubernetes will upgrade all the pods in that Deployment to match the new spec in a hands-free rolling update.
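As a minimal sketch of what that could look like (the Deployment name and labels here are placeholders I made up, not something from your cluster), the limit lives in the pod template, so you change it there and re-apply:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memory-demo
  template:
    metadata:
      labels:
        app: memory-demo
    spec:
      containers:
      - name: memory-demo-ctr
        image: polinux/stress
        resources:
          limits:
            memory: "600Mi"   # raised from 200Mi; re-apply to trigger a rolling update
          requests:
            memory: "100Mi"
        command: ["stress"]
        args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
After changing the value, oc apply -f <your-deployment-file>.yml lets the Deployment roll the pods for you.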
A live change of the memory limit for a running container is certainly possible, but not by means of Kubernetes (and it will not be reflected in the kube state if you do so). Look at docker update.
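For instance, something along these lines on the node where the container runs (a sketch only; the container ID is whatever docker ps shows, and Kubernetes will neither see nor keep this change once the pod is rescheduled):
docker ps | grep memory-demo-ctr            # find the container ID on the node
docker update --memory 600m --memory-swap -1 <container-id>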

You can use the replace command to modify an existing object based on the contents of the specified configuration file (documentation):
oc replace -f memory-demo.yml
EDIT: However, some fields of a pod spec cannot be updated in place. In that case the only way is to delete and re-create the pod.
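Concretely, for the pod from the question, that would look something like this (oc replace --force should behave like the delete-and-recreate pair, mirroring kubectl replace --force, but treat that as an assumption and check oc replace --help on your version):
oc delete pod memory-demo -n mem-example
oc create -f memory-demo.yml
# or, in one step:
oc replace --force -f memory-demo.yml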

Related

add configMap to deployment config on openshift 4

I've recently started using OpenShift 4 and I'm a bit lost.
I have a running pod and I created a ConfigMap for it, but I can't find a way to connect the two.
I've been told to add the ConfigMap to the deployment config of the pod at a specific path.
I tried editing the pod's YAML file to add the file as a volume, but got an error when I tried to save the changes.
Does anyone have an idea how I can add the ConfigMap file so I can access it at a specific path in the pod?
An example of adding a ConfigMap as a volume to a pod is explained in the official Kubernetes documentation.
Below is a sample:
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: special-config
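Note that volumeMounts belongs under the container entry, while volumes sits at the pod spec level. Since you are on OpenShift, you can also let the CLI wire this up on the deployment for you; a sketch, assuming a deployment called my-app and a ConfigMap called special-config (both placeholders):
oc set volume deployment/my-app --add \
  --name=config-volume \
  --type=configmap \
  --configmap-name=special-config \
  --mount-path=/etc/config
Use dc/my-app instead of deployment/my-app if your workload is a DeploymentConfig.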

Unable to start pod under kubernetes namespace

I have created a namespace on my physical K8s cluster.
Now I'm trying to spin up resources with the help of a *dep.yaml, with the namespace mentioned in the file.
I also created secrets under the same namespace.
But the status shows 'ContainerCreating'.
application-dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-name
  namespace: namespace-service
  labels:
    module: testmodule
...
Note: It's working in the default namespace.
If a Pod is stuck in the ContainerCreating state, it is a good idea to run kubectl describe pod and check the Events section at the bottom.
There is usually an Event that explains why the Pod fails to reach the Running state.
Can you share the rest of the deployment file?
If you are currently running a service (assuming that it exposes a specific NodePort) in the default namespace, you cannot expose the same NodePort again in a different namespace.
Also, as mentioned, you can always get more information using the describe command:
kubectl describe <resource> (can be a pod, deployment, service, etc.)
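For example, with the namespace from the question (the pod name is whatever kubectl get pods shows):
kubectl get pods -n namespace-service
kubectl describe pod <pod-name> -n namespace-service
kubectl get events -n namespace-service --sort-by=.metadata.creationTimestamp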

Kubernetes make changes to annotation to force update deployment

Hey, I have a wider problem: when I update Secrets in Kubernetes they are not picked up by pods unless the pods are upgraded/rescheduled or simply re-deployed. I saw the other Stack Overflow post about it, but none of the solutions fit me: Update kubernetes secrets doesn't update running container env vars
I also saw the in-app solution of running a Python script on the pod to update its secret automatically (https://medium.com/analytics-vidhya/updating-secrets-from-a-kubernetes-pod-f3c7df51770d), but it seems like a long shot. So I came up with the idea of adding an annotation to the deployment manifest, hoping it would reschedule the pods every time the Helm chart puts a new timestamp in it. It does put the timestamp in, but it doesn't reschedule. Any thoughts on how to force that behaviour?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
  labels: xxx
  annotations:
    lastUpdate: {{ now }}
I also don't feel like adding this patch command to the CI/CD deployment, as it's arbitrary and, well, doesn't feel like the right solution:
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'
Didn't anyone else find a better solution to re-deploy pods on changed secrets?
Kubernetes by itself does not do a rolling update of a deployment automatically when a Secret is changed, so there needs to be a controller which does that for you. Take a look at Reloader, a controller that watches for changes in ConfigMaps and/or Secrets and then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet and StatefulSet.
Add the reloader.stakater.com/auto annotation to the deployment with name xxx and have a ConfigMap called xxx-configmap or a Secret called xxx-secret.
This will automatically discover the deployments/daemonsets/statefulsets where xxx-configmap or xxx-secret is used, either via an environment variable or a volume mount, and perform a rolling upgrade on the related pods when xxx-configmap or xxx-secret is updated:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
  labels: xxx
  annotations:
    reloader.stakater.com/auto: "true"
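If you would rather reload only when one specific Secret or ConfigMap changes, Reloader also documents named variants of the annotation; a sketch, with the resource names as placeholders:
metadata:
  annotations:
    secret.reloader.stakater.com/reload: "xxx-secret"
    # or, for a ConfigMap:
    # configmap.reloader.stakater.com/reload: "xxx-configmap"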

Jenkins via Helm on GKE creates and does not remove slave pod for every build

I'm using a Jenkins setup on GKE installed via the standard Helm chart. My builds are consistently failing, which I'm trying to troubleshoot, but in addition to that a new slave pod is created on every build attempt (with a pod name like jenkins-slave-3wsb7). Almost all of them go to a Completed state after the build fails, and then the pod lingers in my GKE dashboard and in the list of pods from kubectl get pods. I currently have 80+ pods showing as a result.
Is this expected behavior? Is there a workaround to clean up old Completed pods?
Thanks.
As a workaround to clean up completed pods:
kubectl delete pod NAME --grace-period=0 --force
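If many pods have piled up, you can also delete all of them in one go by filtering on their phase (check what the selector matches with get before deleting; add -n <namespace> if your agents run in a dedicated namespace):
kubectl get pods --field-selector=status.phase=Succeeded
kubectl delete pods --field-selector=status.phase=Succeeded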
If you are using Kubernetes 1.12 or later, the ttlSecondsAfterFinished field of the Job spec was conveniently introduced. Note that it is alpha in 1.12:
apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-ttl
spec:
  ttlSecondsAfterFinished: 100  # <====
  template:
    spec:
      containers:
      - name: myjob
        image: myimage
        command: ["run_some_batch_job"]
      restartPolicy: Never

How to reboot kubernetes pod and keep the data

I'm now using Kubernetes to run a Docker container. I just created the container and I use SSH to connect to my pods.
I need to make some system config changes, so I need to reboot the container, but when I reboot it the pod loses all its data; Kubernetes just runs a new pod that looks like the original Docker image.
So how can I reboot the pod and keep the data in it?
The Kubernetes cluster is offered by Bluemix.
You need to learn more about containers, as your question suggests that you are not fully grasping the concepts.
Running SSH in a container is an anti-pattern; a container is not a virtual machine. So remove the SSH server from it.
The fact that you run SSH indicates that you may be running more than one process per container. This is usually bad practice, so remove that supervisor and call your main process directly in your entrypoint.
Set up your container image's main process to use environment variables or configuration files for its configuration at runtime.
The last item means that you can define environment variables in your Pod manifest or use a Kubernetes ConfigMap to store a configuration file (see the sketch after this answer). Your Pod will read those and the process in your container will be configured properly. If not, your Pod will die or your process will not run properly, and you can just edit the environment variable or the ConfigMap.
My main suggestion here is to not use Kubernetes until you have your Docker image properly written and your configuration thought through; you should not have to exec into the container to get your process running.
Finally, and more generally, you should not keep state inside a container.
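A minimal sketch of the environment-variable approach mentioned above (the ConfigMap name, key, variable name and image are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapplication-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Pod
metadata:
  name: myapplication
spec:
  containers:
  - name: myapplication
    image: eu.gcr.io/myproject/myimage:latest
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: myapplication-config
          key: LOG_LEVEL
Changing the value in the ConfigMap and restarting the Pod reconfigures the process without rebuilding the image.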
To store your data you need to set up persistent storage. If you're using, for example, Google Cloud as your platform, you would need to create a disk to store your data on and declare the use of this disk in your manifest.
With Bluemix it looks like you just have to create the volumes and use them:
bx ic volume-create myapplication_volume ext4
bx ic run --volume myapplication_volume:/data --name myapplication registry.eu-gb.bluemix.net/<my_namespace>/my_image
Bluemix - Persistent storage documentation
I don't use Bluemix myself, so I'll proceed with an example manifest using Google's persistent disks:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapplication
  namespace: default
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: myapplication
  template:
    metadata:
      labels:
        app: myapplication
    spec:
      containers:
      - name: myapplication
        image: eu.gcr.io/myproject/myimage:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
        volumeMounts:
        - mountPath: /data
          name: myapplication-volume
      volumes:
      - name: myapplication-volume
        gcePersistentDisk:
          pdName: mydisk-1
          fsType: ext4
Here the disk mydisk-1 is mapped to the /data mount point.
The only data that will persist across reboots is what lives inside that folder.
If you want to keep your logs, for example, you could symlink the logs folder:
/var/log/someapplication -> /data/log/someapplication
It works, but this is NOT recommended!
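For reference, creating such a symlink (in the image build or the container's entrypoint) would look roughly like this:
mkdir -p /data/log/someapplication
rm -rf /var/log/someapplication
ln -s /data/log/someapplication /var/log/someapplication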
It's not clear to me whether you're SSHing into the nodes or using some tool to execute a shell inside the containers. Even though running multiple processes per container is bad practice, it can work well enough if you keep tabs on memory and CPU use.
Running an SSH server and cron jobs in the same container, for example, will absolutely work, though it's not the best of solutions.
We've been using supervisor with multiple (2-5) processes in production for over a year now and it's working surprisingly well.
For more information about persistent volumes on a variety of platforms, see:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
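If you want to stay portable across platforms, the pattern described there is to request storage through a PersistentVolumeClaim and mount the claim instead of referencing a cloud disk directly; a minimal sketch (claim name and size are assumptions, and the default storage class is assumed to exist):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapplication-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
In the Deployment above, the gcePersistentDisk volume would then be replaced by persistentVolumeClaim with claimName: myapplication-pvc.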
