We are in the process of migrating an init container to a Kubernetes Job, so I added the init container image in the containers section of job.yaml. However, the shell script that the init container's Dockerfile is supposed to run is never invoked. Could someone help me figure out what is wrong here?
job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-init-job"
  namespace: {{ .Release.Namespace }}
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: disabled
        "helm.sh/hook-delete-policy": before-hook-creation
        "helm.sh/hook": pre-install,pre-upgrade,pre-delete
        "helm.sh/hook-weight": "-5"
    spec:
      serviceAccountName: {{ .Release.Name }}-init-service-account
      containers:
        - name: app-installer
          image: artifactorylocation/test-init-container:1.0.1
          command:
            - /bin/bash
            - -c
            - echo Hello executing k8s init-container
          securityContext:
            readOnlyRootFilesystem: true
      restartPolicy: OnFailure
Dockerfile of test-init-container:
FROM repository/java17-ol8-x64:adddd4c
WORKDIR /
ADD target/test-init-container-ms.jar ./
ADD target/lib ./lib
ADD start.sh /
RUN chmod +x /start.sh
CMD ["sh", "/start.sh"]
EXPOSE 8080
start.sh is not being executed by the job.
Just remove the command from your job.yaml so it doesn't override the image's entrypoint (the CMD):
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-init-job"
  namespace: {{ .Release.Namespace }}
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: disabled
        "helm.sh/hook-delete-policy": before-hook-creation
        "helm.sh/hook": pre-install,pre-upgrade,pre-delete
        "helm.sh/hook-weight": "-5"
    spec:
      serviceAccountName: {{ .Release.Name }}-init-service-account
      containers:
        - name: app-installer
          image: artifactorylocation/test-init-container:1.0.1
          securityContext:
            readOnlyRootFilesystem: true
      restartPolicy: OnFailure
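Background: when you set command on a container, Kubernetes replaces the image's ENTRYPOINT, and the image's CMD is ignored unless you also pass it via args, which is why start.sh never ran. If you do need extra shell logic in the Job, a minimal sketch (my addition, not part of the original answer) is to invoke the script explicitly:

      containers:
        - name: app-installer
          image: artifactorylocation/test-init-container:1.0.1
          # Overrides the image ENTRYPOINT/CMD, so /start.sh must be called explicitly
          command:
            - /bin/bash
            - -c
            - echo "Hello executing k8s init-container" && sh /start.sh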
Related
I wanted to host a TDengine cluster in Kubernetes, and ran into an error when I enabled core dumps in the container.
I've searched Stack Overflow and found a Docker solution, How to modify the `core_pattern` when building docker image, but not a Kubernetes one.
Here's an example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - image: my-image
          name: my-app
          ...
          securityContext:
            allowPrivilegeEscalation: false
            runAsUser: 0
With a few fixes to the accepted answer, the final working YAML is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: core
spec:
  selector:
    matchLabels:
      app: core
  template:
    metadata:
      labels:
        app: core
    spec:
      containers:
        - image: zitsen/enable-coredump:0.1.0
          name: core
          command: ["/bin/sleep", "3650d"]
          securityContext:
            allowPrivilegeEscalation: false
            runAsUser: 0
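As a quick sanity check (my addition, assuming the Deployment above is applied unchanged and a reasonably recent kubectl), you can confirm the container really runs as root:

kubectl exec deployment/core -- id
# expected output, roughly: uid=0(root) gid=0(root) groups=0(root)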
I have a deployment.yml file which looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: $(RegistryName)/$(RepositoryName):$(Build.BuildNumber)
          imagePullPolicy: Always
But I am not able to use $(RegistryName) and $(RepositoryName), because I'm not sure how to initialize them or assign values to them here.
If I specify something like below,
image: XXXX..azurecr.io/werepo:$(Build.BuildNumber)
it works with direct, static, exact names. But I don't want to hard-code the registry and repository name.
Is there any way to replace these dynamically, just like the way I am passing them in the task?
- task: KubernetesManifest@0
  displayName: Deploy to Kubernetes cluster
  inputs:
    action: deploy
    kubernetesServiceConnection: 'XXXX-connection'
    namespace: 'XXXX-namespace'
    manifests: |
      $(Pipeline.Workspace)/manifests/deployment.yml
    containers: |
      $(Registry)/$(webRepository):$(Build.BuildNumber)
You can do something like this:
deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-image
  labels:
    app: test-image
spec:
  selector:
    matchLabels:
      app: test-image
      tier: frontend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-image
        tier: frontend
    spec:
      containers:
        - image: TEST_IMAGE_NAME
          name: test-image
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 443
              name: https
Then, in your CI step (a Google Cloud Build ubuntu step in this example), run a sed command like:
steps:
  - id: 'set test core image in yamls'
    name: 'ubuntu'
    args: ['bash','-c','sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
This will resolve your issue.
The command simply finds and replaces TEST_IMAGE_NAME with the variables that make up the Docker image URI.
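For example (with hypothetical values in place of the build variables, not from the original answer), the substitution and the resulting manifest line look like:

sed -i "s,TEST_IMAGE_NAME,gcr.io/my-project/my-repo/main:abc1234," deployment.yaml
grep 'image:' deployment.yaml
# - image: gcr.io/my-project/my-repo/main:abc1234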
Option 2: Kustomize
If you want to do it with Kustomize:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - service.yaml
  - deployment.yaml
namespace: default
commonLabels:
  app: myapp
images:
  - name: myapp
    newName: registry.gitlab.com/jkpl/kustomize-demo
    newTag: IMAGE_TAG
Shell script:
#!/usr/bin/env bash
set -euo pipefail
# Set the image tag if not set
if [ -z "${IMAGE_TAG:-}" ]; then
  IMAGE_TAG=$(git rev-parse HEAD)
fi
sed "s/IMAGE_TAG/${IMAGE_TAG}/g" k8s-base/kustomization.template.sed.yaml > location/kustomization.yaml
Demo repository: https://gitlab.com/jkpl/kustomize-demo
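Once location/kustomization.yaml has been generated, applying it is straightforward (a usage sketch, assuming kubectl 1.14+ with built-in Kustomize support):

kubectl apply -k location/
# or, with the standalone kustomize binary:
kustomize build location/ | kubectl apply -f -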
I'm struggling to figure out what the solution might be, so I thought I'd ask here. I'm trying to use the code below to deploy a Jenkins pod to Kubernetes, but it fails with an Unknown resource kind: Deployment error:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts-alpine
          ports:
            - name: http-port
              containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}
The output of kubectl api-versions is:
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
discovery.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
Does anyone know what the problem might be?
If this is an indentation issue, I'm failing to see it.
It seems the apiVersion is deprecated. You can simply convert the manifest to the current apiVersion and apply it.
$ kubectl convert -f jenkins-dep.yml
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: jenkins
  name: jenkins-deployment
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 2147483647
  selector:
    matchLabels:
      app: jenkins
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: jenkins
    spec:
      containers:
      - image: jenkins/jenkins:lts-alpine
        imagePullPolicy: IfNotPresent
        name: jenkins
        ports:
        - containerPort: 8080
          name: http-port
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: jenkins-home
status: {}
$ kubectl convert -f jenkins-dep.yml -oyaml > jenkins-dep-latest.yml
Change the apiVersion from extensions/v1beta1 to apps/v1, and use kubectl version to check that the kubectl client and the kube API server versions match and are not too old.
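To see which API group and version your cluster actually serves Deployments under (a quick check I'm adding, not part of the original answer), something like this works on recent kubectl versions:

kubectl api-resources | grep -i deployments
# output will look roughly like:
# deployments   deploy   apps/v1   true   Deployment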
I deploy a Redis container via Kubernetes and get the following warning:
WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled
Is it possible to disable THP via Kubernetes? Perhaps via init-containers?
Yes, with init-containers it's quite straightforward:
apiVersion: v1
kind: Pod
metadata:
  name: thp-test
spec:
  restartPolicy: Never
  terminationGracePeriodSeconds: 1
  volumes:
    - name: host-sys
      hostPath:
        path: /sys
  initContainers:
    - name: disable-thp
      image: busybox
      volumeMounts:
        - name: host-sys
          mountPath: /host-sys
      command: ["sh", "-c", "echo never >/host-sys/kernel/mm/transparent_hugepage/enabled"]
  containers:
    - name: busybox
      image: busybox
      command: ["cat", "/sys/kernel/mm/transparent_hugepage/enabled"]
Demo (notice that this is a system-wide setting):
$ ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
$ kubectl create -f thp-test.yaml
pod "thp-test" created
$ kubectl logs thp-test
always madvise [never]
$ kubectl delete pod thp-test
pod "thp-test" deleted
$ ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
I don't know if what I did is a good idea, but we needed to deactivate THP on all our K8s VMs for all our apps, so I used a DaemonSet instead of adding an init container to all our stacks:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: thp-disable
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: thp-disable
  template:
    metadata:
      labels:
        name: thp-disable
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 1
      volumes:
        - name: host-sys
          hostPath:
            path: /sys
      initContainers:
        - name: disable-thp
          image: busybox
          volumeMounts:
            - name: host-sys
              mountPath: /host-sys
          command: ["sh", "-c", "echo never >/host-sys/kernel/mm/transparent_hugepage/enabled"]
      containers:
        - name: busybox
          image: busybox
          command: ["watch", "-n", "600", "cat", "/sys/kernel/mm/transparent_hugepage/enabled"]
I think it's a little dirty but it works.
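To confirm the DaemonSet landed on every node and that the setting took effect (my addition, assuming the manifest above was applied as-is and a reasonably recent kubectl):

kubectl -n kube-system get daemonset thp-disable
kubectl -n kube-system logs daemonset/thp-disable -c busybox --tail=3
# expected to show: always madvise [never]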
I am trying to run a shell script at the start of a Docker container running on Google Cloud Containers using Kubernetes. The structure of my app directory is something like this. I'd like to run the prod_start.sh script at the start of the container (I don't want to put it in the Dockerfile, though). The current setup fails to start the container with Command not found file ./prod_start.sh does not exist. Any idea how to fix this?
app/
...
Dockerfile
prod_start.sh
web-controller.yaml
Gemfile
...
Dockerfile
FROM ruby
RUN mkdir /backend
WORKDIR /backend
ADD Gemfile /backend/Gemfile
ADD Gemfile.lock /backend/Gemfile.lock
RUN bundle install
web-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
  labels:
    app: myapp
    tier: backend
spec:
  replicas: 1
  selector:
    app: myapp
    tier: backend
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      volumes:
        - name: secrets
          secret:
            secretName: secrets
      containers:
        - name: my-backend
          command: ['./prod_start.sh']
          image: gcr.io/myapp-id/myapp-backend:v1
          volumeMounts:
            - name: secrets
              mountPath: /etc/secrets
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
              name: http-server
After a lot of experimentation, I believe adding the script to the Dockerfile:
ADD prod_start.sh /backend/prod_start.sh
and then calling the command like this in the YAML controller file:
command: ['/bin/sh', './prod_start.sh']
Fixed it.
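An alternative sketch (my own, not from this answer) is to make the script executable inside the image and let it be the default CMD, so no command override is needed in the controller at all:

# added to the Dockerfile from the question (assumes prod_start.sh is in the build context)
ADD prod_start.sh /backend/prod_start.sh
RUN chmod +x /backend/prod_start.sh
# prod_start.sh needs a shebang line (e.g. #!/bin/sh) for this exec-form CMD to work
CMD ["/backend/prod_start.sh"]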
You can add a ConfigMap to your YAML instead of adding the script to your Dockerfile.
apiVersion: v1
kind: ReplicationController
metadata:
  name: backend
  labels:
    app: myapp
    tier: backend
spec:
  replicas: 1
  selector:
    app: myapp
    tier: backend
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      volumes:
        - name: secrets
          secret:
            secretName: secrets
        - name: prod-start-config
          configMap:
            name: prod-start-config-script
            defaultMode: 0744
      containers:
        - name: my-backend
          command: ['./prod_start.sh']
          image: gcr.io/myapp-id/myapp-backend:v1
          volumeMounts:
            - name: secrets
              mountPath: /etc/secrets
              readOnly: true
            - name: prod-start-config
              mountPath: /backend/
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
              name: http-server
Then create another yaml file for your script:
script.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prod-start-config-script
data:
  prod_start.sh: |
    apt-get update
When deployed, the script will be available in the mounted /backend/ directory.
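One caveat worth noting (my addition, not part of the original answer): mounting the ConfigMap volume directly at /backend/ hides whatever the image already has in /backend (the Gemfile, installed bundle, app code), because a volume mount shadows the directory underneath it. A safer sketch is to mount the script somewhere separate (the /scripts path below is a hypothetical choice) and reference it by full path:

      containers:
        - name: my-backend
          image: gcr.io/myapp-id/myapp-backend:v1
          # /scripts is a hypothetical mount point chosen to avoid shadowing /backend
          command: ['/bin/sh', '/scripts/prod_start.sh']
          volumeMounts:
            - name: secrets
              mountPath: /etc/secrets
              readOnly: true
            - name: prod-start-config
              mountPath: /scripts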