I'm trying to create a K8s YAML file that matches:
docker run --privileged
What I'm trying in my K8s YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  privileged: true
  ....
But when I run kubectl apply -f my.yaml I get the following error:
error: error validating "my.yaml": error validating data: ValidationError(Deployment.spec): unknown field "privileged" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false
How can I create a YAML deployment file with the privileged flag?
privileged: true needs to go in the securityContext of the container, inside the spec section of the pod template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause
        securityContext:
          privileged: true
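To sanity-check the result, you can read the flag back from the live object with a standard kubectl JSONPath query (the deployment name matches the example above):

kubectl get deployment test-deployment -o jsonpath='{.spec.template.spec.containers[0].securityContext.privileged}'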
I am trying to deploy a few pods on a GKE cluster created using the "Ubuntu with Docker" image, and they are giving the error below. I did not find any solution on the internet. Any help would be greatly appreciated.
Error response from daemon: OCI runtime create failed: invalid mount {Destination:[/sys/fs/cgroup Type:bind Source:/var/lib/docker/volumes/d9e3b871f4cc210e3dba6471f326dcbf7b404daad7906ed9fc669e207c093ec2/_data Options:[rbind]}: mount destination [/sys/fs/cgroup not absolute: unknown
The spec file
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    diamanti.com/app: armada
    diamanti.com/control-plane: 'true'
  name: armada
  namespace: diamanti-system
spec:
  selector:
    matchLabels:
      diamanti.com/app: armada
      diamanti.com/control-plane: 'true'
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
      labels:
        diamanti.com/app: armada
        diamanti.com/control-plane: 'true'
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: armada-config
        image: 'diamanti/armada:v3.3.1-197'
        name: armada
      dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
        spektra.diamanti.io/node: "true"
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccount: diamanti-node-runner
      serviceAccountName: diamanti-node-runner
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
The service account diamanti-node-runner is bound to the cluster-admin role.
As Kubernetes is removing support for the Docker runtime, you can use another container runtime. Use your platform's default; it works fine, and you do not need to change anything on your end related to Docker images.
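A quick way to check which runtime each node is actually using (these are standard kubectl flags and node status fields):

kubectl get nodes -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion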
I wanted to host a TDengine cluster in Kubernetes, but hit an error when I enabled coredump in the container.
I've searched Stack Overflow and found a Docker solution, How to modify the `core_pattern` when building a docker image, but not a Kubernetes one.
Here's an example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - image: my-image
        name: my-app
        ...
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 0
A small fix to the picked answer; the final working YAML is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: core
spec:
  selector:
    matchLabels:
      app: core
  template:
    metadata:
      labels:
        app: core
    spec:
      containers:
      - image: zitsen/enable-coredump:0.1.0
        name: core
        command: ["/bin/sleep", "3650d"]
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 0
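To confirm coredumps are actually enabled inside the pod, you can check the kernel core pattern and the core-file size limit. These are standard Linux interfaces; deploy/core works because kubectl exec accepts a workload reference and picks one of its pods:

kubectl exec deploy/core -- cat /proc/sys/kernel/core_pattern
kubectl exec deploy/core -- sh -c 'ulimit -c'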
I have a deployment.yml file which looks like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: $(RegistryName)/$(RepositoryName):$(Build.BuildNumber)
        imagePullPolicy: Always
But I am not able to use $(RegistryName) and $(RepositoryName), as I am not sure how to initialize and assign values to them here.
If I specify something like below:
image: XXXX..azurecr.io/werepo:$(Build.BuildNumber)
it works with the direct, static, exact names. But I don't want to hard-code the registry and repository names.
Is there any way to replace these dynamically, just like the way I am passing them in the task?
- task: KubernetesManifest@0
  displayName: Deploy to Kubernetes cluster
  inputs:
    action: deploy
    kubernetesServiceConnection: 'XXXX-connection'
    namespace: 'XXXX-namespace'
    manifests: |
      $(Pipeline.Workspace)/manifests/deployment.yml
    containers: |
      $(Registry)/$(webRepository):$(Build.BuildNumber)
You can do something like
deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-image
  labels:
    app: test-image
spec:
  selector:
    matchLabels:
      app: test-image
      tier: frontend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-image
        tier: frontend
    spec:
      containers:
      - image: TEST_IMAGE_NAME
        name: test-image
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 443
          name: https
Then, in a CI step, run a sed command (shown here as a Cloud Build step using the ubuntu image):
steps:
- id: 'set test core image in yamls'
  name: 'ubuntu'
  args: ['bash', '-c', 'sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
The above will resolve your issue: the sed command simply finds and replaces TEST_IMAGE_NAME with the variables that make up the Docker image URI.
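For illustration, with hypothetical substitution values PROJECT_ID=my-project, REPO_NAME=myrepo, BRANCH_NAME=main, and SHORT_SHA=ab12cd3, the replaced line in deployment.yaml would read:

      - image: gcr.io/my-project/myrepo/main:ab12cd3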
Option 2: Kustomize
If you want to do it with Kustomize:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
namespace: default
commonLabels:
  app: myapp
images:
- name: myapp
  newName: registry.gitlab.com/jkpl/kustomize-demo
  newTag: IMAGE_TAG
The sh file:
#!/usr/bin/env bash
set -euo pipefail

# Set the image tag if not set
if [ -z "${IMAGE_TAG:-}" ]; then
  IMAGE_TAG=$(git rev-parse HEAD)
fi

sed "s/IMAGE_TAG/${IMAGE_TAG}/g" k8s-base/kustomization.template.sed.yaml > location/kustomization.yaml
Demo repo: https://gitlab.com/jkpl/kustomize-demo
When I run my command to apply modifications or just to create resources (pods, services, deployments):
kubectl apply -f hello-kubernetes-oliver.yml
I don't get an error.
But when I do docker ps to see if the container was downloaded from my private registry, I see nothing :(
If I run the command docker pull docker-all.attanea.net/hello_world:latest, it downloads the image.
I don't understand why it doesn't download my container with the first command.
You will find my hello-kubernetes-oliver.yml below:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-oliver
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-oliver
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-kubernetes-oliver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-kubernetes-oliver
    spec:
      containers:
      - name: hello-kubernetes-oliver
        image: private-registery.net/hello_world:latest
        ports:
        - containerPort: 80
In order to download images from a private registry, you need to create a Secret, which is then referenced in the deployment manifest:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username="your-name" --docker-password="your-pword" --docker-email="your-email"
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token
regcred is the name of the Secret resource.
Then you attach the regcred secret in your deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-kubernetes-oliver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-kubernetes-oliver
    spec:
      containers:
      - name: hello-kubernetes-oliver
        image: private-registery.net/hello_world:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
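To verify the setup, you can inspect the secret and watch the pod's pull events (standard kubectl commands; the pod name is whatever your deployment generates):

kubectl get secret regcred --output=yaml
kubectl get pods
kubectl describe pod <generated-pod-name>   # the Events section shows any image pull errors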
I'm quite new to Kubernetes, and I'm trying to deploy a Docker container to it. I already have a Docker container running on AWS. I am trying to deploy the YAML file through the following command:
kops create -f deployment.yml --state=s3://mybucket
However, whenever I try to deploy my YAML file, I get a message saying:
error parsing file "deployment.yml": no kind "Cluster" is registered for version "v1"
My yml file looks like this:
apiVersion: v1
kind: Cluster
metadata:
  name: containers
spec:
  containers:
  - name: container
    image: [idnumber].dkr.ecr.eu-west-2.amazonaws.com/myfirstcontainer
    ports:
    - containerPort: 3000
Grateful for any help!
Thanks
There is no kind: Cluster in Kubernetes API v1.
You should use kind: Pod if you want to run only one pod, or use a Deployment if you want to create a controller which manages your pods:
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
Also, you have some formatting issues in your deployment.yml file.
The final deployment.yml for a pod should be:
apiVersion: v1
kind: Pod
metadata:
  name: containers
spec:
  containers:
  - name: container
    image: [idnumber].dkr.ecr.eu-west-2.amazonaws.com/myfirstcontainer
    ports:
    - containerPort: 3000
or for a deployment:
apiVersion: apps/v1beta1 # for versions starting from 1.8.0 use apps/v1beta2
kind: Deployment
metadata:
  name: containers
spec:
  replicas: 1
  selector:
    matchLabels:
      app: some_app
  template:
    metadata:
      labels:
        app: some_app
    spec:
      containers:
      - name: container
        image: [idnumber].dkr.ecr.eu-west-2.amazonaws.com/myfirstcontainer
        ports:
        - containerPort: 3000
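Note that kops create -f is meant for kops resources (cluster and instance group specs), so a workload manifest like the one above is applied with kubectl instead (assuming your kubeconfig points at the cluster kops created):

kubectl apply -f deployment.yml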