Docker run to Kubernetes deployment yaml - docker

I have an application which I deployed with docker run, and it worked fine. Now I am trying to run that application on Kubernetes.
I have tried to build my deployment.yaml file, but I am not able to complete it; I am getting a validation error.
Below is my docker run command:
docker run -e LICENSE="accept" -d --name=container1 -p 10000:10000 -v /opt/app/install:/install/resources appCentre:6.0.0 appCentre_setup deploy_setup
and here is the deployment.yaml file I am trying to build:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appCentre
  labels:
    app: appCentre
spec:
  replicas: 1
  selector:
    matchLabels:
      app: appCentre
  template:
    metadata:
      labels:
        app: appCentre
    spec:
      containers:
      - name: appCentre
        image: appCentre:6.0.0
        args:
        - "appCentre_setup"
        - "deploy_setup"
        ports:
        - containerPort: 10000
        volumnMounts:
        - name: volumn-app-appCentre
          mountPath: /install/resources
      volumns:
      - name: volumn-app-appCentre
        hostPath:
          path: /opt/app/install
          type: Directory
How can I proceed?

The easiest way would be to generate the base yaml using:
kubectl run appcentre --image=appcentre:6.0.0 -l app=appCentre --expose --port=10000 --env=LICENSE="accept" --dry-run -o yaml
and then modify for your needs.
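For comparison, here is a corrected version of the manifest from the question (a sketch: volumeMounts/volumes are the correct field names, resource and image names must be lowercase so appcentre is assumed, and the LICENSE variable from the docker run command is added as an env entry):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appcentre
  labels:
    app: appcentre
spec:
  replicas: 1
  selector:
    matchLabels:
      app: appcentre
  template:
    metadata:
      labels:
        app: appcentre
    spec:
      containers:
      - name: appcentre
        image: appcentre:6.0.0
        env:
        - name: LICENSE
          value: "accept"
        args:
        - "appCentre_setup"
        - "deploy_setup"
        ports:
        - containerPort: 10000
        volumeMounts:
        - name: volume-app-appcentre
          mountPath: /install/resources
      volumes:
      - name: volume-app-appcentre
        hostPath:
          path: /opt/app/install
          type: Directory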
You should also check here, as @mchawre pointed out.
I hope this helps.

Related

How to define image name in Kubernetes manifest deployment.yml file dynamically or with variables?

I have a deployment.yml file which looks like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: $(RegistryName)/$(RepositoryName):$(Build.BuildNumber)
        imagePullPolicy: Always
But I am not able to use $(RegistryName) and $(RepositoryName), as I am not sure how to initialize them and assign values here.
If I specify something like the below
image: XXXX..azurecr.io/werepo:$(Build.BuildNumber)
it works with direct, static, exact names. But I don't want to hard-code the registry and repository name.
Is there any way to replace these dynamically, just like the way I am passing them in the task?
- task: KubernetesManifest@0
  displayName: Deploy to Kubernetes cluster
  inputs:
    action: deploy
    kubernetesServiceConnection: 'XXXX-connection'
    namespace: 'XXXX-namespace'
    manifests: |
      $(Pipeline.Workspace)/manifests/deployment.yml
    containers: |
      $(Registry)/$(webRepository):$(Build.BuildNumber)
You can do something like
deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-image
  labels:
    app: test-image
spec:
  selector:
    matchLabels:
      app: test-image
      tier: frontend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-image
        tier: frontend
    spec:
      containers:
      - image: TEST_IMAGE_NAME
        name: test-image
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 443
          name: https
Then, in a CI step, run a sed command in an ubuntu container, like:
steps:
- id: 'set test core image in yamls'
  name: 'ubuntu'
  args: ['bash','-c','sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
The above will resolve your issue.
The command simply finds and replaces TEST_IMAGE_NAME with the variables that make up the Docker image URI.
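Since the question uses Azure DevOps, the same substitution could be done in a Bash task that runs before the KubernetesManifest task (a sketch, assuming the manifest uses the TEST_IMAGE_NAME placeholder shown above and that RegistryName and RepositoryName are defined as pipeline variables):
- bash: |
    sed -i "s|TEST_IMAGE_NAME|$(RegistryName)/$(RepositoryName):$(Build.BuildNumber)|g" \
      "$(Pipeline.Workspace)/manifests/deployment.yml"
  displayName: Substitute image name in manifest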
Option 2: Kustomize
If you want to do it with Kustomize:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
namespace: default
commonLabels:
  app: myapp
images:
- name: myapp
  newName: registry.gitlab.com/jkpl/kustomize-demo
  newTag: IMAGE_TAG
And the shell script:
#!/usr/bin/env bash
set -euo pipefail

# Set the image tag if not set
if [ -z "${IMAGE_TAG:-}" ]; then
  IMAGE_TAG=$(git rev-parse HEAD)
fi

sed "s/IMAGE_TAG/${IMAGE_TAG}/g" k8s-base/kustomization.template.sed.yaml > location/kustomization.yaml
Demo repository: https://gitlab.com/jkpl/kustomize-demo
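As an alternative to templating the kustomization file with sed, recent versions of kustomize can set the image name and tag directly from the command line (a sketch, assuming the kustomization.yaml lives in the location/ directory used by the script above and keeps the myapp image name):
cd location
kustomize edit set image myapp=registry.gitlab.com/jkpl/kustomize-demo:"${IMAGE_TAG}"
kubectl apply -k .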

Does kubernetes kubectl run with image creates deployment yaml file

I am trying to use Minikube and Docker to understand the concepts of Kubernetes architecture.
I created a Spring Boot application with a Dockerfile, created a tag and pushed it to Docker Hub.
In order to deploy the image in the K8s cluster, I issued the below commands:
# deployed the image
$ kubectl run <deployment-name> --image=<username/imagename>:<version> --port=<port the app runs>
# exposed the port as nodeport
$ kubectl expose deployment <deployment-name> --type=NodePort
Everything worked, and I am able to see 1 pod running with kubectl get pods.
The Docker image I pushed to Docker Hub didn't have any deployment YAML file.
The command below produced a YAML output.
Does the kubectl command create the deployment YAML file out of the box?
$ kubectl get deployments --output yaml
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: "2019-12-24T14:59:14Z"
    generation: 1
    labels:
      run: hello-service
    name: hello-service
    namespace: default
    resourceVersion: "76195"
    selfLink: /apis/apps/v1/namespaces/default/deployments/hello-service
    uid: 90950172-1c0b-4b9f-a339-b47569366f4e
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        run: hello-service
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          run: hello-service
      spec:
        containers:
        - image: thirumurthi/hello-service:0.0.1
          imagePullPolicy: IfNotPresent
          name: hello-service
          ports:
          - containerPort: 8800
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
  status:
    availableReplicas: 1
    conditions:
    - lastTransitionTime: "2019-12-24T14:59:19Z"
      lastUpdateTime: "2019-12-24T14:59:19Z"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: "2019-12-24T14:59:14Z"
      lastUpdateTime: "2019-12-24T14:59:19Z"
      message: ReplicaSet "hello-service-75d67cc857" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    observedGeneration: 1
    readyReplicas: 1
    replicas: 1
    updatedReplicas: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
I think the easiest way to understand what's going on under the hood when you create Kubernetes resources using imperative commands (versus the declarative approach of writing and applying YAML definition files) is to run a simple example with 2 additional flags:
--dry-run
and
--output yaml
The names of these flags are rather self-explanatory, so I think there is no further need to explain what they do. You can simply try out the below examples and you'll see the effect:
kubectl run nginx-example --image=nginx:latest --port=80 --dry-run --output yaml
As you can see, it produces the appropriate YAML manifest without applying it or creating the actual Deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx-example
  name: nginx-example
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-example
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx-example
    spec:
      containers:
      - image: nginx:latest
        name: nginx-example
        ports:
        - containerPort: 80
        resources: {}
status: {}
Same with expose command:
kubectl expose deployment nginx-example --type=NodePort --dry-run --output yaml
produces the following output:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: nginx-example
  name: nginx-example
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-example
  type: NodePort
status:
  loadBalancer: {}
And now the coolest part. You can use simple output redirection:
kubectl run nginx-example --image=nginx:latest --port=80 --dry-run --output yaml > nginx-example-deployment.yaml
kubectl expose deployment nginx-example --type=NodePort --dry-run --output yaml > nginx-example-nodeport-service.yaml
to save the generated Deployment and NodePort Service definitions, so you can modify them further if needed and apply them using either kubectl apply -f filename.yaml or kubectl create -f filename.yaml.
Btw. kubectl run and kubectl expose are generator-based commands. As you may have noticed when creating your deployment (you probably got the message: kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.), they use the --generator flag. If you don't specify it explicitly, it gets the default value, which for kubectl run is --generator=deployment/apps.v1beta1, so by default it creates a Deployment. But you can modify it by providing --generator=run-pod/v1, and instead of a Deployment it will create a single Pod. Applied to our previous example, it may look like this:
kubectl run --generator=run-pod/v1 nginx-example --image=nginx:latest --port=80 --dry-run --output yaml
I hope this answered your question and clarified a bit the mechanism of creating Kubernetes resources using imperative commands.
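Note that on more recent kubectl versions (roughly v1.18 and later) the generators were removed: kubectl run only creates a Pod, --dry-run expects a value such as client, and a Deployment manifest is generated with kubectl create deployment instead, for example:
kubectl create deployment nginx-example --image=nginx:latest --dry-run=client -o yaml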
Yes, kubectl run creates a deployment. If you look at the label field, you can see run: hello-service. This label is used later in the selector.

What command does kubernetes run to launch a container?

When I specify a Deployment in a kubernetes yaml file, for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      deploy: example
  template:
    metadata:
      labels:
        deploy: example
    spec:
      containers:
      - name: my-container
        image: dockerhub.com/imagerepo:latest
I'm wondering what's going on in the backend to run my deploy:example pod. I'm guessing some sort of docker run <image> && docker start <image> command is executed on each node, but what exactly is the command?

How to use Local docker image in kubernetes via kubectl

I created a customized Docker image and stored it on my local system. Now I want to use that Docker image via kubectl.
Docker image:
docker build -t backend:v1 .
Then the Kubernetes file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
      - env:
        - name: mail_auth_pass
        - name: mail_auth_user
        - name: mail_from
        - name: mail_greeting
        - name: mail_service
        - name: mail_sign
        - name: mongodb_url
          value: mongodb://mongodb.mongodb.svc.cluster.local/console
        - name: server_host
          value: "0.0.0.0"
        - name: server_port
          value: "3000"
        - name: server_sessionSecret
          value: "1234"
        - image: backend
          imagePullPolicy: Never
          name: backend
        resources: {}
      restartPolicy: Always
status: {}
Command to run kubectl: kubectl create -f backend-deployment.yaml
Getting error:
error: error validating "backend-deployment.yaml": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "image" in io.k8s.api.core.v1.EnvVar, ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "imagePullPolicy" in io.k8s.api.core.v1.EnvVar]; if you choose to ignore these errors, turn validation off with --validate=false
Local Registry
Set up the local registry first using this command:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Image Tag
Given a Dockerfile, the image can be built and tagged this easy way:
docker build -t localhost:5000/my-image .
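If the image is meant to be pulled from the local registry started above, it also needs to be pushed there first (a small addition to the answer, using the same tag):
docker push localhost:5000/my-image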
Image Pull Policy
The field imagePullPolicy should then be changed to Never to get the right image from the right repo.
Given this sample pod template:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: app
    image: localhost:5000/my-image
    imagePullPolicy: Never
Deploy Pod
The pod can be deployed using:
kubectl create -f pod.yml
Hope this comes in handy :)
As the error specifies unknown field "image" and unknown field "imagePullPolicy", there is a syntax error in your Kubernetes deployment file.
Make these changes in your YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: backend
        imagePullPolicy: Never
        env:
        - name: mail_auth_pass
        - name: mail_auth_user
        - name: mail_from
        - name: mail_greeting
        - name: mail_service
        - name: mail_sign
        - name: mongodb_url
          value: mongodb://mongodb.mongodb.svc.cluster.local/console
        - name: server_host
          value: "0.0.0.0"
        - name: server_port
          value: "3000"
        - name: server_sessionSecret
          value: "1234"
        resources: {}
      restartPolicy: Always
status: {}
Validate your Kubernetes YAML file online using https://kubeyaml.com/
Or with kubectl apply --validate=true --dry-run=true -f deployment.yaml
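On newer kubectl versions the boolean form of --dry-run is deprecated, so the equivalent would be (assuming kubectl v1.18 or later):
kubectl apply --validate=true --dry-run=client -f deployment.yaml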
Hope this helps.

How to pass docker container flags via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a mailhog container, but I need to run it with my own docker run parameters. If I ran it in Docker directly, I would use the command:
docker run mailhog/mailhog -auth-file=./auth.file
But I need to run it via a Kubernetes pod. My pod looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      containers:
      - name: mailhog
        image: us.gcr.io/com/mailhog:1.0.0
        ports:
        - containerPort: 8025
How can I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.
I tried adding, under containers:
command: ["-auth-file", "/data/mailhog/auth.file"]
but then I get
Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
Thanks to @lang2, here is my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mailhog
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mailhog
    spec:
      volumes:
      - name: secrets-volume
        secret:
          secretName: mailhog-login
      containers:
      - name: mailhog
        image: us.gcr.io/com/mailhog:1.0.0
        resources:
          limits:
            cpu: 70m
            memory: 30Mi
          requests:
            cpu: 50m
            memory: 20Mi
        volumeMounts:
        - name: secrets-volume
          mountPath: /data/mailhog
          readOnly: true
        ports:
        - containerPort: 8025
        - containerPort: 1025
        args:
        - "-auth-file=/data/mailhog/auth.file"
In Kubernetes, command is the equivalent of Docker's ENTRYPOINT. In your case, args should be used.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core
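To illustrate the mapping, a minimal sketch (the image and flag are taken from the question; everything else is illustrative):
containers:
- name: mailhog
  image: mailhog/mailhog
  # command would override the image's ENTRYPOINT; leaving it unset keeps the image's entrypoint
  # args overrides the image's CMD, so extra flags go here
  args:
  - "-auth-file=/data/mailhog/auth.file"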
You are on the right track. It's just that you also need to include the name of the binary in the command array as the first element. You can find that out by looking in the respective Dockerfile (CMD and/or ENTRYPOINT).
In this case:
command: ["Mailhog", "-auth-file", "/data/mailhog/auth.file"]
I needed something similar (my aim was passing the application profile to the app), and what I did is the following:
Set an environment variable in the Deployment section of the Kubernetes YAML file:
env:
- name: PROFILE
  value: "dev"
Then use this environment variable in the Dockerfile as a command-line argument:
CMD java -jar -Dspring.profiles.active=${PROFILE} /opt/app/xyz-service-*.jar
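Alternatively, Kubernetes can expand environment variables referenced as $(VAR) directly in command/args, so the same effect could be achieved without a shell-form CMD (a sketch; the image name and jar path are illustrative, not taken from the question):
containers:
- name: xyz-service
  image: my-registry/xyz-service:latest  # illustrative image name
  env:
  - name: PROFILE
    value: "dev"
  command: ["java"]
  args: ["-Dspring.profiles.active=$(PROFILE)", "-jar", "/opt/app/app.jar"]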
