I created a customized Docker image and stored it on my local system. Now I want to use that Docker image via kubectl.
Building the Docker image:
docker build -t backend:v1 .
Then the Kubernetes deployment file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
      - env:
        - name: mail_auth_pass
        - name: mail_auth_user
        - name: mail_from
        - name: mail_greeting
        - name: mail_service
        - name: mail_sign
        - name: mongodb_url
          value: mongodb://mongodb.mongodb.svc.cluster.local/console
        - name: server_host
          value: "0.0.0.0"
        - name: server_port
          value: "3000"
        - name: server_sessionSecret
          value: "1234"
          image: backend
          imagePullPolicy: Never
        name: backend
        resources: {}
      restartPolicy: Always
status: {}
Command used to create the deployment:
kubectl create -f backend-deployment.yaml
I am getting this error:
error: error validating "backend-deployment.yaml": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "image" in io.k8s.api.core.v1.EnvVar, ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "imagePullPolicy" in io.k8s.api.core.v1.EnvVar]; if you choose to ignore these errors, turn validation off with --validate=false
Local Registry
Set up the local registry first using this command:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Image Tag
Given a Dockerfile, the image can be built and tagged this easy way:
docker build -t localhost:5000/my-image .
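If the image should also be served from that local registry (and not just from the local Docker cache), it can be pushed there too, for example:
docker push localhost:5000/my-image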
Image Pull Policy
The imagePullPolicy field should then be set to Never, so that Kubernetes uses the locally available image instead of trying to pull it from a remote repository.
Given this sample pod template:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: app
    image: localhost:5000/my-image
    imagePullPolicy: Never
Deploy Pod
The pod can be deployed using:
kubectl create -f pod.yml
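You can then verify that the pod started with the locally built image (instead of failing with an image pull error), for example:
kubectl get pod my-pod
kubectl describe pod my-pod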
Hope this comes in handy :)
As the error specifies, the fields "image" and "imagePullPolicy" are unknown where they currently sit: there is an indentation error in your Kubernetes deployment file, so those fields are being read as part of the last env entry instead of as container fields.
Make these changes in your YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: backend
        imagePullPolicy: Never
        env:
        - name: mail_auth_pass
        - name: mail_auth_user
        - name: mail_from
        - name: mail_greeting
        - name: mail_service
        - name: mail_sign
        - name: mongodb_url
          value: mongodb://mongodb.mongodb.svc.cluster.local/console
        - name: server_host
          value: "0.0.0.0"
        - name: server_port
          value: "3000"
        - name: server_sessionSecret
          value: "1234"
        resources: {}
      restartPolicy: Always
status: {}
Validate your Kubernetes YAML file online using https://kubeyaml.com/
or locally with kubectl apply --validate=true --dry-run=true -f deployment.yaml
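If you are unsure where a field belongs, kubectl explain can also print the expected schema for a given path, for example:
kubectl explain deployment.spec.template.spec.containers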
Hope this helps.
I have set up a private registry in Kubernetes using the following configuration, based on this repo: https://github.com/sleighzy/k8s-docker-registry
Create the password file (see the Apache htpasswd documentation for more information on this command):
htpasswd -b -c -B htpasswd docker-registry registry-password!
Adding password for user docker-registry
Create namespace
kubectl create namespace registry
Add the generated password file as a Kubernetes secret.
kubectl create secret generic basic-auth --from-file=./htpasswd -n registry
secret/basic-auth created
registry-secrets.yaml
---
# https://kubernetes.io/docs/concepts/configuration/secret/
apiVersion: v1
kind: Secret
metadata:
  name: s3
  namespace: registry
data:
  REGISTRY_STORAGE_S3_ACCESSKEY: Y2hlc0FjY2Vzc2tleU1pbmlv
  REGISTRY_STORAGE_S3_SECRETKEY: Y2hlc1NlY3JldGtleQ==
registry-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: registry
spec:
  ports:
  - protocol: TCP
    name: registry
    port: 5000
  selector:
    app: registry
I am using my MinIO instance (already deployed and running) as the S3 storage backend.
registry-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: registry
  name: registry
  labels:
    app: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:2
        ports:
        - name: registry
          containerPort: 5000
        volumeMounts:
        - name: credentials
          mountPath: /auth
          readOnly: true
        env:
        - name: REGISTRY_LOG_ACCESSLOG_DISABLED
          value: "true"
        - name: REGISTRY_HTTP_HOST
          value: "https://registry.mydomain.io:5000"
        - name: REGISTRY_LOG_LEVEL
          value: info
        - name: REGISTRY_HTTP_SECRET
          value: registry-http-secret
        - name: REGISTRY_AUTH_HTPASSWD_REALM
          value: homelab
        - name: REGISTRY_AUTH_HTPASSWD_PATH
          value: /auth/htpasswd
        - name: REGISTRY_STORAGE
          value: s3
        - name: REGISTRY_STORAGE_S3_REGION
          value: ignored-cos-minio
        - name: REGISTRY_STORAGE_S3_REGIONENDPOINT
          value: charity.api.com   # this is the valid MinIO API endpoint
        - name: REGISTRY_STORAGE_S3_BUCKET
          value: "charitybucket"
        - name: REGISTRY_STORAGE_DELETE_ENABLED
          value: "true"
        - name: REGISTRY_HEALTH_STORAGEDRIVER_ENABLED
          value: "false"
        - name: REGISTRY_STORAGE_S3_ACCESSKEY
          valueFrom:
            secretKeyRef:
              name: s3
              key: REGISTRY_STORAGE_S3_ACCESSKEY
        - name: REGISTRY_STORAGE_S3_SECRETKEY
          valueFrom:
            secretKeyRef:
              name: s3
              key: REGISTRY_STORAGE_S3_SECRETKEY
      volumes:
      - name: credentials
        secret:
          secretName: basic-auth
I have created an entry in /etc/hosts
192.168.xx.xx registry.mydomain.io
registry-IngressRoute.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry
  namespace: registry
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`registry.mydomain.io`)
      kind: Rule
      services:
        - name: registry
          port: 5000
  tls:
    certResolver: tlsresolver
I have access to the private registry at http://registry.mydomain.io:5000/, and it obviously returns a blank page.
I have already pushed some images and http://registry.mydomain.io:5000/v2/_catalog returns:
{"repositories":["console-image","hello-world","hello-world-2","hello-world-ha","myfirstimage","ubuntu-my"]}
The above configuration seems to work.
Then I tried to add the registry UI provided by Joxit, with the following configuration:
registry-ui-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  namespace: registry
spec:
  ports:
  - protocol: TCP
    name: registry-ui
    port: 80
  selector:
    app: registry-ui
registry-ui-deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: registry
  name: registry-ui
  labels:
    app: registry-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry-ui
  template:
    metadata:
      labels:
        app: registry-ui
    spec:
      containers:
      - name: registry-ui
        image: joxit/docker-registry-ui:1.5-static
        ports:
        - name: registry-ui
          containerPort: 80
        env:
        - name: REGISTRY_URL
          value: https://registry.mydomain.io
        - name: SINGLE_REGISTRY
          value: "true"
        - name: REGISTRY_TITLE
          value: "CHARITY Registry UI"
        - name: DELETE_IMAGES
          value: "true"
registry-ui-ingress-route.yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`registry.mydomain.io`) && PathPrefix(`/ui/`)
      kind: Rule
      services:
        - name: registry-ui
          port: 80
      middlewares:
        - name: stripprefix
  tls:
    certResolver: tlsresolver
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: stripprefix
  namespace: registry
spec:
  stripPrefix:
    prefixes:
      - /ui/
I have access to the browser UI at https://registry.mydomain.io/ui/, however it shows nothing.
Am I missing something here?
As the owner of that repository, I think there may be something missing here. Your IngressRoute rule has an entryPoint of websecure and a certResolver of tlsresolver. This is intended to be the https entrypoint for Traefik; see my other repository https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd and the associated Traefik documentation on which this Docker Registry repo is based.
Can you review your Traefik deployment to ensure that you have this entrypoint, and that you also have this certificate resolver along with a generated https certificate for it to use? Can you also check the Traefik logs for any errors during startup (e.g. missing certs) and for any access log entries that might indicate why requests are not being routed there?
If you don't have these items set up, you could help narrow this down further by changing the IngressRoute in your registry-ui-ingress-route.yaml manifest to use just the web entrypoint, removing the tls section, and reapplying it. This lets you access the UI over http and at least rule out any https issues, as sketched below.
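For example, a plain-http version of that IngressRoute (assuming your Traefik deployment exposes a web entrypoint) could look roughly like this:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: registry-ui
  namespace: registry
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`registry.mydomain.io`) && PathPrefix(`/ui/`)
      kind: Rule
      services:
        - name: registry-ui
          port: 80
      middlewares:
        - name: stripprefix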
I have a deployment.yml file which looks like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: $(RegistryName)/$(RepositoryName):$(Build.BuildNumber)
          imagePullPolicy: Always
But I am not able to use $(RegistryName) and $(RepositoryName), as I am not sure how to initialize them and assign values here.
If I specify something like the below,
image: XXXX..azurecr.io/werepo:$(Build.BuildNumber)
it works with direct, static, exact names. But I don't want to hard-code the registry and repository name.
Is there any way to replace these dynamically, just like the way I am passing them in the task?
- task: KubernetesManifest@0
  displayName: Deploy to Kubernetes cluster
  inputs:
    action: deploy
    kubernetesServiceConnection: 'XXXX-connection'
    namespace: 'XXXX-namespace'
    manifests: |
      $(Pipeline.Workspace)/manifests/deployment.yml
    containers: |
      $(Registry)/$(webRepository):$(Build.BuildNumber)
You can do something like this:
deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-image
  labels:
    app: test-image
spec:
  selector:
    matchLabels:
      app: test-image
      tier: frontend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-image
        tier: frontend
    spec:
      containers:
      - image: TEST_IMAGE_NAME
        name: test-image
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 443
          name: https
Then, in your CI step, run a sed command in an ubuntu step, like:
steps:
  - id: 'set test core image in yamls'
    name: 'ubuntu'
    args: ['bash','-c','sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
The above will resolve your issue.
The command simply finds and replaces TEST_IMAGE_NAME with the variables that make up the Docker image URI.
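To preview the substitution locally before wiring it into CI (the variable values below are only placeholders), you can run the same sed without -i and inspect the output:
export PROJECT_ID=my-project REPO_NAME=my-repo BRANCH_NAME=main SHORT_SHA=abc1234
sed "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml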
Option 2: Kustomize
If you want to do it with Kustomize:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - service.yaml
  - deployment.yaml
namespace: default
commonLabels:
  app: myapp
images:
  - name: myapp
    newName: registry.gitlab.com/jkpl/kustomize-demo
    newTag: IMAGE_TAG
And the sh file:
#!/usr/bin/env bash
set -euo pipefail
# Set the image tag if not set
if [ -z "${IMAGE_TAG:-}" ]; then
  IMAGE_TAG=$(git rev-parse HEAD)
fi
sed "s/IMAGE_TAG/${IMAGE_TAG}/g" k8s-base/kustomization.template.sed.yaml > location/kustomization.yaml
Demo repo: https://gitlab.com/jkpl/kustomize-demo
I'm deploying a private registry within a K8S cluster with the following yaml file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: registry
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/registry/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: registry-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  labels:
    app: registry
spec:
  ports:
  - port: 5000
    targetPort: 5000
    nodePort: 30400
    name: registry
  selector:
    app: registry
    tier: registry
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  labels:
    app: registry
spec:
  ports:
  - port: 8080
    targetPort: 8080
    name: registry
  selector:
    app: registry
    tier: registry
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:
      containers:
      - image: registry:2
        name: registry
        volumeMounts:
        - name: docker
          mountPath: /var/run/docker.sock
        - name: registry-persistent-storage
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
      - name: registryui
        image: hyper/docker-registry-web:latest
        ports:
        - containerPort: 8080
        env:
        - name: REGISTRY_URL
          value: http://localhost:5000/v2
        - name: REGISTRY_NAME
          value: cluster-registry
      volumes:
      - name: docker
        hostPath:
          path: /var/run/docker.sock
      - name: registry-persistent-storage
        persistentVolumeClaim:
          claimName: registry-claim
I'm just wondering why there is no option to delete Docker images after pushing them to the local registry. I found how it is supposed to work here: https://github.com/byrnedo/docker-reg-tool. I can list Docker images inside the local repository and see all tags via the command line, but I am unable to delete them. After reading the Docker registry documentation, I found that the registry container needs to be run with the following environment variable: REGISTRY_STORAGE_DELETE_ENABLED=true.
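For reference, this is how deletion is supposed to work once REGISTRY_STORAGE_DELETE_ENABLED is in place: fetch the manifest digest and then DELETE it through the registry HTTP API. The repository name my-image and the NodePort 30400 from the service above are just examples:
DIGEST=$(curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" http://localhost:30400/v2/my-image/manifests/latest | grep -i docker-content-digest | awk '{print $2}' | tr -d '\r')
curl -X DELETE http://localhost:30400/v2/my-image/manifests/$DIGEST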
I tried to add this variable to the yaml file:
.........
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: registry
        tier: registry
    spec:
      containers:
      - image: registry:2
        name: registry
        volumeMounts:
        - name: docker
          mountPath: /var/run/docker.sock
        - name: registry-persistent-storage
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
        env:
        - name: REGISTRY_STORAGE_DELETE_ENABLED
          value: true
But applying this yaml file with the command kubectl apply -f manifests/registry.yaml returns the following error message:
Deployment in version "v1beta1" cannot be handled as a Deployment: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true}],"ima|..., bigger context ...|"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":true}],"image":"registry:2","name":"registry","port|...
Then I found another suggestion:
The registry accepts configuration settings either via a file or via environment variables. So the environment variable REGISTRY_STORAGE_DELETE_ENABLED=true is equivalent to this in your config file:
storage:
  delete:
    enabled: true
I've tried this option as well in my yaml file, but still no luck...
Any suggestions on how to enable Docker image deletion in my yaml file are highly appreciated.
The value true in YAML is parsed as a boolean data type, but this field expects a string. You'll need to explicitly quote it:
value: "true"
To use a container image from a private Docker registry, Kubernetes recommends creating a secret of type docker-registry and referencing it in your deployment.
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
Then, in your Helm chart or Kubernetes deployment file, use imagePullSecrets:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: foo
          image: foo.example.com
This works, but requires that all containers be sourced from the same registry.
How would you pull 2 containers from 2 registries (e.g. when using a sidecar that is stored separately from the primary container)?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: foo
          image: foo.example.com
          imagePullSecrets:
            - name: foo-secret
        - name: bar
          image: bar.example.com
          imagePullSecrets:
            - name: bar-secret
I've tried creating 2 secrets foo-secret and bar-secret and referencing each appropriately, but I find it fails to pull both containers.
You have to include imagePullSecrets: directly at the pod level, but you can have multiple secrets there.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      imagePullSecrets:
        - name: foo-secret
        - name: bar-secret
      containers:
        - name: foo
          image: foo.example.com/foo-image
        - name: bar
          image: bar.example.com/bar-image
The Kubernetes documentation on this notes:
If you need access to multiple registries, you can create one secret for each registry. Kubelet will merge any imagePullSecrets into a single virtual .docker/config.json when pulling images for your Pods.
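For example, the two secrets referenced above could be created like this (server, username, password and email values are placeholders):
kubectl create secret docker-registry foo-secret --docker-server=foo.example.com --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
kubectl create secret docker-registry bar-secret --docker-server=bar.example.com --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>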
I have two containers (GitLab and PostgreSQL) running on RancherOS v1.0.3. I would like to make them part of a Kubernetes cluster.
[rancher@rancher-agent-1 ~]$ cat postgresql.sh
docker run --name gitlab-postgresql -d \
--env 'POSTGRES_DB=gitlabhq_production' \
--env 'POSTGRES_USER=gitlab' --env 'POSTGRES_PASSWORD=password' \
--volume /srv/docker/gitlab/postgresql:/var/lib/postgresql \
postgres:9.6-2
[rancher@rancher-agent-1 ~]$ cat gitlab.sh
docker run --name gitlab -d \
--link gitlab-postgresql:postgresql \
--publish 443:443 --publish 80:80 \
--env 'GITLAB_PORT=80' --env 'GITLAB_SSH_PORT=10022' \
--env 'GITLAB_SECRETS_DB_KEY_BASE=64-char-key-A' \
--env 'GITLAB_SECRETS_SECRET_KEY_BASE=64-char-key-B' \
--env 'GITLAB_SECRETS_OTP_KEY_BASE=64-char-key-C' \
--volume /srv/docker/gitlab/gitlab:/home/git/data \
sameersbn/gitlab:9.4.5
Queries:
1) I have some idea of how to use YAML files to provision pods, replication controllers, etc., but I am not sure how to pass the above docker run parameters to Kubernetes so that it applies them to the image(s) correctly.
2) I'm not sure whether the --link argument (used in gitlab.sh above) also needs to be passed to Kubernetes. Although I am currently deploying both containers on a single host, I will be creating a cluster of each (PostgreSQL and GitLab) later, so I just wanted to confirm whether inter-host communication will automatically be taken care of by Kubernetes. If not, what options can be explored?
You should first try to represent your docker run statements as a docker-compose.yml file, which is quite easy; it would turn into something like the below.
version: '3'
services:
  postgresql:
    image: postgres:9.6-2
    environment:
      - "POSTGRES_DB=gitlabhq_production"
      - "POSTGRES_USER=gitlab"
      - "POSTGRES_PASSWORD=password"
    volumes:
      - /srv/docker/gitlab/postgresql:/var/lib/postgresql
  gitlab:
    image: sameersbn/gitlab:9.4.5
    ports:
      - "443:443"
      - "80:80"
    environment:
      - "GITLAB_PORT=80"
      - "GITLAB_SSH_PORT=10022"
      - "GITLAB_SECRETS_DB_KEY_BASE=64-char-key-A"
      - "GITLAB_SECRETS_SECRET_KEY_BASE=64-char-key-B"
      - "GITLAB_SECRETS_OTP_KEY_BASE=64-char-key-C"
    volumes:
      - /srv/docker/gitlab/gitlab:/home/git/data
Now, there is an amazing tool named kompose from kompose.io which does the conversion part for you. If you convert the above, you will get the related files:
$ kompose convert -f docker-compose.yml
WARN Volume mount on the host "/srv/docker/gitlab/gitlab" isn't supported - ignoring path on the host
WARN Volume mount on the host "/srv/docker/gitlab/postgresql" isn't supported - ignoring path on the host
INFO Kubernetes file "gitlab-service.yaml" created
INFO Kubernetes file "postgresql-service.yaml" created
INFO Kubernetes file "gitlab-deployment.yaml" created
INFO Kubernetes file "gitlab-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "postgresql-deployment.yaml" created
INFO Kubernetes file "postgresql-claim0-persistentvolumeclaim.yaml" created
Now you have to fix the volume mount part as per Kubernetes; see the sketch below for one way to do that. This completes 80% of the work, and you just need to figure out the remaining 20%.
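For example, if you want to keep the original host paths, one option is to back each generated claim with a hostPath PersistentVolume. A minimal sketch (the 100Mi size matches the generated claims, and the paths are the ones from the original docker run commands), assuming your cluster does not already satisfy these claims via a default StorageClass:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /srv/docker/gitlab/gitlab
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgresql-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /srv/docker/gitlab/postgresql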
Here is a cat of all the generated files, so that you can see what kind of files are produced.
==> gitlab-claim0-persistentvolumeclaim.yaml <==
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: gitlab-claim0
  name: gitlab-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
==> gitlab-deployment.yaml <==
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: gitlab
  name: gitlab
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: gitlab
    spec:
      containers:
      - env:
        - name: GITLAB_PORT
          value: "80"
        - name: GITLAB_SECRETS_DB_KEY_BASE
          value: 64-char-key-A
        - name: GITLAB_SECRETS_OTP_KEY_BASE
          value: 64-char-key-C
        - name: GITLAB_SECRETS_SECRET_KEY_BASE
          value: 64-char-key-B
        - name: GITLAB_SSH_PORT
          value: "10022"
        image: sameersbn/gitlab:9.4.5
        name: gitlab
        ports:
        - containerPort: 443
        - containerPort: 80
        resources: {}
        volumeMounts:
        - mountPath: /home/git/data
          name: gitlab-claim0
      restartPolicy: Always
      volumes:
      - name: gitlab-claim0
        persistentVolumeClaim:
          claimName: gitlab-claim0
status: {}
==> gitlab-service.yaml <==
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: gitlab
  name: gitlab
spec:
  ports:
  - name: "443"
    port: 443
    targetPort: 443
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    io.kompose.service: gitlab
status:
  loadBalancer: {}
==> postgresql-claim0-persistentvolumeclaim.yaml <==
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: postgresql-claim0
  name: postgresql-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
==> postgresql-deployment.yaml <==
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: postgresql
  name: postgresql
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: postgresql
    spec:
      containers:
      - env:
        - name: POSTGRES_DB
          value: gitlabhq_production
        - name: POSTGRES_PASSWORD
          value: password
        - name: POSTGRES_USER
          value: gitlab
        image: postgres:9.6-2
        name: postgresql
        resources: {}
        volumeMounts:
        - mountPath: /var/lib/postgresql
          name: postgresql-claim0
      restartPolicy: Always
      volumes:
      - name: postgresql-claim0
        persistentVolumeClaim:
          claimName: postgresql-claim0
status: {}
==> postgresql-service.yaml <==
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: postgresql
  name: postgresql
spec:
  clusterIP: None
  ports:
  - name: headless
    port: 55555
    targetPort: 0
  selector:
    io.kompose.service: postgresql
status:
  loadBalancer: {}