How to import an external file in a K8s manifest - Docker

I have a docker-compose.yml file with my configuration for importing an external file that installs the PostGIS extension when building the Docker image for Postgres.
This is the Compose file:
services:
  postgres:
    container_name: postgres_db
    build:
      context: .
      dockerfile: Dockerfile-db
    image: postgres
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: password
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5454:5454"
    networks:
      - postgres
The file I am importing is called Dockerfile-db:
FROM postgres:14.1
RUN apt-get update && apt-get install -y postgresql-14-postgis-3
CMD ["/usr/local/bin/docker-entrypoint.sh","postgres"]
How can I do the same import in a K8s manifest file? This is where I add the database:
spec:
  serviceName: zone-service-db-service
  selector:
    matchLabels:
      app: zone-service-db
  replicas: 1
  template:
    metadata:
      labels:
        app: zone-service-db
    spec:
      tolerations:
        - key: "podType"
          operator: "Equal"
          value: "isDB"
          effect: "NoSchedule"
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: zone-service-db
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
          resources:
            requests:
              memory: '256Mi'
              cpu: '100m'
            limits:
              memory: '256Mi'
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: zone-service-pv-claim
How can I reference the Dockerfile-db from the K8s manifest so that it is used when creating the Postgres container, and have the extensions available in the Docker image? Any help is appreciated.

I believe you are getting this error
ERROR: type "geometry" does not exist
The file you added above will mostly work with docker-compose, but for Kubernetes, to have Postgres and PostGIS work together you will have to use the postgis image instead of the postgres image, like this:
spec:
  serviceName: zone-service-db-service
  selector:
    matchLabels:
      app: zone-service-db
  replicas: 1
  template:
    metadata:
      labels:
        app: zone-service-db
    spec:
      tolerations:
        - key: "podType"
          operator: "Equal"
          value: "isDB"
          effect: "NoSchedule"
      containers:
        - name: postgres
          image: postgis/postgis:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: zone-service-db
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
          resources:
            requests:
              memory: '256Mi'
              cpu: '100m'
            limits:
              memory: '256Mi'
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: zone-service-pv-claim
Try this and advise. No need to import external files.
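If you would rather keep your own Dockerfile-db instead of switching images, note that Kubernetes does not build images from a Dockerfile: you build and push the image yourself (for example docker build -f Dockerfile-db -t <your-registry>/postgres-postgis:14 . followed by docker push) and then point the manifest at that image. A minimal sketch of the container section under that approach, with the registry path being a placeholder you would replace:
      containers:
        - name: postgres
          # Image built from Dockerfile-db and pushed to a registry the
          # cluster can pull from; the tag below is a placeholder.
          image: <your-registry>/postgres-postgis:14
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432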

Related

Binary not found in Kubernetes deployment

I'm trying to deploy RocketMQ on my testing cluster. I started from the scripts provided in the apache/rocketmq-docker repo on GitHub, but they do not work. I created my own YAML deployments starting from the ones in the repo cited above; it works for mqnamesrv, but not for the broker. Here are the two deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rocketmq-namesrv
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-namesrv
  template:
    metadata:
      labels:
        app: rocketmq-namesrv
    spec:
      containers:
        - name: namesrv
          image: myrepo/rocketmq:4.9.3-alpine
          command: ["sh", "mqnamesrv"]
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "128Mi"
              cpu: "400m"
          ports:
            - containerPort: 9876
          volumeMounts:
            - name: namesrv-log
              mountPath: /var/log
      volumes:
        - name: namesrv-log
          persistentVolumeClaim:
            claimName: rocketmq-namesrv-pvc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rocketmq-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rocketmq-broker
  template:
    metadata:
      labels:
        app: rocketmq-broker
    spec:
      containers:
        - name: broker
          image: myrepo/rocketmq:4.9.3-alpine
          command: ["sh", "mqbroker", "-n", "localhost:9876"]
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "128Mi"
              cpu: "400m"
          ports:
            - containerPort: 10909
            - containerPort: 10911
          volumeMounts:
            - name: broker-log
              mountPath: /var/log
            - name: broker-store
              mountPath: /home/rocketmq
      volumes:
        - name: broker-log
          persistentVolumeClaim:
            claimName: rocketmq-broker-log-pvc
        - name: broker-store
          persistentVolumeClaim:
            claimName: rocketmq-broker-store-pvc
The image rocketmq:4.9.3-alpine was created following the procedure on the apache/rocketmq-docker repo.
After the deployment, rocketmq-namesrv works, but the broker's pod logs: sh: can't open 'mqbroker': No such file or directory. But if I try to run the container manually with kubectl run -ti rocketmq-broker --image=myrepo/rocketmq:4.9.3-alpine --restart=Never -- sh mqbroker -n localhost:9876, it works...
What could it be the problem in the yaml? Am I making something wrong?
I think the problem is with the mount path:
            - name: broker-store
              mountPath: /home/rocketmq
Mounting a volume at /home/rocketmq hides the files the image ships at that path, so the broker binaries won't be there, hence the error.
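A sketch of one possible fix under that assumption: mount the store volume at a subdirectory instead, so the files shipped in the image stay visible (the subdirectory name store below is an assumption, not taken from the RocketMQ docs):
          volumeMounts:
            - name: broker-log
              mountPath: /var/log
            - name: broker-store
              # Mounting at an assumed subdirectory keeps the binaries that the
              # image ships under /home/rocketmq visible to the container.
              mountPath: /home/rocketmq/store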

elasticsearch.yml is read-only when loaded using Kubernetes ConfigMap

I am trying to load elasticsearch.yml file using ConfigMap while installing ElasticSearch using Kubernetes.
kubectl create configmap elastic-config --from-file=./elasticsearch.yml
The elasticsearch.yml file is loaded into the container with root as its owner and read-only permissions (https://github.com/kubernetes/kubernetes/issues/62099). Since Elasticsearch will not start with root ownership, the pod crashes.
As a work-around, I tried to mount the ConfigMap to a different file and then copy it to the config directory using an initContainer. However, the file in the config directory does not seem to be updated.
Is there anything that I am missing or is there any other way to accomplish this?
ElasticSearch Kubernetes StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  labels:
    app: elasticservice
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: elastic-config-vol
              mountPath: /tmp/elasticsearch
            - name: elastic-storage
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: docker-elastic
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.zen.ping.unicast.hosts
              value: "elastic-service"
            - name: discovery.zen.minimum_master_nodes
              value: "1"
            - name: node.master
              value: "true"
            - name: node.data
              value: "true"
            - name: ES_JAVA_OPTS
              value: "-Xmx256m -Xms256m"
      volumes:
        - name: elastic-config-vol
          configMap:
            name: elastic-config
            items:
              - key: elasticsearch.yml
                path: elasticsearch.yml
        - name: elastic-config-dir
          emptyDir: {}
        - name: elastic-storage
          emptyDir: {}
      initContainers:
        # elasticsearch will not run as non-root user, fix permissions
        - name: fix-vol-permission
          image: busybox
          command:
            - sh
            - -c
            - chown -R 1000:1000 /usr/share/elasticsearch/data
          securityContext:
            privileged: true
          volumeMounts:
            - name: elastic-storage
              mountPath: /usr/share/elasticsearch/data
        - name: fix-config-vol-permission
          image: busybox
          command:
            - sh
            - -c
            - cp /tmp/elasticsearch/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
          securityContext:
            privileged: true
          volumeMounts:
            - name: elastic-config-dir
              mountPath: /usr/share/elasticsearch/config
            - name: elastic-config-vol
              mountPath: /tmp/elasticsearch
        # increase default vm.max_map_count to 262144
        - name: increase-vm-max-map-count
          image: busybox
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
        - name: increase-the-ulimit
          image: busybox
          command:
            - sh
            - -c
            - ulimit -n 65536
          securityContext:
            privileged: true
I use:
...
volumeMounts:
  - name: config
    mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
    subPath: elasticsearch.yml
volumes:
  - name: config
    configMap:
      name: es-configmap
without any permissions problem, but you can set permissions with defaultMode
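For reference, a minimal sketch of setting permissions with defaultMode on that ConfigMap volume (the mode 0644 here is just an illustrative value, not something from the original setup):
volumes:
  - name: config
    configMap:
      name: es-configmap
      # 0644 makes the projected elasticsearch.yml readable by the
      # elasticsearch user; adjust to whatever your policy requires.
      defaultMode: 0644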

pod stuck in pending state after kubectl apply?

My pod stays in the Pending state after kubectl apply. I am currently trying to deploy 3 services: a Postgres database, an API server, and the UI of the application. The Postgres pod is running fine but the remaining 2 services are stuck in the Pending state.
I tried creating YAML files like this.
API server persistent volume:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: api-initdb-pv-volume
  labels:
    type: local
    app: api
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/vignesh/pagedesigneryamls/api"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: api-initdb-pv-claim-one
  labels:
    app: api
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
API server:
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  ports:
    - name: apiport
      port: 8000
      targetPort: 8000
  selector:
    app: apiserver
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  selector:
    matchLabels:
      app: apiserver
      tier: backend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: apiserver
        tier: backend
    spec:
      containers:
        - image: suji165475/devops-sample:wootz-backend
          name: apiserver
          ports:
            - containerPort: 8000
              name: myport
          volumeMounts:
            - name: api-persistent-storage-one
              mountPath: /usr/src/app
            - name: api-persistent-storage-two
              mountPath: /usr/src/app/node_modules
      volumes:
        - name: api-persistent-storage-one
          persistentVolumeClaim:
            claimName: api-initdb-pv-claim-one
        - name: api-persistent-storage-two
          persistentVolumeClaim:
            claimName: api-initdb-pv-claim-two
docker-compose file (just for reference):
version: "3"
services:
pg_db:
image: postgres
networks:
- wootzinternal
ports:
- 5432
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_USER=postgres
- POSTGRES_DB=wootz
volumes:
- wootz-db:/var/lib/postgresql/data
apiserver:
image: wootz-backend
volumes:
- ./api:/usr/src/app
- /usr/src/app/node_modules
build:
context: ./api
dockerfile: Dockerfile
networks:
- wootzinternal
depends_on:
- pg_db
ports:
- '8000:8000'
ui:
image: wootz-frontend
volumes:
- ./client:/usr/src/app
- /usr/src/app/node_modules
build:
context: ./client
dockerfile: Dockerfile
networks:
- wootzinternal
ports:
- '80:3000'
volumes:
wootz-db:
networks:
wootzinternal:
driver: bridge
When I tried kubectl apply on the API server YAML file, the pod for the API server was stuck in the Pending state forever. How do I solve this?
For future questions, if you need more information on what is happening, you should use kubectl describe pod <pod_name>, as this gives you (and us) more information and increases the chance of a proper answer. I used your YAML and, after describing the pod, got:
persistentvolumeclaim "api-initdb-pv-claim-two" not found
After adding the second one:
pod has unbound PersistentVolumeClaims (repeated 3 times)
After you add the second PV it should start working.
You have two PersistentVolumeClaims and only one PersistentVolume. You can't bind two PVCs to one PV, so in this case you need to add another PV and another PVC to the manifests.
You can read more about it here.
A PersistentVolume (PV) is an atomic abstraction. You cannot subdivide it across multiple claims.
More information about Persistent Volumes and how they work can be found in the official documentation.
Also, if you are trying to deploy PostgreSQL, here is a good guide on how to do that. And another one which will be easier, as it uses a managed Kubernetes service: how to run HA PostgreSQL on GKE.
The api-initdb-pv-claim-two PVC doesn't exist.
You need to create the PVs and bind them, one PVC each.
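For illustration, a sketch of the missing pair, mirroring your first PV/PVC (the PV name api-initdb-pv-volume-two and its host path are assumptions; only the claim name api-initdb-pv-claim-two comes from your Deployment):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: api-initdb-pv-volume-two
  labels:
    type: local
    app: api
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    # Assumed path; point it at the directory you want exposed as node_modules.
    path: "/home/vignesh/pagedesigneryamls/api-node-modules"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: api-initdb-pv-claim-two
  labels:
    app: api
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi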

I need information about docker-compose.yml - how to configure/export to kubernetes ingress

I'm building an application written in PHP/Symfony4. I have prepared an API service and some services written in NodeJS/Express.
I'm configuring the server structure on Google Cloud Platform. The best idea, for now, is a multi-zone, multi-cluster configuration with a load balancer.
I was using this link https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress/tree/master/examples/zone-printer as a source for my configuration. But now I don't know how to build/upload the docker-compose.yml to GCR so it can be used in Google Kubernetes Engine.
version: '3'
services:
  php:
    image: gcr.io/XXX/php
    build: build/php
    expose:
      - '9000'
    volumes:
      - ./symfony:/var/www/html/symfony:cached
      - ./logs:/var/log
  web:
    image: gcr.io/XXX/nginx
    build: build/nginx
    restart: always
    ports:
      - "81:80"
    depends_on:
      - php
    volumes:
      - ./symfony:/var/www/html/symfony:cached
      - ./logs:/var/log/nginx
I need to have a single container GCR.io/XXX/XXX/XXX for kubernetes-ingress configuration. Should I use docker-compose.yml or find something else? Which solution will be best?
docker-compose and Kubernetes declarations are not compatible with each other. If you want to use Kubernetes you can use a Pod with 2 containers (according to your example). If you want to take it a step further, you can use a Kubernetes Deployment that can manage your pod replicas, in case you are using multiple replicas.
Something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: php
          image: gcr.io/XXX/php
          ports:
            - containerPort: 9000
          volumeMounts:
            - mountPath: /var/www/html/symfony
              name: symphony
            - mountPath: /var/log
              name: logs
        - name: web
          image: gcr.io/XXX/nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html/symfony
              name: symphony
            - mountPath: /var/log
              name: logs
      volumes:
        - name: symphony
          hostPath:
            path: /home/symphony
        - name: logs
          hostPath:
            path: /home/logs
Going even further, you can remove your web container and use the NGINX ingress controller. More about Kubernetes Ingresses here.
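For illustration, a minimal Ingress sketch in front of such a Deployment, assuming a Service named myapp-service exposing port 80 and a hypothetical host myapp.example.com:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
    # Hypothetical host; replace with your real domain.
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # Assumed Service pointing at the my-deployment pods.
                name: myapp-service
                port:
                  number: 80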

How to share a volume between containers in Kubernetes?

For example, running a Drupal app using nginx:stable-alpine + drupal:8.6-fpm-alpine:
The nginx container needs to share /var/www/html from the drupal container to deliver static content.
The drupal container should persist or mount the site data at /var/www/html/sites from external storage, using a volume such as a GCP persistent disk.
In this case, the local docker-compose.yml is below.
version: "3"
volumes:
www-data:
services:
drupal:
image: "drupal:8.6-fpm-alpine"
volumes:
- "www-data:/var/www/html"
- "./sites:/var/www/html/sites"
# ...
nginx;
image: "nginx:stable-alpine"
depends_on:
- drupal
volumes:
- "www-data:/var/www/html"
# ...
# ...
How do I translate this to a k8s deployment.yml?
(EDIT) I tried the following and it didn't work.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydrupal
  labels:
    app.kubernetes.io/name: mydrupal
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: mydrupal
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mydrupal
    spec:
      volumes:
        - name: drupal-data
          persistentVolumeClaim:
            claimName: "drupal-pvc"
      # keep default files for the drupal installer, and chown.
      initContainers:
        - name: init-drupal-data
          image: drupal:8.6-fpm-alpine
          imagePullPolicy: IfNotPresent
          command: ['sh', '-c']
          args: ['cp -r -u /var/www/html/sites/* /tmp/drupal; chown -R www-data:www-data /tmp/drupal']
          volumeMounts:
            - name: drupal-data
              mountPath: /tmp/drupal
              subPath: sites
      securityContext:
        # www-data
        fsGroup: 33
      containers:
        - name: drupal
          image: drupal:8.6-fpm-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            # I want to share this directory with the nginx container.
            - name: drupal-data
              mountPath: /var/www/html
            # I want to persist this directory using external managed storage.
            - name: drupal-data
              mountPath: /var/www/html/sites
              subPath: sites
          resources:
            limits:
              cpu: 800m
              memory: 512Mi
            requests:
              cpu: 200m
              memory: 256Mi
        - name: nginx
          image: nginx:1.14-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: drupal-data
              mountPath: /usr/share/nginx/html
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 120
          readinessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 30
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 256Mi
I have read the volume, PV, and PVC docs, but have not found any solution for how to expose a directory inside a container as a volume.
Any ideas?
Take a look at hostPath, which allows you to use a local folder on the node as persistent storage: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
For other types of PV/PVC there is some learning curve, and the configuration differs based on how your Kubernetes cluster was set up.
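For illustration, a minimal sketch of that suggestion applied to the Deployment above, replacing the PVC with a hostPath volume (the node path /data/drupal-sites is an assumption):
      volumes:
        - name: drupal-data
          hostPath:
            # Assumed directory on the node; DirectoryOrCreate creates it if
            # missing. Data only persists on that particular node.
            path: /data/drupal-sites
            type: DirectoryOrCreate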
