I have a docker-compose.yaml file that has the following content:
keycloak:
image: jboss/keycloak:11.0.2
container_name: keycloak
environment:
DB_VENDOR: POSTGRES
DB_ADDR: postgres
DB_DATABASE: keycloak
DB_USER: keycloak
DB_PASSWORD: password
KEYCLOAK_USER: admin
KEYCLOAK_PASSWORD: password
PROXY_ADDRESS_FORWARDING: "true"
TZ: UTC
KEYCLOAK_DEFAULT_THEME: theme-minimal
KEYCLOAK_WELCOME_THEME: theme-minimal
#KEYCLOAK_LOGLEVEL: DEBUG
ports:
- 8088:8080
command:
- "-Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir -Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
# - "-Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/opt/jboss/keycloak/export-dir -Dkeycloak.migration.usersPerFile=1000 -Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
volumes:
- ./_resources/demo-config/standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
- ./_resources/demo-config/import-dir:/opt/jboss/keycloak/import-dir
- ./_resources/demo-config/export-dir:/opt/jboss/keycloak/export-dir
#- ./theme-minimal/src/main/resources/theme/theme-minimal:/opt/jboss/keycloak/themes/theme-minimal
- ./theme-minimal/target/theme-minimal-0.0.1-SNAPSHOT.jar:/opt/jboss/keycloak/standalone/deployments/theme-minimal-0.0.1-SNAPSHOT.jar
- ./provider-domain/target/provider-domain-0.0.1-SNAPSHOT.jar:/opt/jboss/keycloak/standalone/deployments/provider-domain-0.0.1-SNAPSHOT.jar
- ./spi-registration-profile/target/spi-registration-profile-0.0.1-SNAPSHOT.jar:/opt/jboss/keycloak/standalone/deployments/spi-registration-profile-0.0.1-SNAPSHOT.jar
- ./spi-resource/target/spi-resource-0.0.1-SNAPSHOT.jar:/opt/jboss/keycloak/standalone/deployments/spi-resource-0.0.1-SNAPSHOT.jar
- ./spi-event-listener/target/spi-event-listener-0.0.1-SNAPSHOT.jar:/opt/jboss/keycloak/standalone/deployments/spi-event-listener-0.0.1-SNAPSHOT.jar
- ./spi-mail-template-override/target/spi-mail-template-override-0.0.1-SNAPSHOT.jar:/opt/jboss/keycloak/standalone/deployments/spi-mail-template-override-0.0.1-SNAPSHOT.jar
Now I would like to deploy Keycloak on Kubernetes, but I do not know how to bind and provide volumes with content in Kubernetes the way I do above in Docker.
I read the documentation on how to create storage in Kubernetes, but it does not explain how to provide the storage with content.
My Kubernetes cluster is managed by Digital Ocean.
If your files are on your node, you can use a hostPath volume. You will need the following fields in your pod manifest:
volumeMounts:
- mountPath: /<directory_with_files>
name: volume
volumes:
- name: volume
hostPath:
# directory location on host
path: /data
# this field is optional
type: Directory
mountPath is the path within the container at which the volume should be mounted.
path under the hostPath field is the path of the directory on the host. If the path is a symlink, it will follow the link to the real path.
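Put together as a complete Pod manifest, a minimal sketch might look like this (the image and paths are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: keycloak
spec:
  containers:
  - name: keycloak
    image: jboss/keycloak:11.0.2
    volumeMounts:
    - mountPath: /opt/jboss/keycloak/import-dir
      name: volume
  volumes:
  - name: volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
Keep in mind that hostPath ties the Pod to whatever happens to be on the node it lands on, which is fragile on a managed cluster such as DigitalOcean's.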
Another option for DigitalOcean might be to use Block Storage Volumes. You can follow the official documentation on how to add volumes. First of all, you will need to define a PersistentVolumeClaim object:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: csi-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: do-block-storage
After the PVC is created you can reference it in your pod manifest, for example:
volumeMounts:
- mountPath: "/data" #defines where it should be mounted
name: my-do-volume
volumes:
- name: my-do-volume
persistentVolumeClaim:
claimName: csi-pvc
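Note that neither option puts content into the volume by itself. For configuration files such as standalone-ha.xml, the usual way to provide content is a ConfigMap created from your local file and mounted at the target path via subPath. A minimal sketch, assuming the repository layout from the compose file (the ConfigMap and volume names are illustrative):
kubectl create configmap keycloak-standalone-ha \
  --from-file standalone-ha.xml=./_resources/demo-config/standalone-ha.xml
volumeMounts:
- mountPath: /opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
  name: keycloak-config
  subPath: standalone-ha.xml
volumes:
- name: keycloak-config
  configMap:
    name: keycloak-standalone-ha
The import directory can be provided the same way by pointing kubectl create configmap --from-file at the whole directory (subject to the 1 MiB ConfigMap size limit), while writable content such as the export directory belongs on a PVC, populated for example by an initContainer or kubectl cp.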
I have an nginx web server deployed with docker swarm, and I want to be able to deploy it with Kubernetes as well.
Right now I'm having trouble inserting the nginx configuration files into the container.
I'll first post here what I already do in docker swarm, and then what I tried in kubernetes.
Dockerfile:
FROM "nginx:1.19.1-alpine" AS nginx
[...]
RUN \
rm -fv /etc/nginx/nginx.conf && \
ln -svT /usr/local/etc/nginx/nginx.conf \
/etc/nginx/nginx.conf && \
rm -frv /etc/nginx/conf.d && \
ln -svT /usr/local/etc/nginx/conf.d \
/etc/nginx/conf.d
[...]
Basically I set up the image so that I can place my custom nginx config files into /usr/local/etc/ instead of /etc/.
Docker swarm:
docker-compose.yaml:
configs:
nginx_conf:
file: /run/configs/www/etc/nginx/nginx.conf
nginx_conf_security-parameters:
file: /run/configs/www/etc/nginx/conf.d/security-parameters.conf
nginx_conf_server:
file: /run/configs/www/etc/nginx/conf.d/server.conf
networks:
alejandro-colomar:
services:
www:
configs:
-
mode: 0440
source: nginx_conf
target: /usr/local/etc/nginx/nginx.conf
-
mode: 0440
source: nginx_conf_security-parameters
target: /usr/local/etc/nginx/conf.d/security-parameters.conf
-
mode: 0440
source: nginx_conf_server
target: /usr/local/etc/nginx/conf.d/server.conf
deploy:
placement:
constraints:
- node.role == worker
replicas: 1
restart_policy:
condition: any
image: "alejandrocolomar/www:0.16-a6-kube"
networks:
-
"alejandro-colomar"
ports:
- "32001:8080"
version: "3.8"
Here I take the custom nginx config files, which I first put in /run/configs/ with a script so that they are in RAM, and introduce them into the container as configs in the right place (/usr/local/etc/nginx/ and its subdirectory conf.d/).
I'd like to do the same in Kubernetes, and I read that I should use a projected volume for that (if it's doable with normal volumes or in some other way, without dirty workarounds, I'm open to that too), so I tried the following after seeing some examples that weren't very clear to me:
config.sh:
kubectl create configmap "nginx-conf-cm" \
--from-file "/run/configs/www/etc/nginx/nginx.conf"
kubectl create configmap "nginx-conf-security-parameters-cm" \
--from-file "/run/configs/www/etc/nginx/conf.d/security-parameters.conf"
kubectl create configmap "nginx-conf-server-cm" \
--from-file "/run/configs/www/etc/nginx/conf.d/server.conf"
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: www-deploy
spec:
replicas: 1
selector:
matchLabels:
service: www-svc
template:
metadata:
labels:
service: www-svc
spec:
containers:
-
image: "alejandrocolomar/www:0.16-a6-kube"
name: www-container
volumeMounts:
-
mountPath: /usr/local/etc/nginx/
name: nginx-volume
readOnly: true
volumes:
-
name: nginx-volume
projected:
sources:
-
configMap:
name: nginx-conf-cm
path: "nginx.conf"
-
configMap:
name: nginx-conf-security-parameters-cm
path: "conf.d/security-parameters.conf"
-
configMap:
name: nginx-conf-server-cm
path: "conf.d/server.conf"
service.yaml (included here only for completeness):
apiVersion: v1
kind: Service
metadata:
name: www
spec:
ports:
-
nodePort: 32001
port: 8080
selector:
service: www-svc
type: NodePort
The deployment failed, of course, but I don't think it was far off. When I entered the container to debug it, the problem was that the three files were placed directly into /usr/local/etc/nginx/, and the conf.d/ subdirectory was not created:
/usr/local/etc/nginx/nginx.conf
/usr/local/etc/nginx/security-parameters.conf
/usr/local/etc/nginx/server.conf
How should I fix that (presumably in deployment.yaml) so that I end up with the following files in the container?
/usr/local/etc/nginx/nginx.conf
/usr/local/etc/nginx/conf.d/security-parameters.conf
/usr/local/etc/nginx/conf.d/server.conf
There are multiple ways to handle this. One solution is to create three separate volumes from these individual ConfigMaps and mount each one at its appropriate destination file.
volumes:
- name: nginx-cm
configMap:
name: nginx-conf-cm
- name: nginx-sec
configMap:
name: nginx-conf-security-parameters-cm
- name: nginx-serv
configMap:
name: nginx-conf-server-cm
...
containers:
-
image: "alejandrocolomar/www:0.16-a6-kube"
name: www-container
volumeMounts:
- mountPath: /usr/local/etc/nginx/nginx.conf
name: nginx-cm
subPath: nginx.conf
readOnly: true
- mountPath: /usr/local/etc/nginx/conf.d/security-parameters.conf
name: nginx-sec
subPath: security-parameters.conf
readOnly: true
- mountPath: /usr/local/etc/nginx/conf.d/server.conf
name: nginx-serv
subPath: server.conf
readOnly: true
Using mountPath together with subPath in volumeMounts allows you to pick a specific file from a given ConfigMap (it doesn't matter much here, since you have one file per ConfigMap) and mount it as a single file, without overriding the other content in the existing folder.
To explain the code above a bit:
- mountPath: /usr/local/etc/nginx/nginx.conf
name: nginx-cm
subPath: nginx.conf
tells Kubernetes to use the volume named nginx-cm (defined in the volumes section), pick the file nginx.conf (via subPath) from the associated ConfigMap, and expose it in the container (mountPath) at /usr/local/etc/nginx/nginx.conf.
I have not run the code, so there might be typos.
PS: note that it might be better to create a custom configuration file inside ../etc/nginx/conf.d/ instead of overriding ../etc/nginx/nginx.conf. That way you wouldn't need to worry about destroying the original files in ../etc/nginx/, and you could mount the whole volume instead of using subPath (which also avoids issues with config updates, since subPath mounts do not pick up ConfigMap changes).
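A minimal sketch of that alternative, assuming both conf.d files are combined into a single ConfigMap created from the directory (the names are illustrative):
kubectl create configmap "nginx-confd-cm" \
  --from-file "/run/configs/www/etc/nginx/conf.d/"
volumeMounts:
- mountPath: /usr/local/etc/nginx/conf.d/
  name: nginx-confd
  readOnly: true
volumes:
- name: nginx-confd
  configMap:
    name: nginx-confd-cm
Since this mounts the whole volume rather than individual subPath files, updates to the ConfigMap are eventually reflected inside the running container.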
I am converting a docker-compose file to Kubernetes using Kompose, running the following command:
$kompose convert -f docker-compose.yml -o kubernetes_image.yaml
After the command finishes, the output is the following:
WARN Volume mount on the host "/usr/docker/adpater/dbdata" isn't supported - ignoring path on the host
INFO Network integration is detected at Source, shall be converted to equivalent NetworkPolicy at Destination
WARN Volume mount on the host "/usr/docker/adpater/license.json" isn't supported - ignoring path on the host
WARN Volume mount on the host "/usr/docker/adpater/certificates/ssl.crt" isn't supported - ignoring path on the host
WARN Volume mount on the host "/usr/docker/adpater/certificates/ssl.key" isn't supported - ignoring path on the host
WARN Volume mount on the host "/usr/docker/adpater/server.xml" isn't supported - ignoring path on the host
INFO Network integration is detected at Source, shall be converted to equivalent NetworkPolicy at Destination
To push the converted file to Kubernetes I run the following command:
$kubectl apply -f kubernetes_image.yaml
Checking the pods afterwards shows:
NAME READY STATUS RESTARTS AGE
mysql-557dd849c8-bsdq7 1/1 Running 1 17h
tomcat-7cd65d4556-spjbl 0/1 CrashLoopBackOff 76 18h
If I run:
$ kubectl describe pod tomcat-7cd65d4556-spjbl
I get the following message:
Last State: Terminated
Reason: ContainerCannotRun
Message: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/usr/docker/adapter/server.xml\\\" to rootfs \\\"/var/lib/docker/overlay2/a6df90a0ef4cbe8b2a3fa5352be5f304cd7b648fb1381492308f0a7fceb931cc/merged\\\" at \\\"/var/lib/docker/overlay2/a6df90a0ef4cbe8b2a3fa5352be5f304cd7b648fb1381492308f0a7fceb931cc/merged/usr/local/tomcat/conf/server.xml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Exit Code: 127
Started: Sun, 31 May 2020 13:35:00 +0100
Finished: Sun, 31 May 2020 13:35:00 +0100
Ready: False
Restart Count: 75
Environment: <none>
Mounts:
/run/secrets/rji_license.json from tomcat-hostpath0 (rw)
/usr/local/tomcat/conf/server.xml from tomcat-hostpath3 (rw)
/usr/local/tomcat/conf/ssl.crt from tomcat-hostpath1 (rw)
/usr/local/tomcat/conf/ssl.key from tomcat-hostpath2 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8dhnk (ro)
This is my docker-compose.yml file:
version: '3.6'
networks:
integration:
services:
mysql:
environment:
MYSQL_USER: 'integrationdb'
MYSQL_PASSWORD: 'password'
MYSQL_ROOT_PASSWORD: 'password'
image: db:poc
networks:
- integration
ports:
- '3306:3306'
restart: always
volumes:
- ./dbdata:/var/lib/mysql
tomcat:
image: adapter:poc
networks:
- integration
ports:
- '8080:8080'
- '8443:8443'
restart: always
volumes:
- ./license.json:/run/secrets/rji_license.json
- ./certificates/ssl.crt:/usr/local/tomcat/conf/ssl.crt
- ./certificates/ssl.key:/usr/local/tomcat/conf/ssl.key
- ./server.xml:/usr/local/tomcat/conf/server.xml
Versions of the tools:
kompose: 1.21.0 (992df58d8)
docker: 19.03.9
kubectl: Major:"1", Minor:"18"
I think my challenge here is with this type of volumes or files. I don't know how to migrate or convert them to Kubernetes and get the Tomcat pod running fine.
Could someone give me a hand?
volumes:
- ./license.json:/run/secrets/rji_license.json
- ./certificates/ssl.crt:/usr/local/tomcat/conf/ssl.crt
- ./certificates/ssl.key:/usr/local/tomcat/conf/ssl.key
- ./server.xml:/usr/local/tomcat/conf/server.xml
Thanks in advance.
When Kompose warns you:
WARN Volume mount on the host "/usr/docker/adpater/license.json" isn't supported - ignoring path on the host
It means that it can't translate this fragment of the docker-compose.yml file into Kubernetes syntax:
volumes:
- ./license.json:/run/secrets/rji_license.json
In native Kubernetes, you'd need to provide this content in ConfigMap or Secret objects, and then mount the file into the pod. You can't directly access content on the system from which you're launching the containers.
You can't really get around directly working with the Kubernetes YAML files here. You could run kompose convert to generate the skeleton files, but then you'll need to edit those to add the ConfigMaps, PersistentVolumeClaims (for the database storage), and the relevant volume and mount declarations, and then run kubectl apply -f to actually deploy them. I'd check the Kubernetes YAML files into source control and maintain them in parallel with your Docker Compose setup.
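A minimal sketch of that manual approach for the tomcat service (the object names are illustrative; the license and certificate files fit Secrets, server.xml a ConfigMap):
kubectl create secret generic rji-license \
  --from-file rji_license.json=./license.json
kubectl create secret generic tomcat-certs \
  --from-file ./certificates/ssl.crt --from-file ./certificates/ssl.key
kubectl create configmap tomcat-server-xml \
  --from-file server.xml=./server.xml
Then, in the tomcat Deployment, replace the generated hostPath volumes with:
volumeMounts:
- mountPath: /run/secrets/rji_license.json
  name: rji-license
  subPath: rji_license.json
- mountPath: /usr/local/tomcat/conf/ssl.crt
  name: tomcat-certs
  subPath: ssl.crt
- mountPath: /usr/local/tomcat/conf/ssl.key
  name: tomcat-certs
  subPath: ssl.key
- mountPath: /usr/local/tomcat/conf/server.xml
  name: server-xml
  subPath: server.xml
volumes:
- name: rji-license
  secret:
    secretName: rji-license
- name: tomcat-certs
  secret:
    secretName: tomcat-certs
- name: server-xml
  configMap:
    name: tomcat-server-xml
This likely also explains the "not a directory" error above: the host path for server.xml did not exist on the node, so the runtime created it as a directory and then failed to mount it onto a file.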
Move2Kube (which also supports docker-compose translation) can handle this case and converts the volumes by interacting with you:
? 6. [] What type of container registry login do you want to use?
Hints:
[Docker login from config mode, will use the default config from your local machine.]
No authentication
? 7. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/dbdata]?:
Hints:
[Use PVC for persistent storage wherever applicable]
Yes
? 8. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/license.json]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 9. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.crt]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 10. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.key]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 11. Do you want to create PVC for host path [/Users/ashok/wksps/hc/temp/test2/src/server.xml]?:
Hints:
[Use PVC for persistent storage wherever applicable]
No
? 12. Which storage class to use for persistent volume claim [vol17655897939759777588] used by [mysql]
Hints:
[If you have a custom cluster, you can use collect to get storage classes from it.]
default
? 13. Provide the ingress host domain
Hints:
[Ingress host domain is part of service URL]
myproject.com
? 14. Provide the TLS secret for ingress
Hints:
[Enter TLS secret name]
If the above choices are made, Move2Kube creates the following artifacts:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
move2kube.konveyor.io/service.expose: "true"
creationTimestamp: null
labels:
move2kube.konveyor.io/network/integration: "true"
move2kube.konveyor.io/service: tomcat
name: tomcat
spec:
replicas: 2
selector:
matchLabels:
move2kube.konveyor.io/service: tomcat
strategy: {}
template:
metadata:
annotations:
move2kube.konveyor.io/service.expose: "true"
creationTimestamp: null
labels:
move2kube.konveyor.io/network/integration: "true"
move2kube.konveyor.io/service: tomcat
name: tomcat
spec:
containers:
- image: adapter:poc
imagePullPolicy: Always
name: tomcat
ports:
- containerPort: 8080
protocol: TCP
- containerPort: 8443
protocol: TCP
resources: {}
volumeMounts:
- mountPath: /run/secrets/rji_license.json
name: vol16871681589659214643
- mountPath: /usr/local/tomcat/conf/ssl.crt
name: vol12635587774184387470
- mountPath: /usr/local/tomcat/conf/ssl.key
name: vol7446232639477381794
- mountPath: /usr/local/tomcat/conf/server.xml
name: vol4920239289720818926
restartPolicy: Always
volumes:
- hostPath:
path: /Users/ashok/wksps/hc/temp/test2/src/license.json
name: vol16871681589659214643
- hostPath:
path: /Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.crt
name: vol12635587774184387470
- hostPath:
path: /Users/ashok/wksps/hc/temp/test2/src/certificates/ssl.key
name: vol7446232639477381794
- hostPath:
path: /Users/ashok/wksps/hc/temp/test2/src/server.xml
name: vol4920239289720818926
status: {}
and
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
move2kube.konveyor.io/service.expose: "true"
creationTimestamp: null
labels:
move2kube.konveyor.io/network/integration: "true"
move2kube.konveyor.io/service: mysql
name: mysql
spec:
replicas: 2
selector:
matchLabels:
move2kube.konveyor.io/service: mysql
strategy: {}
template:
metadata:
annotations:
move2kube.konveyor.io/service.expose: "true"
creationTimestamp: null
labels:
move2kube.konveyor.io/network/integration: "true"
move2kube.konveyor.io/service: mysql
name: mysql
spec:
containers:
- env:
- name: MYSQL_USER
value: integrationdb
- name: MYSQL_PASSWORD
value: password
- name: MYSQL_ROOT_PASSWORD
value: password
image: db:poc
imagePullPolicy: Always
name: mysql
ports:
- containerPort: 3306
protocol: TCP
resources: {}
volumeMounts:
- mountPath: /var/lib/mysql
name: vol17655897939759777588
restartPolicy: Always
volumes:
- name: vol17655897939759777588
persistentVolumeClaim:
claimName: vol17655897939759777588
status: {}
and
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
name: vol17655897939759777588
spec:
resources:
requests:
storage: 100Mi
storageClassName: default
volumeName: vol17655897939759777588
status: {}
Essentially, depending on your choices, Move2Kube will create the appropriate artifacts for you.
You can check out how it works at https://konveyor.github.io/move2kube/tutorials/docker-compose/.
My pod stays in Pending state after kubectl apply. I am currently trying to deploy three services: a Postgres database, an API server, and the UI of the application. The Postgres pod is running fine, but the remaining two services are stuck in Pending state.
I tried creating YAML files like this.
API server persistent volume:
kind: PersistentVolume
apiVersion: v1
metadata:
name: api-initdb-pv-volume
labels:
type: local
app: api
spec:
storageClassName: manual
capacity:
storage: 1Mi
accessModes:
- ReadOnlyMany
hostPath:
path: "/home/vignesh/pagedesigneryamls/api"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: api-initdb-pv-claim-one
labels:
app: api
spec:
storageClassName: manual
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1Mi
API server:
apiVersion: v1
kind: Service
metadata:
name: apiserver
labels:
app: apiserver
spec:
ports:
- name: apiport
port: 8000
targetPort: 8000
selector:
app: apiserver
tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: apiserver
labels:
app: apiserver
spec:
selector:
matchLabels:
app: apiserver
tier: backend
strategy:
type: Recreate
template:
metadata:
labels:
app: apiserver
tier: backend
spec:
containers:
- image: suji165475/devops-sample:wootz-backend
name: apiserver
ports:
- containerPort: 8000
name: myport
volumeMounts:
- name: api-persistent-storage-one
mountPath: /usr/src/app
- name: api-persistent-storage-two
mountPath: /usr/src/app/node_modules
volumes:
- name: api-persistent-storage-one
persistentVolumeClaim:
claimName: api-initdb-pv-claim-one
- name: api-persistent-storage-two
persistentVolumeClaim:
claimName: api-initdb-pv-claim-two
docker-compose file (just for reference):
version: "3"
services:
pg_db:
image: postgres
networks:
- wootzinternal
ports:
- 5432
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_USER=postgres
- POSTGRES_DB=wootz
volumes:
- wootz-db:/var/lib/postgresql/data
apiserver:
image: wootz-backend
volumes:
- ./api:/usr/src/app
- /usr/src/app/node_modules
build:
context: ./api
dockerfile: Dockerfile
networks:
- wootzinternal
depends_on:
- pg_db
ports:
- '8000:8000'
ui:
image: wootz-frontend
volumes:
- ./client:/usr/src/app
- /usr/src/app/node_modules
build:
context: ./client
dockerfile: Dockerfile
networks:
- wootzinternal
ports:
- '80:3000'
volumes:
wootz-db:
networks:
wootzinternal:
driver: bridge
When I tried kubectl apply on the API server YAML file, the pod for the API server was stuck in Pending state forever. How do I solve this?
For your future questions: if you need more information on what is happening, you should use kubectl describe pod <pod_name>, as this gives you (and us) more information and increases the chance of a proper answer. I used your YAML, and after describing the pod I got:
persistentvolumeclaim "api-initdb-pv-claim-two" not found
After adding the second PVC:
pod has unbound PersistentVolumeClaims (repeated 3 times)
After you add the second PV, it should start working.
You have two PersistentVolumeClaims and only one PersistentVolume. You can't bind two PVCs to one PV, so in this case you need to add another PV and another PVC to the manifests.
You can read more about it here.
A PersistentVolume (PV) is an atomic abstraction. You can not subdivide it across multiple claims.
More information about Persistent Volumes and how they work can be found in the official documentation.
Also, if you are trying to deploy PostgreSQL, here is a good guide on how to do that, and another one that will be easier as it uses a managed Kubernetes service: how to run HA PostgreSQL on GKE.
The api-initdb-pv-claim-two PVC doesn't exist.
You need to create the PVs and bind each one using its own PVC.
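A minimal sketch of the missing pair, mirroring the existing manifests (the hostPath for the second volume is an assumption):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: api-initdb-pv-volume-two
  labels:
    type: local
    app: api
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/vignesh/pagedesigneryamls/api/node_modules"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: api-initdb-pv-claim-two
  labels:
    app: api
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
With both PVs and both PVCs applied, the scheduler can bind the claims and the Pod should leave the Pending state.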
I am trying to deploy a Redis Sentinel deployment in Kubernetes. I have accomplished that, but I want to use ConfigMaps to allow us to change the IP address of the master in the sentinel.conf file. I started this, but Redis can't write to the config file because the mount point for ConfigMaps is read-only.
I was hoping to run an init container and copy the Redis conf to a different directory just in the pod, but the init container couldn't find the conf file.
What are my options? Init Container? Something other than ConfigMap?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: redis-sentinel
spec:
replicas: 3
template:
metadata:
labels:
app: redis-sentinel
spec:
hostNetwork: true
containers:
- name: redis-sentinel
image: IP/redis-sentinel
ports:
- containerPort: 63790
- containerPort: 26379
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /usr/local/etc/redis/conf
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: sentinel-redis-config
items:
- key: redis-config-sentinel
path: sentinel.conf
Following P Ekambaram's proposal, you can try this one:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: redis-sentinel
spec:
replicas: 3
template:
metadata:
labels:
app: redis-sentinel
spec:
hostNetwork: true
containers:
- name: redis-sentinel
image: redis:5.0.4
ports:
- containerPort: 63790
- containerPort: 26379
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /usr/local/etc/redis/conf
name: config
initContainers:
- name: copy
image: redis:5.0.4
command: ["bash", "-c", "cp /redis-master/redis.conf /redis-master-data/"]
volumeMounts:
- mountPath: /redis-master
name: config
- mountPath: /redis-master-data
name: data
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: example-redis-config
items:
- key: redis-config
path: redis.conf
In this example the initContainer copies the file from the ConfigMap into a writable directory.
Note:
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the Pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each Container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
Create a startup script. In it, copy the ConfigMap file that is mounted in a volume to a writable location, then run the container process.
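A minimal sketch of that approach, overriding the command of the container in the question's manifest (the paths follow its volume mounts; the redis-sentinel invocation is an assumption about the image):
containers:
- name: redis-sentinel
  image: IP/redis-sentinel
  # copy the read-only ConfigMap file into the emptyDir, then start Sentinel on the copy
  command: ["sh", "-c", "cp /usr/local/etc/redis/conf/sentinel.conf /redis-master-data/ && exec redis-sentinel /redis-master-data/sentinel.conf"]
Because /redis-master-data is backed by an emptyDir, the copied sentinel.conf is writable and Sentinel can rewrite it at runtime.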
I'm building an application written in PHP/Symfony4. I have prepared an API service and some services written in NodeJS/Express.
I'm configuring the server structure with Google Cloud Platform. The best idea for now is a multizone, multi-cluster configuration with a load balancer.
I was using this link https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress/tree/master/examples/zone-printer as a source for my configuration, but now I don't know how to build and upload docker-compose.yml to GCR so it can be used in Google Kubernetes Engine.
version: '3'
services:
php:
image: gcr.io/XXX/php
build: build/php
expose:
- '9000'
volumes:
- ./symfony:/var/www/html/symfony:cached
- ./logs:/var/log
web:
image: gcr.io/XXX/nginx
build: build/nginx
restart: always
ports:
- "81:80"
depends_on:
- php
volumes:
- ./symfony:/var/www/html/symfony:cached
- ./logs:/var/log/nginx
I need to have a single container image at GCR.io/XXX/XXX/XXX for the kubernetes-ingress configuration. Should I use docker-compose.yml or find something else? Which solution will be best?
docker-compose and Kubernetes declarations are not compatible with each other. If you want to use Kubernetes, you can use a Pod with two containers (following your example). If you want to take it a step further, you can use a Kubernetes Deployment, which can manage your Pod replicas in case you are running more than one.
Something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
labels:
app: myapp
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: php
image: gcr.io/XXX/php
ports:
- containerPort: 9000
volumeMounts:
- mountPath: /var/www/html/symfony
name: symphony
- mountPath: /var/log
name: logs
- name: web
image: gcr.io/XXX/nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: /var/www/html/symfony
name: symphony
- mountPath: /var/log
name: logs
volumes:
- name: symphony
hostPath:
path: /home/symphony
- name: logs
hostPath:
path: /home/logs
Even further, you can remove your web container and use the nginx ingress controller. More about Kubernetes Ingresses here.
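For example, a minimal Ingress sketch for that setup (the host and the Service name/port are illustrative, and it assumes a Service exposing the Deployment above):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80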