I'm trying to build Docker containers for Laravel with docker-compose.yml.
I have to build a database container for MySQL 5.7, because MySQL 8 cannot be used on the server I connect to.
Here is my docker-compose.yml file:
version: "3"
services:
app:
build:
context: ./docker/php
args:
- TZ=${TZ}
ports:
- ${APP_PORT}:8000
volumes:
- ${PROJECT_PATH}:/work
- ./docker/ash:/etc/profile.d
- ./docker/php/psysh:/root/.config/psysh
- ./logs:/var/log/php
- ./docker/php/php.ini:/usr/local/etc/php/php.ini
working_dir: /work
environment:
- DB_CONNECTION=mysql
- DB_HOST=db
- DB_DATABASE=${DB_NAME}
- DB_USERNAME=${DB_USER}
- DB_PASSWORD=${DB_PASS}
- TZ=${TZ}
- MAIL_HOST=${MAIL_HOST}
- MAIL_PORT=${MAIL_PORT}
- CACHE_DRIVER=redis
- SESSION_DRIVER=redis
- QUEUE_DRIVER=redis
- REDIS_HOST=redis
web:
image: nginx:1.17-alpine
depends_on:
- app
ports:
- ${WEB_PORT}:80
volumes:
- ${PROJECT_PATH}:/work
- ./logs:/var/log/nginx
- ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
environment:
- TZ=${TZ}
db:
image: mysql:5.7
volumes:
- db-store:/var/lib/mysql
- ./logs:/var/log/mysql
- ./docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
environment:
- MYSQL_DATABASE=${DB_NAME}
- MYSQL_USER=${DB_USER}
- MYSQL_PASSWORD=${DB_PASS}
- MYSQL_ROOT_PASSWORD=${DB_PASS}
- TZ=${TZ}
ports:
- ${DB_PORT}:3306
db-testing:
image: mysql:5.7
volumes:
- ./docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
tmpfs:
- /var/lib/mysql
- /var/log/mysql
environment:
- MYSQL_DATABASE=${DB_NAME}
- MYSQL_USER=${DB_USER}
- MYSQL_PASSWORD=${DB_PASS}
- MYSQL_ROOT_PASSWORD=${DB_PASS}
- TZ=${TZ}
ports:
- ${DB_TESTING_PORT}:3306
node:
image: node:12.13-alpine
tty: true
volumes:
- ${PROJECT_PATH}:/work
working_dir: /work
redis:
image: redis:5.0-alpine
volumes:
- redis-store:/data
mail:
image: mailhog/mailhog
ports:
- ${MAILHOG_PORT}:8025
volumes:
db-store:
redis-store:
When I execute "docker-compose build" in the terminal, it finishes successfully, but the db and db-testing containers end up with status "EXIT: 1" or "EXIT: 2".
Could you tell me what's wrong?
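A build can succeed even though a container exits right after starting, so the containers' own logs are usually the first place to look. A minimal check, using the same docker-compose CLI as above:

docker-compose ps
docker-compose logs db
docker-compose logs db-testing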
I'm trying to run AdGuard with Docker Compose. I have created many other containers with Docker Compose, but this one does not create any files in the mapped folder.
I tried to reproduce the docker command from the official instructions, but every time I recreate the container I end up at the setup page and all settings are gone.
Any ideas?
This is my compose file:
version: "3"
volumes:
homematic_data:
external: true
networks:
homematic:
services:
samba:
image: dperson/samba
container_name: samba
restart: always
ports:
- "137:137/udp"
- "138:138/udp"
- "139:139/tcp"
- "445:445/tcp"
healthcheck:
disable: true
environment:
- TZ='Europe/Berlin'
- WORKGROUP=workgroup
- RECYCLE=false
- USER1=pi;PASSWORD;1000
- SHARE1=homematic_docker;/shares/homematic_docker;yes;no;yes;pi;pi
volumes:
- /home/pi:/shares/homematic_docker
networks:
- homematic
promtail:
image: grafana/promtail:latest
container_name: promtail
volumes:
- /var/log:/var/log
- ./promtail:/etc/promtail
restart: unless-stopped
command: -config.file=/etc/promtail/promtail-config.yml
networks:
- homematic
node-exporter:
image: quay.io/prometheus/node-exporter:latest
container_name: node_exporter
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
- /:/host:ro,rslave
command:
- '--path.rootfs=/host'
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- --collector.filesystem.ignored-mount-points
- "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
ports:
- 9100:9100
networks:
- homematic
restart: always
###################### portainer
portainer:
image: portainer/portainer-ce:latest
container_name: portainer
restart: unless-stopped
security_opt:
- no-new-privileges:true
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./portainer:/data
ports:
- 9000:9000
adguard:
image: adguard/adguardhome
container_name: adguard
restart: unless-stopped
ports:
- 53:53/tcp
- 53:53/udp
- 67:67/udp
- 69:68/udp
- 80:80/tcp
- 443:443/tcp
- 443:443/udp
- 3000:3000/tcp
- 853:853/tcp
- 784:784/udp
- 853:853/udp
- 8853:8853/udp
- 5443:5443/tcp
- 5443:5443/udp
# environment:
# - TZ=Europe/Berlin
volumes:
- /home/pi/homematicDocker/adguard/work:/opt/adguardhome/work\
- /home/pi/homematicDocker/adguard/conf:/opt/adguardhome/conf\
# network_mode: host
raspberrymatic:
image: ghcr.io/jens-maus/raspberrymatic:3.67.10.20230117-27abde9
container_name: homematic
hostname: homematic-raspi
privileged: true
restart: unless-stopped
stop_grace_period: 30s
volumes:
- homematic_data:/usr/local:rw
- /lib/modules:/lib/modules:ro
- /run/udev/control:/run/udev/control
ports:
- "8080:80"
- "2001:2001"
- "2010:2010"
- "9292:9292"
- "8181:8181"
networks:
- homematic
Within the folder "/opt/adguardhome/work" I see a data folder with a database inside. After I finished the setup, the conf folder inside the container also contains a YAML file.
Unfortunately, I had copied the backslashes of the docker command into the volume mappings; that was the reason I didn't get any data. Thank you Mike!
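For reference, the fixed mappings are the same two lines with the trailing backslashes removed:

    volumes:
      - /home/pi/homematicDocker/adguard/work:/opt/adguardhome/work
      - /home/pi/homematicDocker/adguard/conf:/opt/adguardhome/conf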
I had a website running on Next.js, Traefik 1.7, and Docker. The website was working all right, but because of an SSL certificate I had to change the Traefik version to 2.4 so I could load the SSL certificate I bought. Since then the website works as before, but images won't load. Can anyone help?
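One difference worth keeping in mind when comparing the two files below: the Traefik 1.7 label traefik.port has no effect in 2.x, where the container port is instead declared on a service that the router points at. A sketch of the mapping (my-app is a placeholder name):

# Traefik 1.7
traefik.port: 3000
# Traefik 2.x equivalent
traefik.http.services.my-app.loadbalancer.server.port=3000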
OLD docker-compose
version: '3'
services:
  loadbalancer:
    restart: unless-stopped
    image: traefik:1.7
    command: --docker
    ports:
      - "80:80"
      - "443:443"
      - "3000:3000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/localtime:/etc/localtime:ro
      - ./acme.json:/acme.json:rw
      - ./traefik.toml:/traefik.toml:rw
      - ./certs:/certs:rw
    command:
      - --debug=false
      - --logLevel=ERROR
      - --defaultentrypoints=https,http
      - "--entryPoints=Name:http Address::80"
      - "--entryPoints=Name:https Address::443 TLS"
      - --docker.endpoint=unix:///var/run/docker.sock
      - --docker.watch=true
      - --docker.exposedbydefault=false
      - --acme.email=admin@ssupat.sk
      - --acme.storage=acme.json
      - --acme.entryPoint=https
      - --acme.onHostRule=true
      - --acme.httpchallenge.entrypoint=https
    security_opt:
      - no-new-privileges:true
    networks:
      - ssupat
  cms-postgresql:
    restart: unless-stopped
    image: 'bitnami/postgresql:latest'
    environment:
      - POSTGRESQL_USERNAME=ssupat_user
      - POSTGRESQL_PASSWORD=password
      - POSTGRESQL_DATABASE=ssupat_cms
    ports:
      - '5432'
    networks:
      - ssupat
    volumes:
      - ./db/:/bitnami/postgresql
  ssupat-cms-strapi:
    restart: unless-stopped
    build:
      context: ssupat-cms-strapi/
      dockerfile: Dockerfile
    environment:
      DATABASE_CLIENT: postgres
      DATABASE_NAME: ssupat_cms
      DATABASE_HOST: cms-postgresql
      DATABASE_PORT: 5432
      DATABASE_USERNAME: ssupat_user
      DATABASE_PASSWORD: password
    networks:
      - ssupat
    security_opt:
      - no-new-privileges:true
    volumes:
      - ./app:/srv/app
      - ./public:/public/uploads
    depends_on:
      - "cms-postgresql"
    labels:
      traefik.frontend.rule: 'Host:cms.ssupat.sk'
      traefik.frontend.redirect.regex: ^http?://cms.ssupat.sk/(.*)
      traefik.frontend.redirect.replacement: https://cms.ssupat.sk/$${1}
      traefik.frontend.redirect.permanent: true
      traefik.http.routers.some-name.entryPoints: 'Port:80'
      traefik.http.routers.ssupat-cms-strapi.rule: 'Host:cms.ssupat.sk'
      traefik.http.routers.my-app.tls: true
      traefik.http.routers.my-app.tls.certresolver: 'le-ssl'
      traefik.http.middlewares.test-redirectscheme.redirectscheme.permanent: true
      traefik.enable: true
      traefik.port: 80
      traefik.protocol: http
    security_opt:
      - no-new-privileges:true
  ssupat-web-nextjs:
    restart: unless-stopped
    build:
      context: ssupat-web-nextjs/
      dockerfile: Dockerfile
    networks:
      - ssupat
    depends_on:
      - "ssupat-cms-strapi"
      - "cms-postgresql"
    labels:
      traefik.frontend.rule: 'Host:ssupat.sk,www.ssupat.sk'
      traefik.frontend.redirect.regex: ^http?://ssupat.sk/(.*)
      traefik.frontend.redirect.replacement: https://ssupat.sk/$${1}
      traefik.frontend.redirect.regex: ^http?://www.ssupat.sk/(.*)
      traefik.frontend.redirect.replacement: https://ssupat.sk/$${1}
      traefik.frontend.redirect.permanent: true
      traefik.http.routers.my-app.tls: true
      traefik.http.routers.my-app.tls.certresolver: 'le-ssl'
      traefik.enable: true
      traefik.port: 3000
      traefik.protocol: http
    security_opt:
      - no-new-privileges:true
networks:
  ssupat:
    driver: bridge
NEW docker-compose
version: '3.3'
networks:
  ssupat:
    driver: bridge
#networks:
#  ssupat:
#    external: true
services:
  traefik:
    #image: traefik:2.4
    image: traefik:latest
    container_name: traefik
    volumes:
      - ./certs/traefik-certs/:/etc/traefik/:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - ssupat
    ports:
      - 80:80
      - 443:443
      - 8080:8080
      #- 3000:3000
    command:
      - '--api.insecure=true'
      - '--api.dashboard=true'
      - '--api.debug=true'
      - '--providers.docker=true'
      - '--providers.docker.exposedByDefault=false'
      - '--providers.file=true'
      - '--providers.file.directory=/etc/traefik/'
      - '--entrypoints.http=true'
      - '--providers.docker.network=proxy'
      - '--entrypoints.web.address=:80'
      - '--entrypoints.websecure.address=:443'
      - '--entrypoints.http.http.redirections.entrypoint.to=https'
      - '--entrypoints.http.http.redirections.entrypoint.scheme=https'
      #- '--entrypoints.http.http.redirections.entrypoint.permanent=true'
      - '--entrypoints.https=true'
      - '--log=true'
      - '--log.level=DEBUG'
  cms-postgresql:
    restart: unless-stopped
    image: 'bitnami/postgresql:latest'
    environment:
      - POSTGRESQL_USERNAME=ssupat_user
      - POSTGRESQL_PASSWORD=password
      - POSTGRESQL_DATABASE=ssupat_cms
      #- POSTGRESQL_ENABLE_TLS=yes
      #- POSTGRESQL_TLS_CERT_FILE=/opt/bitnami/postgresql/certs/certs.crt
      #- POSTGRESQL_TLS_KEY_FILE=/opt/bitnami/postgresql/certs/private.key
      #- POSTGRESQL_TLS_CA_FILE=/opt/bitnami/postgresql/certs/ssupat.sk.ca
    ports:
      - '5432'
    networks:
      - ssupat
    volumes:
      - ./db/:/bitnami/postgresql
      #- ./certs/traefik-certs/certs:/opt/bitnami/postgresql/certs
      #- ./pg_hba.conf:/opt/bitnami/postgresql/conf/pg_hba.conf
  ssupat-cms-strapi:
    restart: unless-stopped
    build:
      context: ssupat-cms-strapi/
      dockerfile: Dockerfile
    environment:
      DATABASE_CLIENT: postgres
      DATABASE_NAME: ssupat_cms
      DATABASE_HOST: cms-postgresql
      DATABASE_PORT: 5432
      DATABASE_USERNAME: ssupat_user
      DATABASE_PASSWORD: password
    networks:
      - ssupat
    security_opt:
      - no-new-privileges:true
    volumes:
      - ./app/:/srv/app
      - ./public/:/public/uploads
    depends_on:
      - "cms-postgresql"
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.ssupat-cms-strapi.rule=Host(`cms.ssupat.sk`)'
      - 'traefik.http.routers.ssupat-cms-strapi.entrypoints=websecure'
      - 'traefik.http.routers.ssupat-cms-strapi.tls=true'
      - 'traefik.http.routers.ssupat-cms-strapi.tls.options=default'
      #- 'traefik.http.routers.ssupat-cms-strapi.middlewares=authelia@docker'
      - 'traefik.http.services.ssupat-cms-strapi.loadbalancer.server.port=80'
      #- 'traefik.port=80'
      - 'traefik.docker.network=ssupat'
      - 'traefik.http.middlewares.ssupat-cms-strapi.redirectregex.regex=^http://www.cms.ssupat.sk/(.*)'
      - 'traefik.http.middlewares.ssupat-cms-strapi.redirectregex.replacement=https://cms.ssupat.sk/$${1}'
      - 'traefik.http.middlewares.ssupat-cms-strapi.redirectregex.permanent=true'
  ssupat-web-nextjs:
    restart: unless-stopped
    build:
      context: ssupat-web-nextjs/
      dockerfile: Dockerfile
    networks:
      - ssupat
    security_opt:
      - no-new-privileges:true
    depends_on:
      - "ssupat-cms-strapi"
      - "cms-postgresql"
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.ssupat-web-nextjs.rule=Host(`ssupat.sk`) || Host(`www.ssupat.sk`)'
      #- 'traefik.http.routers.ssupat-web-nextjs.rule=Host(`ssupat.sk`, `www.ssupat.sk`)'
      - 'traefik.http.routers.ssupat-web-nextjs.entrypoints=web'
      #- 'traefik.http.middlewares.force_https.redirectscheme.scheme=https
      - 'traefik.http.routers.ssupat-web-nextjs-secure.rule=Host(`ssupat.sk`) || Host(`www.ssupat.sk`)'
      - 'traefik.http.routers.ssupat-web-nextjs-secure.entrypoints=websecure'
      - 'traefik.http.routers.ssupat-web-nextjs-secure.tls=true'
      - 'traefik.http.routers.ssupat-web-nextjs-secure.tls.options=default'
      - 'traefik.http.services.ssupat-web-nextjs-secure.loadbalancer.server.port=3000'
      #- 'traefik.port=3000'
      - 'traefik.docker.network=ssupat'
      #- 'traefik.http.routers.ssupat-web-nextjs-secure.middlewares=ssupat-web-nextjs-redirect'
      - 'traefik.http.middlewares.ssupat-web-nextjs-secure.redirectregex.regex=^http://ssupat.sk/(.*)'
      - 'traefik.http.middlewares.ssupat-web-nextjs-secure.redirectregex.replacement="https://ssupat.sk/$${1}"'
      - 'traefik.http.middlewares.ssupat-web-nextjs-secure.redirectregex.permanent=true'
version: '3.3'
services:
  #InfluxDB server
  influx-db:
    image: influxdb:1.8-alpine
    container_name: influx-db
    ports:
      - 8086:8086
    restart: always
    volumes:
      - db-data:/var/lib/influxdb
    networks:
      - local
  #PostgreSQL Database for the application
  postgresdb:
    image: "postgres:12.0-alpine"
    container_name: postgresdb
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    restart: always
    networks:
      - local
  #Front-end Angular Application
  fe:
    build: './Frontend-Asset'
    ports:
      - 4201:4201
    links:
      - sm_abc_be
      - sm_um_be
    depends_on:
      - sm_abc_be
      - sm_um_be
    networks:
      - local
  um_fe:
    build: './Frontend-User'
    ports:
      - 4202:4202
    links:
      - sm_abc_be
      - sm_um_be
    depends_on:
      - sm_abc_be
      - sm_um_be
    networks:
      - local
  #Back-end Spring Boot Application
  sm_um_be:
    build: './um_be'
    ports:
      - 8081:8081
    restart: always
    volumes:
      - db-data/
    links:
      - postgresdb
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://postgresdb:5432/abcd
      - SPRING_DATASOURCE_USERNAME=abc_user
      - SPRING_DATASOURCE_PASSWORD=abcpassword
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
    depends_on:
      - postgresdb
    networks:
      - local
  sm_am_be:
    build: './am_be'
    ports:
      - 8082:8082
    restart: always
    volumes:
      - db-data/
    links:
      - postgresdb
      - influx-db
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://postgresdb:5432/am_uuid?currentSchema=abc
      - SPRING_DATASOURCE_USERNAME=am_db_user
      - SPRING_DATASOURCE_PASSWORD=abcpassword
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
    depends_on:
      - postgresdb
      - influx-db
    networks:
      - local
#Volumes for DB data
volumes:
  db-data:
networks:
  local:
    driver: bridge
I'm facing a problem on my production server: a container is supposed to contain the latest version of the image, but when I run it, the content is not the latest.
To update the Docker images, I execute a small script with these commands:
docker-compose pull
docker-compose up -d --remove-orphans
docker-compose prune -fa
Of course, the image referenced in the Docker service uses the latest tag:
image: registry.gitlab.com/xxxxx/api:latest
Here are two screenshots of the container and the image content to show the differences.
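One way to compare what the container is actually running against what was last pulled is to check the image IDs with standard Docker commands (the container name here is a placeholder):

# image ID the running container was created from
docker inspect --format '{{.Image}}' <api-container>
# image IDs/digests currently available locally for that repository
docker images --digests registry.gitlab.com/xxxxx/api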
Here is my docker-compose.yml:
version: '3.3'
services:
  traefik:
    image: "traefik:v2.4"
    container_name: "traefik"
    command:
      - "--api"
      - "--providers.docker"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.web.address=:80"
      - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
      - "--certificatesresolvers.myresolver.acme.email=com@xxxxx.com"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
      - "--pilot.token=xxxxx"
    ports:
      - 80:80
      - 443:443
    volumes:
      - "./letsencrypt:/letsencrypt"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    labels:
      # dashboard
      - "traefik.http.routers.monitor.service=api@internal"
      - "traefik.http.routers.monitor.rule=Host(`monitor.xxxxx.com`)"
      - "traefik.http.routers.monitor.entrypoints=websecure"
      - "traefik.http.routers.monitor.tls.certresolver=myresolver"
      # global redirect to https
      - "traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)"
      - "traefik.http.routers.http-catchall.entrypoints=web"
      - "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
      # middleware redirect
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
  api:
    image: registry.gitlab.com/xxxxx/api:latest
    ports:
      - 4200:8080
    volumes:
      - api-data:/app
    depends_on:
      - db
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`api.xxxxx.com`)"
      - "traefik.http.routers.api.entrypoints=websecure"
      - "traefik.http.routers.api.tls.certresolver=myresolver"
  front:
    image: registry.gitlab.com/xxxxx/front:latest
    ports:
      - 3000:3000
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.front.rule=Host(`dev.xxxxx.com`)"
      - "traefik.http.routers.front.entrypoints=websecure"
      - "traefik.http.routers.front.tls.certresolver=myresolver"
  panel:
    image: registry.gitlab.com/xxxxx/panel:latest
    ports:
      - 3001:3000
    depends_on:
      - api
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.panel.rule=Host(`admin.xxxxx.com`)"
      - "traefik.http.routers.panel.entrypoints=websecure"
      - "traefik.http.routers.panel.tls.certresolver=myresolver"
  coming-soon:
    image: registry.gitlab.com/xxxxx/coming-soon:latest
    ports:
      - 3002:3000
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.coming-soon.rule=Host(`xxxxx.com`) || Host(`www.xxxxx.com`)"
      - "traefik.http.routers.coming-soon.entrypoints=websecure"
      - "traefik.http.routers.coming-soon.tls.certresolver=myresolver"
  db:
    image: postgres
    ports:
      - 5432:5432
    volumes:
      - db-data:/var/lib/postgresql/data/
    env_file:
      - .env
    restart: always
  adminer:
    image: dpage/pgadmin4
    ports:
      - 5000:80
    volumes:
      - adminer-data:/root/.pgadmin
    env_file:
      - .env
    depends_on:
      - db
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.adminer.rule=Host(`adminer.xxxxx.com`)"
      - "traefik.http.routers.adminer.entrypoints=websecure"
      - "traefik.http.routers.adminer.tls.certresolver=myresolver"
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: replicated
      replicas: 2
      update_config:
        parallelism: 4
        delay: 30s
volumes:
  db-data:
  api-data:
  adminer-data:
I'm currently deploying a project on a Kubernetes cluster by using Kompose (http://www.kompose.io) to convert the docker-compose configuration to Kubernetes configuration files.
This is a project for a master's class at my university and they take care of the Kubernetes cluster, so I'm almost certain that its configuration is done properly. FYI, this is the version of that Kubernetes cluster:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
This is the version of Kompose:
$ kompose version
1.20.0 (f3d54d784)
The problem I have is as follows: the kompose convert command works without any problem, but when I try to deploy with kompose up, it fails with the following error message:
FATA Error while deploying application: Get http://localhost:8080/api: dial tcp [::1]:8080: connect: connection refused
This is my first time using Kubernetes and Kompose. I've looked for others who have had this problem, but nothing I found really helped.
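The localhost:8080 address in the error suggests the client is falling back to the default API endpoint instead of the cluster's real one, so a reasonable first check is to confirm which cluster and context kubectl itself is talking to:

kubectl config current-context
kubectl cluster-info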
This is my docker-compose file at the moment:
(I'm aware I shouldn't put passwords in my docker-compose file but it's not part of the problem)
version: "3"
services:
zookeeper-container:
image: confluentinc/cp-zookeeper
environment:
- ZOOKEEPER_CLIENT_PORT=2181
kafka-container:
image: confluentinc/cp-kafka
depends_on:
- zookeeper-container
environment:
- KAFKA_BROKER_ID=1
- KAFKA_ZOOKEEPER_CONNECT=zookeeper-container:2181
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-container:9092
- KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
route-db:
image: neo4j:3.5.6
environment:
- NEO4J_AUTH=neo4j/route
ports:
- 7687:7687
delay-request-db:
image: redis
staff-db:
image: mongo
train-db:
image: mongo
maintenance-db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=maintenancedatabase
- MYSQL_DATABASE=Maintenance
station-db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=stationdatabase
- MYSQL_DATABASE=Station
ticket-sale-db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=ticketsaledatabase
- MYSQL_DATABASE=TicketSale
ticket-validation-db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=ticketvalidationdatabase
- MYSQL_DATABASE=TicketValidation
timetable-db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=timetabledatabase
- MYSQL_DATABASE=Timetable
delay-service:
build: ./railway-app-delay
image: gilliswerrebrouck/railway-app-delay-service
volumes:
- ./railway-app-delay/target:/app
links:
- kafka-container
- zookeeper-container
depends_on:
- kafka-container
- zookeeper-container
maintenance-service:
build: ./railway-app-maintenance
image: gilliswerrebrouck/railway-app-maintenance-service
volumes:
- ./railway-app-maintenance/target:/app
links:
- kafka-container
- zookeeper-container
- maintenance-db
depends_on:
- kafka-container
- zookeeper-container
- maintenance-db
route-service:
build: ./railway-app-route-management
image: gilliswerrebrouck/railway-app-route-management-service
volumes:
- ./railway-app-route-management/target:/app
links:
- kafka-container
- zookeeper-container
- route-db
depends_on:
- kafka-container
- zookeeper-container
- route-db
staff-service:
build: ./railway-app-staff
image: gilliswerrebrouck/railway-app-staff-service
volumes:
- ./railway-app-staff/target:/app
links:
- kafka-container
- zookeeper-container
- staff-db
depends_on:
- kafka-container
- zookeeper-container
- staff-db
station-service:
build: ./railway-app-station
image: gilliswerrebrouck/railway-app-station-service
volumes:
- ./railway-app-station/target:/app
links:
- kafka-container
- zookeeper-container
- station-db
- delay-request-db
depends_on:
- kafka-container
- zookeeper-container
- station-db
- delay-request-db
ticket-sale-service:
build: ./railway-app-ticket-sale
image: gilliswerrebrouck/railway-app-ticket-sale-service
volumes:
- ./railway-app-ticket-sale/target:/app
links:
- kafka-container
- zookeeper-container
- ticket-sale-db
depends_on:
- kafka-container
- zookeeper-container
- ticket-sale-db
ticket-validation-service:
build: ./railway-app-ticket-validation
image: gilliswerrebrouck/railway-app-ticket-validation-service
volumes:
- ./railway-app-ticket-validation/target:/app
links:
- kafka-container
- zookeeper-container
- ticket-validation-db
depends_on:
- kafka-container
- zookeeper-container
- ticket-validation-db
timetable-service:
build: ./railway-app-timetable
image: gilliswerrebrouck/railway-app-timetable-service
volumes:
- ./railway-app-timetable/target:/app
links:
- kafka-container
- zookeeper-container
- timetable-db
- route-service
- station-service
- train-service
depends_on:
- kafka-container
- zookeeper-container
- timetable-db
- route-service
- station-service
- train-service
train-service:
build: ./railway-app-train
image: gilliswerrebrouck/railway-app-train-service
volumes:
- ./railway-app-train/target:/app
links:
- kafka-container
- zookeeper-container
- train-db
depends_on:
- kafka-container
- zookeeper-container
- train-db
apigateway:
build: ./railway-app-api-gateway
image: gilliswerrebrouck/railway-app-api-gateway-service
volumes:
- ./railway-app-api-gateway/target:/app
links:
- kafka-container
- zookeeper-container
- delay-service
- maintenance-service
- route-service
- staff-service
- station-service
- ticket-sale-service
- ticket-validation-service
- timetable-service
- train-service
depends_on:
- kafka-container
- zookeeper-container
- delay-service
- maintenance-service
- route-service
- staff-service
- station-service
- ticket-sale-service
- ticket-validation-service
- timetable-service
- train-service
ports:
- 8080:8080
frontend:
build: ./railway-app-frontend
image: gilliswerrebrouck/railway-app-frontend
volumes:
- ./railway-app-frontend/target:/app
links:
- apigateway
- route-db
depends_on:
- apigateway
- route-db
ports:
- 80:80
Does anyone have any tips on how to troubleshoot or fix this issue?
UPDATE:
These are the files generated by the kompose convert command.
I solved it by replacing all apiVersion values in the deployment files from v1beta2 to apps/v1 and by adding a selector to each Deployment:
selector:
  matchLabels:
    app: ...
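For context, a minimal apps/v1 Deployment skeleton showing where the apiVersion and selector changes end up (the names below are placeholders based on the apigateway service, not the exact generated files):

apiVersion: apps/v1          # was v1beta2 in the generated files
kind: Deployment
metadata:
  name: apigateway
spec:
  replicas: 1
  selector:                  # required with apps/v1
    matchLabels:
      app: apigateway
  template:
    metadata:
      labels:
        app: apigateway      # must match the selector above
    spec:
      containers:
        - name: apigateway
          image: gilliswerrebrouck/railway-app-api-gateway-service
          ports:
            - containerPort: 8080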
I then didn't use kompose up to deploy, since that still gives me an error, but used kubectl create -f <file(s)> instead, and this succeeded without the connection error. There are still some pods crashing, but I don't think that has anything to do with the original problem.
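In case it helps with the remaining crashes, the usual way to inspect them is with standard kubectl commands (pod names are whatever the cluster assigned):

kubectl get pods
kubectl describe pod <pod-name>
kubectl logs <pod-name>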