Not able to deploy to Kubernetes cluster using Kompose - docker

I'm currently deploying a project to a Kubernetes cluster by using Kompose (http://www.kompose.io) to convert the docker-compose configuration into Kubernetes configuration files.
This is a project for a master's class at my university, and they took care of the Kubernetes cluster, so I'm fairly certain it is configured properly. FYI, this is the version of that Kubernetes cluster:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
This is the version of Kompose:
$ kompose version
1.20.0 (f3d54d784)
The problem I have is as follows: kompose convert works without any problem, but when I try to deploy with kompose up, it fails with the following error message:
FATA Error while deploying application: Get http://localhost:8080/api: dial tcp [::1]:8080: connect: connection refused
This is my first time using Kubernetes and Kompose. I've looked for others with the same problem, but nothing I've found has really helped.
This is my docker-compose file at the moment (I'm aware I shouldn't put passwords in it, but that's not part of the problem):
version: "3"
services:
zookeeper-container:
image: confluentinc/cp-zookeeper
environment:
- ZOOKEEPER_CLIENT_PORT=2181
kafka-container:
image: confluentinc/cp-kafka
depends_on:
- zookeeper-container
environment:
- KAFKA_BROKER_ID=1
- KAFKA_ZOOKEEPER_CONNECT=zookeeper-container:2181
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-container:9092
- KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
route-db:
image: neo4j:3.5.6
environment:
- NEO4J_AUTH=neo4j/route
ports:
- 7687:7687
delay-request-db:
image: redis
staff-db:
image: mongo
train-db:
image: mongo
maintenance-db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=maintenancedatabase
- MYSQL_DATABASE=Maintenance
station-db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=stationdatabase
- MYSQL_DATABASE=Station
ticket-sale-db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=ticketsaledatabase
- MYSQL_DATABASE=TicketSale
ticket-validation-db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=ticketvalidationdatabase
- MYSQL_DATABASE=TicketValidation
timetable-db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
environment:
- MYSQL_ROOT_PASSWORD=timetabledatabase
- MYSQL_DATABASE=Timetable
delay-service:
build: ./railway-app-delay
image: gilliswerrebrouck/railway-app-delay-service
volumes:
- ./railway-app-delay/target:/app
links:
- kafka-container
- zookeeper-container
depends_on:
- kafka-container
- zookeeper-container
maintenance-service:
build: ./railway-app-maintenance
image: gilliswerrebrouck/railway-app-maintenance-service
volumes:
- ./railway-app-maintenance/target:/app
links:
- kafka-container
- zookeeper-container
- maintenance-db
depends_on:
- kafka-container
- zookeeper-container
- maintenance-db
route-service:
build: ./railway-app-route-management
image: gilliswerrebrouck/railway-app-route-management-service
volumes:
- ./railway-app-route-management/target:/app
links:
- kafka-container
- zookeeper-container
- route-db
depends_on:
- kafka-container
- zookeeper-container
- route-db
staff-service:
build: ./railway-app-staff
image: gilliswerrebrouck/railway-app-staff-service
volumes:
- ./railway-app-staff/target:/app
links:
- kafka-container
- zookeeper-container
- staff-db
depends_on:
- kafka-container
- zookeeper-container
- staff-db
station-service:
build: ./railway-app-station
image: gilliswerrebrouck/railway-app-station-service
volumes:
- ./railway-app-station/target:/app
links:
- kafka-container
- zookeeper-container
- station-db
- delay-request-db
depends_on:
- kafka-container
- zookeeper-container
- station-db
- delay-request-db
ticket-sale-service:
build: ./railway-app-ticket-sale
image: gilliswerrebrouck/railway-app-ticket-sale-service
volumes:
- ./railway-app-ticket-sale/target:/app
links:
- kafka-container
- zookeeper-container
- ticket-sale-db
depends_on:
- kafka-container
- zookeeper-container
- ticket-sale-db
ticket-validation-service:
build: ./railway-app-ticket-validation
image: gilliswerrebrouck/railway-app-ticket-validation-service
volumes:
- ./railway-app-ticket-validation/target:/app
links:
- kafka-container
- zookeeper-container
- ticket-validation-db
depends_on:
- kafka-container
- zookeeper-container
- ticket-validation-db
timetable-service:
build: ./railway-app-timetable
image: gilliswerrebrouck/railway-app-timetable-service
volumes:
- ./railway-app-timetable/target:/app
links:
- kafka-container
- zookeeper-container
- timetable-db
- route-service
- station-service
- train-service
depends_on:
- kafka-container
- zookeeper-container
- timetable-db
- route-service
- station-service
- train-service
train-service:
build: ./railway-app-train
image: gilliswerrebrouck/railway-app-train-service
volumes:
- ./railway-app-train/target:/app
links:
- kafka-container
- zookeeper-container
- train-db
depends_on:
- kafka-container
- zookeeper-container
- train-db
apigateway:
build: ./railway-app-api-gateway
image: gilliswerrebrouck/railway-app-api-gateway-service
volumes:
- ./railway-app-api-gateway/target:/app
links:
- kafka-container
- zookeeper-container
- delay-service
- maintenance-service
- route-service
- staff-service
- station-service
- ticket-sale-service
- ticket-validation-service
- timetable-service
- train-service
depends_on:
- kafka-container
- zookeeper-container
- delay-service
- maintenance-service
- route-service
- staff-service
- station-service
- ticket-sale-service
- ticket-validation-service
- timetable-service
- train-service
ports:
- 8080:8080
frontend:
build: ./railway-app-frontend
image: gilliswerrebrouck/railway-app-frontend
volumes:
- ./railway-app-frontend/target:/app
links:
- apigateway
- route-db
depends_on:
- apigateway
- route-db
ports:
- 80:80
Does anyone have any tips on how to troubleshoot or fix this issue?
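Since the error shows kompose up falling back to http://localhost:8080 instead of using the cluster's address, a first sanity check might be whether the client can reach the cluster at all (this assumes the cluster credentials live in the default ~/.kube/config):
$ kubectl cluster-info
$ kubectl config current-context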
UPDATE:
These are the files generated by the kompose convert command

I've solved it by changing the apiVersion in all the generated deployment files from v1beta2 to apps/v1 and by adding a selector to each deployment:
selector:
  matchLabels:
    app: ...
I then didn't deploy with kompose up, since that still gives me the error, but used kubectl create -f <file(s)> instead, and this succeeded without the connection error. There are still some pods crashing, but I don't think that's related to the original problem.
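As a rough sketch, the head of a fixed deployment then looks something like this, using the apigateway service as an illustration (the label key just has to match whatever labels Kompose put on the pod template):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apigateway
  labels:
    app: apigateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apigateway
  template:
    metadata:
      labels:
        app: apigateway
    spec:
      containers:
        - name: apigateway
          image: gilliswerrebrouck/railway-app-api-gateway-service
          ports:
            - containerPort: 8080
The whole set of generated files can then be applied from the directory they were written to:
$ kubectl create -f .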

Related

Adguard Home docker compose config and db missing

I'm trying to run AdGuard with Docker Compose. I've created a lot of other containers with Docker Compose, but this one is not creating any files in the mapped folder.
I tried to reproduce the docker command from the official instructions, but every time I recreate the container I end up at the setup page and all settings are gone.
Any ideas?
This is my compose file:
version: "3"
volumes:
homematic_data:
external: true
networks:
homematic:
services:
samba:
image: dperson/samba
container_name: samba
restart: always
ports:
- "137:137/udp"
- "138:138/udp"
- "139:139/tcp"
- "445:445/tcp"
healthcheck:
disable: true
environment:
- TZ='Europe/Berlin'
- WORKGROUP=workgroup
- RECYCLE=false
- USER1=pi;PASSWORD;1000
- SHARE1=homematic_docker;/shares/homematic_docker;yes;no;yes;pi;pi
volumes:
- /home/pi:/shares/homematic_docker
networks:
- homematic
promtail:
image: grafana/promtail:latest
container_name: promtail
volumes:
- /var/log:/var/log
- ./promtail:/etc/promtail
restart: unless-stopped
command: -config.file=/etc/promtail/promtail-config.yml
networks:
- homematic
node-exporter:
image: quay.io/prometheus/node-exporter:latest
container_name: node_exporter
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
- /:/host:ro,rslave
command:
- '--path.rootfs=/host'
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- --collector.filesystem.ignored-mount-points
- "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
ports:
- 9100:9100
networks:
- homematic
restart: always
###################### portainer
portainer:
image: portainer/portainer-ce:latest
container_name: portainer
restart: unless-stopped
security_opt:
- no-new-privileges:true
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./portainer:/data
ports:
- 9000:9000
adguard:
image: adguard/adguardhome
container_name: adguard
restart: unless-stopped
ports:
- 53:53/tcp
- 53:53/udp
- 67:67/udp
- 69:68/udp
- 80:80/tcp
- 443:443/tcp
- 443:443/udp
- 3000:3000/tcp
- 853:853/tcp
- 784:784/udp
- 853:853/udp
- 8853:8853/udp
- 5443:5443/tcp
- 5443:5443/udp
# environment:
# - TZ=Europe/Berlin
volumes:
- /home/pi/homematicDocker/adguard/work:/opt/adguardhome/work\
- /home/pi/homematicDocker/adguard/conf:/opt/adguardhome/conf\
# network_mode: host
raspberrymatic:
image: ghcr.io/jens-maus/raspberrymatic:3.67.10.20230117-27abde9
container_name: homematic
hostname: homematic-raspi
privileged: true
restart: unless-stopped
stop_grace_period: 30s
volumes:
- homematic_data:/usr/local:rw
- /lib/modules:/lib/modules:ro
- /run/udev/control:/run/udev/control
ports:
- "8080:80"
- "2001:2001"
- "2010:2010"
- "9292:9292"
- "8181:8181"
networks:
- homematic
Inside the container, the folder /opt/adguardhome/work contains a data folder with a database, and after I finish the setup the conf folder also contains a YAML file.
Unfortunately, I had copied the backslashes from the docker command into the volume mappings; that was the reason I didn't get any data. Thank you Mike!
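With the trailing backslashes removed, the volume section of the adguard service simply becomes:
    volumes:
      - /home/pi/homematicDocker/adguard/work:/opt/adguardhome/work
      - /home/pi/homematicDocker/adguard/conf:/opt/adguardhome/conf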

OSError: Function not implemented - Odoo Docker - command: ["odoo","--dev","xml,reload"]

version: '3.4'
services:
  db:
    image: postgres:13
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/db-files/
    volumes:
      - ./data/db:/var/lib/postgresql/data
  odoo:
    build: .
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - ./extra-addons:/mnt/custom-addons
      - ./data/odoo:/var/lib/odoo
    command: ["odoo","--dev","xml,reload"]
This is my docker-compose.yml file.
On Ubuntu it works, but on macOS it doesn't. The problem is the line command: ["odoo","--dev","xml,reload"]; when I comment it out, it runs fine on macOS.
Please help me fix it.
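If the reload watcher is what fails on macOS, one thing that might be worth trying (an untested guess, since only the fully commented-out command is confirmed to work) is keeping the --dev option but dropping reload from its feature list:
    command: ["odoo", "--dev", "xml"]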

Nextcloud on Raspberry Pi via docker compose

I'm trying to run a Nextcloud instance on my Raspberry Pi 3B+ using a docker-compose file from this source: https://blog.ssdnodes.com/blog/installing-nextcloud-docker/
This works out of the box without any issues on an Ubuntu server.
I've replaced the following images to be compatible with the arm infrastructure of the Pi:
jwilder/nginx-proxy:alpine with braingamer/nginx-proxy-arm or budry/jwilder-nginx-proxy-arm (I tried both)
jrcs/letsencrypt-nginx-proxy-companion with budry/jrcs-letsencrypt-nginx-proxy-companion-arm
mariadb with linuxserver/mariadb
nextcloud:latest with linuxserver/nextcloud
Unfortunately this doesn't work on the Pi: it first returns a 502 Bad Gateway, and after some time the error ERR_TOO_MANY_REDIRECTS.
What am I doing wrong?
Thanks
My docker-compose.yml:
version: '3'
services:
  proxy:
    image: braingamer/nginx-proxy-arm
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped
  letsencrypt:
    image: budry/jrcs-letsencrypt-nginx-proxy-companion-arm
    container_name: nextcloud-letsencrypt
    depends_on:
      - proxy
    networks:
      - nextcloud_network
    volumes:
      - ./proxy/certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
  db:
    image: linuxserver/mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=***PASSWORD***
      - MYSQL_PASSWORD=***PASSWORD***
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    ports:
      - 3306:3306
    restart: unless-stopped
  app:
    image: linuxserver/nextcloud
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - letsencrypt
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - VIRTUAL_HOST=nextcloud.domain.tld
      - LETSENCRYPT_HOST=nextcloud.domain.tld
      - LETSENCRYPT_EMAIL=mail#nextcloud.domain.tld
volumes:
  nextcloud:
  db:
networks:
  nextcloud_network:
The tutorial uses an Nginx reverse proxy and Let's Encrypt, and for the latter you need a valid domain. If you look at your compose file, the linuxserver/nextcloud service asks under environment for a domain in VIRTUAL_HOST, LETSENCRYPT_HOST and LETSENCRYPT_EMAIL. It then tries to create an SSL certificate for the specified domain (nextcloud.domain.tld), which is not valid, so it doesn't work.
This was the case for me, so I just removed the proxy and SSL from my compose file, and Nextcloud works now :)
Here is my current working compose file:
version: '3'
services:
  db:
    image: tobi312/rpi-mariadb:10.5
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=very_secure_password
      - MYSQL_PASSWORD=very_secure_password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped
  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped
    ports:
      - 80:80
volumes:
  nextcloud:
  db:
networks:
  nextcloud_network:
    driver: bridge
Hope it helps.

My docker-compose.yml couldn't build mysql5.7 container

I'm trying to build Docker containers for Laravel with a docker-compose.yml.
I have to build a database container for MySQL 5.7, since MySQL 8 cannot be used on the server I'm connecting to.
Here is my docker-compose.yml file:
version: "3"
services:
app:
build:
context: ./docker/php
args:
- TZ=${TZ}
ports:
- ${APP_PORT}:8000
volumes:
- ${PROJECT_PATH}:/work
- ./docker/ash:/etc/profile.d
- ./docker/php/psysh:/root/.config/psysh
- ./logs:/var/log/php
- ./docker/php/php.ini:/usr/local/etc/php/php.ini
working_dir: /work
environment:
- DB_CONNECTION=mysql
- DB_HOST=db
- DB_DATABASE=${DB_NAME}
- DB_USERNAME=${DB_USER}
- DB_PASSWORD=${DB_PASS}
- TZ=${TZ}
- MAIL_HOST=${MAIL_HOST}
- MAIL_PORT=${MAIL_PORT}
- CACHE_DRIVER=redis
- SESSION_DRIVER=redis
- QUEUE_DRIVER=redis
- REDIS_HOST=redis
web:
image: nginx:1.17-alpine
depends_on:
- app
ports:
- ${WEB_PORT}:80
volumes:
- ${PROJECT_PATH}:/work
- ./logs:/var/log/nginx
- ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
environment:
- TZ=${TZ}
db:
image: mysql:5.7
volumes:
- db-store:/var/lib/mysql
- ./logs:/var/log/mysql
- ./docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
environment:
- MYSQL_DATABASE=${DB_NAME}
- MYSQL_USER=${DB_USER}
- MYSQL_PASSWORD=${DB_PASS}
- MYSQL_ROOT_PASSWORD=${DB_PASS}
- TZ=${TZ}
ports:
- ${DB_PORT}:3306
db-testing:
image: mysql:5.7
volumes:
- ./docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
tmpfs:
- /var/lib/mysql
- /var/log/mysql
environment:
- MYSQL_DATABASE=${DB_NAME}
- MYSQL_USER=${DB_USER}
- MYSQL_PASSWORD=${DB_PASS}
- MYSQL_ROOT_PASSWORD=${DB_PASS}
- TZ=${TZ}
ports:
- ${DB_TESTING_PORT}:3306
node:
image: node:12.13-alpine
tty: true
volumes:
- ${PROJECT_PATH}:/work
working_dir: /work
redis:
image: redis:5.0-alpine
volumes:
- redis-store:/data
mail:
image: mailhog/mailhog
ports:
- ${MAILHOG_PORT}:8025
volumes:
db-store:
redis-store:
When I execute "docker-compose build" in terminal, it's successfully done, but db container and db-testing container has status "EXIT: 1" or "EXIT: 2".
So, Could you teach me what's wrong.
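To see why they exit, a first step would be to look at the logs of those two services, for example:
$ docker-compose logs db
$ docker-compose logs db-testing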

Hadoop namenode not recognizing datanodes in docker

I'm new to this and learning to start a Hadoop system using Docker, but I've been stuck at one point for weeks and finally have to ask here. I can launch the containers individually without any problems, and the namenode always recognizes the running datanodes. However, when I tried to set up a docker-compose.yml file to launch all containers at once, I ran into multiple problems, one of which is that the namenode never recognizes the running datanodes. Do you have any suggestions to help me fix it?
Here's my docker-compose file:
version: "3"
services:
base:
image: hpcnube-base-image
hpcnube-namenode:
image: hpcnube-namenode-image
depends_on:
- base
hostname: hpcnube-namenode
container_name: hpcnube-namenode
networks:
- dockerfiles1_hpcnube-net
ports:
- "9870:9870"
hpcnube-resourcemanager:
image: hpcnube-resourcemanager-image
container_name: hpcnube-resourcemanager
depends_on:
- hpcnube-namenode
- hpcnube-dnnm2
- hpcnube-dnnm1
hostname: hpcnube-resourcemanager
networks:
- dockerfiles1_hpcnube-net
ports:
- "8088:8088"
hpcnube-dnnm1:
image: hpcnube-dnnm-image
container_name: hpcnube-dnnm1
hostname: hpcnube-dnnm1
depends_on:
- base
- hpcnube-namenode
networks:
- dockerfiles1_hpcnube-net
#command: "/opt/bd/start-daemons.sh"
hpcnube-dnnm2:
#build: ./DataNode-NodeManager
image: hpcnube-dnnm-image
container_name: hpcnube-dnnm2
hostname: hpcnube-dnnm2
depends_on:
- base
- hpcnube-namenode
networks:
- dockerfiles1_hpcnube-net
#command: "/opt/bd/start-daemons.sh"
hpcnube-checkpoint:
image: hpcnube-checkpointnode-image
hostname: hpcnube-checkpointnode
depends_on:
- base
- hpcnube-namenode
- hpcnube-resourcemanager
- hpcnube-dnnm2
- hpcnube-dnnm1
networks:
- dockerfiles1_hpcnube-net
hpcnube-timeline:
image: hpcnube-timelineserver-image
hostname: hpcnube-timelineserver
depends_on:
- base
- hpcnube-namenode
- hpcnube-resourcemanager
- hpcnube-dnnm2
- hpcnube-dnnm1
- hpcnube-checkpoint
networks:
- dockerfiles1_hpcnube-net
hpcnube-frontend:
build: ./FrontEnd
hostname: hpcnube-frontend
depends_on:
- base
- hpcnube-namenode
- hpcnube-resourcemanager
- hpcnube-dnnm2
- hpcnube-dnnm1
- hpcnube-checkpoint
- hpcnube-timeline
hostname: hpcnube-frontend
networks:
- dockerfiles1_hpcnube-net
ports:
- "2345:22"
networks:
dockerfiles1_hpcnube-net:
driver: bridge
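One way to narrow this down might be to check, from inside the running containers, whether any datanodes actually registered with the namenode, and whether the datanode daemons report errors reaching it (assuming the Hadoop binaries are on the PATH in these images):
$ docker exec -it hpcnube-namenode hdfs dfsadmin -report
$ docker logs hpcnube-dnnm1
If the dfsadmin report shows zero live datanodes, the datanode logs usually say whether the registration failed because of hostname resolution or because the HDFS daemons were never started by the container's entrypoint.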
