Converting docker-compose using Kompose to deploy workloads on GKE - docker

I have a project written with Django REST Framework, Celery for executing long-running tasks, Redis as a broker, and Flower for monitoring Celery tasks. I have written a Dockerfile & docker-compose.yaml to create a network and run these services inside containers.
Dockerfile
FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
RUN apt-get update &&\
apt-get install python3-dev default-libmysqlclient-dev gcc -y &&\
apt-get install -y libssl-dev libffi-dev &&\
python -m pip install --upgrade pip &&\
mkdir /ibdax
WORKDIR /ibdax
COPY ./requirements.txt /requirements.txt
COPY . /ibdax
EXPOSE 80
EXPOSE 5555
ENV ENVIRONMENT=LOCAL
#install dependencies
RUN pip install -r /requirements.txt
RUN pip install django-phonenumber-field[phonenumbers]
RUN pip install drf-yasg[validation]
docker-compose.yaml
version: "3"
services:
redis:
container_name: redis-service
image: "redis:latest"
ports:
- "6379:6379"
restart: always
command: "redis-server"
ibdax-backend:
container_name: ibdax
build:
context: .
dockerfile: Dockerfile
image: "ibdax-django-service"
volumes:
- .:/ibdax
ports:
- "80:80"
expose:
- "80"
restart: always
env_file:
- .env.staging
command: >
sh -c "daphne -b 0.0.0.0 -p 80 ibdax.asgi:application"
links:
- redis
celery:
container_name: celery-container
image: "ibdax-django-service"
command: "watchmedo auto-restart -d . -p '*.py' -- celery -A ibdax worker -l INFO"
volumes:
- .:/ibdax
restart: always
env_file:
- .env.staging
links:
- redis
depends_on:
- ibdax-backend
flower:
container_name: flower
image: "ibdax-django-service"
command: "flower -A ibdax --port=5555 --basic_auth=${FLOWER_USERNAME}:${FLOWER_PASSWORD}"
volumes:
- .:/ibdax
ports:
- "5555:5555"
expose:
- "5555"
restart: always
env_file:
- .env
- .env.staging
links:
- redis
depends_on:
- ibdax-backend
This Dockerfile & docker-compose is working just fine and now I want to deploy this application to GKE. I came across Kompose which translate the docker-compose to kubernetes resources. I read the documentation and started following the steps and the first step was to run kompose convert. This returned few warnings and created few files as show below -
WARN Service "celery" won't be created because 'ports' is not specified
WARN Volume mount on the host "/Users/jeetpatel/Desktop/projects/ibdax" isn't supported - ignoring path on the host
WARN Volume mount on the host "/Users/jeetpatel/Desktop/projects/ibdax" isn't supported - ignoring path on the host
WARN Volume mount on the host "/Users/jeetpatel/Desktop/projects/ibdax" isn't supported - ignoring path on the host
INFO Kubernetes file "flower-service.yaml" created
INFO Kubernetes file "ibdax-backend-service.yaml" created
INFO Kubernetes file "redis-service.yaml" created
INFO Kubernetes file "celery-deployment.yaml" created
INFO Kubernetes file "env-dev-configmap.yaml" created
INFO Kubernetes file "celery-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "flower-deployment.yaml" created
INFO Kubernetes file "flower-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "ibdax-backend-deployment.yaml" created
INFO Kubernetes file "ibdax-backend-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
I ignored the warnings and moved to the next step, i.e. running the command
kubectl apply -f flower-service.yaml, ibdax-backend-service.yaml, redis-service.yaml, celery-deployment.yaml
but I get this error -
error: Unexpected args: [ibdax-backend-service.yaml, redis-service.yaml, celery-deployment.yaml]
Hence I planned to apply them one by one, like this -
kubectl apply -f flower-service.yaml
but I get this error -
The Service "flower" is invalid: spec.ports[1]: Duplicate value: core.ServicePort{Name:"", Protocol:"TCP", AppProtocol:(*string)(nil), Port:5555, TargetPort:intstr.IntOrString{Type:0, IntVal:0, StrVal:""}, NodePort:0}
Not sure where I am going wrong.
Also, a prerequisite for Kompose is to have a Kubernetes cluster, so I created an Autopilot cluster with a public network. Now I am not sure how this apply command will identify the cluster I created and deploy my application to it.

After kompose convert, your flower-service.yaml file has duplicate ports - that's what the error is saying.
...
ports:
  - name: "5555"
    port: 5555
    targetPort: 5555
  - name: 5555-tcp
    port: 5555
    targetPort: 5555
...
You can delete either the port named "5555" or the one named 5555-tcp.
For example, replace the ports block with
ports:
  - name: 5555-tcp
    port: 5555
    targetPort: 5555
and deploy the service again.
I would also recommend changing port name to something more descriptive.
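For example, with a hypothetical descriptive name:
ports:
  - name: flower-ui
    port: 5555
    targetPort: 5555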
The same thing happens with the ibdax-backend-service.yaml file.
...
ports:
  - name: "80"
    port: 80
    targetPort: 80
  - name: 80-tcp
    port: 80
    targetPort: 80
...
You can delete one of the definitions and redeploy the service (changing the port name to something more descriptive is also recommended).
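The corrected block could then look like this (backend-http is again only a placeholder name):
ports:
  - name: backend-http
    port: 80
    targetPort: 80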
kompose is not a perfect tool and will not always give you a perfect result. You should check the generated files for possible conflicts and/or missing fields.
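Regarding the other two points in the question: kubectl apply accepts multiple -f flags (or a whole directory), which avoids the "Unexpected args" error, and it deploys to whatever cluster the current kubeconfig context points to. For a GKE Autopilot cluster you would normally fetch credentials first; the cluster name and region below are placeholders:
gcloud container clusters get-credentials my-autopilot-cluster --region us-central1
kubectl config current-context
kubectl apply -f flower-service.yaml -f ibdax-backend-service.yaml -f redis-service.yaml -f celery-deployment.yaml
# or simply apply every manifest in the directory
kubectl apply -f .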

Related

Can't access web container from outside (Windows Docker-Desktop)

I'm using Docker Desktop on Windows and I'm trying to get 3 containers running inside Docker Desktop.
After some research and testing, I got the 3 containers running [WEB - API - DB]; everything seems to compile/run without issue in the logs, but I can't access my web container from outside.
Here's my Dockerfile and docker-compose; what did I miss or get wrong?
[WEB] dockerfile
FROM node:16.17.0-bullseye-slim
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
#EXPOSE 4200 (the issue is the same with or without this line)
CMD ["npm", "run", "start"]
[API] dockerfile
FROM openjdk:17.0.1-jdk-slim
WORKDIR /app
COPY ./target/test-0.0.1-SNAPSHOT.jar /app
#EXPOSE 2022 (the issue is the same with or without this line)
CMD ["java", "-jar", "test-0.0.1-SNAPSHOT.jar"]
Docker-compose file
version: "3.8"
services:
### FRONTEND ###
web:
container_name: wallet-web
restart: always
build: ./frontend
ports:
- "80:4200"
depends_on:
- "api"
networks:
customnetwork:
ipv4_address: 172.20.0.12
#networks:
# - "api"
# - "web"
### BACKEND ###
api:
container_name: wallet-api
restart: always
build: ./backend
ports:
- "2022:2022"
depends_on:
- "db"
networks:
customnetwork:
ipv4_address: 172.20.0.11
#networks:
# - "api"
# - "web"
### DATABASE ###
db:
container_name: wallet-db
restart: always
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
networks:
customnetwork:
ipv4_address: 172.20.0.10
#networks:
# - "api"
# - "web"
networks:
customnetwork:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
gateway: 172.20.0.1
# api:
# web:
Listening on: [screenshot of the listening ports omitted]
I found several issues similar to mine but the solutions didn't work for me.
If I understand correctly, you are trying to access it on port 80. To do that, you have to map container port 4200 to host port 80 in the YAML file: 80:4200 instead of 4200:4200.
https://docs.docker.com/config/containers/container-networking/
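A minimal sketch of that mapping on the web service (host port 80 forwarded to the dev server on 4200):
ports:
  - "80:4200"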
Have you looked in the browser's development console to see whether any error shows up? Your docker-compose doesn't seem to have any issue.
However, let's try to debug it:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6245eaffd67e nginx "/docker-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:4200->80/tcp test-api-1
copy the container id then execute:
docker exec -it 6245eaffd67e bin/bash
Now you are inside the container. Instead of the ID you can also use the container's name.
curl http://localhost:80
Note: in my case here I just created a container from an nginx image.
In your case, use the port where your app is running. Check it in your code if you aren't sure. A lot of JavaScript frameworks start on port 3000 by default.
If you get an error like curl: command not found, install it in your image:
FROM node:16.17.0-bullseye-slim
USER root # to install dependencies you need root permissions, so we switch the image to the root user
RUN apt update -y && apt install curl -y
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
#EXPOSE 4200 (the issue is the same with or without this line)
USER node # we don't want to run the image as root, so we switch back to the node user (this user is defined in the node:16.17.0-bullseye-slim image)
CMD ["npm", "run", "start"]
Now curl should work (if it didn't already).
The same should work from your host.
Here is an important thing:
localhost always refers to the physical computer, or to the container itself, from which you are making the request. Every container and your PC each have their own localhost, and they are not the same.
In the docker-compose file you just map a host port to a container port, so your PC (the host) where Docker is running can reach the container: a request to the host port you defined is forwarded to the container's port.
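To illustrate the HOST:CONTAINER direction of that mapping (the numbers here are only an example):
ports:
  - "8081:4200"   # browse http://localhost:8081 on the host; traffic is forwarded to port 4200 inside the container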
If you still can't access it from your host, try changing the host ports 2022, 4200, etc. It's possible that something conflicts on your Windows machine.
It sometimes happens that the Docker networks create conflicts.
Execute a docker-compose down, so that everything is deleted and recreated.
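For example (a plain sketch; the --build flag additionally forces the images to be rebuilt):
docker-compose down
docker-compose up -d --build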
Still not working?
Reset Docker Desktop to factory settings and check that you have the latest version (this is always better).
If all this doesn't help, let me know so we can debug further.
For the sake of clarity, here is the docker-compose file I used to check. I just used nginx to test the ports since I don't have your images.
version: "3.8"
services:
### FRONTEND ###
web:
restart: always
image: nginx
ports:
- "4200:80"
depends_on:
- "api"
networks:
- "web"
### BACKEND ###
api:
restart: always
image: nginx
ports:
- "2022:80"
depends_on:
- "db"
networks:
- "api"
- "web"
### DATABASE ###
db:
restart: always
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
networks:
- "api"
networks:
api:
web:
Update:
You can log what happens in the container like so:
docker logs containerid/name
If you are using Visual Studio Code, there is an excellent Docker extension, also by Microsoft:
just search for "docker" in the extensions. It has something like 20,000,000 downloads and can help you a lot with debugging containers etc. After installing it you will see the Docker icon in the left toolbar.
If you can see the errors that occur in the logs, maybe you can post part of them, so it's possible to understand what's going on. Please also tell us something about your frontend app's architecture (react-app, Angular). Some frameworks need to be started on 0.0.0.0 instead of 127.0.0.1 or they don't work.
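For example, if the frontend is an Angular CLI app, binding the dev server to all interfaces would look like this (assuming ng serve is what npm run start executes):
ng serve --host 0.0.0.0 --port 4200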

'(root) additional property nginx is not allowed' while installing an app in Salesforce using Docker

I'm following this tutorial https://trailhead.salesforce.com/en/content/learn/modules/user-interface-api/install-sample-app?trail_id=force_com_dev_intermediate and I have never used docker before.
Steps I followed:
Cloned the repo
Installed Docker for Windows, and it is installed correctly.
Tried to run this command in the repo: docker-compose build && docker-compose up -d
While running this command, I get the error below.
E:\Salesforce\RecordViewer>docker-compose build && docker-compose up -d
(root) Additional property nginx is not allowed
I found this answer: https://stackoverflow.com/a/38717336/279771
Basically I needed to add services: to the docker-compose.yml so it looks like this:
services:
  web:
    build: .
    command: 'bash -c ''node app.js'''
    working_dir: /usr/src/app
    environment:
      PORT: 8050
      NGINX_PORT: 8443
    volumes:
      - './views:/app/user/views:ro'
  nginx:
    build: nginx
    ports:
      - '8080:80'
      - '8443:443'
    links:
      - web:web
    volumes_from:
      - web
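As a quick sanity check after editing the file (not part of the tutorial itself), Compose can validate the configuration before building:
docker-compose config
docker-compose build && docker-compose up -d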

Host to Docker Container to Docker Container communication

I'm building 2 docker containers, "app" and "db", via a docker-compose file.
The app server just installs java/tomcat via a Dockerfile which is what docker-compose uses to build.
The db server uses an MS SQL image.
When I run:
docker-compose up
I follow that with a build process for the software I need to load, which deploys a WAR to the Tomcat directory in the app server and builds the database in the database server.
My problem is: the build process can reference localhost:8080 to install/patch the software on the app server and reference localhost:1433 to install/patch the database portion of the software on the database server. However, when I start Tomcat the system doesn't come online, because the app server can't connect to the database server via "localhost:1433"; it requires me to jump in and update the properties file after the build to the Docker internal IP address, and THEN it works.
My question is: how can I get both my localhost and my app container to reference the DB in the same way in a database URL?
Dockerfile for app server:
FROM centos:centos7
COPY apache-tomcat-9.0.20.tar.gz /tmp/
WORKDIR /tmp/
RUN yum -y update
RUN yum -y install java-11-openjdk-devel
RUN tar -xf apache-tomcat-9.0.20.tar.gz
RUN mv apache-tomcat-9.0.20 /opt/tomcat/
RUN export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/
RUN export PATH=$PATH:$JAVA_HOME/jre/bin
RUN export CATALINA_HOME=/opt/tomcat/
RUN export PATH=$PATH:$CATALINA_HOME/bin
WORKDIR /opt/tomcat/webapps
RUN mkdir testapp
Docker-Compose File:
version: '3.3'
services:
  db:
    image: "mcr.microsoft.com/mssql/server:2017-latest"
    restart: always
    volumes:
      - db_data:/var/lib/mssql
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Test123
    network_mode: bridge
    hostname: db
    ports:
      - "1433:1433"
  app:
    build: './testapp'
    volumes:
      - './system/build:/opt/tomcat/webapps/testapp/'
    ports:
      - "8080:8080"
      - "8009:8009"
    network_mode: bridge
    tty: true
    depends_on:
      - db
volumes:
  db_data:
Bring your services onto the same network and target the service by its service name. For that you need to define a Docker network like below. In the following example I can access the DB with http://mongo:27017.
mongo:
  image: mongo:latest
  ports:
    - "27017:27017"
  volumes:
    - ./data/db:/data/db
  networks:
    - my-net
spring:
  depends_on:
    - mongo
  image: docker-spring-http-alpine
  ports:
    - "8080:8080"
  networks:
    - my-net
networks:
  my-net:
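Applied to the compose file from the question, this would mean removing network_mode: bridge, attaching both services to a shared user-defined network, and pointing the app's database URL at the service name (db:1433) instead of localhost. A rough sketch, with only the relevant keys shown and app-net as a placeholder network name:
services:
  db:
    # ...existing db configuration, with network_mode: bridge removed...
    networks:
      - app-net
  app:
    # ...existing app configuration, with network_mode: bridge removed...
    networks:
      - app-net
networks:
  app-net: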

Docker for Mac | Docker Compose | Cannot access containers using localhost

I've been trying to figure out why I cannot access containers using "localhost:3000" from the host. I've tried installing Docker via Homebrew, as well as the Docker for Mac installer. I believe I have the docker-compose file configured correctly.
Here is the output from docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------------------------------------------
ecm-datacontroller_db_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
ecm-datacontroller_kafka_1 supervisord -n Up 0.0.0.0:2181->2181/tcp, 0.0.0.0:9092->9092/tcp
ecm-datacontroller_redis_1 docker-entrypoint.sh redis ... Up 0.0.0.0:6379->6379/tcp
ecm-datacontroller_web_1 npm start Up 0.0.0.0:3000->3000/tcp
Here is my docker-compose.yml
version: '2'
services:
  web:
    ports:
      - "3000:3000"
    build: .
    command: npm start
    env_file: .env
    depends_on:
      - db
      - redis
      - kafka
    volumes:
      - .:/app/user
  db:
    image: postgres:latest
    ports:
      - "5432:5432"
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
  kafka:
    image: heroku/kafka
    ports:
      - "2181:2181"
      - "9092:9092"
I cannot access any of the ports exposed by docker-compose. With curl localhost:3000 I get the following result:
curl: (52) Empty reply from server
I should be getting {"hello":"world"}.
Dockerfile:
FROM heroku/heroku:16-build
# Which version of node?
ENV NODE_ENGINE 10.15.0
# Locate our binaries
ENV PATH /app/heroku/node/bin/:/app/user/node_modules/.bin:$PATH
# Create some needed directories
RUN mkdir -p /app/heroku/node /app/.profile.d
WORKDIR /app/user
# Install node
RUN curl -s https://s3pository.heroku.com/node/v$NODE_ENGINE/node-v$NODE_ENGINE-linux-x64.tar.gz | tar --strip-components=1 -xz -C /app/heroku/node
# Export the node path in .profile.d
RUN echo "export PATH=\"/app/heroku/node/bin:/app/user/node_modules/.bin:\$PATH\"" > /app/.profile.d/nodejs.sh
ADD package.json /app/user/
RUN /app/heroku/node/bin/npm install
ADD . /app/user/
EXPOSE 3000
Anyone have any ideas?
Ultimately, I ended up having a service that was listening on 127.0.0.1 instead of 0.0.0.0. Updating this resolved the connectivity issue I was having.
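A quick way to confirm this kind of issue (assuming curl is available in the image): the published port only answers from the host when the server inside binds to 0.0.0.0, while requests made from inside the container reach the app either way.
curl -v localhost:3000                                              # from the host, through the published port
docker exec -it ecm-datacontroller_web_1 curl -s localhost:3000     # from inside the container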

Docker app not able to communicate with database container

I have a MySQL DB running in a container and a web app running in another container. My use case is: once the DB container is up and running, the app container has to insert some initial data into the DB using Liquibase and then start the app. My docker-compose YAML looks like below.
db:
  build: kdb
  user: "1000:50"
  volumes:
    - /data/mysql:/var/lib/mysql
  container_name: kdb
  environment:
    - MYSQL_ALLOW_EMPTY_PASSWORD=yes
  image: kdb
  ports:
    - "3307:3306"
k-api:
  container_name: k_api
  hostname: k-api
  domainname: i.com
  image: k_api
  volumes:
    - /Users/agu/work:/data
  build:
    context: ./api
    args:
      KB_API_WAR: k-web-1.2.9.war
      KB_API_URL: https://artifactory.b-aws.i.com
  ports:
    - "8097:8080"
  depends_on:
    - db
  #command: [/usr/local/bin/wait-for-it.sh, "db:3306","-s","-t","0","--","/bin/sh" "wait_for_liquibase.sh"]
  links:
    - "db:kdb_docker_host"
And in my Dockerfile for the API I have an entry point for a shell script called "wait_for_liquibase.sh":
CMD ["wait_for_liquibase.sh"]
wait_for_liquibase.sh:
#!/bin/sh
set -e
#RUN liquibase
mvn clean install -X -PdropAll -Dcontexts=test -Dliquibase.user=XX -Dliquibase.pass=XX -Dliquibase.host=db -Dliquibase.port=3306 -Dliquibase.schema=knowledgebasedb -DpromptOnNonLocalDatabase=false -Dcontexts=test -f k/k-liquibase
/usr/local/tomcat/bin/catalina.sh run
The issue is that once the DB container is up and running, the app container is not able to reach the DB server to perform the Liquibase setup for the database. I see the below error.
Communication failure: Unknown database host -Dliquibase.host=db.
I am assuming you are using version 1 of the compose file format.
You are giving an alias to your "db" service via links, so you will need to use that alias, kdb_docker_host.
Also, those ports are mapped to the host machine; to expose ports between containers you will need to use the expose property.
expose:
  - 3306
I used this in the Dockerfile:
RUN apt-get update
RUN apt-get install netcat -y
ADD wait-for-base.sh /wait-for-base.sh
CMD ["/wait-for-base.sh"]
and in wait-for-base.sh :
#!/bin/bash
while ! nc -z db 3306; do sleep 3; done
[my command to run]
In your case /usr/local/tomcat/bin/catalina.sh run
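Putting it together for this case, wait-for-base.sh might look like this (host name db and port 3306 as used above; adjust if you rely on the kdb_docker_host alias instead):
#!/bin/bash
# wait until the database accepts TCP connections, then start Tomcat
while ! nc -z db 3306; do sleep 3; done
/usr/local/tomcat/bin/catalina.sh run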
