I'm dockerizing my existing Django application.
I have an entrypoint.sh script which is run as the entrypoint by the Dockerfile:
ENTRYPOINT ["/app/scripts/docker/entrypoint.sh"]
It contains a script that runs the database migrations when the environment variable DJANGO_MANAGE_MIGRATE is set to on:
#!/bin/sh
#set -e

# Run the command and exit with a custom message when the command fails
safeRunCommand() {
  cmnd="$*"
  echo cmnd="$cmnd"
  eval "$cmnd"
  ret_code=$?
  if [ "$ret_code" -ne 0 ]; then
    printf "Error: [code: %d] when executing command: '%s'\n" "$ret_code" "$cmnd"
    exit "$ret_code"
  else
    echo "Command ran successfully: $cmnd"
  fi
}
runDjangoMigrate() {
  echo "Migrating database"
  cmnd="python manage.py migrate --noinput"
  safeRunCommand "$cmnd"
  echo "Done: Migrating database"
}
# Run Django migrate command.
# The command is run only when environment variable `DJANGO_MANAGE_MIGRATE` is set to `on`.
if [ "x$DJANGO_MANAGE_MIGRATE" = 'xon' ] && [ ! "x$DEPLOYMENT_MODE" = 'xproduction' ]; then
  runDjangoMigrate
fi
# Accept other commands
exec "$@"
Now, in the docker-compose file, I have the following services:
version: '3.7'

services:
  database:
    image: mysql:5.7
    container_name: 'qcg7_db_mysql'
    restart: always
  web:
    build: .
    command: ["./wait_for_it.sh", "database:3306", "--", "./docker_start.sh"]
    volumes:
      - ./src:/app
    depends_on:
      - database
    environment:
      DJANGO_MANAGE_MIGRATE: 'on'
But when I build and start everything using
docker-compose up --build
the migration command from the entrypoint script fails with the error
(2002, "Can't connect to MySQL server on 'database' (115)")
This is because the database server has not started yet. How can I make the web service wait until the database service is fully started and ready to accept connections?
Unfortunately, there is no native way in Docker to wait for the database service to be ready before the Django web app attempts to connect. depends_on only ensures that the web container is started after the database container is launched, not that the database inside it is ready to accept connections.
Because of this limitation, you will need to solve this problem in how your container runs. The easiest workaround is to modify entrypoint.sh to sleep for 10-30 seconds so that your database has time to initialize before executing any further commands; more robust is to block until the database actually responds. The official MySQL entrypoint.sh shows an example of how to block until the database is ready.
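For illustration, here is a minimal sketch of such a blocking wait that could run at the top of entrypoint.sh. It assumes the mysqladmin client is installed in the web image; the host name database comes from your compose file:
# Sketch: block until MySQL answers, give up after ~60 seconds.
# Assumes `mysqladmin` is available in this image.
i=0
until mysqladmin ping -h database --silent; do
  i=$((i + 1))
  if [ "$i" -ge 30 ]; then
    echo "Database never became ready, giving up" >&2
    exit 1
  fi
  echo "Waiting for database ($i/30)..."
  sleep 2
done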
Related
I deployed a MERN app to a DigitalOcean droplet with Docker. If I run my docker-compose.yml file locally on my PC, it works fine. I have 2 containers: 1 backend, 1 frontend. If I try to compose up on the droplet, the frontend seems OK but it can't communicate with the backend.
I use http-proxy-middleware, my setupProxy.js file:
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(
    '/api',
    createProxyMiddleware({
      target: 'http://0.0.0.0:5001',
      changeOrigin: true,
    })
  );
};
I tried target: 'http://main-be:5001' too, as main-be is the name of my backend container, but I get the same error; only the Request URL becomes http://main-be:5001/api/auth/login in Chrome DevTools' Network tab.
My docker-compose.yml file:
version: '3.4'

networks:
  main:

services:
  main-be:
    image: main-be:latest
    container_name: main-be
    ports:
      - '5001:5001'
    networks:
      main:
    volumes:
      - ./backend/config.env:/app/config.env
    command: 'npm run prod'
  main-fe:
    image: main-fe:latest
    container_name: main-fe
    networks:
      main:
    volumes:
      - ./frontend/.env:/app/.env
    ports:
      - '3000:3000'
    command: 'npm start'
My Dockerfile in the frontend folder:
FROM node:12.2.0-alpine
COPY . .
RUN npm ci
CMD ["npm", "start"]
My Dockerfile in the backend folder:
FROM node:12-alpine3.14
WORKDIR /app
COPY . .
RUN npm ci --production
CMD ["npm", "run", "prod"]
backend/package.json file:
"scripts": {
"start": "nodemon --watch --exec node --experimental-modules server.js",
"dev": "nodemon server.js",
"prod": "node server.js"
},
frontend/.env file:
SKIP_PREFLIGHT_CHECK=true
HOST=0.0.0.0
backend/config.env file:
DE_ENV=development
PORT=5001
My deploy.sh script to build images, copy to droplet...
#build and save backend and frontend images
docker build -t main-be ./backend & docker build -t main-fe ./frontend
docker save -o ./main-be.tar main-be & docker save -o ./main-fe.tar main-fe
#deploy services
ssh root@46.111.119.161 "pwd && mkdir -p ~/apps/first && cd ~/apps/first && ls -al && echo 'im in' && rm main-be.tar && rm main-fe.tar &> /dev/null"
#::scp file
#scp ./frontend/.env root@46.111.119.161:~/apps/first/frontend
#upload main-be.tar and main-fe.tar to VM via ssh
scp ./main-be.tar ./main-fe.tar root@46.111.119.161:~/apps/thesis/
scp ./docker-compose.yml root@46.111.119.161:~/apps/first/
ssh root@46.111.119.161 "cd ~/apps/first && ls -1 *.tar | xargs --no-run-if-empty -L 1 docker load -i"
ssh root@46.111.119.161 "cd ~/apps/first && sudo docker-compose up"
frontend/src/utils/axios.js:
import axios from 'axios';
export const baseURL = 'http://localhost:5001';
export default axios.create({ baseURL });
frontend/src/utils/constants.js:
const API_BASE_ORIGIN = `http://localhost:5001`;
export { API_BASE_ORIGIN };
I have been trying for days but can't see where the problem is, so any help is highly appreciated.
I am no expert on MERN (we mainly run Angular & .NET), but I have to warn you about one thing. We had a similar issue when setting this up in the beginning as well: it worked locally in containers but not on our deployment servers, because we forgot the basic thing about web applications.
The application runs in your browser, whereas when you deploy the stack somewhere else, the rest of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because from the browser's point of view there is nothing there. A container that holds a web application only serves the (client) files to your browser; the JS and the logic are then executed inside your browser, not inside the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service and have your FE call that proxy (set up a domain and routing), for instance Nginx, Traefik, ...; that proxy can then reference your backend by its service name, since it lives in the same environment as the API.
Expose the HTTP port directly from the container, so that your FE can call remoteServerIP:exposedPort and connect directly to the container's interface. (NOTE: I do not recommend this way for real use, only for testing direct connectivity without any proxy. A quick name-resolution check is sketched below.)
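To see the name-resolution part concretely, here is an illustrative check. The /api/auth/login path is taken from your post; it assumes curl on the host and busybox wget in the alpine-based frontend image:
# From the droplet host or your PC the compose service name does not resolve,
# because only containers on the same Docker network can see it:
curl http://main-be:5001/api/auth/login
# -> curl: (6) Could not resolve host: main-be
# From inside a container on that network, the name resolves fine:
docker exec main-fe wget -qO- http://main-be:5001/api/auth/login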
Init containers are a great feature in Kubernetes, and I wonder whether docker-compose supports them. They allow me to run some command before the main application is launched.
I came across this issue, https://github.com/docker/compose-cli/issues/1499, which mentions supporting init containers, but I can't find a related doc in their reference.
This was a discovery for me as well, but yes, it is now possible to use init containers with docker-compose since version 1.29, as can be seen in the issue you linked in your question.
Meanwhile, as I write these lines, it seems that this feature has not yet found its way into the documentation.
You can define a dependency on another container with a condition that is basically "when that other container has successfully finished its job". This leaves room to define containers running any kind of script that exit when they are done, before another dependent container is launched.
To illustrate, I crafted an example with a pretty common scenario: spin up a db container, make sure the db is up and initialize its data prior to launching the application container.
Note: initializing the db (at least as far as the official mysql image is concerned) does not require an init container so this example is more an illustration than a rock solid typical workflow.
The complete example is available in a public github repo so I will only show the key points in this answer.
Let's start with the compose file
---
x-common-env: &cenv
  MYSQL_ROOT_PASSWORD: totopipobingo

services:
  db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    environment:
      <<: *cenv
  init-db:
    image: mysql:8.0
    command: /initproject.sh
    environment:
      <<: *cenv
    volumes:
      - ./initproject.sh:/initproject.sh
    depends_on:
      db:
        condition: service_started
  my_app:
    build:
      context: ./php
    environment:
      <<: *cenv
    volumes:
      - ./index.php:/var/www/html/index.php
    ports:
      - 9999:80
    depends_on:
      init-db:
        condition: service_completed_successfully
You can see I define 3 services:
The database, which is the first to start.
The init container, which starts only once db is started. This one only runs a script (see below) that exits once everything is initialized.
The application container, which only starts once the init container has successfully done its job.
The initproject.sh script run by the init-db container is very basic for this demo: it simply retries connecting to the db every 2 seconds until it succeeds or reaches a limit of 50 tries, then creates a db/table and inserts some data:
#! /usr/bin/env bash
# Test we can access the db container allowing for start
for i in {1..50}; do mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "show databases" && s=0 && break || s=$? && sleep 2; done
if [ "$s" -ne 0 ]; then exit "$s"; fi
# Init some stuff in db before leaving the floor to the application
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create database my_app"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "create table my_app.test (id int unsigned not null auto_increment primary key, myval varchar(255) not null)"
mysql -u root -p${MYSQL_ROOT_PASSWORD} -h db -e "insert into my_app.test (myval) values ('toto'), ('pipo'), ('bingo')"
The Dockerfile for the app container is trivial (adding a mysqli driver for PHP) and can be found in the example repo, as can the PHP script used to check that the init was successful by calling http://localhost:9999 in your browser.
The interesting part is to observe what's going on when launching the services with docker-compose up -d.
The only limit to what can be done with such a feature is probably your imagination ;) Thanks for making me discover this.
I don't suppose anyone knows if it's possible to call the docker run or docker compose up commands from a web app?
I have the following scenario: a React app that uses OpenLayers for its maps. When the user loses their internet connection, it falls back to making requests to a map server running locally on Docker. The issue is that the user needs to manually start the server via the command line. To make things easier for the user, I added the following bash script and docker-compose file to boot up the server with a single command, but I was wondering if I could incorporate that functionality into the web app and have the user boot the map server at the click of a button.
Just for reference's sake, these are my bash and compose files.
#!/bin/sh
# docker info prints daemon errors to stderr, so merge it into stdout for grep
dockerDown=$(docker info 2>&1 | grep -qi "ERROR" && echo "stopped")
if [ -n "$dockerDown" ]; then
  echo "\n ********* Please start docker before running this script ********* \n"
  exit 1
fi
skipInstall="no"
read -p "Have you imported the maps already and just want to run the app (y/n)? " choice
case "$choice" in
  y|Y ) skipInstall="yes";;
  n|N ) skipInstall="no";;
  * ) skipInstall="no";;
esac
pbfUrl='https://download.geofabrik.de/asia/malaysia-singapore-brunei-latest.osm.pbf'
#polyUrl='https://download.geofabrik.de/asia/malaysia-singapore-brunei.poly'
#-e DOWNLOAD_POLY=$polyUrl \
docker volume create openstreetmap-data
docker volume create openstreetmap-rendered-tiles
if [ "$skipInstall" = "no" ]; then
  echo "\n ***** IF THIS IS THE FIRST TIME, YOU MIGHT WANT TO GO GET A CUP OF COFFEE WHILE YOU WAIT ***** \n"
  docker run \
    -e DOWNLOAD_PBF=$pbfUrl \
    -v openstreetmap-data:/var/lib/postgresql/12/main \
    -v openstreetmap-rendered-tiles:/var/lib/mod_tile \
    overv/openstreetmap-tile-server \
    import
  echo "Finished Postgres container!"
fi
echo "\n *** BOOTING UP SERVER CONTAINER *** \n"
docker compose up
My docker compose file
version: '3'

services:
  map:
    image: overv/openstreetmap-tile-server
    volumes:
      - openstreetmap-data:/var/lib/postgresql/12/main
      - openstreetmap-rendered-tiles:/var/lib/mod_tile
    environment:
      - THREADS=24
      - OSM2PGSQL_EXTRA_ARGS=-C 4096
      - AUTOVACUUM=off
    ports:
      - "8080:80"
    command: "run"

volumes:
  openstreetmap-data:
    external: true
  openstreetmap-rendered-tiles:
    external: true
There is the Docker Engine API, through which you are able to start containers. See the Docker documentation:
https://docs.docker.com/engine/api/
and, for starting containers via the API:
https://docs.docker.com/engine/api/v1.41/#operation/ContainerStart
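As a minimal sketch, a process running on the user's machine (not the browser itself) could call this API over the local Unix socket. The container name is an assumption here (compose usually generates one like <project>_map_1), and exposing the Docker socket to a web-facing process has serious security implications:
# Start the "map" container via the Docker Engine API on the local socket.
# A 204 response means it was started; 304 means it was already running.
curl --unix-socket /var/run/docker.sock \
  -X POST http://localhost/v1.41/containers/map/start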
I have a couple of containers running in sequence.
I am using depends_on to make sure the next one only starts after the current one is running.
I realized one of the containers has a cron job that needs to finish first, so that the next container has the proper data to import.
In this case, I cannot just rely on the depends_on parameter.
How do I delay the next container's start? Say, wait for 5 minutes.
Sample docker-compose:
test1:
  networks:
    - test
  image: test1
  ports:
    - "8115:8115"
  container_name: test1
test2:
  networks:
    - test
  image: test2
  depends_on:
    - test1
  ports:
    - "8160:8160"
You can use an entrypoint script, something like this (netcat needs to be installed in the image):
#!/bin/sh
until nc -w 1 -z test1 8115; do
  >&2 echo "Service is unavailable - sleeping"
  sleep 1
done
sleep 2
>&2 echo "Service is up - executing command"
# hand off to the real command passed as arguments
exec "$@"
Then execute it via the command instruction of the service (in the docker-compose file) or via the CMD directive in the Dockerfile.
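For instance, a minimal compose-side sketch; the script path /wait-for-test1.sh and the app command node server.js are assumptions, and the script must be baked into the test2 image:
test2:
  networks:
    - test
  image: test2
  depends_on:
    - test1
  ports:
    - "8160:8160"
  # run the wait script first; it execs the real app once test1 answers
  command: ["/wait-for-test1.sh", "node", "server.js"]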
I added this in the Dockerfile (since it was just for a quick test):
CMD sleep 60 && node server.js
A 60-second sleep did the trick, since the Node.js part was starting before a database dump init script could finish executing.
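If the fixed delay ever turns out to be flaky, the same one-liner can poll the database port instead. This is only a sketch: the host name database, port 3306, and the availability of busybox nc in the image are assumptions:
# Poll the DB port instead of sleeping a fixed 60 seconds
CMD sh -c 'until nc -z database 3306; do sleep 1; done && node server.js'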
I am trying to use Consul as a discovery service, with two other Spring Boot apps registering themselves with Consul, all running in Docker.
The following are my configs:
1. The app's application.yml:
server:
  port: 3333
spring:
  application:
    name: adder
  cloud:
    consul:
      host: consul
      port: 8500
      discovery:
        preferIpAddress: true
        healthCheckPath: /health
        healthCheckInterval: 15s
        instanceId: ${spring.application.name}:${spring.application.instance_id:${server.port}}
2. docker-compose.yml:
consul1:
  image: "progrium/consul:latest"
  container_name: "consul1"
  hostname: "consul1"
  command: "-server -bootstrap -ui-dir /ui"
adder:
  image: wsy/adder
  ports:
    - "3333:3333"
  links:
    - consul1
  environment:
    WAIT_FOR_HOSTS: consul1:8500
There is another similar question, Cannot link Consul and Spring Boot app in Docker. The answer there suggests the app should wait for Consul to fully work by using depends_on, which I tried, but it didn't work.
The error message is the following:
adder_1 | com.ecwid.consul.transport.TransportException: java.net.ConnectException: Connection refused
adder_1 | at com.ecwid.consul.transport.AbstractHttpTransport.executeRequest(AbstractHttpTransport.java:80) ~[consul-api-1.1.8.jar!/:na]
adder_1 | at com.ecwid.consul.transport.AbstractHttpTransport.makeGetRequest(AbstractHttpTransport.java:39) ~[consul-api-1.1.8.jar!/:na]
Besides the Spring Boot application.yml and docker-compose.yml, the following is the app's Dockerfile:
FROM java:8
VOLUME /tmp
ADD adder-0.0.1-SNAPSHOT.jar app.jar
RUN bash -c 'touch /app.jar'
ADD start.sh start.sh
RUN bash -c 'chmod +x /start.sh'
EXPOSE 3333
ENTRYPOINT ["/start.sh", " java -Djava.security.egd=file:/dev/./urandom -jar /app.jar"]
and the start.sh:
#!/bin/bash
set -e

wait_single_host() {
  local host=$1
  shift
  local port=$1
  shift
  echo "waiting for TCP connection to $host:$port..."
  while ! nc ${host} ${port} > /dev/null 2>&1 < /dev/null
  do
    echo "TCP connection [$host] not ready, will try again..."
    sleep 1
  done
  echo "TCP connection ready. Executing command [$host] now..."
}

wait_all_hosts() {
  if [ ! -z "$WAIT_FOR_HOSTS" ]; then
    local separator=':'
    for _HOST in $WAIT_FOR_HOSTS ; do
      IFS="${separator}" read -ra _HOST_PARTS <<< "$_HOST"
      wait_single_host "${_HOST_PARTS[0]}" "${_HOST_PARTS[1]}"
    done
  else
    echo "IMPORTANT: waiting for nothing because no WAIT_FOR_HOSTS env var is defined!"
  fi
}

wait_all_hosts
# $1 is intentionally a single string that word-splits into the real command
exec $1
I can infer that your Consul configuration is located in your application.yml instead of bootstrap.yml; that's the problem.
According to this answer, bootstrap.yml is loaded before application.yml, and the Consul client has to check its configuration before the application itself starts, so it looks at bootstrap.yml.
Example of a working bootstrap.yml:
spring:
  cloud:
    consul:
      host: consul
      port: 8500
      discovery:
        prefer-ip-address: true
Run the Consul server, and do not forget the --name option so that it matches your configuration:
docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
The Consul server is now running. Run your application image (built previously with your artifact) and link it to the Consul container:
docker run -d --name=my-consul-client-app --link consul:consul acme/spring-app
Your problem is that depends_on only controls the startup order of your services. You have to wait until the Consul servers are up and running before starting your Spring app. You can do this with this script:
#!/bin/bash
set -e

default_host="database"
default_port="3306"
host="${2:-$default_host}"
port="${3:-$default_port}"

echo "waiting for TCP connection to $host:$port..."
while ! (echo >/dev/tcp/$host/$port) &>/dev/null
do
  sleep 1
done
echo "TCP connection ready. Executing command [$1] now..."
exec $1
Usage in your Dockerfile:
COPY wait.sh /wait.sh
RUN chmod +x /wait.sh
CMD ["/wait.sh", "java -jar yourApp.jar", "consulURL", "consulPort"]
I just want to clarify that, in the end, I still don't have a solution and can't understand the situation here. I tried the suggestion from Ohmen: in the app container I am able to ping consul1, but the app still fails to connect to Consul.
If I only start Consul with the following command:
docker-compose -f docker-compose-discovery.yml up discovery
then I can run the app directly (through main) and it is able to connect with spring.cloud.consul.host: discovery.
But if I try to run the app in a Docker container, like the following:
docker run --name adder --link discovery-consul:discovery wsy/adder
it fails again with connection refused.
I am very new to Docker and docker-compose; I thought this would be a good example to start with, but it seems it's not that easy for me.