I just started to dockerize my app. I've built my Dockerfile and docker-compose.yml, and everything seems to work fine except for one thing: sometimes my Flask app starts too quickly and throws a connection refused error, because the MySQL database is not fully up yet. I am using a healthcheck to test whether the db is up, but this does not seem reliable (I even verify that SHOW DATABASES works, yet MySQL apparently initializes more things after the healthcheck passes; I'm not sure what the healthcheck is for, then). In my output I can see that the db container does start first, but it is still initializing when the Flask app starts. Ideally, when I run docker-compose up I want to see this line first,
db_1_eae741771281 | 2018-11-10T00:50:21.473098Z 0 [Note] mysqld: ready for connections.
and only then have my Flask app entrypoint start. Currently, it doesn't do this.
Is there a more reliable way to ensure MySQL is fully up before my start.sh runs?
Dockerfile:
FROM python:3.5-alpine
RUN apk update && apk upgrade
RUN apk add --no-cache curl python build-base openldap-dev python2-dev python3-dev pkgconfig python-dev libffi-dev musl-dev make gcc
RUN pip install --upgrade pip
RUN adduser -D user
WORKDIR /home/user
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
COPY app app
COPY start.sh ./
RUN chmod +x start.sh
RUN chown -R user:user ./
USER user
EXPOSE 5000
ENTRYPOINT ["./start.sh"]
docker-compose.yml:
version: "2.1"
services:
db:
image: mysql:5.7
ports:
- "32000:3306"
environment:
- MYSQL_DATABASE=mydb
- MYSQL_USER=user
- MYSQL_PASSWORD=user123
- MYSQL_ROOT_PASSWORD=user123
volumes:
- ./db:/docker-entrypoint-initdb.d/:ro
healthcheck:
test: "mysql --user=user --password=user123 --execute \"SHOW DATABASES;\""
timeout: 20s
retries: 20
app:
build: ./
ports:
- "5000:5000"
depends_on:
db:
condition: service_healthy
start.sh:
#!/bin/sh
source venv/bin/activate
# Start Gunicorn processes
echo Starting Gunicorn.
exec gunicorn -b 0.0.0.0:5000 wsgi --chdir my_app --timeout 9999 --workers 3 --access-logfile - --error-logfile - --capture-output --log-level debug
OK, I also had problems with healthcheck...
Maybe not the most optimal, but the most reliable solution is to use a MySQL client (mysqladmin) to ping your MySQL server before starting your application.
1 - Create a wait.sh script (db is your MySQL service name here):
#!/bin/sh
# Wait until MySQL is ready
while ! mysqladmin ping -h"db" -P"3306" --silent; do
echo "Waiting for MySQL to be up..."
sleep 1
done
2 - Install a MySQL client in your app Dockerfile:
# install mysql client, will be used to ping mysql
apt-get -y install mysql-client
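Note that the Dockerfile in the question is based on python:3.5-alpine, which uses apk rather than apt-get; there the equivalent would presumably be (my assumption, not part of the original answer):
# Alpine equivalent; the package pulls in the MariaDB-compatible client
RUN apk add --no-cache mysql-client
Likewise, since Alpine images do not ship bash, the bash -c in step 3 below would need to be sh -c.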
3 - In your docker-compose.yml file, add the scripts to your container (I used volumes, but you can keep using COPY) and run wait.sh before start.sh:
app:
build: ./
ports:
- "5000:5000"
depends_on:
db:
command: bash -c "/usr/local/bin/wait.sh && /usr/local/bin/start.sh"
volumes:
- ./start.sh:/usr/local/bin/start.sh
- ./wait.sh:/usr/local/bin/wait.sh
This should work.
If you really don't want to download a MySQL client, try this (again, db is your MySQL service name here). It has worked in most of my projects but not in all of them (it may depend on the distribution; /dev/tcp redirection is a bash feature, so it only works where /bin/sh is actually bash):
#!/bin/sh
# Wait until MySQL is ready
while ! exec 6<>/dev/tcp/db/3306; do
echo "Trying to connect to MySQL at 3306..."
sleep 5
done
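On images where /bin/sh is not bash (Alpine's BusyBox ash, Debian's dash), a variant based on nc can be used instead; a minimal sketch, assuming an nc build that supports -z (the BusyBox one on Alpine typically does) and again that the service is named db:
#!/bin/sh
# Wait until something is listening on db:3306
while ! nc -z db 3306; do
  echo "Waiting for MySQL at db:3306..."
  sleep 1
done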
PS: avoid naming your services "app" or "db"; you may have problems later if you have other containers with those same service names (even in different networks).
While using a health check is easier, it entirely depends on how reliable the check is.
Another approach would be to rely on projects like wait-for-it or wait-for in your app container.
Since you are getting a connection refused error, these scripts return only once the connection is possible, so your app would start only after that.
Also, in case that doesn't work either, you could have a separate script (Python in your case) that polls until the DB is ready, and call it in your start.sh before starting the Flask app, as sketched below.
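A minimal sketch of such a check, using only the standard library and the db service name and port 3306 from the compose file (the file name and retry limit are made up):
# wait_for_db.py: block until a TCP connection to the database succeeds
import socket
import sys
import time

HOST, PORT = "db", 3306

for attempt in range(60):  # give up after roughly a minute
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            print("Database is accepting connections.")
            sys.exit(0)
    except OSError:
        print("Waiting for {}:{}... ({})".format(HOST, PORT, attempt + 1))
        time.sleep(1)

sys.exit("Database never became reachable.")
start.sh could then run venv/bin/python wait_for_db.py before exec-ing gunicorn.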
This is a common problem with multi-container setups: it is difficult to control the order and speed at which different containers start. A container orchestration solution like Kubernetes might be able to help you in such cases.
Kubernetes has the concept of init containers, which run to completion before your dependent container can start. You can find a sample init container here:
https://www.handsonarchitect.com/2018/08/understand-kubernetes-object-init.html
This YouTube video might be helpful for you as well:
https://www.youtube.com/watch?v=n2FPsunhuFc
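For reference, a minimal sketch of what such an init container could look like for this scenario (the image names, the db service name, and the port are assumptions, not taken from the question):
apiVersion: v1
kind: Pod
metadata:
  name: flask-app
spec:
  initContainers:
    - name: wait-for-mysql
      image: busybox:1.36
      # Block until something answers on db:3306; only then is the app container started
      command: ['sh', '-c', 'until nc -z db 3306; do echo waiting for db; sleep 2; done']
  containers:
    - name: app
      image: my-flask-app   # hypothetical application image
      ports:
        - containerPort: 5000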
Related
I have just started using Docker, as it has been recommended to me as something that makes development easy, but so far it has been nothing but pain. I have installed Docker Engine (v20.10.12) and Docker Compose (v2.2.3) as per the documentation given by Docker for Ubuntu. Both work as intended.
Whenever I spin up a new container with docker compose, no matter the source, I have write-permission issues with files generated by the container (for example, a Laravel application where I have used php artisan to create a controller file). I have so far pinpointed the issue as follows:
By default Docker runs as root within the container. It "bridges" the container's root user to the root user on the local machine and creates files on the Ubuntu filesystem as root:root (my workspace is in ~/workspace/laravel). When I then open the files in an IDE (VS Code in this instance) I get the error:
Failed to save to '<file_name>': insufficient permissions. Select 'Retry as Sudo' to retry as superuser
If I try to pass my own local user into the container and tell it to use that specific user ID and group ID, it works fine when I'm using the first user created on the machine (1000:1000), since that matches the container's default user if we look at the bitnami/laravel Docker image, for example.
All of this can be fixed by running chown -R yadayada . on the workspace directory every time I use php artisan to create a file, but I do not think this is sustainable or smart in any way, shape or form.
How can I tell my Docker container, on startup, to check whether a user with my UID and GID exists and, if not, to create a user with that ID and assign it as a system user?
My docker-compose.yml for this example
version: '3.8'
services:
api_php-database:
image: postgres
container_name: api_php-database
restart: unless-stopped
environment:
POSTGRES_PASSWORD: secret
POSTGRES_DB: laravel_docker
volumes:
- ./postgres-data:/var/lib/postgresql/data
ports:
- '5432:5432'
api_php-apache:
container_name: api_php-apache
build:
context: ./php
ports:
- '8080:80'
volumes:
- ./src:/var/www/laravel_docker
- ./apache/default.conf:/etc/apache2/sites-enabled/000-default.conf
depends_on:
- api_php-database
My Dockerfile for this example
FROM php:8.0-apache
RUN apt update && apt install -y g++ libicu-dev libpq-dev libzip-dev zip zlib1g-dev && docker-php-ext-install intl opcache pdo pdo_pgsql pgsql
WORKDIR /var/www/laravel_docker
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
In general, this is not possible, but there are workarounds (I do not recommend them for production).
The superuser UID is always 0; this is hard-coded in the kernel.
It is not possible to automatically change the ownership of non-root files.
In this case, when developing, you can use these methods:
If superuser rights are not required:
You can create users dynamically; the docker-compose.yml would then look like this:
version: "3.0"
services:
something:
image: example-image
volumes:
- /user/path1:/container/path1
- /user/path2:/container/path2
# The double $ is needed to indicate that the variable is in the container
command: ["bash", "-c", "chown -R $$HOST_UID:$$HOST_GID /container/path1 /container/path2; useradd -g $$HOST_GID -u $$HOST_UID user; su -s /bin/bash user"]
environment:
HOST_GID: 100
HOST_UID: 1000
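If you prefer those IDs to follow whoever runs docker-compose instead of hard-coding 1000/100, compose variable substitution should also work here; a small sketch (the substitution is my addition, not part of the answer above):
environment:
  HOST_UID: ${HOST_UID:-1000}   # falls back to 1000 when the variable is unset on the host
  HOST_GID: ${HOST_GID:-100}
You would then launch it with HOST_UID=$(id -u) HOST_GID=$(id -g) docker-compose up.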
Otherwise, if you run commands in a container as root in Bash:
Bash will run the command stored in the PROMPT_COMMAND variable after each command is executed (just before showing the next prompt).
This can be used in development by changing docker-compose.yaml:
version: "3.0"
services:
something:
image: example-image
volumes:
- /user/path1:/container/path1
- /user/path2:/container/path2
command: ["bash"]
environment:
HOST_UID: 1000
HOST_GID: 100
# The double $ is needed to indicate that the variable is in the container
PROMPT_COMMAND: "chown $$HOST_UID:$$HOST_GID -R /container/path1 /container/path2"
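A more conventional variant of the same idea is to do the user creation in an entrypoint script rather than in command: or PROMPT_COMMAND; a minimal sketch, assuming a Debian-based image with useradd/groupadd available and with gosu installed (gosu is not there by default, so that is an extra assumption):
#!/bin/sh
# entrypoint.sh (hypothetical): create a user matching HOST_UID/HOST_GID if missing,
# then drop privileges and run whatever command was passed to the container.
set -e

if ! getent passwd "$HOST_UID" >/dev/null 2>&1; then
    groupadd -g "$HOST_GID" hostgroup 2>/dev/null || true
    useradd -u "$HOST_UID" -g "$HOST_GID" -m hostuser
fi

exec gosu "$HOST_UID:$HOST_GID" "$@"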
I want to start a Python container that depends on a database container, but I would like the Python container to start only after the SQL Server container has fully started. I built this docker-compose.yml file ...
version: "3.2"
services:
sql-server-db:
restart: always
build: ./
container_name: sql-server-db
image: microsoft/mssql-server-linux:2017-latest
env_file: /Users/davea/my_project/api/tests/.test_env
ports:
- 3900:1433
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=password
- DB_HOST=0.0.0.0
- DB_NAME=my_db
- DB_USER=SA
- DB_PASS=password
volumes:
- ../../CloudDB/CloudDB:/sqlscripts
python:
restart: always
build: ../
environment:
DEBUG: 'true'
volumes:
- /Users/davea/my_project/api:/my-app
depends_on:
- sql-server-db
Below is my Dockerfile for the sql server container ...
FROM microsoft/mssql-server-linux:latest
RUN apt-get update
RUN apt-get install unzip -y
RUN apt-get install tzdata
ENV TZ=America/New_York
RUN ln -fs /usr/share/zoneinfo/$TZ /etc/localtime && dpkg-reconfigure -f noninteractive tzdata
RUN date
RUN echo "========="
# Install sqlpackage, needed for deplying dacpac file
RUN wget -progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=873926 \
&& unzip -qq sqlpackage.zip -d /opt/sqlpackage \
&& chmod +x /opt/sqlpackage/sqlpackage
# Create work directory
RUN mkdir -p /usr/work
WORKDIR /usr/work
# Copy all SQL scripts into working directory
COPY . /usr/work/
# Grant permissions for the import-data script to be executable
RUN chmod +x /usr/work/import-data.sh
RUN pwd
CMD /bin/bash ./entrypoint.sh
but I'm noticing something strange. The SQL server container does not seem to be fully executing all the commands in the entrypoint.sh file. I see this output ...
...
Removing intermediate container 72550d896ede
---> ae6b93ca884e
Step 14/15 : RUN pwd
---> Running in f229ef6fec4c
/usr/work
Removing intermediate container f229ef6fec4c
---> 7758242bbd95
Step 15/15 : CMD /bin/bash ./entrypoint.sh
---> Running in 76fa5c8308e3
Removing intermediate container 76fa5c8308e3
---> 567633ad757f
Successfully built 567633ad757f
Successfully tagged microsoft/mssql-server-linux:2017-latest
WARNING: Image for service sql-server-db was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building python
Step 1/17 : FROM python:3.8-slim
Below are the contents of the entrypoint.sh file. Is there another way I can structure things so that the commands are executed? I'm noticing that the Python container doesn't seem to recognize the SQL Server container either.
#!/bin/bash -l
/usr/work/import-data.sh & /opt/mssql/bin/sqlservr
Is there something else I need to do to get the shell script from my SQL Server container to fully execute?
Your usage of the depends_on option is incorrect, or rather it is not working the way you intend it to.
See the documentation of depends_on. It clearly states that it does not wait for the database to be ready in the case of SQL servers.
depends_on only means that Compose waits until the service's container has been started, not until it is ready.
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
You would benefit from creating some sort of manual "wait-for-it" script (as seen in this docker-compose example) before starting the Python container; a sketch follows.
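For example, wired into your compose file, assuming wait-for-it.sh is copied into the Python image and that python app.py stands in for whatever the image normally runs (both are placeholders):
python:
  restart: always
  build: ../
  depends_on:
    - sql-server-db
  # Wait until SQL Server answers on its internal port 1433 before starting the app
  command: ["./wait-for-it.sh", "sql-server-db:1433", "--timeout=60", "--", "python", "app.py"]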
I have got a prisma server image from dockerhub which is
prismagraphql/prisma:1.34
In order to run on port 4466, the above prisma image requires a database connection string, which is passed as an environment variable using a docker-compose file as shown below:
prisma:
image: prismagraphql/prisma:1.34
ports:
- "4466:4466"
environment:
PRISMA_CONFIG: |
port: 4466
databases:
default:
connector: mongo
uri: mongodb://mongodb
I am trying to extend the above prisma server image as shown below.
FROM prismagraphql/prisma:1.34
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.7/main/ nodejs=8.9.3-r1
WORKDIR /project
COPY . .
# To handle 'not get uid/gid' error in alpine linux set unsafe-perm true
RUN apk update && apk upgrade \
&& npm config set unsafe-perm true \
&& npm install --g yarn \
&& npm install -g prisma \
&& yarn install \
&& chmod +x ./entrypoint.sh \
&& chmod +x ./wait-for-it.sh
EXPOSE 4466 4000
ENTRYPOINT ["./entrypoint.sh"]
The entrypoint.sh file is like this
#!/bin/bash
# wait for the prisma service to start.
# then run prisma deploy (more on that later)
./wait-for-it.sh prisma:4466 -- prisma deploy
# go into the project...
cd /project
# run an npm command to use nodemon to start/watch the server
npm run start
In the above Dockerfile I try to install a Node.js app on top of the existing prisma image from Docker Hub. This Node.js application is prisma nexus; nexus needs to connect to prisma on localhost:4466, and nexus itself runs on port 4000.
When I run the image below I get this error, i.e. nexus (the Node.js app) is not able to connect to prisma:
Could not connect to server at http://localhost:4466. Please check if your server is running.
Finally I run the extended image like this
mongodb:
image: mongo:4.2
container_name: mongodb
volumes:
- ./mongo-volume:/data/db
ports:
- "27017:27017"
networks:
- prisma
prisma:
image: extended-image-here:1.0
container_name: prisma-server
restart: always
ports:
- "4466:4466"
- "4000:4000"
environment:
PRISMA_CONFIG: |
port: 4466
databases:
default:
connector: mongo
uri: mongodb://mongodb
What am I doing wrong here? Please help.
I guess the reason it does not work is that the image prismagraphql/prisma:1.34 already defines an entrypoint, and at the end of your Dockerfile there is another one. Docker only uses a single entrypoint, so yours overrides the base image's and the Prisma server itself never starts in that container.
First: in your code you put the MongoDB container on a specific named network called prisma, but you do not do the same thing with the prisma container. When using Compose, containers on the same network can resolve each other by service name, but requests will only be routed between containers if they're on the same network.
Next: you shouldn't be running two servers in the same container. It's better to not build your app on top of the prisma image at all, but to build it instead on top of alpine or ubuntu (or anything else really). It should connect to another container where the prisma server is running. In the comments you say that you really want to do this, but you really shouldn't. It's not much harder to run a compose configuration on a client's server rather than a single container, but it is that much harder to run 2 servers in a single container.
Finally: the localhost reference (in nexus, you say?) should be configurable in some way. Find out how, and have it address something like 'http://prisma:4466'. This way you'll have three containers: mongodb, prisma, and your own app, roughly as sketched below.
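Putting those three points together, a compose sketch of that layout could look roughly like this (the app service, its build context, and the PRISMA_ENDPOINT variable name are placeholders for however nexus is actually configured):
version: "3"
services:
  mongodb:
    image: mongo:4.2
    networks:
      - prisma
  prisma:
    image: prismagraphql/prisma:1.34
    ports:
      - "4466:4466"
    networks:
      - prisma
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: mongo
            uri: mongodb://mongodb
  app:
    build: .   # the nexus/Node.js app, no longer built on top of the prisma image
    ports:
      - "4000:4000"
    networks:
      - prisma
    environment:
      PRISMA_ENDPOINT: http://prisma:4466   # hypothetical variable; point nexus here instead of localhost
networks:
  prisma: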
I have been trying to get a socketio server moved over from EC2 to Docker.
I have been able to connect to the socket via a web (http) client, but connecting directly to the socket via iOS or Android seems to be impossible.
I read that one possible issue is that exposed ports are not actually published when using Docker. Our mobile apps currently connect on port 8080 on our classic EC2 instance, so I set up a docker-compose.yml file to try to open all ports and communication protocols, but I have two issues:
1. I am not sure what the service should be called, so I went with "src" (see Dockerfile below). But I'm wondering if it should be app, since the server file is app.js?
2. Getting "Bind for 0.0.0.0:8080 failed: port is already allocated".
Dockerfile
FROM ubuntu:14.04
ENV DEBIAN_FRONTEND noninteractive
RUN mkdir /src
ADD package.json /src
RUN apt-get update
RUN apt-get install --yes curl
RUN curl --silent --location https://deb.nodesource.com/setup_4.x | sudo bash -
RUN apt-get install --yes nodejs
RUN apt-get install --yes build-essential
RUN update-alternatives --install /usr/bin/node node /usr/bin/nodejs 10
RUN cd /src; npm install
RUN npm install --silent socket.io#0.9.14
WORKDIR /src
# Bundle app source
# Trouble with COPY http://stackoverflow.com/a/30405787/2926832
COPY . /src
ADD app.js /src/
EXPOSE 8080
CMD ["node", "/src/app.js"]
Docker-Compose.yml
src:
build: .
volumes:
- ./:/src
expose:
- 8080
ports:
- "8080"
- "8080:8080/udp"
- "8080:8080/tcp"
- "0.0.0.0:8080:8080"
- "0.0.0.0:8080:8080/tcp"
- "0.0.0.0:8080:8080/udp"
environment:
- NODE_ENV=development
- PORT=8080
command:
sh -c 'npm i && node server.js'
echo 'ready'
Getting "Bind for 0.0.0.0:8080 failed: port is already allocated".
You have duplicated port allocations.
When no protocol is specified, the port defaults to tcp, meaning "0.0.0.0:8080:8080" and "0.0.0.0:8080:8080/tcp" are both trying to bind to the same port, hence your error.
Since Docker binds to 0.0.0.0 by default, the same applies to "8080:8080/tcp" and "0.0.0.0:8080:8080/tcp"; you don't need both of them.
Therefore, you can shrink your ports section to:
ports:
- "8080:8080"
- "8080:8080/udp"
I am not sure what the service should be called
It is completely up to you. Usually services are named after their content or their role in the network, such as nginx_proxy or laravel_backend, so node_app sounds good to me; app is also fine in small setups. src doesn't appear to have any meaning, but again, it is just an identifier for your service, without any additional effect.
You are simply already running another container on the same port. You can find it with docker ps and stop it with docker stop [CONTAINER ID].
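For example (the container ID is a placeholder):
docker ps                    # look for the container whose PORTS column shows 0.0.0.0:8080
docker stop <CONTAINER_ID>   # stop it to free the port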
You just need to open your docker-compose.yml file and change the port mapping. This happened to me because the port was already in use by a container started by another member of my company.
For example, change the binding from 0.0.0.0:80 to 0.0.0.0:8000,
i.e. change ports: - "80:80" to ports: - "8000:8000".
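Note that only the host side of the mapping has to change for the conflict to go away; applied to the question's setup it would be something like this (8081 is just an arbitrary free host port):
ports:
  - "8081:8080"   # host port 8081 -> container port 8080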
I've created a simple Symfony 4 project based on PHP 7.3 + MariaDB via docker-compose. I'm using Docker for Windows 10 (x64).
It works correctly on one machine, but on my laptop it doesn't sync correctly with the container.
In the root folder I have a standard Symfony structure with the Docker files, like this:
- /config
- /public
- /src
....
- /env
- /docker
- .env
- docker-compose.yaml
...
My steps in Git Bash to start the app:
docker-compose build
works correctly, all actions finish successfully
docker-compose up -d
works correctly, both containers run successfully
docker-compose exec app bash
works correctly, the console starts
ls
the result is: docker env
So it syncs only 2 directories, docker and env,
and the docker dir is not synced fully: only the subdirectory structure, without the files.
I tried to work out the reason for the file-sync problem, but I don't have enough knowledge and experience with Docker; the docker-compose logs show no errors.
Maybe somebody can help me detect the reason? It works once, but after a reboot the problem occurs again...
docker-compose.yaml:
version: '3'
services:
app:
restart: unless-stopped
build:
context: .
dockerfile: docker/webserver-apache/Dockerfile
image: php:7.3.1-apache-stretch
volumes:
- "./docker/webserver-apache/sites-enabled:/etc/apache2/sites-enabled:ro"
- "./:/var/www/html"
ports:
- 8080:80
networks:
- dphptrainnet
mariadb:
restart: unless-stopped
image: mariadb:10.4.1
networks:
- dphptrainnet
volumes:
- ./env/mariadb/data:/var/lib/mysql
ports:
- 3306:3306
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_PASSWORD}
networks:
dphptrainnet:
Dockerfile:
FROM php:7.3.1-apache-stretch
# Setting up constants for an environment
ENV PHP_MEMORY_LIMIT 512M
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
php composer-setup.php && \
php -r "unlink('composer-setup.php');" && \
mv composer.phar /usr/local/bin/composer
RUN apt-get update && \
apt-get install -y curl vim git zip unzip
# Setting up httpd issues
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
RUN a2enmod rewrite headers && /etc/init.d/apache2 restart
RUN echo "127.0.0.1 dockertrain.local" >> /etc/hosts
WORKDIR "/var/www/html"
RUN a2enmod rewrite
I've found only one working solution: re-share the drive for Docker:
1. Disable the shared disk, click Apply
2. Enable the shared disk, click Apply
3. Restart the application; the files were then synced
But how should I detect whether there are any problems with drive access? No errors, no logs...