I don't generally post here, so forgive me if anything is not up to code, but I have built a micro-service to run database migrations using Flask-Migrate/Alembic. It has been a very good option for the group I am working with. Until very recently, the micro-service could be deployed easily by pointing it at different databases and running upgrades, but the flask db upgrade command has now stopped working inside the Docker container. As you can see below, I am using alembic-utils to handle some aspects of migrations that flask-migrate does not cover well, such as views and materialized views.
Dockerfile:
FROM continuumio/miniconda3
COPY ./ ./
WORKDIR /dbapp
RUN conda update -n base -c defaults conda -y
RUN conda env create -f environment_py38db.yml
RUN chmod +x run.sh
ENV PATH /opt/conda/envs/py38db/bin:$PATH
RUN echo "source activate py38db" > ~/.bashrc
RUN /bin/bash -c "source activate py38db"
ENTRYPOINT [ "./run.sh" ]
run.sh:
#!/bin/bash
python check_create_db.py
flask db upgrade
environment_py38db.yml:
name: py38db
channels:
- defaults
- conda-forge
dependencies:
- Flask==2.2.0
- Flask-Migrate==3.1.0
- Flask-SQLAlchemy==3.0.2
- GeoAlchemy2==0.12.5
- psycopg2
- boto3==1.24.96
- botocore==1.27.96
- pip
- pip:
- retrie==0.1.2
- alembic-utils==0.7.8
EDITED TO INCLUDE OUTPUT:
from inside the container:
(base) david@<ip>:~/workspace/dbmigrations$ docker run --rm -it --entrypoint bash -e PGUSER="user" -e PGDATABASE="trial_db" -e PGHOST="localhost" -e PGPORT="5432" -e PGPASSWORD="pw" --net=host migrations:latest
(py38db) root@<ip>:/dbapp# python check_create_db.py
successfully created database : trial_db
(py38db) root@<ip>:/dbapp# flask db upgrade
from the local environment:
(py38db) david@<ip>:~/workspace/dbmigrations/dbapp$ python check_create_db.py
database: trial_db already exists: skipping...
(py38db) david@<ip>:~/workspace/dbmigrations/dbapp$ flask db upgrade
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 41f5be29ae44, initital migration to generate tables
INFO [alembic.runtime.migration] Running upgrade 41f5be29ae44 -> 34c067400f6b, add materialized views <. . .>
INFO [alembic.runtime.migration] Running upgrade 34c067400f6b -> 34c067400f6b_views, add <. . .>
INFO [alembic.runtime.migration] Running upgrade 34c067400f6b_views -> b51d57354e6c, add <. . .>
INFO [alembic.runtime.migration] Running upgrade b51d57354e6c -> 97d41cc70cb2, add-functions
(py38db) david@<ip>:~/workspace/dbmigrations/dbapp$
As the output shows, flask db upgrade hangs inside the Docker container but runs fine locally. Both environments read the database parameters from environment variables, and these are being read correctly (the fact that check_create_db.py runs confirms this). I can share more of the code if it would help you figure this out.
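For reference, here is a minimal sketch of how such an app typically wires those variables into Flask-Migrate (illustrative only, not my exact config; the real app is split across modules):

import os
from flask import Flask
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# build the connection URI from the same PG* variables used by check_create_db.py
app.config["SQLALCHEMY_DATABASE_URI"] = (
    f"postgresql://{os.environ['PGUSER']}:{os.environ['PGPASSWORD']}"
    f"@{os.environ['PGHOST']}:{os.environ['PGPORT']}/{os.environ['PGDATABASE']}"
)
db = SQLAlchemy(app)
migrate = Migrate(app, db)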
For good measure, here is the python script:
check_create_db.py
import psycopg2
import os


def recreate_db():
    """Checks whether the database set by env variables already exists and
    creates the appropriate db if it does not exist.
    """
    connection = None
    try:
        # print statements would be replaced by python logging modules
        connection = psycopg2.connect(
            user=os.environ["PGUSER"],
            password=os.environ["PGPASSWORD"],
            host=os.environ["PGHOST"],
            port=os.environ["PGPORT"],
            dbname='postgres'
        )
        connection.set_session(autocommit=True)
        with connection.cursor() as cursor:
            cursor.execute(f"SELECT 1 FROM pg_catalog.pg_database WHERE datname = '{os.environ['PGDATABASE']}'")
            exists = cursor.fetchone()
            if not exists:
                cursor.execute(f"CREATE DATABASE {os.environ['PGDATABASE']}")
                print(f"successfully created database : {os.environ['PGDATABASE']}")
            else:
                print(f"database: {os.environ['PGDATABASE']} already exists: skipping...")
    except Exception as e:
        print(e)
    finally:
        # connection is initialised to None so this does not raise if connect() failed
        if connection:
            connection.close()


if __name__ == "__main__":
    recreate_db()
OK, so I was able to find the bug easily enough by going through the commits to isolate when the program stopped working, and it was an easy fix. It has, however, left me with more questions.
The cause of the problem was that I had added an __init__.py in the root directory of the project (so dbmigrations, if you are following above). This was unnecessary; I thought it might help me access database objects defined outside of env.py in my migrations directory after adding the path to sys.path in env.py. It was not required, and I probably should've known not to add an __init__.py to a folder I did not intend to use as a Python module.
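For anyone hitting the same layout question: the sys.path tweak on its own is all that is needed, without any __init__.py. Roughly this, near the top of migrations/env.py (simplified, not my exact file):

import os
import sys

# make modules in the project root importable from env.py,
# no __init__.py required anywhere
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))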
What I continue to find strange is that the project still ran perfectly fine locally, with the same __init__.py in the root folder. From within the Docker container, however, it caused the flask-migrate commands to become unresponsive. This remains a point of curiosity.
In any case, if you are ever tempted to throw an __init__.py into the root directory of a project, here is a data point that should discourage you from doing so; it would probably be poor design in most cases anyway.
I'm following a tutorial for Docker and Docker Compose. Although there is an npm install command in the Dockerfile (as follows), there is a situation where the tutor has to run that command manually.
COPY package*.json .
RUN npm install
That is because he maps the project's current directory into the container with volumes, as follows (so the project effectively runs in the container from source code mapped to the host directory):
api:
build: ./backend
ports:
- 3001:3000
environment:
DB_URL: mongodb://db/vidly
volumes:
- ./backend:/app
So the npm install command in the Dockerfile doesn't make any sense, and he runs the command directly in the root of the project.
That means another developer has to run npm install as well (and if I add a new package, I have to do it again too), which does not seem very developer friendly. The purpose of Docker is that you shouldn't have to run these steps yourself; docker-compose up should do everything. Any idea about this problem would be appreciated.
I agree with @zeitounator, who adds very sensible commentary on your situation and use case.
However, if you did want to solve the original problem of running a container that volume-mounts the code and have it run a development server, you could move the npm install out of the build-time RUN directive and into the CMD, or add an entry script to the container that includes the npm call.
That way you could run the container with the volume mount, and the startup steps (npm install, npm run dev, etc.) would happen at runtime rather than at build time.
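For example, here is a minimal sketch of the entry-script variant (the file names are mine, not from the tutorial): bake the script into the image and let it install against whatever source is mounted when the container starts.

docker-entrypoint.sh:
#!/bin/sh
set -e
# runs at container start, against the code mounted at the working directory
npm install
# hand over to the dev server so it receives signals directly
exec npm start

and in the Dockerfile:
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]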
The best solution is, as you mention yourself, Vahid, to use a smart Dockerfile that leverages sensible build caching and allows the application to be built and run with one command (and no external input). Perhaps you and your tutor can talk about these differences and come to an agreement.
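Something along these lines (a sketch assuming a standard Node layout; adjust paths to the tutorial's backend folder):

FROM node:18-alpine
WORKDIR /app
# copy only the manifests first; this layer stays cached until dependencies change
COPY package*.json ./
RUN npm install
# then copy the rest of the source
COPY . .
EXPOSE 3000
CMD ["npm", "start"]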
I need to start two services/commands in Docker. From Google I learned that I can use ENTRYPOINT and CMD to pass different commands, but when I start the container only the ENTRYPOINT script runs and CMD seems not to run. Since I am new to Docker, can you help me with how to run two commands?
Dockerfile :
FROM registry.suse.com/suse/sle15
ADD repolist/*.repo /etc/zypp/repos.d/
RUN zypper refs && zypper refresh
RUN zypper in -y bind
COPY docker-entrypoint.d/* /docker-entrypoint.d/
COPY --chown=named:named named /var/lib/named
COPY --chown=named:named named.conf /etc/named.conf
COPY --chown=named:named forwarders.conf /etc/named.d/forwarders.conf
ENTRYPOINT [ "./docker-entrypoint.d/startbind.sh" ]
CMD ["/usr/sbin/named","-g","-t","/var/lib/named","-u","named"]
startbind.sh:
#! /bin/bash
/usr/sbin/named.init start
Thanks & Regards,
Mohamed Naveen
You can use the supervisor tool to manage multiple services inside a single Docker container.
Check out the example below (running Redis and a Django server using a single CMD):
Dockerfile:
# Base Image
FROM alpine
# Installing required tools
RUN apk --update add nano supervisor python3 redis
# Adding Django Source code to container
ADD /django_app /src/django_app
# Adding supervisor configuration file to container
ADD /supervisor /src/supervisor
# Installing required python modules for app
RUN pip3 install -r /src/django_app/requirements.txt
# Exposing container port for binding with host
EXPOSE 8000
# Using Django app directory as home
WORKDIR /src/django_app
# Initializing Redis server and Gunicorn server from supervisors
CMD ["supervisord","-c","/src/supervisor/service_script.conf"]
service_script.conf file
## service_script.conf
[supervisord] ## This is the main process for the Supervisor
nodaemon=true ## This setting is to specify that we are not running in daemon mode
[program:redis_script] ## This is the part where we give the name and add config for our 1st service
command=redis-server ## This is the main command to run our 1st service
autorestart=true ## This setting specifies that the supervisor will restart the service in case of failure
stderr_logfile=/dev/stdout ## This setting specifies that the supervisor will log the errors in the standard output
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout ## This setting specifies that the supervisor will log the output in the standard output
stdout_logfile_maxbytes = 0
## same setting for 2nd service
[program:django_service]
command=gunicorn --bind 0.0.0.0:8000 django_app.wsgi
autostart=true
autorestart=true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes = 0
Final output: screenshot of the Redis and Gunicorn services running in the same Docker container.
Options to run more than one service within a container are described really well in this official Docker article: multi-service container.
I'd recommend reviewing why you need two services in one container (shared data volume, init, etc.), because by properly separating the services you get a ready-to-scale architecture, more useful logs, easier lifecycle/resource management, and easier testing.
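As a rough sketch of the split-out approach (service and image names here are made up for illustration), each process becomes its own compose service and Docker handles its lifecycle and logs separately:

version: "3.8"
services:
  dns:
    build: .
    # named runs in the foreground as the only process of this container
    command: ["/usr/sbin/named", "-g", "-t", "/var/lib/named", "-u", "named"]
  app:
    image: my-second-service:latest   # hypothetical second service
    depends_on:
      - dns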
Within startbind.sh you can do:
#! /bin/bash
# start the second service here, and push it to the background:
/usr/sbin/secondservice.init start &
# then run the last command in the foreground:
/usr/sbin/named.init start
Your /usr/sbin/named.init start command (the last command in the entrypoint) must NOT go into the background; you need to keep it in the foreground.
If that last command does not stay in the foreground, the container will exit as soon as the script finishes.
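It is also worth spelling out why CMD seemed to be ignored in the original question: when both ENTRYPOINT and CMD are set, Docker passes the CMD array to the entrypoint script as arguments, and nothing runs them unless the script does. A minimal sketch of a startbind.sh that hands control to CMD:

#!/bin/bash
set -e
# ... any one-off initialisation here ...
# run whatever CMD supplied, in the foreground, as the main process
# (with the Dockerfile above that is: /usr/sbin/named -g -t /var/lib/named -u named)
exec "$@"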
You can also add both service starts to startbind.sh. Keep in mind that RUN executes commands at image build time, not when the container starts. If that doesn't work, you can ask me to keep helping you.
(I clearly haven't fully mastered Docker's concepts yet, so please do correct me when I'm using terms incorrectly or inaccurately.)
I was running out of storage space, so I ran docker system prune to clean up my system a bit. However, shortly (perhaps immediately) after that, I started running into segmentation faults after starting the Webpack dev server in my container. My guess at this point is that some npm package needs to be rebuilt but isn't, because some old artefacts are still lingering around. I don't run into the segmentation faults if I run the Webpack dev server outside of the container:
web_1 | [2] Project is running at http://0.0.0.0:8000/
web_1 | [2] webpack output is served from /
web_1 | [2] 404s will fallback to /index.html
web_1 | [2] Segmentation fault (core dumped)
web_1 | [2] error Command failed with exit code 139.
Thus, I'm wondering whether docker system prune really removes everything related to the Docker images I've run before, or whether there's some additional cleanup I can do.
My Dockerfile is as follows, where ./stacks/frontend is the directory from which the Webpack dev server is run (through yarn start):
FROM node:6-alpine
LABEL Name="Flockademic dev environment" \
Version="0.0.0"
ENV NODE_ENV=development
WORKDIR /usr/src/app
# Needed for one of the npm dependencies (fibers, when compiling node-gyp):
RUN apk add --no-cache python make g++
COPY ["package.json", "yarn.lock", "package-lock.json*", "./"]
# Unfortunately it seems like Docker can't properly glob this at this time:
# https://stackoverflow.com/questions/35670907/docker-copy-with-file-globbing
COPY ["stacks/frontend/package.json", "stacks/frontend/yarn.lock", "stacks/frontend/package-lock*.json", "./stacks/frontend/"]
COPY ["stacks/accounts/package.json", "stacks/accounts/yarn.lock", "stacks/accounts/package-lock*.json", "./stacks/accounts/"]
COPY ["stacks/periodicals/package.json", "stacks/periodicals/yarn.lock", "stacks/periodicals/package-lock*.json", "./stacks/periodicals/"]
RUN yarn install # Also runs `yarn install` in the subdirectories
EXPOSE 3000 8000
CMD yarn start
And this is its section in docker-compose.yml:
version: '2'
services:
web:
image: flockademic
build:
context: .
dockerfile: docker/web/Dockerfile
ports:
- 3000:3000
- 8000:8000
volumes:
- .:/usr/src/app/:rw
# Prevent locally installed node_modules from being mounted inside the container.
# Unfortunately, this does not appear to be possible for every stack without manually enumerating them:
- /usr/src/app/node_modules
- /usr/src/app/stacks/frontend/node_modules
- /usr/src/app/stacks/accounts/node_modules
- /usr/src/app/stacks/periodicals/node_modules
links:
- database
environment:
# Some environment variables I use
I'm getting somewhat frustrated with not having a clear picture of what's going on :) Any suggestions on how to completely restart (and what concepts I'm getting wrong) would be appreciated.
So apparently docker system prune has some additional options, and the proper way to nuke everything was docker system prune --all --volumes. The key for me was probably --volumes, as those would probably hold cached packages that had to be rebuilt.
The segmentation fault is gone now \o/
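For reference, the full reset boiled down to something like this (only the prune command is what I actually needed; the rebuild flags are an extra precaution I assume helps force the native modules to compile again):

docker system prune --all --volumes   # remove stopped containers, unused networks, all unused images and volumes
docker-compose build --no-cache       # rebuild the image from scratch
docker-compose up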
I am trying to run create-react-app's development server inside a Docker container and have it recompile and send the changed app code to the client for development purposes, but it isn't picking up the changes from inside the Docker container.
(Of course, I have the working directory of the app as a volume for the container.)
Is there a way to make this work?
Actually, I found an answer here. Apparently create-react-app uses chokidar to watch file changes, and it supports a CHOKIDAR_USEPOLLING environment variable that makes it poll for file changes instead. So CHOKIDAR_USEPOLLING=true npm start should fix the problem. As for me, I set CHOKIDAR_USEPOLLING=true as an environment variable for the Docker container and just started the container.
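If you are using docker-compose, that boils down to something like this (the service name is just an example):

services:
  web:
    build: .
    ports:
      - 3000:3000
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - CHOKIDAR_USEPOLLING=true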
Polling, suggested in the other answer, causes much higher CPU usage and will drain your battery quickly. You should not need CHOKIDAR_USEPOLLING=true, since file system events should be propagated to the container. As of recently, this works even if your host machine runs Windows: https://docs.docker.com/docker-for-windows/release-notes/#docker-desktop-community-2200 (search for "inotify").
However, when using Docker for Mac, this mechanism seems to be failing sometimes: https://github.com/docker/for-mac/issues/2417#issuecomment-462432314
Restarting the Docker daemon helps in my case.
If your changes are not being picked up, it is probably a problem with the file-watching mechanism. A workaround for this issue is to configure polling. You can do that globally as explained by @Javascriptonian, but you can also do it locally via the webpack configuration. This has the benefit of letting you specify ignored folders (e.g. node_modules), which slow down the watching process (and lead to high CPU usage) when using polling.
In your webpack configuration, add the following configuration:
devServer: {
watchOptions: {
poll: true, // or use an integer for a check every x milliseconds, e.g. poll: 1000
ignored: /node_modules/ // otherwise it takes a lot of time to refresh
}
}
source: documentation webpack watchOptions
If you are having the same issue with nodemon in a back-end Node.js project, you can use the --legacy-watch flag (short -L) which starts polling too.
npm exec nodemon -- --legacy-watch --watch src src/main.js
or in package.json:
"scripts": {
"serve": "nodemon --legacy-watch --watch src src/main.js"
}
documentation: nodemon legacy watch
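If that back-end also runs in a container, the same idea carries over; a sketch of a compose service using the script above (paths and names are assumptions):

services:
  api:
    build: ./backend
    volumes:
      - ./backend/src:/app/src
    command: npm run serve   # the nodemon --legacy-watch script from package.json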
If you use Linux, then you don't need CHOKIDAR_USEPOLLING=true.
With react-scripts v5.0.0 onward, the variable is WATCHPACK_POLLING=true instead of CHOKIDAR_USEPOLLING=true.
A clear answer for react-scripts v5.0.0 onward:
1- Create a .env file in the root directory of the project
2- Add WATCHPACK_POLLING=true to the .env file
3- Build a new image
4- Run a new container
5- Verify that the changes are being detected
Or you can just pass WATCHPACK_POLLING=true in the command you use to run the container, like this:
docker run --name my-app -it --rm -v $(pwd)/src:/app/src -p 3000:3000 -e WATCHPACK_POLLING=true myapp
In my case, I was running the docker run command in a Git Bash command line (on Windows) and hot reloading was not working. Using react-scripts v5.0.0, setting WATCHPACK_POLLING=true in the .env file and running the docker run command in PowerShell worked.
docker run -it --rm -v ${PWD}:/app -v /app/node_modules -p 3000:3000 -e CHOKIDAR_USEPOLLING=true myapp
I want to use the linode/lamp container to work on a WordPress project locally without messing up my machine with all the LAMP dependencies.
I followed this tutorial which worked great (it's actually super simple).
Now I'd like to use docker-compose because I find it more convenient to simply having to type docker-compose up and being good to go.
Here is what I have done:
Dockerfile:
FROM linode/lamp
RUN service apache2 start
RUN service mysql start
docker-compose.yml:
web:
build: .
ports:
- "80:80"
volumes:
- .:/var/www/example.com/public_html/
When I do docker-compose up, I get:
▶ docker-compose up
Recreating gitewordpress_web_1...
Attaching to gitewordpress_web_1
gitewordpress_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
I'm guessing I need a command argument in my docker-compose.yml but I have no idea what I should set.
Any idea what I am doing wrong?
You cannot start those two processes from RUN instructions in the Dockerfile.
RUN commands are executed while the image is being built, not when a container is started from it.
In fact, many base images, like the Debian ones, are specifically designed not to allow starting any services during the build.
What you can do is create a file called run.sh in the same folder that contains your Dockerfile.
Put this inside:
#!/usr/bin/env bash
service apache2 start
service mysql start
tail -f /dev/null
This script just starts both services and then keeps a foreground process running so the container stays up.
You need to get it inside your container though, which you do via two lines in the Dockerfile. Overall, I'd use this Dockerfile:
FROM linode/lamp
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/bash", "-lc", "/run.sh"]
This ensures that the file is properly run when firing up the container, so that the container stays running and those services actually get started.
What you should also look out for is that port 80 is actually available on your host machine. If anything is already bound to it, this compose file will not work.
Should this be the case for you (or if you're not sure), try changing the port line to something like 81:80 and try again.
I would like to point you to another resource where a LAMP server is already configured for you; you might find it handy for your local development environment:
https://github.com/sprintcube/docker-compose-lamp