(I clearly haven't fully mastered Docker's concepts yet, so please do correct me when I'm using terms incorrectly or inaccurately.)
I was running out of storage space, so I ran docker system prune to clean up my system a bit. However, shortly (perhaps immediately) after that, I started running into segmentation faults after starting the Webpack dev server in my container. My guess at this point is that some npm package needs to be rebuilt but isn't, because some old artefacts are still lingering around. I'm not running into the segmentation faults if I run the Webpack dev server outside of the container:
web_1 | [2] Project is running at http://0.0.0.0:8000/
web_1 | [2] webpack output is served from /
web_1 | [2] 404s will fallback to /index.html
web_1 | [2] Segmentation fault (core dumped)
web_1 | [2] error Command failed with exit code 139.
Thus, I'm wondering whether docker system prune really removes everything related to the Docker images I've run before, or whether there's some additional cleanup I can do.
My Dockerfile is as follows, where ./stacks/frontend is the directory from which the Webpack dev server is run (through yarn start):
FROM node:6-alpine
LABEL Name="Flockademic dev environment" \
Version="0.0.0"
ENV NODE_ENV=development
WORKDIR /usr/src/app
# Needed for one of the npm dependencies (fibers, when compiling node-gyp):
RUN apk add --no-cache python make g++
COPY ["package.json", "yarn.lock", "package-lock.json*", "./"]
# Unfortunately it seems like Docker can't properly glob this at this time:
# https://stackoverflow.com/questions/35670907/docker-copy-with-file-globbing
COPY ["stacks/frontend/package.json", "stacks/frontend/yarn.lock", "stacks/frontend/package-lock*.json", "./stacks/frontend/"]
COPY ["stacks/accounts/package.json", "stacks/accounts/yarn.lock", "stacks/accounts/package-lock*.json", "./stacks/accounts/"]
COPY ["stacks/periodicals/package.json", "stacks/periodicals/yarn.lock", "stacks/periodicals/package-lock*.json", "./stacks/periodicals/"]
RUN yarn install # Also runs `yarn install` in the subdirectories
EXPOSE 3000 8000
CMD yarn start
And this is its section in docker-compose.yml:
version: '2'
services:
  web:
    image: flockademic
    build:
      context: .
      dockerfile: docker/web/Dockerfile
    ports:
      - 3000:3000
      - 8000:8000
    volumes:
      - .:/usr/src/app/:rw
      # Prevent locally installed node_modules from being mounted inside the container.
      # Unfortunately, this does not appear to be possible for every stack without manually enumerating them:
      - /usr/src/app/node_modules
      - /usr/src/app/stacks/frontend/node_modules
      - /usr/src/app/stacks/accounts/node_modules
      - /usr/src/app/stacks/periodicals/node_modules
    links:
      - database
    environment:
      # Some environment variables I use
I'm getting somewhat frustrated with not having a clear picture of what's going on :) Any suggestions on how to completely restart (and what concepts I'm getting wrong) would be appreciated.
So apparently docker system prune has some additional options, and the proper way to nuke everything was docker system prune --all --volumes. The key for me was probably --volumes, as the volumes were probably holding cached packages that had to be rebuilt.
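For reference, these are the two commands involved (the flags are documented in the Docker CLI; note that --all also removes any image not referenced by an existing container, so expect longer rebuilds afterwards):
# What I ran originally: removes stopped containers, unused networks,
# dangling images and build cache, but (on current Docker versions) does not touch volumes.
docker system prune
# What actually fixed it: additionally remove all unused images and all unused volumes.
docker system prune --all --volumes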
The segmentation fault is gone now \o/
Related
I generally do not post here, so forgive me if anything is not up to standard, but I have built a micro-service to run database migrations using flask-migrate/alembic. This has seemed like a very good option for the group I am working with. Up until very recently, the micro-service could be deployed very easily by pointing it at different databases and running upgrades, but recently the flask db upgrade command has stopped working inside of the Docker container. As can be seen, I am using alembic-utils here to handle some aspects of db migrations less commonly handled by flask-migrate, like views/materialized views etc.
Dockerfile:
FROM continuumio/miniconda3
COPY ./ ./
WORKDIR /dbapp
RUN conda update -n base -c defaults conda -y
RUN conda env create -f environment_py38db.yml
RUN chmod +x run.sh
ENV PATH /opt/conda/envs/py38db/bin:$PATH
RUN echo "source activate py38db" > ~/.bashrc
RUN /bin/bash -c "source activate py38db"
ENTRYPOINT [ "./run.sh" ]
run.sh:
#!/bin/bash
python check_create_db.py
flask db upgrade
environment_py38db.yml:
name: py38db
channels:
  - defaults
  - conda-forge
dependencies:
  - Flask==2.2.0
  - Flask-Migrate==3.1.0
  - Flask-SQLAlchemy==3.0.2
  - GeoAlchemy2==0.12.5
  - psycopg2
  - boto3==1.24.96
  - botocore==1.27.96
  - pip
  - pip:
    - retrie==0.1.2
    - alembic-utils==0.7.8
EDITED TO INCLUDE OUTPUT:
from inside the container:
(base) david@<ip>:~/workspace/dbmigrations$ docker run --rm -it --entrypoint bash -e PGUSER="user" -e PGDATABASE="trial_db" -e PGHOST="localhost" -e PGPORT="5432" -e PGPASSWORD="pw" --net=host migrations:latest
(py38db) root@<ip>:/dbapp# python check_create_db.py
successfully created database : trial_db
(py38db) root@<ip>:/dbapp# flask db upgrade
from the local environment:
(py38db) david@<ip>:~/workspace/dbmigrations/dbapp$ python check_create_db.py
database: trial_db already exists: skipping...
(py38db) david@<ip>:~/workspace/dbmigrations/dbapp$ flask db upgrade
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 41f5be29ae44, initital migration to generate tables
INFO [alembic.runtime.migration] Running upgrade 41f5be29ae44 -> 34c067400f6b, add materialized views <. . .>
INFO [alembic.runtime.migration] Running upgrade 34c067400f6b -> 34c067400f6b_views, add <. . .>
INFO [alembic.runtime.migration] Running upgrade 34c067400f6b_views -> b51d57354e6c, add <. . .>
INFO [alembic.runtime.migration] Running upgrade b51d57354e6c -> 97d41cc70cb2, add-functions
(py38db) david@<ip>:~/workspace/dbmigrations/dbapp$
As the output shows, flask db upgrade hangs inside the Docker container, while it runs fine locally. Both environments read the db parameters from environment variables, and these are being read correctly (the fact that check_create_db.py runs confirms this). I can share more of the code if you can help me figure this out.
For good measure, here is the python script:
check_create_db.py
import psycopg2
import os


def recreate_db():
    """ checks to see if the database set by env variables already exists and
    creates the appropriate db if it does not exist.
    """
    connection = None
    try:
        # print statements would be replaced by python logging modules
        connection = psycopg2.connect(
            user=os.environ["PGUSER"],
            password=os.environ["PGPASSWORD"],
            host=os.environ["PGHOST"],
            port=os.environ["PGPORT"],
            dbname='postgres'
        )
        connection.set_session(autocommit=True)
        with connection.cursor() as cursor:
            cursor.execute(f"SELECT 1 FROM pg_catalog.pg_database WHERE datname = '{os.environ['PGDATABASE']}'")
            exists = cursor.fetchone()
            if not exists:
                cursor.execute(f"CREATE DATABASE {os.environ['PGDATABASE']}")
                print(f"successfully created database : {os.environ['PGDATABASE']}")
            else:
                print(f"database: {os.environ['PGDATABASE']} already exists: skipping...")
    except Exception as e:
        print(e)
    finally:
        # guard against `connection` never having been assigned if connect() failed
        if connection:
            connection.close()


if __name__ == "__main__":
    recreate_db()
OK, so I was able to find the bug easily enough by going through all the commits to isolate when the program stopped working, and it was an easy fix. It has, however, left me with more questions.
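As an aside (not part of the original post): if you ever need to repeat that kind of commit-by-commit search, git bisect automates it. A rough sketch, where the known-good SHA and the image tag are placeholders:
git bisect start
git bisect bad                       # the current commit hangs inside the container
git bisect good <known-good-sha>     # placeholder: last commit where `flask db upgrade` still worked
# git now checks out a midpoint commit; rebuild and test it, e.g.:
docker build -t migrations:bisect .
# ...run the container as shown above, then report the result:
git bisect good                      # or: git bisect bad
# repeat until git names the first bad commit, then clean up:
git bisect reset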
The cause of the problem was that in the root directory of the project (so dbmigrations, if you are following above) I had added an __init__.py. This was unnecessary, but I thought it might help me access database objects defined outside of the env.py in my migrations directory after adding the path to my sys.path in env.py. This was not required, and I probably should've known not to add an __init__.py to a folder I did not intend to use as a Python module.
What I continue to find strange is that the project still ran perfectly fine locally, with the same __init__.py in the root folder. However, from within the Docker container, this caused the flask-migrate commands to become unresponsive. This remains a point of curiosity.
In any case, if you are feeling like throwing an __init__.py into the root directory of a project, here is a data point that should discourage you from doing so; it would probably be poor design in most cases anyway.
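If you want to check a project for the same mistake, a quick sketch of a one-liner that lists any __init__.py sitting directly in the project root:
# Run from the project root (dbmigrations/ in this case):
find . -maxdepth 1 -name '__init__.py'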
I'm following a tutorial for Docker and Docker Compose. Although there is an npm install command in the Dockerfile (as follows), there is a situation where the tutor has to run that command manually.
COPY package*.json .
RUN npm install
That is because he maps the project's current directory into the container with volumes, as follows (so the project effectively runs in the container from source code mapped to the host directory):
api:
  build: ./backend
  ports:
    - 3001:3000
  environment:
    DB_URL: mongodb://db/vidly
  volumes:
    - ./backend:/app
So the npm install command in the Dockerfile doesn't make any sense, and he runs this command directly in the root of the project instead.
So another developer has to run npm install as well (or if I add a new package, I have to do it too), which does not seem very developer friendly, because the point of Docker is that you should not have to run these steps yourself: docker-compose up should do everything. Any idea about this problem would be appreciated.
I agree with @zeitounator, who adds very sensible commentary on your situation and use case.
However, if you did want to solve the original problem of running a container that volume mounts in code, and have it run a development server, then you could move the npm install from the RUN instruction to the CMD, or even add an entry script to the container that includes the npm call.
That way you could run the container with the volume mount, and the starting process (npm install, npm serve dev, etc) would occur at runtime as opposed to buildtime.
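As a rough sketch of that second option (the script name and paths are illustrative, not something from the tutorial):
#!/bin/sh
# docker-entrypoint.sh -- hypothetical entry script for the ./backend service.
set -e

# Install dependencies into the volume-mounted /app at container start,
# so they do not need to be baked into the image.
cd /app
npm install

# Hand over to whatever command the container was asked to run (e.g. "npm start").
exec "$@"
In the Dockerfile you would then COPY this script, mark it executable, and set it as the ENTRYPOINT, keeping the dev-server command as the CMD.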
The best solution is, as you mention yourself, Vahid, to use a smart Dockerfile that leverages sensible build caching and allows the application to be built and run with one command (and no external input). Perhaps you and your tutor can talk about these differences and come to an agreement.
I'm trying to mount a volume from my local directory for Next.js/React's hot reload during development. My current docker-compose.development.yml looks like this:
services:
  web:
    command: next dev
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules
      - /usr/src/app/.next
    depends_on:
      db:
        condition: service_healthy
It extends my main docker-compose with the command docker-compose -f docker-compose.yml -f docker-compose.development.yml up --build:
services:
  web:
    build: .
    command: /bin/sh -c 'npm run build && npm start'
    ports:
      - 3000:3000
      - 5432:5432
    env_file:
      - .env.local
It works fine without the development overrides and without docker. I believe this problem has to do with running next dev in a container as the problem persists even after removing the volume bindings. Here is the full call stack. It points to the error being in the src/pages/_app.tsx file.
These are the basic steps to troubleshoot an issue when you can build your project in one environment but are not able to do it in another.
Make sure npm install is run before the build starts.
I cannot see from the snippets you have shared whether this was done. To build in the container you need to have the dependencies installed.
Make sure that your package.json is up to date with the versions of the packages/modules that are installed in the development environment.
If you don't have the package.json, or it was not maintained, you can check in this SO post how to generate it again.
Next to check is the C/C++ build environment. Some of the modules require a C/C++ or C#/mono build environment to be present in the image. Also, most often there will be a requirement for specific dev shared libraries to be installed (see the sketch after these steps).
Check which dependencies are required for your packages and which libraries need to be installed in the OS for the modules to work.
Finally, some modules are OS dependent (e.g. they work only on Windows, or only on macOS) or architecture dependent (amd64, arm64, etc.).
Read the information about the package/module and research it on the internet. If you have such modules, you will face challenges packaging them in a container, so the best approach here is to refactor them out of your project before you containerize it.
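As an illustration of the build-environment point above (the package names are just the usual suspects for node-gyp builds; adjust them for your actual base image):
# On a Debian/Ubuntu based image (as a RUN step or inside the container):
apt-get update && apt-get install -y --no-install-recommends build-essential python3

# The rough equivalent on an Alpine based image:
apk add --no-cache python3 make g++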
I had NODE_ENV set to production instead of development in my Dockerfile. I assume it was conflicting with one of the steps for hot reloading.
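A quick way to confirm which value actually reaches the container (not from the original answer, just a sanity check using the compose files from the question, and assuming printenv is available in the image):
# Prints NODE_ENV exactly as the web service sees it, with the development override applied:
docker-compose -f docker-compose.yml -f docker-compose.development.yml run --rm web printenv NODE_ENV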
I am trying to create a docker-compose.yml which will allow me to start up a few services, where some of those services will have their own Dockerfile. For example:
- my-project
  - docker-compose.yml
  - web
    - Dockerfile
    - src/
  - worker
    - Dockerfile
    - src/
I'd like a developer to be able to checkout the project and just run docker-compose up --build to get going.
Also, I'm trying to mount the source for a service inside the Docker container, so that a developer is able to edit the files on the host machine and have those changes reflected inside the container immediately (say, if it is a Rails app, it will get recompiled on file change).
I have tried to get just the web service going, but I just cannot mount the web directory inside the container: https://github.com/zoran119/haskell-webservice
And here is docker-compose.yml:
version: "2"
services:
  web:
    build: web
    image: web
    volumes:
      - ./web:/app
Can anyone spot a problem here?
The problem is that the host ./web folder shadows the internal /app folder, which means anything inside the image's /app is hidden by your host folder. So you can follow an approach like the one below.
Additional bash scripts for setup:
./scripts/deploy_app.sh
#!/bin/bash
set -ex
# By default checkout the master branch, if none specified
BRANCH=${BRANCH:-master}
cd /usr/src/app
git clone https://github.com/tarunlalwani/docker-nodejs-sample-app .
git checkout $BRANCH
# Install app dependencies
npm install
./scripts/run_app.sh
#!/bin/bash
set -ex
cd /usr/src/app
exec npm start
Dockerfile
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./scripts /scripts
EXPOSE 8080
CMD [ "bash", "-c" , "/scripts/deploy_app.sh && /scripts/run_app.sh"]
Now in your docker-compose.yml you can use the following:
version: '3'
services:
  app:
    build:
      context: .
    volumes:
      - ./app:/usr/src/app
Now when you do docker-compose up it will run the /scripts/deploy_app.sh and deploy the app to /usr/src/app inside the container. The host folder ./app will have the source for developers to edit.
You can enhance the script not to download the source code if the folder already has data. The branch of the code can be controlled using the BRANCH environment variable. If you want, you can even run the script in the Dockerfile as well, to produce images that contain the source by default.
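For example, the clone step in deploy_app.sh could be guarded roughly like this (a sketch; the emptiness check is the only addition):
#!/bin/bash
set -ex

# By default checkout the master branch, if none specified
BRANCH=${BRANCH:-master}
cd /usr/src/app

# Only fetch the sources if the mounted folder is still empty;
# otherwise keep whatever the developer already has there.
if [ -z "$(ls -A /usr/src/app)" ]; then
    git clone https://github.com/tarunlalwani/docker-nodejs-sample-app .
    git checkout "$BRANCH"
    npm install
fi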
See a detailed article I wrote about Static and Dynamic code deployment
http://tarunlalwani.com/post/deploying-code-inside-docker-images-statically-dynamically/
So it seems your issue turned out to be something different.
When I try docker-compose up I get the below error:
Attaching to hask_web_1
web_1 | Version 1.3.2, Git revision 3f675146590da4f3edf768b89355f798229da2a5 (4395 commits) x86_64 hpack-0.15.0
web_1 | 2017-09-10 09:58:04.741696: [debug] Checking for project config at: /app/stack.yaml
web_1 | #(Stack/Config.hs:863:9)
web_1 | 2017-09-10 09:58:04.741873: [debug] Loading project config file stack.yaml
web_1 | #(Stack/Config.hs:881:13)
web_1 | <stdin>: hGetLine: end of file
hask_web_1 exited with code 1
But when I use docker-compose run web I get the following output:
$ docker-compose run web
Version 1.3.2, Git revision 3f675146590da4f3edf768b89355f798229da2a5 (4395 commits) x86_64 hpack-0.15.0
2017-09-10 09:58:37.859351: [debug] Checking for project config at: /app/stack.yaml
#(Stack/Config.hs:863:9)
2017-09-10 09:58:37.859580: [debug] Loading project config file stack.yaml
#(Stack/Config.hs:881:13)
2017-09-10 09:58:37.862281: [debug] Trying to decode /root/.stack/build-plan-cache/x86_64-linux/lts-9.3.cache
So that made me realize your issue: docker-compose up and docker-compose run have one main difference, which is the tty. run allocates a tty while up doesn't. So you need to change the compose file to:
version: "2"
services:
  web:
    build: web
    image: web
    volumes:
      - ./web:/app
    tty: true
Before I post any configuration, I will try to explain what I would like to achieve, and would like to mention that I'm new to Docker.
To make path conversations easier, let's assume we talk about the project "Docker me up!" and it's located in X:\docker-projects\docker-me-up\.
Goal:
I would like to run multiple nginx projects with different content, where each project represents a dedicated build. During development (docker-compose up -d) a container should get updated instantly, which works fine.
The tricky part is that I want to outsource npm [http://gruntjs.com] from my host directly into the container/image, so that I'm able to debug and develop wherever I am by just installing Docker. Therefore, npm must be installed in a "service" and a watcher needs to be initialized.
Each project is encapsulated in its own folder on the host and its own build in Docker, and should not have any knowledge of anything but itself.
My solution:
I have tried many different versions, with volumes_from etc., but I decided to show you this one because it's minimal but still complete.
Docker-compose.yml
version: '2'
services:
  web:
    image: nginx
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
    links:
      - php
  php:
    image: php:fpm
    ports:
      - "9000:9000"
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
  app:
    build: .
    volumes:
      - ./assets:/website/assets
      - ./config:/website/config:ro
      - ./www:/website/www
Dockerfile
FROM debian:jessie-slim
RUN apt-get update && apt-get install -y \
npm
RUN gem update --system
RUN npm install -g grunt-cli grunt-contrib-watch grunt-babel babel-preset-es2015
RUN mkdir -p /website/{assets,assets/es6,config,www,www/js,www/css}
VOLUME /website
WORKDIR /website
Problem:
As you can see, the app service contains npm and should be able to execute an npm command. If I run docker-compose up -d, everything works: I can edit the page content, work with it, etc. But the app container is not running, and because of that it cannot perform any npm command. Unless I have a huge logic error, which is quite possible ;-)
Environment:
Windows 10 Pro [up2date]
Shared drive for docker is used
Docker version 1.12.3, build 6b644ec
docker-machine version 0.8.2, build e18a919
docker-compose version 1.8.1, build 004ddae
After you call docker-compose up, you can get an interactive shell for your app container with:
docker-compose run app
You can also run one-off commands with:
docker-compose run app [command]
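For example, to run the grunt watcher from your setup inside that container (assuming grunt-cli is on the PATH and a Gruntfile is present in /website, as your Dockerfile suggests):
docker-compose run --rm app grunt watch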
The reason your app container is not running after docker-compose up completes is that your Dockerfile does not define a long-running service. For app to run as a service, you would need to keep a process running in the foreground of the container, by adding something like:
CMD ./run-my-service
to the end of your Dockerfile.
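In your case that long-running process could simply be the grunt watcher; a minimal sketch of such a script (the name run-my-service and the use of grunt watch are assumptions, not something your setup already contains):
#!/bin/bash
# run-my-service -- keep the container's main process in the foreground.
set -e
cd /website

# `grunt watch` blocks and keeps watching the assets, which is exactly what
# keeps the container (and therefore the compose service) running.
exec grunt watch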