Changes not reflected in Docker - docker

I'm new to Docker, and I'm using Windows 11. When I make changes to the code, the changes are not reflected in Docker unless I remove every container and image and rebuild; only then are they reflected. This is too troublesome and time-consuming. Is there any way to make it watch for changes automatically and reflect them?
Here is my docker-compose.yml
version: "3.3"
services:
  db:
    image: mysql:5.7
    restart: always
    command: --max_allowed_packet=32505856
    volumes:
      - db_volume:/var/lib/mysql
    ports:
      - "9002:3306"
    environment:
      MYSQL_ROOT_PASSWORD: pw
      MYSQL_DATABASE: dbname
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbpassword
    networks:
      - internal
  web:
    depends_on:
      - db
    image: web:1.0.1
    build: .
    ports:
      - "9001:80"
    restart: always
    environment:
      MYSQL_HOST: db
      MYSQL_PORT: 3306
      MYSQL_USER: user
      MYSQL_PASSWORD: pw
      MYSQL_DATABASE: db
    networks:
      - internal
    volumes:
      - drupal-data:/var/www/html
  phpmyadmin:
    image: phpmyadmin
    restart: always
    depends_on:
      - db
    ports:
      - 9004:80
    environment:
      PMA_HOST: db
      PMA_USER: user
      PMA_PASSWORD: pw
      UPLOAD_LIMIT: 4000M
    networks:
      - internal
volumes:
  db_volume:
  drupal-data:
networks:
  internal:
    driver: bridge
Here is my Dockerfile
FROM drupal:9.2.5-php7.4-apache
RUN docker-php-ext-install mysqli
COPY ./k8s/php.ini "$PHP_INI_DIR/php.ini"
COPY ./k8s/000-default.conf /etc/apache2/sites-available/
COPY --chown=www-data:www-data . /var/www/html
WORKDIR /var/www/html
RUN composer install
# Install php-redis - this is for drupal redis module
# RUN pecl install -o redis && \
# echo "extension=redis.so" > /usr/local/etc/php/conf.d/redis.ini
# # Installing modules
# RUN composer require 'acquia/lightning:~5.2.0'
# RUN composer require 'cweagans/composer-patches:^1.6.0'
# RUN composer require 'oomphinc/composer-installers-extender:^1.1 || ^2'
# RUN composer require 'drupal/advagg:^4.1'
# RUN composer require 'drupal/autosave_form:^1.2'
# RUN composer require 'drupal/backup_migrate:^5.0'
# RUN composer require 'drupal/conditional_fields:^4.0#alpha'
# RUN composer require 'drupal/entity_reference_revisions:^1.9'
# RUN composer require 'drupal/field_group:^3.1'
# RUN composer require 'drupal/http_client_manager:^2.5'
# RUN composer require 'drupal/moderated_content_bulk_publish:^2.0'
# RUN composer require 'drupal/pathauto:^1.8'
# RUN composer require 'drupal/quick_node_clone:^1.14'
# RUN composer require 'drupal/svg_image:^1.14'
# RUN composer require 'drupal/svg_image_field:^2.0'
# # Installing other dependencies
# RUN composer require 'phpoffice/phpspreadsheet:1.18'
# RUN composer require 'lodash-php/lodash-php:^0.0.7'
# RUN composer require drush/drush 10.6 && ln -s $(pwd)/vendor/bin/drush /usr/local/bin/drush
RUN ln -s /var/www/html/docroot /var/www/html/docroot/tv
COPY ./k8s/php.min.ini "$PHP_INI_DIR/conf.d/php.ini"
# RUN rm "$PHP_INI_DIR/php.ini"
RUN chmod 777 -R /var/www/html/docroot/sites/default/files
RUN apache2ctl restart
EXPOSE 80
I'm using docker compose up -d to start the website.
What am I doing wrong here?

I think you need to map your local code directory to the container directory.
For that, replace the named volume in the volumes of web with a bind mount (the two entries cannot share the same target path):
volumes:
  - ./:/var/www/html # This maps the local code directory to the container directory
With this change, code edits will be reflected inside the container
without having to rebuild the image.
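As an alternative to bind mounts, recent versions of Docker Compose (v2.22 and later) support `docker compose watch`, which syncs changed files into the running container. A minimal sketch, assuming the project layout above (paths are placeholders to adapt):

```yaml
services:
  web:
    build: .
    develop:
      watch:
        # copy changed source files into the running container
        - action: sync
          path: .
          target: /var/www/html
        # rebuild the image when dependencies change
        - action: rebuild
          path: composer.json
```

You would then start the stack with `docker compose watch` instead of `docker compose up -d`.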

Related

permission denied while trying to start rails server in docker

I'm trying to run a Rails server in a Docker image along with a MySQL image and a Vue frontend image. I'm using Ruby 3 and Rails 6. The MySQL and frontend images both start without problems; however, the Rails image doesn't start.
I'm on a MacBook Pro with macOS Monterey and Docker Desktop 4.5.0.
this is my docker-compose.yml:
version: "3"
services:
  mysql:
    image: mysql:8.0.21
    command:
      - --default-authentication-plugin=mysql_native_password
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=nauza_backend_development
    ports:
      - "3307:3306"
    volumes:
      - mysql:/var/lib/mysql
  backend:
    build:
      context: nauza-backend
      args:
        UID: ${UID:-1001}
    tty: true
    stdin_open: true
    command: bundle exec rails s -p 8080 -b '0.0.0.0'
    volumes:
      - ./nauza-backend:/usr/src/app
      # attach a volume at /bundle to cache gems
      - bundle:/bundle
      # attach a volume at ./node_modules to cache node modules
      - node-modules:/usr/src/app/node_modules
      # attach a volume at ./tmp to cache asset compilation files
      - tmp:/usr/src/app/tmp
    ports:
      - "8080:8080"
    depends_on:
      - mysql
    user: rails
    environment:
      - RAILS_ENV=development
      - MYSQL_HOST=mysql
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
  frontend:
    build:
      context: nauza-frontend
      args:
        UID: ${UID:-1001}
    volumes:
      - ./nauza-frontend:/usr/src/app
    ports:
      - "3000:3000"
    user: frontend
volumes:
  bundle:
    driver: local
  mysql:
    driver: local
  tmp:
    driver: local
  node-modules:
    driver: local
and this is my Dockerfile:
FROM ruby:3.0.2
ARG UID
RUN adduser rails --uid $UID --disabled-password --gecos ""
ENV APP /usr/src/app
RUN mkdir $APP
WORKDIR $APP
ENV EDITOR=vim
RUN apt-get update \
    && apt-get install -y \
        nmap \
        vim
COPY Gemfile* $APP/
RUN bundle install -j3 --path vendor/bundle
COPY . $APP/
CMD ["rails", "server", "-p", "8080", "-b", "0.0.0.0"]
when I try to start this with docker-compose up on my Mac, I get the following error:
/usr/local/lib/ruby/3.0.0/fileutils.rb:253:in `mkdir': Permission denied @ dir_s_mkdir - /usr/src/app/tmp/cache (Errno::EACCES)
Any ideas on how to fix this?
Remove the line - tmp:/usr/src/app/tmp from your docker-compose.yml (not the Dockerfile). The named volume tmp is created owned by root, but the service runs as the rails user (user: rails), so Rails can't create /usr/src/app/tmp/cache inside it.
You don't really need to persist the temp files of your container anyway, I would say. 🙂
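If you do want to keep the tmp cache volume, another option (a sketch, assuming the Dockerfile above and its `$APP`/`rails` names) is to create the mount points and hand them to the unprivileged user at build time; named volumes copy the image's content and ownership when they are first initialized:

```dockerfile
# Create the directories the named volumes will mount over and give
# them to the rails user; the volumes inherit this ownership on first use.
RUN mkdir -p $APP/tmp $APP/node_modules /bundle \
    && chown -R rails:rails $APP /bundle
```

This line would go after `WORKDIR $APP` and before `bundle install`, so every path the rails user needs to write to is writable.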

one of services started with docker-compose up doesn't stop with docker-compose stop

I have the file docker-compose.production.yml that contains the configurations of 5 services. I start them all with the command sudo docker-compose -f docker-compose.production.yml up --build in the directory where the file is. When I want to stop all the services, I simply call sudo docker-compose stop in the directory where the file is. Strangely, 4 out of 5 services stop correctly, but 1 keeps running, and if I want to stop it, I must use sudo docker stop [CONTAINER]. The service is not even listed among the services being stopped after the stop command is run. It's like the service somehow "detaches" from the group. What could be causing this strange behaviour?
Here's an example of the docker-compose.production.yml file:
version: '3'
services:
  fe:
    build:
      context: ./fe
      dockerfile: Dockerfile.production
    ports:
      - 5000:80
    restart: always
  be:
    image: strapi/strapi:3.4.6-node12
    environment:
      NODE_ENV: production
      DATABASE_CLIENT: mysql
      DATABASE_NAME: some_db
      DATABASE_HOST: db
      DATABASE_PORT: 3306
      DATABASE_USERNAME: someuser
      DATABASE_PASSWORD: ${DATABASE_PASSWORD:?no database password specified}
      URL: https://some-url.com
    volumes:
      - ./be:/srv/app
      - ${SOME_DIRECTORY:?no directory specified}:/srv/something:ro
      - ./some-directory:/srv/something-else
    expose:
      - 1447
    ports:
      - 5001:1337
    depends_on:
      - db
    command: bash -c "yarn install && yarn build && yarn start"
    restart: always
  watcher:
    build:
      context: ./watcher
      dockerfile: Dockerfile
    environment:
      LICENSE_KEY: ${LICENSE_KEY:?no license key specified}
    volumes:
      - ./watcher:/usr/src/app
      - ${SOME_DIRECTORY:?no directory specified}:/usr/src/something:ro
  db:
    image: mysql:8.0.23
    environment:
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD:?no database password specified}
      MYSQL_DATABASE: some_db
    volumes:
      - ./db:/var/lib/mysql
    restart: always
  db-backup:
    build:
      context: ./db-backup
      dockerfile: Dockerfile.production
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: some_db
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD:?no database password specified}
    volumes:
      - ./db-backup/backups:/backups
    restart: always
The service that doesn't stop together with others is the last one - db-backup. Here's an example of its Dockerfile.production:
FROM alpine:3.13.1
COPY ./scripts/startup.sh /usr/local/startup.sh
RUN chmod +x /usr/local/startup.sh
# NOTE used for testing when needs to run cron tasks more frequently
# RUN mkdir /etc/periodic/1min
COPY ./cron/daily/* /etc/periodic/daily
RUN chmod +x /etc/periodic/daily/*
RUN sh /usr/local/startup.sh
CMD [ "crond", "-f", "-l", "8"]
And here's an example of the ./scripts/startup.sh:
#!/bin/sh
echo "Running startup script"
echo "Checking if mysql-client is installed"
apk update
if ! apk info | grep -Fxq "mysql-client";
then
    echo "Installing MySQL client"
    apk add mysql-client
    echo "MySQL client installed"
fi
# NOTE this was used for testing. backups should run daily, thus script should
# normally be placed in /etc/periodic/daily/
# cron_task_line="* * * * * run-parts /etc/periodic/1min"
# if ! crontab -l | grep -Fxq "$cron_task_line";
# then
# echo "Enabling cron 1min periodic tasks"
# echo -e "${cron_task_line}\n" >> /etc/crontabs/root
# fi
echo "Startup script finished"
All this happens on all the Ubuntu 18.04 machines that I've tried running this on. Didn't try it on anything else.
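One thing worth checking, as a hypothesis: the stack is started with -f docker-compose.production.yml, but stopped with plain docker-compose stop, which reads the default docker-compose.yml. If that default file exists but does not define db-backup, Compose will only stop the services it knows about. Compose identifies its containers by labels, so a diagnostic sketch (substitute the real container name) is to compare the project and service labels of the container that won't stop:

```
docker inspect --format \
  '{{ index .Config.Labels "com.docker.compose.project" }}/{{ index .Config.Labels "com.docker.compose.service" }}' \
  CONTAINER
```

If the printed project or service doesn't match what sudo docker-compose -f docker-compose.production.yml ps shows, that would explain why stop skips it.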

How to run docker-compose in production

I have built a MEAN stack application with an nginx front end.
I have 2 Dockerfiles - one for the front end and one for the back end.
And I have a docker-compose file that pulls them together along with the database.
This works great on my development machine.
I then push the images to my Docker Hub repository.
On my production Ubuntu machine I pull the images that I want from Docker Hub.
But how should I run them?
I transfer my docker-compose file to the server and try to run it:
docker-compose -f docker-compose.prod.yml up
but it complains that the folder structure isn't what I have on my dev machine:
ERROR: build path /home/demo/api either does not exist, is not accessible, or is not a valid URL.
I don't want to put all the code on the server and rebuild it; surely that defeats the purpose of using Docker Hub images?
I also need the docker-compose file to pull in the .prod.env file for database credentials etc.
I know I'm missing something here.
How do I run my images without rebuilding them from scratch?
Do I need another service for this?
Thanks in advance
docker-compose.prod.yml:
version: '3'
services:
  # Database
  database:
    env_file:
      - .prod.env
    image: mongo
    restart: always
    environment:
      # MONGO_INITDB_ROOT_USERNAME: root
      # MONGO_INITDB_ROOT_PASSWORD: $DB_ADMIN_PASSWORD
      # Create a new database. Please note, the
      # /docker-entrypoint-initdb.d/init.js has to be executed
      # in order for the database to be created
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: $MONGO_INITDB_ROOT_PASSWORD
      DB_NAME: $DB_NAME
      DB_USER: $DB_USER
      DB_PASSWORD: $DB_PASSWORD
      MONGO_INITDB_DATABASE: $DB_NAME
    volumes:
      # Add the db-init.js file to the Mongo DB container
      - ./mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro
      - /data/db
    ports:
      - '27017-27019:27017-27019'
    networks:
      - backend-net
  # Database management
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: $MONGO_INITDB_ROOT_PASSWORD
      ME_CONFIG_MONGODB_SERVER: database
    depends_on:
      - database
    networks:
      - backend-net
  # Nodejs API
  backend:
    depends_on:
      - database
    env_file:
      - .prod.env
    build:
      context: ./api
      dockerfile: Dockerfile-PROD-API
    # Note: put this container name into proxy.conf.json for local angular CLI development instead of localhost
    container_name: node-api-prod
    networks:
      - backend-net
  # Nginx and compiled angular app
  frontend:
    build:
      context: ./ui
      dockerfile: Dockerfile-PROD-UI
    ports:
      - "8180:80"
    container_name: nginx-ui-prod
    networks:
      - backend-net
networks:
  backend-net:
    driver: bridge
DOCKERFILE-PROD-API:
#SERVER ========================================
FROM node:10-alpine as server
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
#RUN ls -lha
EXPOSE 3000
CMD ["npm", "run", "start"]
DOCKERFILE-PROD-UI:
#APP ========================================
FROM node:10-alpine as build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install @angular/cli && npm install
COPY . .
RUN npm run build
#RUN ls -lha
#FINAL ========================================
FROM nginx:1.18.0-alpine
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
Using full image names, including the Docker Hub path, resolved the issue for me.
Working solution shown below:
Dockerfile-PROD-UI
#GET ANGULAR ========================================
FROM node:10-alpine as base
WORKDIR /usr/src/app
COPY ui/package*.json ./
RUN npm install @angular/cli && npm install
COPY ui/. .
#BUILD ANGULAR ========================================
FROM base as build
RUN npm run build
#RUN ls -lha
#NGINX ========================================
FROM nginx:1.18.0-alpine
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
Dockerfile-PROD-API
#SERVER ========================================
FROM node:10-alpine as server
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
#RUN ls -lha
EXPOSE 3000
CMD ["npm", "run", "start"]
docker-compose.yml
version: '3.5'
services:
  # Database
  database:
    image: mongo
    restart: always
    env_file:
      - .prod.env
    environment:
      # MONGO_INITDB_ROOT_USERNAME: root
      # MONGO_INITDB_ROOT_PASSWORD: $DB_ADMIN_PASSWORD
      # Create a new database. Please note, the
      # /docker-entrypoint-initdb.d/init.js has to be executed
      # in order for the database to be created
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: $MONGO_INITDB_ROOT_PASSWORD
      DB_NAME: $DB_NAME
      DB_USER: $DB_USER
      DB_PASSWORD: $DB_PASSWORD
      MONGO_INITDB_DATABASE: $DB_NAME
    volumes:
      # Add the db-init.js file to the Mongo DB container
      - ./mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro
      - db-data:/data/db
    ports:
      - '27017-27019:27017-27019'
    networks:
      - backend-net
  # Nodejs API
  backend:
    image: DOCKERHUBHUSER/DOCKERHUB_REPO:prod-backend
    restart: always
    depends_on:
      - database
    env_file:
      - .prod.env
    build:
      context: ./api
      dockerfile: Dockerfile-PROD-API
    container_name: backend
    networks:
      - backend-net
  # Nginx and compiled angular app
  frontend:
    image: DOCKERHUBHUSER/DOCKERHUB_REPO:prod-frontend
    restart: always
    depends_on:
      - backend
    build:
      context: .
      dockerfile: Dockerfile-PROD-UI
    ports:
      - "8180:80"
    container_name: frontend
    networks:
      - backend-net
networks:
  backend-net:
    driver: bridge
volumes:
  db-data:
    name: db-data
    external: true
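With every service carrying an image: name, the build/publish/deploy split looks roughly like this (a sketch; the repository names above are placeholders):

```
# On the dev machine: build and push the named images
docker-compose -f docker-compose.prod.yml build
docker-compose -f docker-compose.prod.yml push

# On the server: only docker-compose.prod.yml and .prod.env are needed;
# Compose uses the prebuilt image: and never touches the build contexts
docker-compose -f docker-compose.prod.yml pull
docker-compose -f docker-compose.prod.yml up -d
```

Since up no longer needs to build, the missing /home/demo/api build path on the server stops mattering.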

"Error: Cannot find module" with Nodemon and Docker, even with volumes mounted

I keep getting errors that my modules don't exist when I'm running nodemon inside Docker and I save the Node files. It takes a couple of saves before it throws the error. I have the volumes mounted as the answers here suggested, but I'm still getting the error and I'm not sure what's causing it.
Here is my docker-compose.yml file.
version: "3.7"
services:
  api:
    container_name: api
    build:
      context: ./api
      target: development
    restart: on-failure
    ports:
      - "3000:3000"
      - "9229:9229"
    volumes:
      - "./api:/home/node/app"
      - "node_modules:/home/node/app/node_modules"
    depends_on:
      - db
    networks:
      - backend
  db:
    container_name: db
    command: mongod --noauth --smallfiles
    image: mongo
    restart: on-failure
    volumes:
      - "mongo-data:/data/db"
      - "./scripts:/scripts"
      - "./data:/data/"
    ports:
      - "27017:27017"
    networks:
      - backend
networks:
  backend:
    driver: bridge
volumes:
  mongo-data:
  node_modules:
Here is my docker file:
# Get current Node Alpine Linux image.
FROM node:alpine AS base
# Expose port 3000 for node.
EXPOSE 3000
# Set working directory.
WORKDIR /home/node/app
# Copy project content.
COPY package*.json ./
# Development environment.
FROM base AS development
# Set environment of node to development to trigger flag.
ENV NODE_ENV=development
# Express flag.
ENV DEBUG=app
# Run NPM install.
RUN npm install
# Copy source code.
COPY . /home/node/app
# Run the app.
CMD [ "npm", "start" ]
# Production environment.
FROM base AS production
# Set environment of node to production to trigger flag.
ENV NODE_ENV=production
# Run NPM install.
RUN npm install --only=production --no-optional && npm cache clean --force
# Copy source code.
COPY . /home/node/app
# Set user to node for better security.
USER node
# Run the app.
CMD [ "npm", "run", "start:prod" ]
Turns out I didn't put my .dockerignore in the proper folder. You're supposed to put it in the build context folder (here, ./api).
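For reference, a minimal .dockerignore in the build context (./api in the compose file above) that keeps the host's node_modules out of the image - the usual culprit for this class of "Cannot find module" error when a node_modules volume is in play:

```
node_modules
npm-debug.log
.git
```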

Sending http requests from docker container using same network as sending requests from host machine

My application is dockerized. It's a Python/Django application. We are using a local SMS-sending API that is restricted by IP, so I have given them my EC2 IP address, and I am running my Docker container on this EC2 machine. But my Python app is not able to send requests to that API, because the Docker container has a different IP.
How do I solve this problem?
Dockerfile
# ToDo use alpine image
FROM python:3.6
# Build Arguments with defaults
ARG envior
ARG build_date
ARG build_version
ARG maintainer_name='Name'
ARG maintainaer_email='email@email.com'
# Adding Labels
LABEL com.example.service="Service Name" \
    com.example.maintainer.name="$maintainer_name" \
    com.example.maintainer.email="$maintainaer_email" \
    com.example.build.enviornment="$envior" \
    com.example.build.version="$build_version" \
    com.example.build.release-date="$build_date"
# Create app directory
RUN mkdir -p /home/example/app
# Install Libre Office for pdf conversion
RUN apt-get update -qq \
    && apt-get install -y -q libreoffice \
    && apt-get remove -q -y libreoffice-gnome
# Cleanup after apt-get commands
RUN apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
        /var/cache/apt/archives/*.deb /var/cache/apt/*cache.bin
# Activate WORKING DIR
WORKDIR /home/example/app
# Copying requirements
COPY requirements/${envior}.txt /tmp/requirements.txt
# Install the app dependencies
# ToDo Refactor requirements
RUN pip install -r /tmp/requirements.txt
# Envs
ENV DJANGO_SETTINGS_MODULE app.settings.${envior}
ENV ENVIORNMENT ${envior}
# ADD the source code and entry point into the container
ADD . /home/example/app
ADD entrypoint.sh /home/example/app/entrypoint.sh
# Making entry point executable
RUN chmod +x entrypoint.sh
# Exposing port
EXPOSE 8000
# Entry point and CMD
ENTRYPOINT ["/home/example/app/entrypoint.sh"]
docker-compose.yml
version: '3'
services:
  postgres:
    image: onjin/alpine-postgres:9.5
    restart: unless-stopped
    ports:
      - "5432:5432"
    environment:
      LC_ALL: C.UTF-8
      POSTGRES_USER: django
      POSTGRES_PASSWORD: django
      POSTGRES_DB: web
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    build:
      context: .
      args:
        environ: local
    command: gunicorn app.wsgi:application -b 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - postgres
    environment:
      DATABASE_URL: 'postgres://django:django@postgres/web'
      DJANGO_MANAGEPY_MIGRATE: 'on'
      DJANGO_MANAGEPY_COLLECTSTATIC: 'on'
      DJANGO_LOADDATA: 'off'
      DOMAIN: '0.0.0.0'
volumes:
  postgres_data:
You should try putting the container on the same network stack as your EC2 instance, i.e. use host networking. Note that Compose does not let you do this through the networks: key - host is a predefined network, and you cannot create a user-defined network with driver: host - so attach the service directly with network_mode:
suggested docker-compose file
version: '3'
services:
  postgres:
    [...]
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    [...]
    network_mode: host
volumes:
  postgres_data:
With host networking the web service shares the EC2 instance's network interfaces, so its ports: mapping is ignored (gunicorn's port 8000 is reachable on the host directly), and the database must be reached via localhost:5432 instead of the postgres service name.
Also worth checking: outbound traffic from a container on the default bridge network is already NATed through the host, so requests to the SMS API should normally leave with the EC2 instance's IP anyway - it may be worth confirming with the provider which source IP they actually see.
Further reading about networks: official ref.
An interesting read too from the official networking tutorial.
Alternatively, publish the port from the Docker container to the host machine, then configure ec2IP:port in the SMS application.
