I am trying to containerize the front end of my website and automate its deployment. My goal is to have a new image generated and hosted whenever a change is pushed, and to have the server automatically fetch it and restart the container. Here are the steps I am taking:
I create the image by first building my Node application and then bundling the distribution and nginx configuration files into the latest linuxserver/letsencrypt image. This is the Dockerfile:
# Use the NodeJS image as builder
FROM node:alpine AS builder
# Create the workspace
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy the package file and source code
COPY package.json /usr/src/app
COPY . ./
# Install dependencies
RUN npm install
# Build the application
RUN npm run build
# The nginx server, this builds the final image
FROM linuxserver/letsencrypt
# Copy the nginx configuration
COPY ./config/nginx.conf /config
# Copy the output of the builder
COPY --from=builder /usr/src/app/dist /config/www
# Inform Docker to listen on port 443 and 80
EXPOSE 443 80
This image is uploaded to GitHub's package registry and I poll for updates using Watchtower.
The image is started using this docker-compose file:
version: "3"
services:
...
frontend:
image: [IMAGE]
container_name: frontend
cap_add:
- NET_ADMIN
environment:
- PUID=1000
- PGID=1000
- TZ=[TIMEZONE]
- URL=[URL]
- SUBDOMAINS=www,
- VALIDATION=http
ports:
- 443:443
- 80:80
volumes:
- ./frontend:/config
restart: unless-stopped
...
The issue is that the files that were packaged into the image using the COPY instruction are being overwritten when I use the following line in my docker-compose:
volumes:
- ./frontend:/config
If I remove that section from my docker-compose file everything works fine, however this is not a solution because that folder stores important data.
I have read that mounting a volume completely overwrites any previous data; however, I like the fact that I can easily load the image onto my server and have all the required files already embedded. Is there anything I can do to fix this, or am I misusing/misunderstanding Docker images?
I have tried setting the volume to read-only as suggested here; however, this did not work and instead caused the container to continually stop and restart.
I have also briefly read about bind mounts and am wondering if they will be of any use.
This behavior is expected. Docker mounts work the same way as Linux mounts: the mount hides (shadows) the existing contents of the target directory, so the container sees only the contents of the source.
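A quick way to see this with a throwaway container (a sketch; any small image works):

mkdir empty
# the empty host directory shadows everything the image shipped in /usr/share
docker run --rm -v "$PWD/empty:/usr/share" alpine ls /usr/share
# prints nothing: the mount hides the image's files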
My suggestion is to use another destination directory for your volume, e.g.
volumes:
- ./frontend:/someotherdir
And then adjust your nginx configuration to look for JS files there.
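A minimal sketch of the corresponding nginx change (the actual server block lives in the linuxserver image's default site config, so adjust whichever block serves your site):

server {
    listen 80;
    # serve the bundle from the new mount destination
    root /someotherdir;
    index index.html;
}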
I found out that I could retain the data in the image by first creating a named volume:
volumes:
frontend_data:
And then mounting that volume into the container:
services:
frontend:
...
volumes:
- frontend_data:/config
...
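One caveat: a named volume is seeded from the image's files only while the volume is empty, so pushing a new image will not refresh content already stored in the volume. A sketch of forcing a re-seed after an image update (Compose prefixes the volume name with the project name, so check docker volume ls first; this also discards anything the container wrote to the volume at runtime):

docker-compose down
docker volume rm <project>_frontend_data
docker-compose up -d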
Related
I'm trying to run a Next.js project inside docker-compose. To take advantage of hot-reloading, I'm mounting in the entire project to the Docker image as a volume.
So far, so good!
This is where things are starting to get tricky: For this particular project, it turns out Apple Silicon users need a .babelrc file included in their dockerized app, but NOT in the files on their computer.
All other users do not need a .babelrc file at all.
To sum up, this is what I'd like to be able to do:
hot reload project (hence ./:/usr/src/app/)
have an environment variable write content to /usr/src/app/.babelrc.
not have a .babelrc in the host's project root.
My attempt at solving this was to include the .babelrc under ci-cd/.babelrc in the host file system.
Then I tried mounting the file as a volume, like - ./ci-cd/.babelrc:/usr/src/app/.babelrc. But then a .babelrc file gets written back to the root of the project on the host filesystem.
I also tried including COPY ./ci-cd/.babelrc /usr/src/app/.babelrc in the Dockerfile, but it gets overwritten by docker-compose's volumes property.
Here's my Dockerfile:
FROM node:14
WORKDIR /usr/src/app/
COPY package.json .
RUN npm install
And the docker-compose.yml:
version: "3.8"
services:
# Database image
psql:
image: postgres:13
restart: unless-stopped
ports:
- 5432:5432
# image for next.js project
webapp:
build: .
command: >
bash -c "npm run dev"
ports:
- 3002:3002
expose:
- 3002
depends_on:
- psql
volumes:
- ./:/usr/src/app/
I have two problems with a Flask app in Docker. First, the application runs slowly and freezes after the last request finishes (for example: the first route works fine, but clicking the next link/page freezes the app; if I go back to the homepage via the URL and reload, it works again). Outside Docker the app runs very fast.
The second problem is that Docker does not sync files into the container after I change them.
# Dockerfile
FROM python:3.9
# set work directory
WORKDIR /base
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update
RUN pip install --upgrade pip
COPY ./requirements.txt /base/requirements.txt
COPY ./base_app.py /base/base_app.py
COPY ./config.py /base/config.py
COPY ./certs/ /base/certs/
COPY ./app/ /base/app/
COPY ./tests/ /base/tests/
RUN pip install -r requirements.txt
# docker-compose
version: '3.3'
services:
web:
build: .
command: tail -f /dev/null
volumes:
- ${PWD}/app/:/usr/src/app/
networks:
- flask-network
ports:
- 5000:5000
depends_on:
- flaskdb
flaskdb:
image: postgres:13-alpine
volumes:
- ${PWD}/postgres_database:/var/lib/postgresql/data/
networks:
- flask-network
environment:
- POSTGRES_DB=db_name
- POSTGRES_USER=user
- POSTGRES_PASSWORD=pass
ports:
- "5432:5432"
restart: always
networks:
flask-network:
driver: bridge
You have a couple of significant errors in the code you show.
The first problem is that your application doesn't run at all: the Dockerfile is missing the CMD line that tells Docker what to run, and you override it in the Compose setup with a meaningless tail command. You should generally set this in the Dockerfile:
CMD ["./base_app.py"]
You can remove most of the Compose settings you have. You do not need command: (it's in the Dockerfile), volumes: (what you have is ineffective and the code is in the image anyways), or networks: (Compose provides a network named default; delete all of the networks: blocks in the file).
The second problem is that Docker does not sync files into the container after I change them.
I don't usually recommend trying to do actual development in Docker. You can tell Compose to just start the database
docker-compose up -d flaskdb
and then you can access it from the host (PGHOST=localhost, PGPORT=5432). This means you can use an ordinary non-Docker Python virtual environment for development.
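A sketch of that workflow (assuming requirements.txt lists everything the app needs, and the credentials from the Compose file):

docker-compose up -d flaskdb          # start only the database container
python -m venv venv                   # ordinary local virtual environment
. venv/bin/activate
pip install -r requirements.txt
export PGHOST=localhost PGPORT=5432   # point the app at the published port
python base_app.py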
If you do want to try to use volumes: to simulate a live development environment (you talk about performance; this specific path can be quite slow on non-Linux hosts) then you need to make sure the left side of volumes: is the host directory with your code (probably .), the right side is the container directory (your Dockerfile uses /base), and your Dockerfile doesn't rearrange, modify, or generate the files at all (the bind mount hides all of it).
# don't run the application in the image; use the Docker infrastructure
# to run something else
volumes:
# v-------- left side: host path (matches COPY source directory)
- .:/base
# ^^^^-- right side: container path (matches WORKDIR/destination directory)
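Putting those pieces together, a trimmed Compose file for live development might look like this (a sketch; it assumes the CMD lives in the Dockerfile as suggested above):

version: '3.3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      # host source tree shadows the image's /base for live reloading
      - .:/base
    depends_on:
      - flaskdb
  flaskdb:
    image: postgres:13-alpine
    volumes:
      - ./postgres_database:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    ports:
      - "5432:5432"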
I've got a simple Node / React project. I'm trying to use Docker to create two containers, one for the server, and one for the client, each with their own Dockerfile in the appropriate directory.
docker-compose.yml
version: '3.9'
services:
client:
image: node:14.15-buster
build:
context: ./src
dockerfile: Dockerfile.client
ports:
- '3000:3000'
- '45799:45799'
volumes:
- .:/app
tty: true
server:
image: node:14.15-buster
build:
context: ./server
dockerfile: Dockerfile.server
ports:
- '3001:3001'
volumes:
- .:/app
depends_on:
- redis
links:
- redis
tty: true
redis:
container_name: redis
image: redis
ports:
- '6379'
src/Dockerfile.client
FROM node:14.15-buster
# also the directory you land in on ssh
WORKDIR /app
CMD cd /app && \
yarn && \
yarn start:client
server/Dockerfile.server
FROM node:14.15-buster
# also the directory you land in on ssh
WORKDIR /app
CMD cd /app && \
yarn && \
yarn start:server
After building and starting the containers, both containers run the same command, seemingly at random. Either both run yarn start:server or yarn start:client. The logs clearly detail duplicate startup commands and ports being used. Requests to either port 3000 (client) or 3001 (server) confirm that the same one is being used in both containers. If I change the command in both Dockerfiles to echo the respective filename (Dockerfile.server! or Dockerfile.client!), startup reveals only one Dockerfile being used for both containers. I am also running the latest version of Docker on Mac.
What is causing docker-compose to use the same Dockerfile for both containers?
After a lengthy and painful bout of troubleshooting, I narrowed the issue down to duplicate image references. image: node:14.15-buster for each service in docker-compose.yml and FROM node:14.15-buster in each Dockerfile.
Why this would cause this behavior is unclear, but after removing the image references in docker-compose.yml and rebuilding / restarting, everything works as expected.
When you run docker-compose build with both image and build properties set on a service, it will build an image according to the build property and then tag the image according to the image property.
In your case, you have two services building different images and tagging them with the same tag node:14.15-buster. One will overwrite the other.
This probably has the additional unintended consequence of causing your next image to be built on top of the previously built image instead of the true node:14.15-buster.
Then when you start the service, both containers will use the image tagged node:14.15-buster.
From the docs:
If you specify image as well as build, then Compose names the built image with the webapp and optional tag specified in image
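Concretely, either delete the image: keys entirely or give each service its own tag; the tag names below are illustrative:

services:
  client:
    image: myapp-client:latest      # distinct tag per service
    build:
      context: ./src
      dockerfile: Dockerfile.client
  server:
    image: myapp-server:latest
    build:
      context: ./server
      dockerfile: Dockerfile.server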
I have a python and java app that I want to run inside a container.
I have a folder named pass-hash with:
--h2o-start folder containing the Dockerfile that I use to start h2o.jar, which starts a machine-learning server.
--model-generator folder containing passhash.py and a data.csv file.
The passhash.py app contains h2o.import_file("/var/my-data/data.csv") which takes the data.csv file from the my-data folder I created in the container and generates a POJO file with it.
The h2o-start Dockerfile contains:
FROM openjdk:8
ADD h2o.jar h2o.jar
EXPOSE 54321
EXPOSE 54322
ENTRYPOINT ["java", "-jar", "h2o.jar"]
The model-generator Dockerfile contains:
FROM python:2.7-slim
WORKDIR /model-generator
ADD . /model-generator
RUN mkdir /var/my-data
COPY data.csv /var/my-data
RUN chmod 777 /var/my-data/data.csv
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 8080
ENV NAME World
CMD ["python", "passhash.py"]
The docker-compose.yml file contains:
version: "3"
services:
h2o-start:
image: milanpanic2/h2o-start
build:
context: ./h2o-start
ports:
- "54321:54321"
- "54322:54322"
volumes:
- "home/data"
model-generator:
image: milanpanic2/model-generator
build:
context: ./model-generator
ports:
- "8080:8080"
depends_on:
- "h2o-start"
volumes:
- "csvdata:/var/my-data"
volumes:
csvdata:
Docker volumes are designed to share folders between the host machine and Docker containers. If you copy any file into the volume's host-side folder, it automatically becomes available inside the container.
The syntax for docker volume is as below:
-v /home/data:/data
In the above syntax, /home/data is a folder on the host machine and /data is the corresponding folder inside the Docker container.
If you copy any file into the /home/data folder on the host, it automatically becomes available inside the container under /data.
Hope this is clear to you.
If you are using docker-compose, then add a volumes entry as below:
volumes:
- /home/data:/data
for example:
version: '3'
services:
app:
image: nginx:alpine
ports:
- 80:80
volumes:
- /home/data:/data
I don't know, that is the solution I came up with. Can you tell me a better solution for my problem? My problem is: I have a Python app that uses a data.csv file to generate a POJO machine-learning model. When I give the Python app a path to the data file, it throws an exception that the file doesn't exist. I also have another app, written in Java, that uses the generated POJO file and makes predictions based on that data. The Java app also updates data.csv every day. I want each app (microservice) to run in a separate container, but both need to use data.csv.
To answer this, you need to use volumes. Try the code below.
This is your docker-compose file:
version: "3"
services:
h2o-start:
image: milanpanic2/h2o-start
build:
  context: ./h2o-start
ports:
- "54321:54321" - "54322:54322"
volumes:
- /home/data:/var/my-data
model-generator:
image: milanpanic2/model-generator
build:
  context: ./model-generator
ports:
- "8080:8080"
depends_on:
- "h2o-start"
volumes:
- /home/data:/var/my-data
This is your docker file
FROM python:2.7-slim
WORKDIR /model-generator
ADD . /model-generator
RUN mkdir /var/my-data
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 8080
ENV NAME World
CMD ["python", "passhash.py"]
And where is your Java Dockerfile?
Now just create a default data.csv file, copy it to /home/data on your host machine, and run the application; let me know how it goes.
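For the Java side, a matching service can mount the same host folder so both containers read and write the same data.csv. A fragment to add under services: in the same Compose file (the service name and build context are illustrative, since the Java Dockerfile isn't shown):

  java-app:
    build:
      context: ./java-app
    volumes:
      # same host folder as model-generator, so both see data.csv
      - /home/data:/var/my-data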
If you mean adding files when you run docker build, take a look at the ADD & COPY instructions.
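For reference, both instructions copy files from the build context into the image at build time; COPY is the simpler one, while ADD has extra behavior:

COPY data.csv /var/my-data/data.csv   # plain copy from the build context
ADD archive.tar.gz /opt/              # ADD also auto-extracts local tar archives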
I am using docker-compose for a basic web app. When the image is built, it copies the static JS files in (ADD) and then builds them.
I then want to expose that directory to other containers, using VOLUME.
E.g.
Dockerfile
ADD ./site/static /site/static
WORKDIR /site/static
RUN gulp
docker-compose.yml
app:
build: .
volumes:
- /site/static
http:
image: nginx
volumes_from:
- app
nginx.conf
location /static {
alias /site/static
}
(Note, this is just an example)
The problem is that it seems to work the first time (i.e. when the volume does not exist), but is then never overwritten by the modified image. If I was using purely a Dockerfile, I could achieve this by putting VOLUME after ADD.
Is there a way to allow this, or am I approaching it completely wrong?
Thanks
Possible solution 1
I might be wrong, but I think the trouble is that when (and if) you do
docker-compose down && docker-compose up
your containers are recreated, and a new "anonymous" volume is created.
You can check my guess running:
docker volume ls
I would try to use named volume, like so:
version: "2"
volumes:
app-volume: ~
services:
app:
build: .
volumes:
- app-volume:/site/static
http:
image: nginx
volumes:
- app-volume:/site/static
You need docker-compose 1.6.0+ and Docker Engine 1.10.0+ to use version 2 of the Compose file format.
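Keep in mind that a named volume is populated from the image only while it is empty; after you rebuild the image, drop the volume so the new /site/static is copied in again (this also discards anything written to the volume at runtime):

docker-compose down -v   # also removes the named volumes declared in the file
docker-compose up --build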
Possible solution 2
just
app:
build: .
volumes:
- ./site/static:/site/static # maps host directory `./site/static` (relative to docker-compose.yml) to /site/static inside container
http:
image: nginx
volumes_from:
- app
And remove
ADD ./site/static /site/static
from your Dockerfile. Note that the RUN gulp step then has no sources at image-build time, so the bundle has to be built on the host or at container start instead; see the sketch below.
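A sketch of running the build at container start instead (assumes gulp is available in the image, e.g. installed globally with npm):

app:
  build: .
  command: sh -c "cd /site/static && gulp"
  volumes:
    - ./site/static:/site/static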