docker-compose volumes not persisting after executing "docker-compose down" - docker

I have a Docker project which uses docker-compose to stand up an API container and a database container. I also have it set up to create volumes for uploads and logs. When I build a new image and then stand it up using docker-compose up, any data that was previously in the logs or uploads volumes ceases to exist, but the data in our db-data volume persists. What do I need to change in my process and/or config to preserve the data in those other two volumes?
Here's how we stand up the stack:
docker-compose down
docker build -t api:latest .
docker save -o api.tar api:latest postgres:14
docker load -i api.tar
docker-compose up -d
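For reference, docker-compose down without -v does not remove named volumes, so one sanity check while debugging is to list and inspect them between runs (the <project>_ prefix below is a placeholder for whatever Compose derives from the directory name):
# named volumes survive `down`; only `down -v` (or `docker volume rm`) deletes them
docker volume ls
docker volume inspect <project>_uploads <project>_logs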
Here's the Dockerfile:
# syntax=docker/dockerfile:1
FROM cupy/cupy AS deps
COPY nodesource_setup.sh .
RUN bash nodesource_setup.sh
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install libssl-dev ca-certificates cmake nodejs libgl-dev libglib2.0-0 -y
RUN pip3 install opencv-python python-dotenv
WORKDIR /app
COPY ./python ./ml_services/
COPY ./api/ .
RUN npm config set unsafe-perm true
RUN npm ci
# Rebuild the source only when needed
FROM deps AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=deps /app .
ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build
FROM deps AS runner
WORKDIR /app
ENV NODE_ENV production
ENV NEXT_TELEMETRY_DISABLED=1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/ml_services ./ml_services
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/.env.production.local ./.env
COPY --from=builder /app/prisma ./prisma
COPY --from=builder /app/.next ./.next
RUN mkdir -p /logs
RUN mkdir -p /uploads
RUN chown -R nextjs ./ /logs /uploads
# Cleanup
RUN apt-get clean && rm -rf /var/lib/apt
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["npm", "run", "deploy"]
And the docker-compose.yml
version: "3.7"
services:
database:
image: postgres:14
restart: always
env_file: .env.production.local
container_name: postgres
healthcheck:
test: "pg_isready --username=autocal && psql --username=autocal --list"
timeout: 10s
retries: 20
volumes:
- db-data:/var/lib/postgresql/data
api:
image: api:latest
ports:
- 3000:3000
depends_on: [database]
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
volumes:
- uploads:/uploads
- logs:/logs
volumes:
db-data:
uploads:
logs:

Make sure your API code is actually writing logs and uploads to /logs and /uploads.
It may be writing them to ~/logs and ~/uploads instead (in the home directory of the user the process runs as).
Use absolute paths rather than relative paths in your code.
Please go through the Node.js docs for the path module: https://nodejs.org/api/path.html
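A quick way to check where the app is actually writing (a diagnostic sketch only; the service name api is taken from the compose file above, and /app is the WORKDIR where relative paths would land):
# compare the mounted volume paths with the relative-path locations under the WORKDIR
docker-compose exec api ls -la /logs /uploads
docker-compose exec api ls -la /app/logs /app/uploads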

Absolutely ended up being a code issue in the API layer. Sigh. Thanks all.

Related

How to get docker-compose to populate old volumes?

I'm running a docker container where I copy the content of the current folder into /app in the container.
Then I put the /app/media folder of the container in a volume.
However, when the volume was already created by a previous docker-compose build, I don't find all the new files put in my ./media folder, which are supposed to be copied to /app/media in the container...
So I'm wondering how docker populates the volume. Isn't it supposed to check the container folder for new files and put them in the volume?
I had this issue before and it was the /media folder in the .dockerignore file, but now it's happening again with other files in the /media folder.
Following What is the right way to add data to an existing named volume in Docker?, I tried to do:
docker run -v mediafiles:/data --name helper busybox true
cd ./media && docker cp . helper:/data
docker rm helper
And it is now working
Thank you
Here is my docker-compose.yml
version: '3.7'
services:
  nginx:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    restart: always
    ports:
      - 80:80
    depends_on:
      - backend
      - frontend
    volumes:
      - staticfiles:/app/static
      - mediafiles:/app/media
    networks:
      spa_network:
  frontend:
    build:
      context: .
      dockerfile: ./compose/production/frontend/Dockerfile
    restart: always
    stdin_open: true
    command: yarn start
    ports:
      - 3000:3000
    depends_on:
      - backend
    networks:
      spa_network:
        ipv4_address: 172.20.128.3
  backend:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    restart: always
    command: /start
    volumes:
      - staticfiles:/app/static
      - mediafiles:/app/media
      - sqlite_db:/app/db
    ports:
      - 8000:8000
    env_file:
      - ./env/prod-sample
    networks:
      spa_network:
        ipv4_address: 172.20.128.2
networks:
  spa_network:
    ipam:
      config:
        - subnet: 172.20.0.0/16
volumes:
  sqlite_db:
  staticfiles:
  mediafiles:
Here is my Dockerfile for the backend (where I don't find the /app/media files):
FROM python:3.8-slim-buster
ENV PYTHONUNBUFFERED 1
RUN apt-get update \
# dependencies for building Python packages
&& apt-get install -y build-essential netcat \
# psycopg2 dependencies
&& apt-get install -y libpq-dev \
# Translations dependencies
&& apt-get install -y gettext \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
RUN addgroup --system django \
&& adduser --system --ingroup django django
# Requirements are installed here to ensure they will be cached.
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
#COPY ./compose/production/django/entrypoint /entrypoint
#RUN sed -i 's/\r$//g' /entrypoint
#RUN chmod +x /entrypoint
#RUN chown django /entrypoint
COPY ./compose/production/django/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
RUN chown django /start
WORKDIR /app
# avoid 'permission denied' error
# copy project code
COPY . .
RUN chown -R django:django /app
#USER django
#ENTRYPOINT ["/entrypoint"]
If the volume is new, then docker will copy any files in the image to the new volume.
If the volume isn't new, then nothing is copied and you get the existing, old contents of the volume.
More info here: https://docs.docker.com/storage/volumes/#populate-a-volume-using-a-container
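In other words, if you want the image contents copied in again, the volume has to be recreated, which of course deletes whatever is currently in it. A rough sketch (Compose usually prefixes the actual volume name with the project name, so <project>_mediafiles is a placeholder):
# stop the stack, drop the stale volume, then let `up` create and repopulate it from the image
docker-compose down
docker volume rm <project>_mediafiles
docker-compose up -d --build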

Dockerizing Nextjs

I need to publish my Next.js project on a server. For that, by request of the admin, I need to dockerize it (because the SWC compiler is not available on the server).
I set up Docker Desktop and created a Dockerfile (content below) and a docker-compose.yml (content below).
Then I successfully ran "docker-compose build" and then "docker-compose up" - after that the website works on my localhost:3000.
What are my next steps? What should I provide to the admin? I can push those 2 files to GitHub, but I guess that's not enough. I can see in my Docker app that it created an image of 723.88 MB. Maybe I need to send that one? But how, and where is it located?
I'm a noob in Docker, any advice is highly welcome.
My Dockerfile:
# Install dependencies only when needed
FROM node:alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json ./
RUN yarn install --frozen-lockfile
# Rebuild the source code only when needed
FROM node:alpine AS builder
WORKDIR /app
COPY . .
COPY --from=deps /app/node_modules ./node_modules
RUN yarn build && yarn install --production --ignore-scripts --prefer-offline
# Production image, copy all the files and run next
FROM node:alpine AS runner
WORKDIR /app
RUN npm install --global pm2
ENV NODE_ENV production
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
USER nextjs
EXPOSE 3000
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry.
ENV NEXT_TELEMETRY_DISABLED 1
# Run npm start script with PM2 when container starts
CMD [ "pm2-runtime", "npm", "--", "start" ]
My docker-compose.yml file:
version: '3'
services:
  next:
    build: ./frontend
    image: dockerhubid/project-webui:latest
    ports:
      - '3000:3000'
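One common way to hand this off is to ship the built image itself rather than only the two files: push it to a registry the server can pull from, or export it as a tarball and load it on the server. A sketch using the image name from the compose file above (the registry account is whatever yours actually is):
# option 1: push to a registry, then `docker pull` it on the server
docker push dockerhubid/project-webui:latest
# option 2: export to a file, copy the file to the server, and load it there
docker save -o project-webui.tar dockerhubid/project-webui:latest
docker load -i project-webui.tar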

Permission denied when running docker-compose up on Mac OS

I'm having some issues with permissions in my docker-compose and Dockerfile scripts.
When running docker-compose up I get a "Permission denied" error that prevents my API from being up and running.
This is what my docker-compose.yml file looks like (I skipped the database part because it's not relevant to the problem I have here):
version: '3'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "1338:1337"
    links:
      - postgres
    environment:
      - DATABASE_URL=postgres://postgres:postgres@postgres:5432/postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules
    command: [
      "docker/api/wait-for-postgres.sh",
      "postgres",
      "docker/api/start.sh"
    ]
And my Dockerfile:
FROM node:14
RUN apt-get update && apt-get install -y postgresql-client
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
RUN npm install -g nodemon
COPY . /usr/src/app
EXPOSE 1337
What I've tried so far is changing the permissions and switching to the root user inside my container, but it didn't change a thing (I still get the same "Permission denied" error).
FROM node:14
RUN apt-get update && apt-get install -y postgresql-client
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
RUN npm install -g nodemon
COPY . /usr/src/app
USER root
RUN chmod +x docker/api/start.sh
RUN chmod +x docker/api/wait-for-postgres.sh
EXPOSE 1337
EDIT:
Content of wait-for-postgres.sh script:
#!/bin/sh
# wait-for-postgres.sh
set -e
host="$1"
shift
until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 10
done
>&2 echo "Postgres is up - executing command"
exec "$#"
Any thoughts on this ? Thanks for your help !
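One way to narrow this down (a diagnostic sketch, not a fix; the service name api comes from the compose file above) is to check what permissions the scripts actually have inside the container, since the ./:/usr/src/app bind mount means the container sees the host's copies of the scripts rather than anything chmod-ed at build time:
# list the scripts as the container sees them; a missing execute bit here would explain the error
docker-compose run --rm api ls -l docker/api/wait-for-postgres.sh docker/api/start.sh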

entrypoint.sh: exec: gunicorn: not found

When I build image from Dockerfile:
FROM python:3.8.3-alpine as builder
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev jpeg-dev zlib-dev
RUN pip install --upgrade pip
COPY . .
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
FROM python:3.8.3-alpine
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
# install dependencies
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --no-cache /wheels/*
# copy entrypoint-prod.sh
COPY ./entrypoint.prod.sh $APP_HOME
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
And when I then run the containers, it shows me:
web_1 | PostgreSQL started
web_1 | /home/app/web/entrypoint.prod.sh: exec: line 14: gunicorn: not found
entrypoint.prod.sh
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $DB_HOST 5432; do
sleep 0.1
done
echo "PostgreSQL started"
fi
exec "$#"
I also have a problem with nginx, because when I enter the address 0.0.0.0:1337 I get a 502 Bad Gateway error (failed (113: Host is unreachable)). I think it is related to the problem above.
docker-compose.yml
version: '3.7'
services:
  nginx:
    build: ./nginx
    ports:
      - "1337:80"
    restart: always
    networks:
      - nginx_network
    depends_on:
      - web
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: gunicorn znajdki.wsgi:application --bind 0.0.0.0:8000
    expose:
      - "8000"
    env_file:
      - ./.env.prod
    networks:
      - nginx_network
      - postgres_network
    depends_on:
      - db
  db:
    image: postgres
    volumes:
      - postgres:/var/lib/postgresql/data
    env_file:
      - ./.env.prod.db
    networks:
      - postgres_network
networks:
  nginx_network:
    driver: bridge
  postgres_network:
    driver: bridge
volumes:
  postgres:
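As a first check (a diagnostic sketch, assuming the service name web from the compose file above), it may help to confirm whether gunicorn actually made it into the final image, since the runtime stage only installs whatever wheels were built from requirements.txt:
# verify gunicorn is installed and on PATH inside the web image
docker-compose run --rm web pip show gunicorn
docker-compose run --rm web sh -c 'command -v gunicorn || echo "gunicorn not on PATH"'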

GCP Cloud Run error on deploying my Docker image to Google Container Registry / Cloud Run

I'm quite new to Docker and GCP and am trying to find a working way to deploy my Laravel app on GCP.
I already set up CI and selected "cloudbuild.yaml" as the build configuration. I followed innumerable tutorials and read the Google Container docs, so I created a "cloudbuild.yaml" which includes arguments to use my docker-compose.yaml to create the stack of my app (app code, database, server).
During the Google Cloud Build process I get:
Step #0: Creating workspace_app_1 ...
Step #0: Creating workspace_web_1 ...
Step #0: Creating workspace_db_1 ...
Step #0: Creating workspace_app_1 ... done
Step #0: Creating workspace_web_1 ... done
Step #0: Creating workspace_db_1 ... done
Finished Step #0
Starting Step #1
Step #1: Already have image (with digest): gcr.io/cloud-builders/docker
Step #1: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #1
ERROR
ERROR: build step 1 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
docker-compose.yml:
version: "3.8"
volumes:
php-fpm-socket:
db-store:
services:
app:
build:
context: .
dockerfile: ./infra/docker/php/Dockerfile
volumes:
- php-fpm-socket:/var/run/php-fpm
- ./backend:/work/backend
environment:
- DB_CONNECTION=mysql
- DB_HOST=db
- DB_PORT=3306
- DB_DATABASE=${DB_NAME:-laravel_local}
- DB_USERNAME=${DB_USER:-phper}
- DB_PASSWORD=${DB_PASS:-secret}
web:
build:
context: .
dockerfile: ./infra/docker/nginx/Dockerfile
ports:
- ${WEB_PORT:-80}:80
volumes:
- php-fpm-socket:/var/run/php-fpm
- ./backend:/work/backend
db:
build:
context: .
dockerfile: ./infra/docker/mysql/Dockerfile
ports:
- ${DB_PORT:-3306}:3306
volumes:
- db-store:/var/lib/mysql
environment:
- MYSQL_DATABASE=${DB_NAME:-laravel_local}
- MYSQL_USER=${DB_USER:-phper}
- MYSQL_PASSWORD=${DB_PASS:-secret}
- MYSQL_ROOT_PASSWORD=${DB_PASS:-secret}
cloudbuild.yaml
steps:
  # running docker-compose
  - name: 'docker/compose:1.28.4'
    args: ['up', '-d']
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/MY_PROJECT_ID/laravel-docker-1']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'laravel-docker-1', '--image', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '--region', 'europe-west3', '--platform', 'managed']
images:
  - gcr.io/MY_PROJECT_ID/laravel-docker-1
What is wrong in this configuration?
I solved this issue and deployed a running Laravel 8 application to Google Cloud with the following Dockerfile. PS: any optimizations regarding the FROM and RUN steps are appreciated:
#
# PHP Dependencies
#
FROM composer:2.0 as vendor
WORKDIR /app
COPY database/ database/
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install \
--no-interaction \
--no-plugins \
--no-scripts \
--no-dev \
--prefer-dist
COPY . .
RUN composer dump-autoload
#
# Frontend
#
FROM node:14.9 as frontend
WORKDIR /app
COPY artisan package.json webpack.mix.js package-lock.json ./
RUN npm audit fix
RUN npm cache clean --force
RUN npm cache verify
RUN npm install -f
COPY resources/js ./resources/js
COPY resources/sass ./resources/sass
RUN npm run development
#
# Application
#
FROM php:7.4-fpm
WORKDIR /app
# Install PHP dependencies
RUN apt-get update -y && apt-get install -y build-essential libxml2-dev libonig-dev
RUN docker-php-ext-install pdo pdo_mysql opcache tokenizer xml ctype json bcmath pcntl
# Install Linux and Python dependencies
RUN apt-get install -y curl wget git file ruby-full locales vim
# Run definitions to make Brew work
RUN localedef -i en_US -f UTF-8 en_US.UTF-8
RUN useradd -m -s /bin/zsh linuxbrew && \
usermod -aG sudo linuxbrew && \
mkdir -p /home/linuxbrew/.linuxbrew && \
chown -R linuxbrew: /home/linuxbrew/.linuxbrew
USER linuxbrew
RUN /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
USER root
#RUN chown -R $CONTAINER_USER: /home/linuxbrew/.linuxbrew
ENV PATH "$PATH:/home/linuxbrew/.linuxbrew/bin"
#Install Chrome
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt install -y ./google-chrome-stable_current_amd64.deb
# Install Python modules (dependencies) of scraper
RUN brew install python3
RUN pip3 install selenium
RUN pip3 install bs4
RUN pip3 install pandas
# Copy Frontend build
COPY --from=frontend app/node_modules/ ./node_modules/
COPY --from=frontend app/public/js/ ./public/js/
COPY --from=frontend app/public/css/ ./public/css/
COPY --from=frontend app/public/mix-manifest.json ./public/mix-manifest.json
# Copy Composer dependencies
COPY --from=vendor app/vendor/ ./vendor/
COPY . .
RUN cp /app/drivers/chromedriver /usr/local/bin
#COPY .env.prod ./.env
COPY .env.local-docker ./.env
# Copy the scripts to docker-entrypoint-initdb.d which will be executed on container startup
COPY ./docker/ /docker-entrypoint-initdb.d/
COPY ./docker/init_db.sql .
RUN php artisan config:cache
RUN php artisan route:cache
CMD php artisan serve --host=0.0.0.0 --port=8080
EXPOSE 8080
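To sanity-check the image locally before wiring it into cloudbuild.yaml, a minimal sketch (the image tag here is just an example name):
# build and run the container locally, then browse to http://localhost:8080
docker build -t laravel-docker-1 .
docker run --rm -p 8080:8080 laravel-docker-1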
