Why won't docker-compose let me create a volume? - docker

I am writing because I would like to create my first Docker container. I have watched a lot of tutorials, but I have come across a problem that I cannot solve; I must have missed a piece of information.
My program is quite basic: I would like to create a volume so that the information it retrieves is not lost each time the container is launched.
Here is my docker-compose.yml:
version: '3.3'
services:
  homework-logger:
    build: .
    ports:
      - '54321:1235'
    volumes:
      - ./app:/app
    image: 'cinabre/homework-logger:latest'
    networks:
      - homeworks
networks:
  homeworks:
    name: homeworks-logger
and here is my Dockerfile:
FROM debian:9
WORKDIR /app
RUN apt-get update -yq && apt-get install wget curl gnupg git apt-utils -yq && apt-get clean -y
RUN apt-get install python3 python3-pip -y
RUN git clone http://192.168.5.137:3300/Cinabre/Homework-Logger /app
VOLUME /app
RUN ls /app
RUN python3 -m pip install bottle beaker bottle-cork requests
CMD ["python3", "main.py"]
I did an "LS" in the container to see if the / app folder was empty: it is not
Any ideas?
thanks in advance !

Volumes are there to hold your application data, not its code. You don't usually need the Dockerfile VOLUME directive and you should generally avoid it unless you understand exactly what it does.
In terms of workflow, it's commonplace to include the Dockerfile and similar Docker-related files in the source repository yourself. Don't run git clone in the Dockerfile. (Credential management is hard; building a non-default branch can be tricky; layer caching means Docker won't re-pull the branch if it's changed.)
For a straightforward application, you should be able to use a near-boilerplate Dockerfile:
# Use the official Python image unless you have a strong need to hand-install it.
FROM python:3.9
WORKDIR /app
# Install packages first. Unless requirements.txt changes, Docker
# layer caching won't repeat this step. Do not list out individual
# packages in the Dockerfile; list them in Python-standard setup.py
# or Pipfile.
COPY requirements.txt .
# ...in the "system" Python space, not a virtual environment.
RUN pip3 install -r requirements.txt
# Copy the rest of the application in.
COPY . .
# Set the default command to run the container, and other metadata.
EXPOSE 1235
CMD ["python3", "main.py"]
In your application code you need to know where to store the data. You might put this in an environment variable:
import os
DATA_DIR = os.environ.get('DATA_DIR', '.')
with open(f"${DATA_DIR}/output.txt", "w") as f:
...
Then in your docker-compose.yml file, you can specify an alternate data directory and mount that into your container. Do not mount a volume over the /app directory containing your application's source code.
version: '3.8'
services:
  homework-logger:
    build: .
    image: 'cinabre/homework-logger:latest' # names the built image
    ports:
      - '54321:1235'
    environment:
      - DATA_DIR=/data # (consider putting this in the Dockerfile)
    volumes:
      - homework-data:/data # (could bind-mount `./data:/data` instead)
    # Use the automatic `networks: [default]`
volumes:
  homework-data:
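With that in place, you can confirm the data survives container recreation; a rough check (a sketch, using the names from the compose file above) looks like:
docker-compose up --build -d   # build the image and start the service
docker-compose down            # stop and remove the container
docker volume ls               # the homework-data volume (prefixed with the project name) is still listed
docker-compose up -d           # a fresh container sees the same /data contents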

Related

Using base Images for Services in docker-compose with different args

My Setup:
I have 3 services defined in my docker-compose.yml: frontend, backend, and postgresql. postgresql is pulled from Docker Hub.
frontend and backend are built from their own Dockerfiles. Most of the code in these Dockerfiles is the same; only the EXPOSE, ENTRYPOINT, CMD, and ARG values differ. That is why I wanted to create a 'base Dockerfile' that these two services can "include".
Sadly, I found out I cannot simply "include" one Dockerfile in another; I have to create an image.
So I tried to create a base image for frontend and backend in my docker-compose.yml:
services:
  frontend_base:
    image: frontend_base_image
    build:
      context: ./
      dockerfile: base.dockerfile
      args:
        - WORKDIR=/app/frontend/
        - TOOLSDIR=${PWD}/docker/tools
        - LOCALDIR=${PWD}/app/frontend/client
  backend_base:
    image: backend_base_image
    build:
      context: ./
      dockerfile: base.dockerfile
      args:
        - WORKDIR=/app/backend/
        - TOOLSDIR=${PWD}/docker/tools
        - LOCALDIR=${PWD}/app/backend/api
  frontend:
    depends_on:
      - frontend_base
    # Some more stuff for the service
  backend:
    depends_on:
      - backend_base
    # Some more stuff for the service
My 'base-Dockerfile':
FROM node:18
# Set in docker-compose.yml-file
ARG WORKDIR
ARG TOOLSDIR
ARG LOCALDIR
ENV WORKDIR=${WORKDIR}
# Install dumb-init for the init system
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64
RUN chmod +x /usr/local/bin/dumb-init
WORKDIR ${WORKDIR}
RUN mkdir -p ${WORKDIR}
# Copy package.json to the current workdir (for npm install)
COPY ${LOCALDIR}/package*.json ${WORKDIR}
# Install all packages referenced in package.json
RUN npm install
COPY ${TOOLSDIR}/start.sh /usr/local/bin/start.sh
COPY ${LOCALDIR}/ ${WORKDIR}
The Problem I am facing:
My frontend and backend Dockerfiles try to pull the 'base-image' from docker.io
=> ERROR [docker-backend internal] load metadata for docker.io/library/backend_base_image:latest 0.9s
=> ERROR [docker-frontend internal] load metadata for docker.io/library/frontend_base_image:latest 0.9s
=> CANCELED [frontend_base_image internal] load metadata for docker.io/library/node:18
My Research:
I do not know if my approach is possible. I did not find many resources about this (integrated with docker-compose) online, only resources about building the images via the shell and then using them in a Dockerfile. I also tried that and ran into other issues, where I could not provide the correct arguments to the base Dockerfile.
So first I wanted to find out whether it is possible with docker-compose at all.
I am sorry if this is super obvious and my question is dumb; I am relatively new to Docker.
We could use a multi-stage Containerfile to define all three images in a single file:
FROM node:18 AS base
# Set in docker-compose.yml-file
ARG WORKDIR
ARG TOOLSDIR
ARG LOCALDIR
ENV WORKDIR=${WORKDIR}
# Install dumb-init for the init system
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64
RUN chmod +x /usr/local/bin/dumb-init
WORKDIR ${WORKDIR}
RUN mkdir -p ${WORKDIR}
# Copy package.json to the current workdir (for npm install)
COPY ${LOCALDIR}/package*.json ${WORKDIR}
# Install all packages referenced in package.json
RUN npm install
COPY ${TOOLSDIR}/start.sh /usr/local/bin/start.sh
COPY ${LOCALDIR}/ ${WORKDIR}
FROM base AS frontend
...
FROM base AS backend
...
In our docker-compose.yml, we can then build a specific stage for the frontend- and backend-service:
...
frontend:
  image: frontend
  build:
    context: ./
    target: frontend
    dockerfile: base.dockerfile
...
backend:
  image: backend
  build:
    context: ./
    target: backend
    dockerfile: base.dockerfile
...
If you want a single base image with shared tools, you can do this almost exactly the way you describe; the one caveat is that you can't describe the base image in the docker-compose.yml file. You need to build it separately from Compose:
docker build -t base-image -f base.dockerfile .
I would not try to install any application code in that base Dockerfile. Installing something like an init wrapper that needs to be shared across all of your application images does make sense there. I think it's fine to tie a Dockerfile to a specific source tree and image layout, and I don't typically recommend passing filesystem paths as ARGs.
# base.dockerfile
FROM node:18
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64 \
&& chmod +x /usr/local/bin/dumb-init
COPY docker/tools/start.sh /usr/local/bin/
ENTRYPOINT ["dumb-init", "--"]
CMD ["start.sh"]
The per-image Dockerfiles will look pretty similar – and like every other Node Dockerfile – but there's no harm in repeating this, in much the same way that your components probably have similar-looking but self-contained package.json files.
# */Dockerfile
FROM base-image
# WORKDIR also creates the directory
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY ./ ./
RUN npm run build
EXPOSE 3000
# CMD ["npm", "run", "start"] # if the start.sh from the base is wrong
Of note, this gives you some flexibility to change things if the two image setups aren't identical: if you need an additional build step, want to run a dev server, or want to package the frontend into a lighter-weight Nginx server.
In the Compose file you'd declare these normally with a build: block. Compose isn't aware of the base image and there's no way to tell it about it.
version: '3.8'
services:
  frontend:
    build: ./app/frontend/client
    ports: ['3000:3000']
  backend:
    build: ./app/backend/api
    ports: ['3001:3000']
One thing I've done here, which at least reduces the number of variable references, is to consistently use . as the current directory name. In the Compose file that's the directory containing the docker-compose.yml; on the left-hand side of COPY it's the build: context directory on the host; on the right-hand side of COPY it's the most recent WORKDIR. Using . where appropriate means you don't have to repeat the directory name, and it gives you a little flexibility if you need to rearrange your source tree or container filesystem.
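As a small, hypothetical illustration of those three roles of `.`, using the layout above:
# docker-compose.yml (next to the source tree)
services:
  frontend:
    build: ./app/frontend/client   # "." here resolves relative to this file

# app/frontend/client/Dockerfile
WORKDIR /app
# left-hand "." = the build context on the host; right-hand "." = the current WORKDIR (/app)
COPY . .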

How to install a Golang package in a Dockerfile?

I'm new to Docker and I want to set up docker-compose for my Django app. In the backend of my app I also have Golang packages, which I run from Django with the subprocess library.
But when I want to install a package using go install github.com/x/y@latest and then copy its binary to the project directory, it gives me the error: package github.com/x/y@latest: cannot use path@version syntax in GOPATH mode
I searched a lot on the internet but didn't find a solution. Could you please tell me where I'm wrong?
here is my Dockerfile:
FROM golang:1.18.1-bullseye as go-build
# Install go package
RUN go install github.com/hakluke/hakrawler@latest \
&& cp $GOPATH/bin/hakrawler /usr/local/bin/
# Install main image for backend
FROM python:3.8.11-bullseye
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Install Dist packages
RUN apt-get update \
&& apt-get -y install --no-install-recommends software-properties-common libpq5 python3-dev musl-dev git netcat-traditional golang \
&& rm -rf /var/lib/apt/lists/
# Set work directory
WORKDIR /usr/src/redteam_toolkit/
# Install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# Copy project, and then the go package
COPY . .
COPY --from=go-build /usr/local/bin/hakrawler /usr/src/redteam_toolkit/toolkit/scripts/webapp/
docker-compose.yml:
version: '3.3'
services:
  webapp:
    build: .
    command: python manage.py runserver 0.0.0.0:4334
    container_name: toolkit_webapp
    volumes:
      - .:/usr/src/redteam_toolkit/
    ports:
      - 4334:4334
    env_file:
      - ./.env
    depends_on:
      - db
  db:
    image: postgres:13.4-bullseye
    container_name: database
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=redteam_toolkit_db
volumes:
  postgres_data:
Here is the get.py file inside the /usr/src/redteam_toolkit/toolkit/scripts/webapp/ directory; it just runs the Go package and lists the files in that dir:
import os
import subprocess

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
print(f"Current path is: {BASE_DIR}")

def go(target_url):
    run_go_package = subprocess.getoutput(
        f"echo {target_url} | {BASE_DIR}/webapp/hakrawler -t 15 -u"
    )
    list_files = subprocess.getoutput(f"ls {BASE_DIR}/webapp/")
    print(run_go_package)
    print(list_files)

go("https://example.org")
and then I just run:
$ docker-compose up -d --build
$ docker-compose exec webapp python toolkit/scripts/webapp/get.py
The output is:
Current path is: /usr/src/redteam_toolkit/toolkit/scripts
/bin/sh: 1: /usr/src/redteam_toolkit/toolkit/scripts/webap/hakrawler: not found
__init__.py
__pycache__
scr.py
gather.py
This looks like a really good candidate for a multi-stage build:
FROM golang:1.18.0 as go-build
# Install packages
RUN go install github.com/x/y@latest \
 && cp $GOPATH/bin/package /usr/local/bin/
FROM python:3.8.11-bullseye as release
...
COPY --from=go-build /usr/local/bin/package /usr/src/toolkit/toolkit/scripts/webapp/
...
Your compose file also needs to be updated: it is masking the entire /usr/src/redteam_toolkit folder with the volume mount. Delete that volume mount to see the content of the image.
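For example, the webapp service from the question could look roughly like this once the bind mount is removed (a sketch; everything else is unchanged):
services:
  webapp:
    build: .
    command: python manage.py runserver 0.0.0.0:4334
    container_name: toolkit_webapp
    # no "volumes:" entry, so the image's /usr/src/redteam_toolkit/ contents (including hakrawler) are used
    ports:
      - 4334:4334
    env_file:
      - ./.env
    depends_on:
      - db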
GOPATH mode does not work with Go modules. In your Dockerfile, add:
RUN unset GOPATH
and use RUN go get <package_repository>
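Alternatively, since recent Go toolchains default to module mode, a minimal sketch of the build stage (using the hakrawler package from the question, and forcing module mode explicitly just in case) could be:
FROM golang:1.18.1-bullseye as go-build
# Module-aware mode, so path@version syntax is accepted
ENV GO111MODULE=on
RUN go install github.com/hakluke/hakrawler@latest \
 && cp "$(go env GOPATH)/bin/hakrawler" /usr/local/bin/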

Exposing Docker Volumes to Nginx

I'm trying to connect a JSON file, which resides in a Docker volume of the following container, to my main Docker container, which is running a Django project.
Since I am using CapRover, my Docker Compose options are very limited.
So Docker Compose is not really an option. I want to instead just expose the JSON file over the web with a link.
Something like domain.com/folder/jsonfile.json
Can somebody tell me if this is possible inside this dockerfile?
The image I am using is crucial to the container so can I just add an nginx image or do I need any other changes to make this work?
Or is nginx not even necessary?
FROM ubuntu:devel
ENV TZ=Etc/UTC
ARG APP_HOME=/app
WORKDIR ${APP_HOME}
ENV DEBIAN_FRONTEND=noninteractive
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
RUN echo $TZ > /etc/timezone
RUN apt-get update && apt-get upgrade -y
RUN apt-get install gnumeric -y
RUN mkdir -p /etc/importer/data
RUN mkdir /voldata
COPY config.toml /etc/importer/
COPY datasets/* /etc/importer/data/
VOLUME /voldata
COPY importer /usr/bin/
RUN chmod +x /usr/bin/importer
COPY . ${APP_HOME}
CMD sleep 999d
Using the same volume in 2 containers
docker-compose:
volumes:
  shared_vol:

services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes:
      - 'shared_vol:/path/to/file'
The mechanism above replaces volumes_from since compose file format v3, but this works for v2 as well:
volumes:
  shared_vol:

services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes_from:
      - service1
If you want to avoid unintentional alteration, add :ro (read-only) to the service that should not write:
service1:
  volumes:
    - 'shared_vol:/path/to/file'
service2:
  volumes:
    - 'shared_vol:/path/to/file:ro'
http-server
You can certainly provide the file via HTTP (or another protocol). There are two options:
Include an HTTP service in your container (how easy this is depends on what is already in the container). Using Node.js, for example, you can use https://www.npmjs.com/package/http-server very easily. If size doesn't matter, just install:
RUN apt-get install -y nodejs npm
RUN npm install -g http-server
EXPOSE 8080
CMD ["http-server", "--cors", "-p8080", "/path/to/your/json"]
docker-compose (it runs on 8080 by default, so expose that):
existing_service:
  ports:
    - '8080:8080'
Run a standalone HTTP server (nginx, Apache httpd, ...) in another container, but then you again depend on sharing the same volume between two services, which is rather overkill for local setups.
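A rough sketch of that second option with nginx (volume name and paths are illustrative; nginx serves /usr/share/nginx/html by default):
volumes:
  shared_vol:

services:
  app:
    volumes:
      - 'shared_vol:/voldata'   # your existing container writes the JSON here
  web:
    image: nginx:alpine
    ports:
      - '8081:80'
    volumes:
      - 'shared_vol:/usr/share/nginx/html:ro'   # the JSON becomes reachable at /jsonfile.json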
Base image
Unless you have good reasons, I would never use something like :devel, :rolling, or :latest as a base image. Stick to an LTS version instead, like ubuntu:22.04.
Testing for http-server
Dockerfile
FROM ubuntu:20.04
ENV TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update
RUN apt-get install -y nodejs npm
RUN npm install -g http-server@13.1.0 # Issue with JSON-File in V14: https://github.com/http-party/http-server/issues/634
COPY ./test.json ./usr/wwwhttp/test.json
EXPOSE 8080
CMD ["http-server", "--cors", "-p8080", "/usr/wwwhttp/"]
# docker build -t test/httpserver:latest .
# docker run -p 8080:8080 test/httpserver:latest
Disclaimer:
I am not that familiar with Node Docker images; this is just to give a quick working solution to build on. I'm not using Node.js in production, but I'm sure it can be optimized from being fat to... well... being rather fat. For quick prototyping, though, size doesn't matter.
If you want two containers to access the same file, just use a volume with --mount.
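For example, without Compose that could look something like this (container, image, and volume names are made up):
docker volume create shared_vol
docker run -d --name importer --mount source=shared_vol,target=/voldata my-importer-image
docker run -d --name web -p 8081:80 --mount source=shared_vol,target=/usr/share/nginx/html,readonly nginx:alpine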

Library wkhtmltopdf is not working inside Docker

I have Python Flask code that generates PDF files from an HTML template. The code works just fine when I run it alone, but when I run it inside a Docker container, the container crashes and restarts as soon as I call the endpoint that generates the report. The request just stays loading and then returns an error (in Postman, which I'm using to test).
The code for the PDF is as follows:
def create_report(download_uuid):
    transactions = get_transaction_for_report(download_uuid)
    config = pdfkit.configuration(wkhtmltopdf=environ.get('WKHTMLTOPDF'))
    file_obj = io.BytesIO()
    with zipfile.ZipFile(file_obj, 'w') as zip_file:
        for transaction in transactions:
            html = render_template("report.html", transaction=transaction)
            pdf = pdfkit.from_string(html, False, configuration=config)
            data = zipfile.ZipInfo('{}.pdf'.format(transaction['control_number']))
            data.compress_type = zipfile.ZIP_DEFLATED
            zip_file.writestr(data, pdf)
    file_obj.seek(0)
    return send_file(file_obj, attachment_filename="forms.zip", as_attachment=True)
It is returning a zip file, but inside the zip file are pdf files. Furthermore, if I remove the pdf generating part, the zip file returns just fine. This is my Dockerfile:
FROM madnight/docker-alpine-wkhtmltopdf as wkhtmltopdf_image
FROM python:3.9-alpine
RUN adduser -D custom
WORKDIR /home/Project
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install --upgrade pip
RUN apk add make automake gcc g++ subversion python3-dev jpeg-dev zlib-dev libffi-dev musl-dev openssl-dev freetype freetype-dev ttf-freefont libxrender qt5-qtbase-dev
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn
COPY Project Project
COPY boot.sh app.py .env run.py create_database.py config.py ./
COPY templates templates
COPY static static
COPY --from=wkhtmltopdf_image /bin/wkhtmltopdf /usr/local/bin/wkhtmltopdf
RUN chmod +x boot.sh
ENV FLASK_APP app.py
USER root
RUN chown -R custom ./
USER custom
EXPOSE 9001
ENTRYPOINT ["./boot.sh"]
I should say that this is the last iteration of many, MANY attempts to try to get this to work. Essentially, I've tried getting wkhtmltox by curl, I've tried putting wkhtmltopdf in different directories. So far nothing has worked. I don't know what I'm missing. This is basically what I need to fix in order to finish this project so any help at all will be immensely appreciated.
EDIT: docker-compose.yml
version: '2'
services:
  app:
    build: .
    networks:
      - custom
    ports:
      - "9001:9001"
    volumes:
      - "./static:/home/EventismedEquipmentAPI/static"
    external_links:
      - eventismed-equipment:db
networks:
  custom:
    external: true
Let's fix this.
I've managed to run wkhtmltopdf isolated in a Docker container.
Dockerfile:
# https://stackoverflow.com/a/62737156/152016
# Create image based on the official openjdk 8-jre-alpine image from the dockerhub
FROM openjdk:8-jre-alpine
# Install wkhtmltopdf
# https://stackoverflow.com/a/56925361/152016
RUN apk add --no-cache wkhtmltopdf ttf-dejavu
ENTRYPOINT ["sh"]
docker-compose.yml:
version: '3.8'
services:
  wkhtmltopdf:
    image: wkhtmltopdf
    container_name: wkhtmltopdf
    build:
      dockerfile: Dockerfile
      context: .
Then:
docker-compose build
docker run -ti --rm -v /tmp:/tmp wkhtmltopdf
Inside the container:
$ cd /tmp
$ wkhtmltopdf https://www.google.com test.pdf
Then you will see the PDF on your Mac at /tmp/test.pdf.
First let me know if this works.
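If that test works, one way to fold it back into your own image is to install wkhtmltopdf from the Alpine package repository instead of copying the binary from the madnight image. A sketch, assuming the wkhtmltopdf package is available for the Alpine release your base image uses:
FROM python:3.9-alpine
# Pull wkhtmltopdf (and fonts) from the Alpine repos rather than COPY --from another image
RUN apk add --no-cache wkhtmltopdf ttf-dejavu
# ...the rest of your existing Dockerfile, with WKHTMLTOPDF pointed at the installed binary (typically /usr/bin/wkhtmltopdf)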

Docker-compose volume mount before run

I have a Dockerfile I'm pointing at from a docker-compose.yml.
I'd like the volume mount in the docker-compose.yml to happen before the RUN in the Dockerfile.
Dockerfile:
FROM node
WORKDIR /usr/src/app
RUN npm install --global gulp-cli \
&& npm install
ENTRYPOINT gulp watch
docker-compose.yml
version: '2'
services:
  build_tools:
    build: docker/gulp
    volumes_from:
      - build_data:rw
  build_data:
    image: debian:jessie
    volumes:
      - .:/usr/src/app
It makes complete sense that it processes the Dockerfile first and then mounts the volume from docker-compose; however, is there a way to get around that?
I want to keep the Dockerfile generic, while passing more specific bits in from compose. Perhaps that's not the best practice?
Erik Dannenberg's answer is correct: the volume layering means that what I was trying to do makes no sense. (There is another really good explanation on the Docker website if you want to read more.) If I want to have Docker do the npm install, then I could do it like this:
FROM node
ADD . /usr/src/app
WORKDIR /usr/src/app
RUN npm install --global gulp-cli \
&& npm install
CMD ["gulp", "watch"]
However, this isn't appropriate as a solution for my situation. The goal is to use NPM to install project dependencies, then run gulp to build my project. This means I need read and write access to the project folder and it needs to persist after the container is gone.
I need to do two things after the volume is mounted, so I came up with the following solution...
docker/gulp/Dockerfile:
FROM node
RUN npm install --global gulp-cli
ADD start-gulp.sh .
CMD ./start-gulp.sh
docker/gulp/start-gulp.sh:
#!/usr/bin/env bash
until cd /usr/src/app && npm install
do
echo "Retrying npm install"
done
gulp watch
docker-compose.yml:
version: '2'
services:
  build_tools:
    build: docker/gulp
    volumes_from:
      - build_data:rw
  build_data:
    image: debian:jessie
    volumes:
      - .:/usr/src/app
So now the container starts a bash script that will continuously loop until it can get into the directory and run npm install. This is still quite brittle, but it works. :)
You can't mount host folders or volumes during a Docker build. Allowing that would compromise build repeatability. The only way to access local data during a Docker build is the build context, which is everything in the PATH or URL you passed to the build command. Note that the Dockerfile needs to exist somewhere in context. See https://docs.docker.com/engine/reference/commandline/build/ for more details.
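As a concrete illustration of how the context determines what COPY and ADD can see (a sketch using the layout from the question):
# context = docker/gulp, so only files under docker/gulp are available during the build
docker build -t build_tools docker/gulp

# context = project root; the Dockerfile can live elsewhere and is named with -f,
# so COPY/ADD can see everything under the project root (minus .dockerignore)
docker build -t build_tools -f docker/gulp/Dockerfile .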
