I'm trying to use Docker and Docker Compose to create a containerized app. I have a PubNub account, which allows me to use different API keys for different environments (dev, test, prod). To help me build images for this, I am trying to use build args set with an env_file.
It's not working; when I run docker-compose, it prints:
WARNING: The PUB_KEY variable is not set. Defaulting to a blank string.
WARNING: The SUB_KEY variable is not set. Defaulting to a blank string.
Questions:
What mistake am I making in setting the build args?
How do I fix it?
Is this a good way to set environment variables for the scan and flask containers?
Here is the docker-compose.yml content:
version: '3.6'
services:
  scan:
    env_file:
      - sample.env
    build:
      context: .
      dockerfile: Dockerfile
      args:
        pub_key: $PUB_KEY
        sub_key: $SUB_KEY
      target: scan
    image: bt-beacon/scan:v1
  flask:
    env_file:
      - sample.env
    build:
      context: .
      dockerfile: Dockerfile
      args:
        pub_key: $PUB_KEY
        sub_key: $SUB_KEY
      target: flask
    image: bt-beacon/flask:v1
    ports:
      - "5000:5000"
And the Dockerfile:
# --- BASE NODE ---
FROM python:3.6-jessie as base
ARG pub_key
ARG sub_key
RUN test -n "$pub_key"
RUN test -n "$sub_key"
# --- SCAN NODE ---
FROM base as scan
ENV PUB_KEY=$pub_key
ENV SUB_KEY=$sub_key
COPY app/requirements.scan.txt /
RUN apt-get update
RUN apt-get -y install bluetooth bluez bluez-hcidump python-bluez python-numpy python3-dev libbluetooth-dev libcap2-bin
RUN pip install -r /requirements.scan.txt
RUN setcap 'cap_net_raw,cap_net_admin+eip' $(readlink -f $(which python))
COPY app/src /app
WORKDIR /app
CMD ["./scan.py", "$pub_key", "$sub_key"]
# -- FLASK APP ---
FROM base as flask
ENV SUB_KEY=$sub_key
COPY app/requirements.flask.txt /
COPY app/src /app
RUN pip install -r /requirements.flask.txt
WORKDIR /app
EXPOSE 5000
CMD ["flask", "run"]
Finally, sample.env:
# PubNub app keys here
PUB_KEY=xyz1
SUB_KEY=xyz2
env_file can only set environment variables inside a service container. Variables from env_file cannot be injected into docker-compose.yml itself.
You have the following options (both described in detail in the Compose documentation):
inject these variables into the shell from which you run docker-compose up
create a .env file containing these variables (the syntax is identical to your sample.env)
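For example, using the keys from your sample.env (a sketch, assuming you run Compose from the project root):

# option 1: inject the variables into the shell
export PUB_KEY=xyz1 SUB_KEY=xyz2
docker-compose build

# option 2: let Compose read them from a .env file next to docker-compose.yml
cp sample.env .env
docker-compose build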
Personally, I would separate the image-building process from the container-launching process: take image building away from docker-compose and move it into an external script, where the build process can be configured easily.
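A sketch of such a script, using your image names, targets, and build args (this assumes sample.env stays in plain KEY=value form, so the shell can source it):

#!/bin/sh
# build.sh: build both images outside of docker-compose
set -e
. ./sample.env
docker build --target scan \
  --build-arg pub_key="$PUB_KEY" --build-arg sub_key="$SUB_KEY" \
  -t bt-beacon/scan:v1 .
docker build --target flask \
  --build-arg pub_key="$PUB_KEY" --build-arg sub_key="$SUB_KEY" \
  -t bt-beacon/flask:v1 .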
My Setup:
I have 3 services defined in my docker-compose.yml: frontend, backend, and postgresql. postgresql is pulled from Docker Hub.
frontend and backend are built from their own Dockerfiles. Most of the code in these Dockerfiles is the same; only the EXPOSE, ENTRYPOINT, CMD, and ARG values differ. That is why I wanted to create a "base Dockerfile" that these two services could "include".
Sadly, I found out that I cannot simply "include" one Dockerfile in another; I have to create an image.
So I tried to create a base image for frontend and backend in my docker-compose.yml:
services:
  frontend_base:
    image: frontend_base_image
    build:
      context: ./
      dockerfile: base.dockerfile
      args:
        - WORKDIR=/app/frontend/
        - TOOLSDIR=${PWD}/docker/tools
        - LOCALDIR=${PWD}/app/frontend/client
  backend_base:
    image: backend_base_image
    build:
      context: ./
      dockerfile: base.dockerfile
      args:
        - WORKDIR=/app/backend/
        - TOOLSDIR=${PWD}/docker/tools
        - LOCALDIR=${PWD}/app/backend/api
  frontend:
    depends_on:
      - frontend_base
    # Some more stuff for the service
  backend:
    depends_on:
      - backend_base
    # Some more stuff for the service
My 'base-Dockerfile':
FROM node:18
# Set in docker-compose.yml-file
ARG WORKDIR
ARG TOOLSDIR
ARG LOCALDIR
ENV WORKDIR=${WORKDIR}
# Install dumb-init for the init system
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64
RUN chmod +x /usr/local/bin/dumb-init
WORKDIR ${WORKDIR}
RUN mkdir -p ${WORKDIR}
# Copy package.json to the current workdir (for npm install)
COPY ${LOCALDIR}/package*.json ${WORKDIR}
# Install all packages (referenced in package.json)
RUN npm install
COPY ${TOOLSDIR}/start.sh /usr/local/bin/start.sh
COPY ${LOCALDIR}/ ${WORKDIR}
The Problem I am facing:
My frontend and backend Dockerfiles try to pull the base image from docker.io:
=> ERROR [docker-backend internal] load metadata for docker.io/library/backend_base_image:latest 0.9s
=> ERROR [docker-frontend internal] load metadata for docker.io/library/frontend_base_image:latest 0.9s
=> CANCELED [frontend_base_image internal] load metadata for docker.io/library/node:18
My Research:
I do not know whether my approach is possible. I did not find many resources about this (integrated with docker-compose) online, only resources about building the images via shell and then using them in a Dockerfile. I also tried that and ran into some other issues, where I could not provide correct arguments to the base Dockerfile.
So I first wanted to find out whether it is possible with docker-compose.
I am sorry if this is super obvious and my question is dumb; I am relatively new to Docker.
We could use a multi-stage Containerfile to define all three images in a single file:
FROM node:18 AS base
# Set in docker-compose.yml-file
ARG WORKDIR
ARG TOOLSDIR
ARG LOCALDIR
ENV WORKDIR=${WORKDIR}
# Install dumb-init for the init system
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64
RUN chmod +x /usr/local/bin/dumb-init
WORKDIR ${WORKDIR}
RUN mkdir -p ${WORKDIR}
# Copy package.json to the current workdir (for npm install)
COPY ${LOCALDIR}/package*.json ${WORKDIR}
# Install all packages (referenced in package.json)
RUN npm install
COPY ${TOOLSDIR}/start.sh /usr/local/bin/start.sh
COPY ${LOCALDIR}/ ${WORKDIR}
FROM base AS frontend
...
FROM base AS backend
...
In our docker-compose.yml, we can then build a specific stage for the frontend and backend services:
...
  frontend:
    image: frontend
    build:
      context: ./
      target: frontend
      dockerfile: base.dockerfile
...
  backend:
    image: backend
    build:
      context: ./
      target: backend
      dockerfile: base.dockerfile
...
If you want a single base image with shared tools, you can do this almost exactly the way you describe; the one caveat is that you can't describe the base image in the docker-compose.yml file. You need to build it separately, outside Compose:
docker build -t base-image -f base.dockerfile .
I would not try to install any application code in that base Dockerfile. Installing something that needs to be shared across all of your application images, like an init wrapper, does make sense there. I think it's fine to tie a Dockerfile to a specific source tree and image layout, and I don't typically recommend passing filesystem paths as ARGs.
# base.dockerfile
FROM node:18
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64 \
&& chmod +x /usr/local/bin/dumb-init
COPY docker/tools/start.sh /usr/local/bin/
ENTRYPOINT ["dumb-init", "--"]
CMD ["start.sh"]
The per-image Dockerfiles will look pretty similar (and like every other Node Dockerfile), but there's no harm in repeating this, in much the same way that your components probably have similar-looking but self-contained package.json files.
# */Dockerfile
FROM base-image
# WORKDIR also creates the directory
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY ./ ./
RUN npm run build
EXPOSE 3000
# CMD ["npm", "run", "start"] # if the start.sh from the base is wrong
Of note, this gives you some flexibility to change things if the two image setups aren't identical; if you need an additional build step, or if you want to run a dev server, or package the frontend into a lighter-weight Nginx server.
In the Compose file you'd declare these normally with a build: block. Compose isn't aware of the base image and there's no way to tell it about it.
version: '3.8'
services:
  frontend:
    build: ./app/frontend/client
    ports: ['3000:3000']
  backend:
    build: ./app/backend/api
    ports: ['3001:3000']
One thing I've done here, which at least reduces the number of variable references, is to consistently use . as the current directory name. In the Compose file, that's the directory containing the docker-compose.yml; on the left-hand side of COPY, it's the build: context directory on the host; on the right-hand side of COPY, it's the most recent WORKDIR. Using . where appropriate means you don't have to repeat the directory name, which gives you a little flexibility if you need to rearrange your source tree or container filesystem.
I want to create a Dockerfile which contains two stages.
The first stage is to set up a MySQL server and the second stage is to start a backend service that accesses the server.
The problem is that the backend service stops when no MySQL server is available. Is there a way to make the stage dependent on the first stage being started?
What is a little strange: when I write the Dockerfile with the database stage at the top, the backend's log is displayed on startup; if the backend stage is at the top, the MySQL log is displayed.
Actual Dockerfile:
FROM mysql:latest AS BackendDatabase
RUN chown -R mysql:root /var/lib/mysql/
ARG MYSQL_DATABASE="DienstplanverwaltungDatabase"
ARG MYSQL_USER="user"
ARG MYSQL_PASSWORD="password"
ARG MYSQL_ROOT_PASSWORD="password"
ENV MYSQL_DATABASE=$MYSQL_DATABASE
ENV MYSQL_USER=$MYSQL_USER
ENV MYSQL_PASSWORD=$MYSQL_PASSWORD
ENV MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
EXPOSE 3306
FROM openjdk:10-jre-slim AS Backend
LABEL description="Backend Dienstplanverwaltung"
LABEL maintainer="Martin"
COPY ./SpringDienstplanverwaltung/build/libs/dienstplanverwaltung-0.0.1-SNAPSHOT.jar /usr/local/app.jar
EXPOSE 8080
ENTRYPOINT java -jar /usr/local/app.jar
Actually, you need Docker Compose with two containers: one for MySQL, one for the Java app.
Multi-stage builds are mostly for cases like: (1) build something, for example a Java or Go application; (2) create a second image and copy the build results into it. The general idea is to keep the second stage clean: we do not need build tools in the second stage, only the results.
Please see this example:
FROM golang:1.16
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go ./
RUN CGO_ENABLED=0 go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]
Okay, you seem to be a little confused by various things here. First of all, multi-stage builds are for building an application that needs some kind of build/compile process, then copying that build into another container with fewer dependencies and just the executable. In this context, trying to run a database in a multi-stage build makes no sense at all, because building a container does not run it.
Now, you want a multi-stage build to build the Java app, copy that build into another container, and then run it. Also, when you are running that container you need a MySQL database; docker-compose is a good tool for that, as in this example:
version: '3.8'
services:
  db:
    image: mysql:8.0
    cap_add:
      - SYS_NICE
    restart: always
    environment:
      - MYSQL_DATABASE=mydatabase
      - MYSQL_ROOT_PASSWORD=mypassword
    ports:
      - '3306:3306'
    volumes:
      - db:/var/lib/mysql
      # - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
  api:
    container_name: your-backend
    build:
      context: .
    image: your-backend
    depends_on:
      - db
    ports:
      - 8080:8080
    environment:
      ENV_VAR_EXAMPLE: example
    links:
      - db
volumes:
  db:
    driver: local
Also, an example multi-stage Dockerfile for java applications:
# First stage: complete build environment
FROM maven:3.5.0-jdk-8-alpine AS builder
# add pom.xml and source code
ADD ./pom.xml pom.xml
ADD ./src src/
# package jar
RUN mvn clean package
# Second stage: minimal runtime environment
FROM openjdk:8-jre-alpine
# copy jar from the first stage
COPY --from=builder target/my-app-1.0-SNAPSHOT.jar my-app-1.0-SNAPSHOT.jar
EXPOSE 8080
CMD ["java", "-jar", "my-app-1.0-SNAPSHOT.jar"]
I am using the latest Docker Toolbox under Windows 10 to build the native image for Quarkus applications.
$ docker -v
Docker version 19.03.1, build 74b1e89e8a
The docker-compose.yaml file:
version: '3.7' # specify docker-compose version
services:
blogdb:
image: postgres
ports:
- "5432:5432"
restart: always
environment:
POSTGRES_PASSWORD: password
POSTGRES_DB: blogdb
POSTGRES_USER: user
volumes:
- ./data:/var/lib/postgresql
post-service:
image: hantsy/quarkus-post-service
build:
context: ./backend
dockerfile: src/main/docker/Dockerfile.multistage
args:
- QUARKUS_DATASOURCE_URL
environment:
QUARKUS_DATASOURCE_URL: jdbc:postgresql://blogdb:5432/blogdb
ports:
- "8080:8080" #specify ports forewarding
depends_on:
- blogdb
And the Dockerfile:
## Stage 1 : build with maven builder image with native capabilities
FROM quay.io/quarkus/centos-quarkus-maven:19.1.1 AS build
ARG QUARKUS_DATASOURCE_URL
RUN echo "QUARKUS_DATASOURCE_URL>>>: $QUARKUS_DATASOURCE_URL"
ENV QUARKUS_DATASOURCE_URL $QUARKUS_DATASOURCE_URL
WORKDIR /usr/src/app
COPY pom.xml .
RUN mvn -U dependency:go-offline dependency:resolve-plugins -Pnative
COPY src/ /usr/src/app/src/
USER root
RUN chown -R quarkus /usr/src/app
USER quarkus
RUN mvn clean package -Pnative -Dquarkus.datasource.url=$QUARKUS_DATASOURCE_URL
## -DskipTests -Dmaven.test.skip=true
## Stage 2 : create the docker final image
FROM registry.access.redhat.com/ubi8/ubi-minimal
WORKDIR /work/
COPY --from=build /usr/src/app/target/*-runner /work/application
RUN chmod 775 /work
EXPOSE 8080
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
The echo always prints blank for the arg QUARKUS_DATASOURCE_URL.
I have tried changing the args entry to QUARKUS_DATASOURCE_URL=${QUARKUS_DATASOURCE_URL}; it still prints blank.
It cannot read the environment defined in the docker-compose.yaml file, as described in the Docker docs.
If I set the value directly as a string, it works, e.g. QUARKUS_DATASOURCE_URL="test".
According to this, you need to follow Windows PowerShell syntax.
It seems that you must use $env:arg or %arg% when using PowerShell or cmd, but you must use $arg when using docker commands.
FROM microsoft/nanoserver
ARG QUARKUS_DATASOURCE_URL=some_default_value
RUN echo %QUARKUS_DATASOURCE_URL%
ENV QUARKUS_DATASOURCE_URL %QUARKUS_DATASOURCE_URL%
See also: Can't get Docker to expand ARG define at head of file.
I want to dockerize my Vue.js app and pass it environment variables from the docker-compose file.
I suspect the app gets the environment variables only at the build stage, so it does not pick up the environment variables from docker-compose.
vue app:
process.env.FIRST_ENV_VAR
Dockerfile:
FROM alpine:3.7
RUN apk add --update nginx nodejs
RUN mkdir -p /tmp/nginx/vue-single-page-app
RUN mkdir -p /var/log/nginx
RUN mkdir -p /var/www/html
COPY nginx_config/nginx.conf /etc/nginx/nginx.conf
COPY nginx_config/default.conf /etc/nginx/conf.d/default.conf
WORKDIR /tmp/nginx/vue-single-page-app
COPY . .
RUN npm install
RUN npm run build
RUN cp -r dist/* /var/www/html
RUN chown nginx:nginx /var/www/html
CMD ["nginx", "-g", "daemon off;"]
docker-compose:
version: '3.6'
services:
  app:
    image: myRegistry/myProject:tag
    restart: always
    environment:
      - FIRST_ENV_VAR="first environment variable"
      - SECOND_ENV_VAR="first environment variable"
    ports:
      - 8080:8080
Is there any way to pass environment variables to a web application after the build stage?
In Vue.js apps you need to prefix the env variables with VUE_APP_, so in your case it should be VUE_APP_FIRST_ENV_VAR.
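A minimal sketch of the difference (assuming the app is built with the Vue CLI, which only inlines VUE_APP_-prefixed variables, and only at build time):

// somewhere in the Vue app
console.log(process.env.VUE_APP_FIRST_ENV_VAR) // inlined when npm run build ran
console.log(process.env.FIRST_ENV_VAR)         // undefined in the built bundle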
Based on this https://medium.com/@rakhayyat/vuejs-on-docker-environment-specific-settings-daf2de660b9, I have made a silly npm package that helps to accomplish what you want.
Go to https://github.com/juanise/jvjr-docker-env and take a look at the README file.
Basically, just run npm install jvjr-docker-env. A new Dockerfile, entrypoint, and JSON file will be added to your project.
You will probably need to modify some directory and/or file names in the Dockerfile for it to work.
You can try this. The value of FIRST_ENV_VAR inside docker will be set to the value of FIRST_ENV_VAR_ON_HOST on your host system.
version: '3.6'
services:
  app:
    image: myRegistry/myProject:tag
    restart: always
    environment:
      - FIRST_ENV_VAR=$FIRST_ENV_VAR_ON_HOST
      - SECOND_ENV_VAR=$SECOND_ENV_VAR_ON_HOST
    ports:
      - 8080:8080
As you can see in the docker-compose reference on environment variables in the Docker docs, the defined environment values are always available in the container, not only at the build stage.
You can check this by changing the CMD to run the env command, which displays all environment variables in your container.
If your application is not getting the actual values of the env variables, the problem must be something else related to your app.
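For example, you can check the container's environment without modifying the image (assuming the service is named app, as above):

docker-compose run --rm app env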
I'm trying to containerize two services: a socket service and a Django application.
My file structure is
\main file {docker-compose file}
\ django application {Dockerfile}
\ socket app {Dockerfile}
When I run docker build . it builds the image. Then, when I run docker-compose build, I notice that both the socket app and the Django app are copied into the container, instead of only the Django application as specified by the Dockerfile.
I get the impression that the Dockerfile is executed in the main directory instead of the django directory?
Here is the Dockerfile inside the django app directory:
# Pull base image
FROM python:3
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Copy project
COPY . /code/
RUN ls
And here is the docker-compose file. Using the ls command, I tried to figure out what happened; the output shows that the applications in the main folder are copied instead of the django application.
version: '3'
services:
  db:
    image: postgres:10.1-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    build: ./django_app
    command: ls /code/
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
volumes:
  postgres_data:
Is this intended behaviour, or am I doing something wrong?
The volumes: directive in your docker-compose.yml file is hiding literally everything your Dockerfile does. You'll solve your immediate problem by changing the two directories to match: in the volumes: directive, bind-mount ./django_app:/code.
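A minimal sketch of that fix in your web service:

  web:
    build: ./django_app
    command: ls /code/
    volumes:
      - ./django_app:/code
    ports:
      - 8000:8000
    depends_on:
      - db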
In a more production-oriented workflow, I'd recommend making your Docker image totally self-contained: make sure it has a CMD that runs your application, and do not use volumes: to inject your code. Delete command: and volumes: from the docker-compose.yml and let the image provide its own code and default command. (For development, use a Python virtual environment for local code isolation, and make sure all of your tests and a basic hand-run workflow pass before using Docker for anything.)
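With those changes, the web service reduces to something like this (a sketch; it assumes the Dockerfile's CMD actually starts your Django server):

  web:
    build: ./django_app
    ports:
      - 8000:8000
    depends_on:
      - db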