Docker container with build output and no source

I have a build process that converts typescript into javascript, minifies and concatenates css files, etc.
I would like to put those files into an nginx docker container, but I don't want the original javascript / css source to be included, nor the tools that I use to build them. Is there a good way to do this, or do I have to run the build outside docker (or in a separately defined container), then COPY the relevant files in?
This page talks about doing something similar in a manual way, but doesn't explain how to automate the process e.g. with docker-compose or something.

Create a Docker image with all the tools required to build your code; it should also be able to clone the repository and run the build. After the build, it copies the output
into a Docker volume, for example /opt/webapp.
Launch a build container from the build image created in step 1:
docker run -d -P --name BuildContainer -v /opt/webapp:/opt/webapp build_image_name
Launch an nginx container that uses the shared volume of the build container, where your built code resides:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html nginx_image_name
After building and shipping your build output to Appserver, you can delete BuildContainer because it is no longer required.
Advantages of the above steps:
Your build output lives on the host machine, so if an Appserver container fails or stops, the output is still safe on the host and you can launch a new container from it.
If you create a Docker image for building the code, you don't need to install the required tools every time you launch a container.
You can also build your code on the host machine, but if you want the code to be built in a fresh environment every time, this approach works well. Reusing the same host machine to build/compile every time can cause problems with stale source code, git clone errors, etc.
EDIT:
You can append :ro (read-only) to the volume so that one container cannot affect the other. You can read more about Docker volumes here. Thanks @BMitch for the suggestion.
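If you want to automate those two docker run commands with docker-compose (as the question asks), a rough sketch might look like this; build_image_name and nginx_image_name are the placeholders from the steps above, and paths should be adjusted to your setup:
version: "2"
services:
  build:
    image: build_image_name
    volumes:
      - /opt/webapp:/opt/webapp
  appserver:
    image: nginx_image_name
    depends_on:
      - build
    ports:
      - "80"
    volumes:
      - /opt/webapp:/usr/local/nginx/html:ro
Note that depends_on only orders container start-up; it does not wait for the build container to finish producing its output.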

The latest version of Docker supports multi-stage builds, where build products can be copied from one stage to another.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/

This is an ideal scenario for a multi-stage build. You perform the compiling in the first stage, copy the output of that compile to the second stage, and only ship that second stage. Each stage is an independent image that begins with a FROM line. And to transfer files between stages, there's now a COPY --from syntax. The result looks roughly like:
# first stage with your full compile environment, e.g. maven/jdk
FROM maven as build
WORKDIR /src
COPY src /src
RUN mvn install
# second stage starts below with just a jre base image
FROM openjdk:jre
# copy the jar from the first stage here
COPY --from=build /src/result.jar /app
CMD java -jar /app/result.jar
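Applied to the original question (TypeScript and CSS compiled by an npm build and served by nginx), the same pattern might look roughly like the sketch below; the npm run build script and the dist output directory are assumptions about your project, so adjust them to whatever your build actually produces:
# first stage: full node build environment with your sources and tooling
FROM node:18 AS build
WORKDIR /src
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# second stage: only the built assets, no sources or build tools
FROM nginx:alpine
COPY --from=build /src/dist /usr/share/nginx/html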
Original answer:
Two common options:
As mentioned, you can build outside and copy the compiled result into the container.
You merge your download, build, and cleanup step into a single RUN command. This is a common best practice to minimize the size of each layer.
An example Dockerfile for the second option would look like:
FROM mybase:latest
RUN apt-get update && apt-get install -y tools \
&& git clone https://github.com/myproj \
&& cd myproj \
&& make \
&& make install \
&& cd .. \
&& apt-get remove -y tools && apt-get clean \
&& rm -rf myproj
The lines would be a little more complicated than that, but that's the gist.

As @dnephin suggested in his comments on the question and on @pl_rock's answer, the standard docker tools are not designed to do this, but you can use a third party tool like one of the following:
dobi (48 GitHub stars)
packer (6210 GitHub stars)
rocker (759 GitHub stars)
conveyor (152 GitHub stars)
(GitHub stars correct when I wrote the answer)
We went with dobi as it was the first one we heard of (because of this question), but it looks like packer is the most popular.

Create a Dockerfile that runs your build process, then runs the cleanup code.
Example:
FROM node:latest
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /dist && cp -a /tmp/node_modules /dist/
RUN cp /tmp/package.json /dist
ADD . /tmp
RUN cd /tmp && npm run build
RUN mkdir -p /dist && cp -a /tmp/. /dist
#run some clean up code here
RUN npm run cleanup
# Define working directory
WORKDIR /dist
# Expose port
EXPOSE 4000
# Run app
CMD ["npm", "run", "start"]
In your docker compose file
web:
  build: ../project_path
  environment:
    - NODE_ENV=production
  restart: always
  ports:
    - "4000"

Related

Understanding workflow of multi-stage Dockerfile

There are a few processes I'm struggling to wrap my brain around when it comes to multi-stage Dockerfiles.
Using this as an example, I have a couple of questions below it:
# Dockerfile
# Uses multi-stage builds requiring Docker 17.05 or higher
# See https://docs.docker.com/develop/develop-images/multistage-build/
# Creating a python base with shared environment variables
FROM python:3.8.1-slim as python-base
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=off \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100 \
    POETRY_HOME="/opt/poetry" \
    POETRY_VIRTUALENVS_IN_PROJECT=true \
    POETRY_NO_INTERACTION=1 \
    PYSETUP_PATH="/opt/pysetup" \
    VENV_PATH="/opt/pysetup/.venv"
ENV PATH="$POETRY_HOME/bin:$VENV_PATH/bin:$PATH"
# builder-base is used to build dependencies
FROM python-base as builder-base
RUN apt-get update \
&& apt-get install --no-install-recommends -y \
curl \
build-essential
# Install Poetry - respects $POETRY_VERSION & $POETRY_HOME
ENV POETRY_VERSION=1.0.5
RUN curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | python
# We copy our Python requirements here to cache them
# and install only runtime deps using poetry
WORKDIR $PYSETUP_PATH
COPY ./poetry.lock ./pyproject.toml ./
RUN poetry install --no-dev # respects
# 'development' stage installs all dev deps and can be used to develop code.
# For example using docker-compose to mount local volume under /app
FROM python-base as development
ENV FASTAPI_ENV=development
# Copying poetry and venv into image
COPY --from=builder-base $POETRY_HOME $POETRY_HOME
COPY --from=builder-base $PYSETUP_PATH $PYSETUP_PATH
# Copying in our entrypoint
COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
# venv already has runtime deps installed, so we get a quicker install
WORKDIR $PYSETUP_PATH
RUN poetry install
WORKDIR /app
COPY . .
EXPOSE 8000
ENTRYPOINT /docker-entrypoint.sh $0 $@
CMD ["uvicorn", "--reload", "--host=0.0.0.0", "--port=8000", "main:app"]
# 'lint' stage runs black and isort
# running in check mode means build will fail if any linting errors occur
FROM development AS lint
RUN black --config ./pyproject.toml --check app tests
RUN isort --settings-path ./pyproject.toml --recursive --check-only
CMD ["tail", "-f", "/dev/null"]
# 'test' stage runs our unit tests with pytest and
# coverage. Build will fail if test coverage is under 95%
FROM development AS test
RUN coverage run --rcfile ./pyproject.toml -m pytest ./tests
RUN coverage report --fail-under 95
# 'production' stage uses the clean 'python-base' stage and copies
# in only our runtime deps that were installed in the 'builder-base'
FROM python-base as production
ENV FASTAPI_ENV=production
COPY --from=builder-base $VENV_PATH $VENV_PATH
COPY ./docker/gunicorn_conf.py /gunicorn_conf.py
COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
COPY ./app /app
WORKDIR /app
ENTRYPOINT /docker-entrypoint.sh $0 $@
CMD [ "gunicorn", "--worker-class uvicorn.workers.UvicornWorker", "--config /gunicorn_conf.py", "main:app"]
The questions I have:
Are you docker build ... this entire image and then just docker run ... --target=<stage> to run a specific stage (development, test, lint, production, etc.) or are you only building and running the specific stages you need (e.g. docker build ... -t test --target=test && docker run test ...)?
I want to say it isn't the former because you end up with a bloated image with build kits and what not... correct?
When it comes to local Kubernetes development (minikube, skaffold, devspace, etc.) and running unit tests, are you supposed to be referring to these stages in the Dockerfile (devspace Hooks or something) or using native test tools in the container (e.g. npm test, ./manage.py test, etc.)?
Thanks for clearing these questions up.
To answer from a less DevSpace-y perspective and a more general Docker-y one (With no disrespect to Lukas!):
Question 1
Breakdown
❌ Are you docker build ... this entire image and then just docker run ... --target= to run a specific stage
You're close in your understanding and managed to outline the approach in your second part of the query:
✅ or are you only building and running the specific stages you need (e.g. docker build ... -t test --target=test && docker run test ...)?
The --target option is not present in the docker run command, which can be seen when calling docker run --help.
I want to say it isn't the former because you end up with a bloated image with build kits and what not... correct?
Yes, it's impossible to do it the first way: when --target is not specified, only the final stage is incorporated into your image. This is a great benefit, as it cuts down the final size of your container while still allowing you to use multiple directives.
Details and Examples
It is a flag that you can pass in at build time so that you can choose which stages to build. It's a pretty helpful directive that can be used in a few different ways. There's a decent blog post here talking about the new features that came out with multi-stage builds (--target is one of them).
For example, I've had a decent amount of success building projects in CI utilising different stages and targets. The following is pseudo-code, but hopefully it conveys the idea:
# Dockerfile
FROM python as base
FROM base as dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
FROM dependencies as test
COPY src/ src/
COPY test/ test/
FROM dependencies as publish
COPY src/ src/
CMD ...
A Dockerfile like this would enable you to do something like the following in your CI workflow (once again, pseudo-code):
docker build . -t my-app:unit-test --target test
docker run my-app:unit-test pyunit ...
docker build . -t my-app:latest
docker push ...
In some scenarios, it can be quite advantageous to have this fine-grained control over what gets built when, and it's quite the boon to be able to run images that comprise only a few stages without having built the entire app.
The key here is that there's no expectation that you need to use --target, but it can be used to solve particular problems.
Question 2
When it comes to local Kubernetes development (minikube, skaffold, devspace, etc.) and running unit tests, are you supposed to be referring to these stages in the Dockerfile (devspace Hooks or something) or using native test tools in the container (e.g. npm test, ./manage.py test, etc.)?
Lukas covers a devspace specific approach very well, but ultimately you can test however you like. Using devspace to make it easier to run (and remember to run) tests certainly sounds like a good idea. Whatever tool you use to enable an easier workflow, will likely still use npm test etc under the hood.
If you wish to call npm test outside of a container, that's fine; if you wish to call it in a container, that's also fine. The solution to your problem will always change depending on your landscape. CI/CD helps to standardise on external factors and provides a uniform means to ensure testing is performed and deployments are auditable.
Hope that helps in any way shape or form 👍
Copying my response to this from Reddit to help others who may look for this on StackOverflow:
DevSpace maintainer here. For my workflow (and the default DevSpace behavior if you set it up with devspace init), image building is skipped during development because it tends to be the most annoying and time-consuming part of the workflow. Instead, most teams that use DevSpace have a dev image pushed to a registry and built by CI/CD, which is then used in devspace.yaml using replacePods.replaceImage as shown here: https://devspace.sh/cli/docs/configuration/development/replace-pods
This means that your manifests or helm charts are being deployed referencing the prod images (as they should be) and then devspace will (after deployment) replace the images of your pods with dev-optimized images that ship all your tooling. Inside these pods, you can then use the terminal to build your application, run tests along with other dependencies running in your cluster etc.
However, typically teams also start using DevSpace in CI/CD after a while and then they add profiles (e.g. prod profile or integration-testing profile etc. - more on https://devspace.sh/cli/docs/configuration/profiles/basics) to their devspace.yaml where they add image building again because they want to build the images in their pipelines using kaniko or docker. For this, you would specify the build target in devspace.yaml as well: https://devspace.sh/cli/docs/configuration/images/docker#target
FWIW regarding 1: I never use docker run --target but I also always use Kubernetes directly over manual docker commands to run any workloads.

Automatic build of docker container (java backend, angular frontend)

I would like to say that this is my first container and actually my first Java app, so I may have basic questions; please be lenient.
I wrote a Spring Boot app and my colleague has written the frontend part for it in Angular. What I would like to achieve is to have "one button/one command" in IntelliJ to create a container containing the whole app, backend and frontend.
What I need to do is:
Clone FE from company repository (I am using ssh key now)
Clone BE from GitHub
Build FE
Copy built FE to static folder in java app
Build BE
Create a container running this app
My current solution is to create a "builder" container, build the FE and BE there, and then copy the result into a "production" container, like this:
#BUILDER
FROM alpine AS builder
WORKDIR /src
# add credentials on build
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh/ \
&& echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa \
&& echo "github.com,140.82.121.3 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" >> /root/.ssh/known_hosts \
&& chmod 600 /root/.ssh/id_rsa
# installing dependencies
RUN apk update && apk upgrade && apk add --update nodejs nodejs-npm \
&& npm install -g @angular/cli \
&& apk add openjdk11 \
&& apk add maven \
&& apk add --no-cache openssh \
&& apk add --no-cache git
#cloning repositories
RUN git clone git@code.siemens.com:apcprague/edge/metal-forming-fe.git
RUN git clone git@github.com:bzumik1/metalForming.git
# builds front end
WORKDIR /src/metal-forming-fe
RUN npm install && ng build
# builds whole java app with front end
WORKDIR /src/metalForming
RUN cp -a /src/metal-forming-fe/dist/metal-forming/. /src/metalForming/src/main/resources/static \
&& mvn install -DskipTests=true
#PRODUCTION CONTAINER
FROM adoptopenjdk/openjdk11:debian-slim
LABEL maintainer jakub.znamenacek@siemens.com
RUN mkdir app
RUN ["chmod", "+rwx", "/app"]
WORKDIR /app
COPY --from=builder /src/metalForming/target/metal_forming-0.0.1-SNAPSHOT.jar .
EXPOSE 4200
RUN java -version
CMD java -jar metal_forming-0.0.1-SNAPSHOT.jar
This works, but it takes a very long time, so I guess this is not the correct way to do it. Could anyone point me in the right direction? I was wondering whether there is a way to make Maven do all these steps for me, but maybe that is totally off.
Also, if you find any problem in my Dockerfile, please let me know; as I said, this is my first Dockerfile, so I could have overlooked something.
EDITED:
By the way, does anyone know how I can get rid of this part:
echo "github.com,140.82.121.3 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" >> /root/.ssh/known_hosts \
It adds GitHub to known_hosts (I also need to add the company repository there). Without it, git clone asks whether I trust the host and I have to type yes, but there is no way to type yes when the build runs automatically inside Docker. I have tried yes | git clone ... but that does not work either.
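Regarding the EDIT: a common alternative to hardcoding the host key is to generate the known_hosts entries at build time with ssh-keyscan (part of the openssh package the Dockerfile already installs). A minimal sketch; note that this trusts whatever key the hosts present at build time, so pinning the key as you do now is actually the stricter option:
RUN mkdir -p /root/.ssh \
 && ssh-keyscan github.com code.siemens.com >> /root/.ssh/known_hosts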
A few things:
1. If this runs slowly on the machine, it will run slowly inside a container too.
2. Remove --no-cache.* You want to cache everything that is static, so that on the next build the unchanged commands are not re-run. Once one command changes, that command is re-run instead of using the builder cache, and all subsequent commands have to re-run too.
*UPDATE: I confused apk's --no-cache with docker build --no-cache. I was wrong to state that "apk add --no-cache" means the command is not cached; it is still cached at the Docker builder level. What the flag does is avoid having to delete the /var/cache/apk/ directory in a later step to make the image smaller. You don't need to do that here anyway, because you are already using a multi-stage build, so it would not affect your final image size.
One more thing to clarify: every statement in a Dockerfile is checked for changes; if it did not change, the Docker builder uses the cached layer and does not re-run it. The exception is ADD and COPY commands, where the builder also checksums the copied/added files to see whether they changed. If a statement changed, or the ADD-ed/COPY-ed file(s) changed, that statement is re-run and all subsequent statements are re-run too, so you want to put your source-code COPY statement as close to the end as possible.
If you want to disable this cache, run "docker build --no-cache ..."; that way all the steps in the Dockerfile are re-run.
3. Specify WORKDIR once at the top; if you need to switch directory later, use this:
RUN cd /someotherdir && mycommand
Also, a subsequent relative WORKDIR is resolved relative to the previous WORKDIR, which hurts readability, and readability is (probably) the sole purpose of the WORKDIR statement.
4. Enable BuildKit:
Either declare environment variable
DOCKER_BUILDKIT=1
or add this to /etc/docker/daemon.json
{ "features": { "buildkit": true } }
BuildKit might not help in this case, but for more complex Dockerfiles with more stages, BuildKit can build them in parallel, so the overall build will be faster.
5. Do not skip tests with -DskipTests=true :)
6. As stated in a comment, do not clone the repo inside the image build; you do not need to do that at all. Just put the Dockerfile at the root of the repo and COPY the repo files with a Dockerfile command:
COPY . .
The first dot is the source, i.e. your current directory on your machine; the second dot is the target, i.e. the working directory inside the image (/src in your Dockerfile). You build the image and publish it, pushing it to a Docker registry so others can pull it and start using it. If you want to do more complex building and publishing with the help of a server, look up CI/CD techniques.
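Putting points 2, 3 and 6 together, here is a rough sketch of how the builder stage could be split and fed by COPY instead of git clone. It assumes both repositories are checked out side by side in the build context and reuses the paths from your Dockerfile; the base images, the npm run build script and the Maven flags are illustrative, not prescriptive:
# frontend build
FROM node:lts-alpine AS frontend
WORKDIR /fe
COPY metal-forming-fe/package*.json ./
RUN npm ci
COPY metal-forming-fe/ .
RUN npm run build

# backend build, with dependencies cached before the sources are copied
FROM maven:3-openjdk-11 AS backend
WORKDIR /be
COPY metalForming/pom.xml .
RUN mvn -B dependency:go-offline
COPY metalForming/ .
COPY --from=frontend /fe/dist/metal-forming/ src/main/resources/static/
RUN mvn -B package

# production image
FROM adoptopenjdk/openjdk11:debian-slim
WORKDIR /app
COPY --from=backend /be/target/metal_forming-0.0.1-SNAPSHOT.jar .
CMD ["java", "-jar", "metal_forming-0.0.1-SNAPSHOT.jar"]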

Create multistage docker file

I have two different Dockerfiles (production and test environment). I want to combine them into a single multi-stage Dockerfile.
First dockerfile as below:
FROM wildfly:17.0.0
USER jboss
RUN mkdir /opt/wildfly/install && mkdir /opt/wildfly/install/config
COPY --chown=jboss:jboss install /opt/wildfly/install
COPY --chown=jboss:jboss install.sh /opt/wildfly/bin
RUN mkdir -p $JBOSS_HOME/standalone/data/datastorage
CMD ["/opt/wildfly/bin/install.sh"]
Second dockerfile as below:
FROM wildfly:17.0.0
USER jboss
RUN mkdir /opt/wildfly/install && mkdir /opt/wildfly/install/config
COPY --chown=jboss:jboss ./install /opt/wildfly/install
COPY --chown=jboss:jboss install.sh /opt/wildfly/bin
RUN rm /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN mv /opt/wildfly/install/wildfly-scripts/SetupforTest.cli /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN rm /opt/wildfly/install/wildfly-scripts/Properties.properties
RUN mv /opt/wildfly/install/wildfly-scripts/Properties-test.properties /opt/wildfly/install/wildfly-scripts/Properties.properties
RUN mkdir -p $JBOSS_HOME/standalone/data/datastorage
CMD ["/opt/wildfly/bin/install.sh"]
Question: How do I create a single multi-stage Dockerfile from these two Dockerfiles?
Thanks in advance.
I'm not sure what you are trying to do requires a multi-stage build. Instead, what you may want to do is use a custom base image (for dev) which would be your first code block and use that base image for production.
If the first image was tagged as shanmukh/wildfly:dev:
FROM shanmukh/wildfly:dev
USER jboss
RUN rm /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN mv /opt/wildfly/install/wildfly-scripts/SetupforTest.cli /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN rm /opt/wildfly/install/wildfly-scripts/Properties.properties
RUN mv /opt/wildfly/install/wildfly-scripts/Properties-test.properties /opt/wildfly/install/wildfly-scripts/Properties.properties
This could be tagged as shanmukh/wildfly:prod.
The reason why I don't think you want a multi-stage build is because you mentioned you are trying to handle two environments (test and production).
Even if you did want to, say, use a multi-stage build for production, from what I can see there is no reason to. If your initial build stage included installing build dependencies such as a compiler, copying code and building it, then it'd be efficient to use a multi-stage build, as the final image would not include the initial dependencies (such as the compiler and other potentially dangerous dev tools) or any unneeded build artifacts that were produced.
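That said, if you do want both variants in a single Dockerfile, the shared instructions could live in a named stage and the test tweaks in a second stage selected with --target. A rough sketch based on the two Dockerfiles in the question (untested, adjust to your layout):
FROM wildfly:17.0.0 AS base
USER jboss
RUN mkdir -p /opt/wildfly/install/config
COPY --chown=jboss:jboss install /opt/wildfly/install
COPY --chown=jboss:jboss install.sh /opt/wildfly/bin
RUN mkdir -p $JBOSS_HOME/standalone/data/datastorage
CMD ["/opt/wildfly/bin/install.sh"]

FROM base AS test
RUN mv /opt/wildfly/install/wildfly-scripts/SetupforTest.cli /opt/wildfly/install/wildfly-scripts/Setup.cli \
 && mv /opt/wildfly/install/wildfly-scripts/Properties-test.properties /opt/wildfly/install/wildfly-scripts/Properties.properties
You would then build the production image with docker build --target base -t shanmukh/wildfly:prod . and the test image with docker build --target test -t shanmukh/wildfly:test .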
Hope this helps.

How to configure different dockerfile for development and production

I use Docker in development and in production for a Laravel project. I have slightly different Dockerfiles for development and production. For example, I mount a local directory into the container in the development environment so that I don't need to run docker build for every change in the code.
As the mounted directory is only available when the container is running, I can't put commands like "composer install" or "npm install" in the Dockerfile for development.
Currently I am managing two Dockerfiles. Is there any way I can do this with a single Dockerfile and decide which commands to run at docker build time by passing parameters?
What I am trying to achieve is
In docker file
...
IF PROD THEN RUN composer install
...
During docker build
docker build [PROD] -t mytag .
As a best practice, you should aim to use one Dockerfile to avoid unexpected differences between environments. However, you may have a use case where you cannot do that.
The Dockerfile syntax is not rich enough to support such a scenario, however you can use shell scripts to achieve that.
Create a shell script, called install.sh that does something like:
if [ "${ENV}" = "DEV" ]; then
    composer install
else
    npm install
fi
In your Dockerfile add this script and then execute it when building
...
COPY install.sh install.sh
RUN chmod u+x install.sh && ./install.sh
...
When building, pass a build arg to specify the environment, for example:
docker build --build-arg "ENV=PROD" ...
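One thing to note: for ${ENV} to be visible to the script at build time, the build arg needs to be declared in the Dockerfile. A minimal sketch, where php:8.1-cli is just a stand-in base image for the Laravel project:
# default to DEV; override with: docker build --build-arg ENV=PROD ...
FROM php:8.1-cli
ARG ENV=DEV
COPY install.sh install.sh
RUN chmod u+x install.sh && ./install.sh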
UPDATE (2020):
Since this was written 3 years ago, many things have changed (including my opinion about this topic). My suggested way of doing this is using one Dockerfile and using scripts. Please see @yamenk's answer.
ORIGINAL:
You can use two different Dockerfiles.
# ./Dockerfile (non production)
FROM foo/bar
MAINTAINER ...
# ....
And a second one:
# ./Dockerfile.production
FROM foo/bar
MAINTAINER ...
RUN composer install
While calling the build command, you can tell which file it should use:
$> docker build -t mytag .
$> docker build -t mytag-production -f Dockerfile.production .
You can use build args directly without providing an additional sh script. It might look a little messy, but it works.
The Dockerfile would look like this:
FROM alpine
ARG mode
RUN if [ "x$mode" = "xdev" ] ; then echo "Development" ; else echo "Production" ; fi
And commands to check are:
docker build -t app --build-arg mode=dev .
docker build -t app --build-arg mode=prod .
I have tried several approaches to this, including using docker-compose, a multi-stage build, passing an argument through a file and the approaches used in other answers. My company needed a good way to do this and after trying these, here is my opinion.
The best method is to pass the arg on the command line. You can also pass it through VS Code by right-clicking the Dockerfile and choosing Build Image.
[Screenshot: Visual Studio Code "Build Image" context-menu action]
using this code:
ARG BuildMode
RUN echo $BuildMode
RUN if [ "$BuildMode" = "debug" ] ; then apt-get update \
&& apt-get install -y --no-install-recommends \
unzip \
&& rm -rf /var/lib/apt/lists/* \
&& curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l /vsdbg ; fi
and in the build section of dockerfile:
ARG BuildMode
ENV Environment=${BuildMode:-debug}
RUN dotnet build "debugging.csproj" -c $Environment -o /app
FROM build AS publish
RUN dotnet publish "debugging.csproj" -c $Environment -o /app
The best way to do it is with a .env file in your project.
You can define two variables, CONTEXTDIRECTORY and DOCKERFILENAME,
and create Dockerfile-dev and Dockerfile-prod.
This is example of using it:
docker compose file:
services:
  serviceA:
    build:
      context: ${CONTEXTDIRECTORY:-./prod_context}
      dockerfile: ${DOCKERFILENAME:-./nginx/Dockerfile-prod}
.env file in the root of project:
CONTEXTDIRECTORY=./
DOCKERFILENAME=Dockerfile-dev
Be careful with the paths: the dockerfile path is resolved relative to the build context that you specified, not relative to the docker-compose directory.
For the default values I use prod, because if you forget to specify the env variables, you can't accidentally build a dev version in production.
A solution with different Dockerfiles is more convenient than scripts; it's easier to change and maintain.

Docker and symlinks

I've got a repo set up like this:
/config
  config.json
/worker-a
  Dockerfile
  <symlink to config.json>
  /code
/worker-b
  Dockerfile
  <symlink to config.json>
  /code
However, building the images fails, because Docker can't handle the symlinks. I should mention my project is far more complicated than this, so restructuring directories isn't a great option. How do I deal with this situation?
Docker doesn't support symlinking files outside the build context.
Here are some different methods for using a shared file in a container:
Build Time
Copy from a config image (Docker buildkit)
Recent versions of Docker allow RUN steps to bind mount from a named image or previous build stage with the --mount=type=bind,target=/dir,source=/dir,from=image-or-stage-name syntax.
Create a Dockerfile for the base me/worker-config image that includes the shared config/files.
FROM scratch
COPY config.json /config.json
Build and tag the config image me/worker-config
docker build -t me/worker-config:latest .
Mount the me/worker-config image during the real build
RUN --mount=type=bind,target=/worker-config,source=/,from=me/worker-config:latest \
cp /worker-config/config.json /app/config.json;
Share a base image
Create a Dockerfile for the base me/worker-config image that includes the shared config/files.
COPY config.json /config.json
Build and tag the image me/worker-config
docker build -t me/worker-config:latest .
Source the base me/worker-config image for all your worker Dockerfiles
FROM me/worker-config:latest
Build script
Use a script to push the common config to each of your worker containers.
./build worker-n
#!/bin/sh
set -uex
rundir=$(readlink -f "${0%/*}")
container=$1; shift
cd "$rundir/$container"
cp ../config/config.json ./config-docker.json
docker build "$#" .
Build from URL
Pull the config from a common URL for all worker-n builds.
ADD http://somehost/config.json /
Increase the scope of the image build context
Include the symlink target files in the build context by building from a parent directory that includes both the shared files and specific container files.
cd ..
docker build -f worker-a/Dockerfile .
All the source paths you reference in a Dockerfile must also change to match the new build context:
COPY workerathing /app
becomes
COPY worker-a/workerathing /app
Using this method makes every image's build context large, as they all share the same parent directory as the context. This can slow down builds, especially to remote Docker build servers. Note that only the .dockerignore file from the base of the build context is referenced.
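A .dockerignore at the base of the enlarged context helps keep it small; a brief sketch, with example entries only:
# .dockerignore at the repo root, i.e. the shared build context
**/.git
**/node_modules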
Alternate build that can mount volumes
Other projects that strive for Dockerfile compatibility may support volumes at build time. For example, podman build / buildah support a --volume option to bind mount files from the host into a build container.
podman build --volume /project/config:/worker-config:ro,Z -t me/worker-a .
Then the build can reference the mounted volume
COPY /worker-config/config.json /app
Run time
Mount a config directory from a named volume
Volumes like this only work as directories, so you can't specify a file like you could when mounting a file from the host to container.
docker volume create --name=worker-cfg-vol
docker run -v worker-cfg-vol:/config worker-config cp config.json /config
docker run -v worker-cfg-vol:/config worker-a
Mount config directory from data container
Again, directories only as it's basically the same as above. This will automatically copy files from the destination directory into the newly created shared volume though.
docker create --name wcc -v /config worker-config /bin/true
docker run --volumes-from wcc worker-a
Mount config file from host at runtime
docker run -v /app/config/config.json:/config.json worker-a
Node.js-specific solution
I also ran into this problem, and would like to share another method that hasn't been mentioned above. Instead of using npm link in my Dockerfile, I used yalc.
Install yalc in your container, e.g. RUN npm i -g yalc.
Build your library in Docker, and run yalc publish (add the --private flag if your shared lib is private). This will 'publish' your library locally.
Run yalc add my-lib in each repo that would normally use npm link before running npm install. It will create a local .yalc folder in your Docker container, create a symlink in node_modules that works inside Docker to this folder, and rewrite your package.json to refer to this folder too, so you can safely run install.
Optionally, if you do a two stage build, make sure that you also copy the .yalc folder to your final image.
Below is an example Dockerfile, assuming you have a monorepo with three packages: models, gui and server, where the models package must be shared and is named my-models.
# You can access the container using:
# docker run -it my-name sh
# To start it stand-alone:
# docker run -it -p 8888:3000 my-name
FROM node:alpine AS builder
# Install yalc globally (the apk add... line is only needed if your installation requires it)
RUN apk add --no-cache --virtual .gyp python make g++ && \
npm i -g yalc
RUN mkdir /packages && \
mkdir /packages/models && \
mkdir /packages/gui && \
mkdir /packages/server
COPY ./packages/models /packages/models
WORKDIR /packages/models
RUN npm install && \
npm run build && \
yalc publish --private
COPY ./packages/gui /packages/gui
WORKDIR /packages/gui
RUN yalc add my-models && \
npm install && \
npm run build
COPY ./packages/server /packages/server
WORKDIR /packages/server
RUN yalc add my-models && \
npm install && \
npm run build
FROM node:alpine
RUN mkdir -p /app
COPY --from=builder /packages/server/package.json /app/package.json
COPY --from=builder /packages/server/dist /app/dist
# Make sure you copy the yalc registry too.
COPY --from=builder /packages/server/.yalc /app/.yalc
COPY --from=builder /packages/server/node_modules /app/node_modules
COPY --from=builder /packages/gui/dist /app/dist/public
WORKDIR /app
EXPOSE 3000
CMD ["node", "./dist/index.js"]
Hope that helps...
The docker build CLI command sends the specified directory (typically .) as the "build context" to the Docker Engine (daemon). Instead of specifying the build context as /worker-a, specify the build context as the root directory, and use the -f argument to specify the path to the Dockerfile in one of the child directories.
docker build -f worker-a/Dockerfile .
docker build -f worker-b/Dockerfile .
You'll have to rework your Dockerfiles slightly, to point them to ../config/config.json, but that is pretty trivial to fix.
Also check out this question/answer, which I think addresses the exact same problem that you're experiencing.
How to include files outside of Docker's build context?
Hope this helps! Cheers
An alternative solution is to upgrade all your soft links into hard links.
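If you go that route, the conversion can be scripted. A sketch that replaces every symlink under the current directory with a hard link to its resolved target (hard links require the target to be on the same filesystem; the linked content is then included in the build context like any regular file):
find . -type l -exec sh -c 'for l; do t=$(readlink -f "$l") && ln -f "$t" "$l"; done' _ {} +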
