I'm building a web-based application that has separate ReactJS projects for each role (think a public React site and an admin site). Each of these web modules is in a separate directory, but I would like to combine the results into a single nginx deployment. I currently have a Dockerfile under each of the directories that can build the image for that directory, but I don't want to end up with 2 images, as that is a waste of space and resources.
Directory Structure
root
  admin-ui
  public-ui
  api
I would like the output of the two UI projects to be created in a single file structure like below so I don't need to run it under a subdomain or have an extra docker container running the admin portion.
root
  index.html        \\ public-ui
  admin
    index.html      \\ admin-ui
I suppose this could be done with a Dockerfile at the root level that copies the two UI directories into its context and does all the building there, but I was hoping for a more elegant solution, perhaps using the output of one Docker build as an input to another, so I could build the directories independently and then create an image with the combined results.
There are a couple of ways to approach this. The big differences are how many commands you need to run, how easy it is to run the projects independently, and how much Docker is required to run the build process.
Run most of the build process on the host. In a comment you write out a sequence of commands to build the two projects without Docker. You can run that exactly as is, and use Docker only to build the final image.
# combined/Dockerfile
FROM nginx
COPY . /usr/share/nginx/html
(cd public-ui && yarn build)
(cd admin-ui && yarn build)
cp -a public-ui/build/* combined
cp -a admin-ui/build combined/admin
docker build ./combined
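If you take this route, the whole sequence is easy to wrap in one small script; a sketch (the build.sh file name and the me/app tag are just examples, not from the question):
#!/bin/sh
set -e
# build both UIs on the host
(cd public-ui && yarn build)
(cd admin-ui && yarn build)
# assemble the combined tree next to combined/Dockerfile
rm -rf combined/admin
cp -a public-ui/build/* combined/
cp -a admin-ui/build combined/admin
# build the single nginx image
docker build -t me/app ./combined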
Run the entire build process from a Dockerfile in the root directory. You could do this with a multi-stage build; with the classic builder this does technically leave unnamed intermediate images behind, but you can clean those up pretty easily.
# Dockerfile
FROM node AS public
WORKDIR /app
COPY public-ui/package.json public-ui/yarn.lock ./
RUN yarn install
COPY public-ui .
RUN yarn build
FROM node AS admin
# similarly
FROM nginx
COPY --from=public /app/build /usr/share/nginx/html
COPY --from=admin /app/build /usr/share/nginx/html/admin
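For completeness, a sketch of building this from the root directory (the me/app tag is just an example); with the classic builder the intermediate stages show up as dangling images, which docker image prune removes:
docker build -t me/app .
# optional cleanup: removes dangling images, including leftover intermediate build stages
docker image prune -f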
Go ahead and build three separate images. This would let you run the subproject images independently in a development setup; but then you can COPY --from=... the static files from those images into the final image for production use. The Docker tooling doesn't directly support this setup, though; you need to manually run the separate docker build commands to build the whole thing.
# combined/Dockerfile
FROM nginx
ARG tag=latest
COPY --from=me/public:${tag} /app/build /usr/share/nginx/html
COPY --from=me/admin:${tag} /app/build /usr/share/nginx/html/admin
tag=20210213
docker build -t me/public:$tag ./public
docker build -t me/admin:$tag ./admin
docker build -t me/app:$tag --build-arg tag=$tag ./combined
# docker rmi me/public:$tag me/admin:$tag
Related
I have a gRPC microservices project with the following structure:
- common (common protobuf definitions)
- microservices
  - ms1
  ...
  - msN
Now I want to add multi-stage Dockerfiles for each microservice.
The problem is that I have this common module, which I need in order to build the rest of the projects, and I can't reference sources outside the microservice project in its Dockerfile.
So the only possibility I see is to have one Dockerfile in the root folder that builds all the images:
FROM maven:3.8.6-eclipse-temurin-17 AS builder
COPY ./ /usr/app
RUN mvn -f /usr/app/pom.xml clean package
FROM eclipse-temurin:17-jre-alpine
COPY --from=builder /usr/app/microservices/ms1/target/ms1-1.0-SNAPSHOT.jar /usr/app/ms1-1.0-SNAPSHOT.jar
ENTRYPOINT ["java", "-jar", "/usr/app/ms1-1.0-SNAPSHOT.jar"]
But I still have to build the whole project in the builder image.
Another option I see is to create a separate Docker image for the builder and then reference it by tag inside each microservice's Dockerfile. But how can I trigger a rebuild of the builder image when building a microservice image?
Are there any other options? Which one should I use?
We can use a multi-stage Dockerfile with build arguments (docs.docker.com) and Maven's --also-make command-line option (maven.apache.org):
FROM maven:3.8.6-eclipse-temurin-17 AS builder
# e.g. RELATIVE_PATH_TO_ADAPTER_MODULE=services/ms1
ARG RELATIVE_PATH_TO_ADAPTER_MODULE
COPY ./ /usr/app
WORKDIR /usr/app
# select the specific module to build via "-pl" (we could also use "-f /usr/app/${RELATIVE_PATH_TO_ADAPTER_MODULE}/pom.xml")
# and pass --also-make so Maven also builds the modules it depends on
RUN mvn -pl ${RELATIVE_PATH_TO_ADAPTER_MODULE} --also-make clean package
FROM eclipse-temurin:17-jre-alpine
# e.g. RELATIVE_PATH_TO_ADAPTER_MODULE=services/ms1
ARG RELATIVE_PATH_TO_ADAPTER_MODULE
COPY --from=builder /usr/app/${RELATIVE_PATH_TO_ADAPTER_MODULE}/target/*.jar /usr/app/service.jar
ENTRYPOINT ["java", "-jar", "/usr/app/service.jar"]
Should the target directory contain more than one .jar file, we can exclude the unwanted .jar files through a .dockerignore file (docs.docker.com), e.g. in the module's root directory.
We can then build a particular microservice by calling docker build with --build-arg (docs.docker.com):
docker build --build-arg RELATIVE_PATH_TO_ADAPTER_MODULE=services/ms1 --tag services/ms1 .
docker build --build-arg RELATIVE_PATH_TO_ADAPTER_MODULE=services/ms2 --tag services/ms2 .
...
Notice, however, that each build still has to re-build the dependency modules; they are not shared across builds. Also, each build has to re-download all Maven dependencies. This article at baeldung.com shows how to use a host directory as a Maven cache.
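As an alternative to a host directory, a BuildKit cache mount can keep the local Maven repository between builds; a sketch, assuming BuildKit is enabled (DOCKER_BUILDKIT=1 on a reasonably recent Docker), that would replace the RUN line above:
# keep /root/.m2 in a build cache so dependencies are not re-downloaded on every build
RUN --mount=type=cache,target=/root/.m2 \
    mvn -pl ${RELATIVE_PATH_TO_ADAPTER_MODULE} --also-make clean package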
I have a .NET solution with 2 projects, each of which is a microservice (a server). I have a Dockerfile which first installs all the dependencies used by both projects, and then publishes the solution:
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.sln .
COPY Server/*.csproj ./Server/
COPY JobRunner/*.csproj ./JobRunner/
RUN dotnet restore ./MySolution.sln
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "Server.dll"]
When the solution is published, 2 executables are available: Server.dll and JobRunner.dll. However, I can only start one of them in the Dockerfile.
This seems wasteful, because restoring the solution is a common step for both the Server and JobRunner projects. In addition, the line RUN dotnet publish -c Release -o out produces an executable for both Server and JobRunner. I could write a separate Dockerfile for each project, but that seems redundant, as 99% of the build steps for the two projects are identical.
Is there a way to somehow start the 2 executables from a single Dockerfile without using a script (I don't want both services to be started in a single container)? The closest I've found is the --target option in docker build, but it probably won't work because I'd need multiple entrypoints.
In your Dockerfile, change ENTRYPOINT to CMD in the very last line.
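That is, the last line of your Dockerfile becomes:
CMD ["dotnet", "Server.dll"]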
Once you do this, you can override the command by just providing an alternate command after the image name in the docker run command:
docker run ... my-image \
dotnet JobRunner.dll
(In principle you can do this without changing the Dockerfile, but the docker run construction is awkward, and there's no particular benefit to using ENTRYPOINT here. If you're using Docker Compose, you can override either entrypoint: or command: on a container-by-container basis.)
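For example, a minimal Compose sketch (the my-image name and the service names are placeholders) that runs both executables from the same image:
services:
  server:
    image: my-image   # uses the image's default command, dotnet Server.dll
  jobrunner:
    image: my-image
    command: ["dotnet", "JobRunner.dll"]   # overrides the default command for this container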
I am aware that you cannot step outside of Docker's build context, so I am looking for alternative ways to share a file between two folders (outside the build context).
My folder structure is
project
  - server
      Dockerfile
  - client
      Dockerfile
My client folder needs to access a file inside the server folder for some code generation, where the client is built according to the contract of the server.
The client Dockerfile looks like the following:
FROM node:10-alpine AS build
WORKDIR /app
COPY . /app
RUN yarn install
RUN yarn build
FROM node:10-alpine
WORKDIR /app
RUN yarn install --production
COPY --from=build /app ./
EXPOSE 5000
CMD [ "yarn", "serve" ]
I run docker build -t my-name . inside the client directory.
During the RUN yarn build step, a script looks for a file at ../server/src/schema/schema.graphql, which cannot be found, as the file is outside the client directory and therefore outside Docker's build context.
How can I get around this, or other suggestions to solving this issue?
The easiest way to do this is to use the root of your source tree as the Docker context directory, point it at one or the other of the Dockerfiles, and be explicit about whether you're using the client or server trees.
cd $HOME/project
docker build \
-t project-client:$(git rev-parse --short HEAD) \
-f client/Dockerfile \
.
Then, in client/Dockerfile, make the paths explicit about which subtree they refer to:
FROM node:10-alpine AS build
WORKDIR /app
COPY client/ ./
# ...and so on, as in your original Dockerfile
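With the project root as the build context, the server schema is inside the context too, so the client image can copy it to the location your build script expects; a rough sketch (the destination mirrors the ../server/src/schema/schema.graphql lookup from the question, the rest is illustrative):
FROM node:10-alpine AS build
WORKDIR /app
# place the schema where the client build looks for it (../server/... relative to /app)
COPY server/src/schema/schema.graphql /server/src/schema/schema.graphql
COPY client/package.json client/yarn.lock ./
RUN yarn install
COPY client/ ./
RUN yarn build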
In the specific case of GraphQL, depending on your application and library stack, it may be possible to avoid needing the schema at all and just make unchecked client calls; or to make an introspection query at startup time to dynamically fetch the schema; or to maintain two separate copies of the schema file. Some projects I work on use GraphQL interfaces where the servers and clients live in genuinely separate repositories and there's no choice but to store separate copies of the schema, but if you're careful about changes, this hasn't been a problem in practice.
I have a Dockerfile like this:
# build-home
FROM node:10 AS build-home
WORKDIR /usr/src/app
COPY /home/package.json /home/yarn.lock /usr/src/app/
RUN yarn install
COPY ./home ./
RUN yarn build
# build-dashboard
FROM node:10 AS build-dashboard
WORKDIR /usr/src/app
COPY /dashboard/package.json /dashboard/yarn.lock /usr/src/app/
RUN yarn install
COPY ./dashboard ./
RUN yarn build
# run
FROM nginx
EXPOSE 80
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build-home /usr/src/app/dist /usr/share/nginx/html/home
COPY --from=build-dashboard /usr/src/app/dist /usr/share/nginx/html/dashboard
Here I build two React applications, and the build artifacts are then put into nginx. To improve build performance, I need to cache the dist folder in the build-home and build-dashboard build stages.
For this I created a volume in docker-compose.yml:
...
  web:
    container_name: web
    build:
      context: ./web
    volumes:
      - ./web-build-cache:/usr/src/app
    ports:
      - 80:80
    depends_on:
      - api
I've stopped at this stage because I don't understand how to attach the volume created by docker-compose to the build-home stage first, and then attach this volume to the build-dashboard stage.
Maybe I should create two volumes and attach one to each of the build stages, but how do I do that?
UPDATE:
Initial build.
Home application:
Install modules: 100.91s
Build app: 39.51s
Dashboard application:
Install modules: 100.91s
Build app: 50.38s
Overall time:
real 8m14.322s
user 0m0.560s
sys 0m0.373s
Second build (without code or dependencies change):
Home application:
Install modules: Using cache
Build app: Using cache
Dashboard application:
Install modules: Using cache
Build app: Using cache
Overall time:
real 0m2.933s
user 0m0.309s
sys 0m0.427s
Third build (with small change in code in first app):
Home application:
Install modules: Using cache
Build app: 50.04s
Dashboard application:
Install modules: Using cache
Build app: Using cache
Overall time:
real 0m58.216s
user 0m0.340s
sys 0m0.445s
Initial build of home application without Docker: 89.69s
real 1m30.111s
user 2m6.148s
sys 2m17.094s
Second build of home application without Docker, the dist folder exists on disk (without code or dependencies change): 18.16s
real 0m18.594s
user 0m20.940s
sys 0m2.155s
Third build of home application without Docker, the dist folder exists on disk (with small change in code): 20.44s
real 0m20.886s
user 0m22.472s
sys 0m2.607s
In the Docker container, the third build of the application takes more than twice as long. This shows that if the result of the first build is on disk, subsequent builds complete faster. In the Docker container, every build after the first takes as long as the first, because there is no dist folder to reuse.
If you're using multi-stage builds, then there's a problem with the Docker cache: the final image doesn't have the layers from the build stages. By using --target and --cache-from together you can save those layers and reuse them on rebuild.
You need something like
docker build \
  --target build-home \
  --cache-from build-home:latest \
  -t build-home:latest \
  .
docker build \
  --target build-dashboard \
  --cache-from build-dashboard:latest \
  -t build-dashboard:latest \
  .
docker build \
  --cache-from build-dashboard:latest \
  --cache-from build-home:latest \
  -t my-image:latest \
  .
You can find more details at
https://andrewlock.net/caching-docker-layers-on-serverless-build-hosts-with-multi-stage-builds---target,-and---cache-from/
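Note that with the classic builder, --cache-from can only reuse layers from images that are already present locally, so on a clean build host you would typically pull the previously pushed images first; a sketch (the tags are the same placeholders as above):
# pull earlier stage images if they exist; ignore failures on the very first build
docker pull build-home:latest || true
docker pull build-dashboard:latest || true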
You can't use volumes during image building, and in any case Docker already does the caching you're asking for. If you leave your Dockerfile as-is and don't try to mount your code as volumes in the docker-compose.yml, you should get caching of the built Javascript files across rebuilds as you expect.
When you run docker build, Docker looks at each step in turn. If the input to the step hasn't changed, the step itself hasn't changed, and any files that are being added haven't changed, then Docker will just reuse the result of running that step previously. In your Dockerfile, if you only change the nginx config, it will skip over all of the Javascript build steps and reuse their result from the previous time around.
(The other relevant technique, which you already have, is to build applications in two steps: first copy in files like package.json and yarn.lock that name dependencies, and install dependencies; then copy in and build your application. Since the "install dependencies" step is frequently time-consuming and the dependencies change relatively infrequently, you want to encourage Docker to reuse the last build's node_modules directory.)
I have followed these tutorials to build a Docker image for my Spring Boot application, which uses Maven as its build tool.
I am using a boot2docker VM on top of a Windows 10 machine, cloning my project to the VM from my Bitbucket repository.
https://spring.io/guides/gs/spring-boot-docker/
https://www.callicoder.com/spring-boot-docker-example/
I understand the instructions given, but I failed to build a proper Docker image. Here are the things I tried.
Use the Spotify Maven plugin for the Dockerfile. Try to run ./mvnw to build the JAR as well as the Docker image. But I don't have Java installed in the boot2docker VM, so the Maven wrapper ./mvnw cannot be run.
I tried to build the JAR through the Dockerfile, which is based on the openjdk:8-jdk-alpine image. I added a RUN ./mvnw package instruction to the Dockerfile, then ran docker build -t <my_project> . to build the Docker image.
It fails at the RUN instruction, claiming /bin/sh: mvnw: not found
The command '/bin/sh -c mvnw package' returned a non-zero code: 127
My Dockerfile, located in the same directory as mvnw:
FROM openjdk:8-jdk-alpine
MAINTAINER myname
VOLUME /tmp
RUN ./mvnw package
ARG JAR_FILE=target/myproject-0.0.1-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
For 1, I would need to have Java installed in the OS where the Docker engine resides. But I think that's not good practice, because it lowers portability.
For 2, first, I don't know how to run ./mvnw successfully in the Dockerfile. Second, I'm not sure it is good practice to build the Spring Boot JAR through the Dockerfile, because I don't see any "Docker for Spring Boot" tutorial that says to do so.
So, what is the best practice to solve my situation? I'm new to Docker. Comments and answers are appreciated!
You can install maven and run the compile directly in the build. Typically this would be a multi-stage build to avoid including the entire jdk in your pushed image:
FROM openjdk:8-jdk-alpine as build
RUN apk add --no-cache maven
WORKDIR /java
COPY . /java
RUN mvn package -Dmaven.test.skip=true
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/java/target/myproject-0.0.1-SNAPSHOT.jar"]
The above is a stripped-down version of a rework of that same example that I've done in the past. You may need to adjust the filenames in your ENTRYPOINT, but the key steps are to install Maven and run it inside your build.
From your second example, I think you are misunderstanding how Docker builds images. When Docker executes RUN ./mvnw package, the file mvnw must exist in the filesystem of the image being built, which means you should have an instruction like COPY mvnw . in a previous step; that will copy the file from your local filesystem into the image.
You will likely need to copy the entire project structure into the image before calling ./mvnw, as the response from @BMitch suggests.
Also, as @BMitch said, to generate a small-sized image it's normally recommended to use a multi-stage build, where the first stage installs every dependency and the final image has only your JAR.
You could try something like below:
# First stage: build fat JAR
# Select base image.
# (The "AS builder" gives a name to the stage that we will need later)
# (I think it's better to use a slim image with Maven already installed instead
# of ./mvnw. Otherwise you might need to give execution rights to the wrapper
# with an instruction like "RUN chmod +x mvnw".)
FROM maven:3.6.3-openjdk-8-slim AS builder
# Set your preferred working directory
# (This tells the image what the "current" directory is for the rest of the build)
WORKDIR /opt/app
# Copy everything from you current local directory into the working directory of the image
COPY . .
# Compile, test and package
# (-e gives more information in case of errors)
# (I prefer to also run unit tests at this point. This may not be possible if your tests
# depend on other technologies that you don't wish to install at this point.)
RUN mvn -e clean verify
###
# Second stage: final image containing only the packaged JAR
# The base image for the final result can be as small as Alpine with a JRE
FROM openjdk:8-jre-alpine
# Once again, the current directory as seen by your image
WORKDIR /opt/app
# Get artifacts from the previous stage and copy them to the new image.
# (If you are confident the only JAR in "target/" is your package, you can copy it with a
# wildcard like "*.jar" instead of its full name, to avoid updating the Dockerfile when
# the version of your project changes.)
COPY --from=builder /opt/app/target/*.jar ./
# Expose whichever port you use in the Spring application
EXPOSE 8080
# Define the application to run when the Docker container is created.
# Either ENTRYPOINT or CMD.
# (Optionally, you could define a file "entrypoint.sh" that can have a more complex
# startup logic.)
# (Setting "java.security.egd" when running Spring applications is good for security
# reasons.)
ENTRYPOINT java -Djava.security.egd=file:/dev/./urandom -jar /opt/app/*.jar
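Finally, a sketch of building and running the image (the my-app tag and host port are just examples):
docker build -t my-app .
docker run --rm -p 8080:8080 my-app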