I read in the Docker documentation how the ONBUILD instruction can be used, but it is not clear at all.
Can someone please explain it to me?
The ONBUILD instruction is very useful for automating the build of your chosen software stack.
Example
The Maven container is designed to compile Java programs. Magically, all your project's Dockerfile needs to do is reference the base container containing the ONBUILD instructions:
FROM maven:3.3-jdk-8-onbuild
CMD ["java","-jar","/usr/src/app/target/demo-1.0-SNAPSHOT-jar-with-dependencies.jar"]
The base image's Dockerfile tells all
FROM maven:3-jdk-8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD ADD . /usr/src/app
ONBUILD RUN mvn install
There's a base image that has both Java and Maven installed and a series of instructions to copy files and run Maven.
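So when your project's Dockerfile starts FROM maven:3.3-jdk-8-onbuild, the two triggers fire right after the FROM line, against your build context. The effect is roughly as if you had written the following yourself (a sketch of the expansion, not a file you need to create; the CMD is the one from the example above):
FROM maven:3.3-jdk-8-onbuild
# The triggers recorded in the base image run first, against your project's files:
ADD . /usr/src/app
RUN mvn install
# ...then the instructions from your own Dockerfile follow:
CMD ["java","-jar","/usr/src/app/target/demo-1.0-SNAPSHOT-jar-with-dependencies.jar"]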
The following answer gives a Java example
How to build a docker container for a java app
As stated by the Docker docs:
The ONBUILD instruction adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build. The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile.
So what does that mean? Let's take this Node.js Dockerfile:
FROM node:0.12.6
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
CMD [ "npm", "start" ]
In your own Dockerfile, when you write FROM node:0.12.6-onbuild you're getting an image that has already been built, so its instructions have ALREADY been executed as well, all except those starting with ONBUILD. Those have been deferred to a later time: the downstream build, that is, when your own image is built from your own Dockerfile using this image as the base (FROM node:0.12.6-onbuild).
You can’t just call ADD and RUN now, because you don’t yet have access to the application source code, and it will be different for each application build.
That's right! The image containing onbuild instructions wasn't built on your machine, so it doesn't yet have access to package.json.
Then when you build your own Dockerfile, before executing any instruction in your file, the builder will look for ONBUILD triggers, which were added to the metadata of the parent image when it was built.
That spares you the hassle of executing these commands yourself; it really is as though these commands were written in your own Dockerfile.
Finally, they add:
You could simply provide application developers with a boilerplate Dockerfile to copy-paste into their application, but that is inefficient, error-prone and difficult to update because it mixes with application-specific code.
The thing is, if these instructions were modified in the boilerplate Dockerfile, you would have to modify them in your Dockerfile as well. Thanks to the ONBUILD instruction, you don't have to worry about that.
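To see the deferral in action, a downstream Dockerfile built on the ONBUILD image can be as short as this (a sketch; the EXPOSE port is just an assumption about the app, everything else is inherited from the base image):
FROM node:0.12.6-onbuild
# The three ONBUILD triggers (COPY package.json, RUN npm install, COPY . /usr/src/app)
# run right here, in the context of this build
EXPOSE 3000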
Related
I have two Dockerfiles:
Dockerfile1
FROM centos:centos7
WORKDIR /root
ONBUILD COPY ./onbuilddemo.txt /tmp/onbuilddemo.txt
Dockerfile2
FROM onbuilddemo:latest
FROM adoptopenjdk/openjdk8:jre8u352-b05-ea-ubuntu-nightly
EXPOSE 8080
WORKDIR /root
CMD ["npm", "start"]
The image created from Dockerfile1 is onbuilddemo:latest.
Now, when I run the container built from the image created from Dockerfile2, I'm not seeing the file (onbuilddemo.txt) created/available in the /tmp folder.
Can someone please help with what I'm missing? Thanks.
You never used the onbuilddemo:latest image for anything, and if built with BuildKit, this first stage would be completely skipped:
FROM onbuilddemo:latest
FROM adoptopenjdk/openjdk8:jre8u352-b05-ea-ubuntu-nightly
A multi-stage build is used to split build dependencies from the runtime image. It does not merge multiple images together (there's no way to universally do this with arbitrary Linux filesystems that would result in a lot of broken use cases).
You need to remove the second FROM step, or copy the file from the first to the second stage (using COPY --from), or add the ONBUILD definition to the other base image.
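For the second option, a sketch of what Dockerfile2 could look like (keeping your image tags; the trigger fires in the first stage because it uses onbuilddemo:latest as its base, and the file is then copied across explicitly):
FROM onbuilddemo:latest AS trigger
# The ONBUILD COPY trigger runs here, so /tmp/onbuilddemo.txt exists in this stage
FROM adoptopenjdk/openjdk8:jre8u352-b05-ea-ubuntu-nightly
COPY --from=trigger /tmp/onbuilddemo.txt /tmp/onbuilddemo.txt
EXPOSE 8080
WORKDIR /root
CMD ["npm", "start"]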
Note that ONBUILD tends to be a bad idea: it's hard to debug and is rarely documented in the places someone looks to understand the behavior of a build. If you can't run the steps in an entrypoint, consider templating the Dockerfile so that it's clear exactly what's being performed in the build.
I wish to build a docker image that can start a container where I can use both node version 14 and lz4. The dockerfile I have so far is:
FROM node:14-alpine
WORKDIR /app
RUN apk update
RUN apk add --upgrade lz4
node --version and lz4 --help seem to run OK with the docker run command, but I wanted to ask whether there is a specific WORKDIR I should be using in the Dockerfile to follow best practices (if any exist), or does it not matter what I set the WORKDIR to? Note that I'm not sure of all my future requirements, but I may need to use this image to build other images in the future, so I want to ensure WORKDIR is set appropriately.
WORKDIR sets the working directory for the subsequent instructions in the Dockerfile, which makes things a little easier to understand because paths will be relative to that working directory.
By default, the root directory / is the working directory. Without setting any other WORKDIR, all the commands can use absolute paths, which makes things even easier to understand.
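For example, with and without WORKDIR the following COPY instructions end up in the same place (a sketch based on the Dockerfile above; package.json is just an illustrative file):
FROM node:14-alpine
WORKDIR /app
# Relative paths now resolve against /app:
COPY package.json .
# Without the WORKDIR, you would spell out the absolute path instead:
# COPY package.json /app/package.json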
It doesn't really matter much. Besides, you could always change it for your future builds.
Might be a noob question here. I'm playing around with Docker builds for my Meteor project and noticed that in this repo for a popular base image, the author suggests using the devbuild during development and the onbuild for production.
devbuild
ONBUILD RUN bash $METEORD_DIR/lib/install_meteor.sh # install dependencies
ONBUILD COPY ./ /app
ONBUILD RUN bash $METEORD_DIR/lib/build_app.sh # build the app
vs onbuild
ONBUILD COPY ./ /app
ONBUILD RUN bash $METEORD_DIR/lib/install_meteor.sh # install dependencies
ONBUILD RUN bash $METEORD_DIR/lib/build_app.sh # build the app
I assume the former uses Docker's layer caching capability to speed up the build, and he warns that using the devbuild for the final build would result in a much larger image than necessary since it contains the full meteor installation.
This seems to contradict what I've read in guides like this from the Docker Quickstart, and this one, that recommend installing dependencies first so they can be cached.
Is there a difference between the situation presented in the meteor guide vs the node guides, and what's the best way to build these dependencies in production?
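For reference, the dependency-caching pattern those Node guides describe looks roughly like this (a sketch; the base image and filenames are illustrative and not taken from the Meteor image):
FROM node:14-alpine
WORKDIR /app
# Copy only the manifests first, so this layer and the npm install below
# stay cached as long as the dependencies don't change
COPY package.json package-lock.json ./
RUN npm install
# Source changes only invalidate the layers from here on
COPY . .
CMD ["npm", "start"]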
Can we build a Docker container that doesn't have any shell in it? Is it possible to create a container without a shell?
Yes, you can create a container from scratch, which does not contain anything, not even bash. It will only contain the binaries that you copy in at build time; otherwise it will be empty.
FROM scratch
COPY hello /
CMD ["/hello"]
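One thing to keep in mind: with no shell and no libc in the image, the hello binary has to be statically linked. A minimal sketch of producing one with a multi-stage build (assuming a Go source file hello.go in the build context; Go is only used here as an easy way to get a static binary):
FROM golang:1.21-alpine AS build
WORKDIR /src
COPY hello.go .
# CGO disabled so the resulting binary has no libc dependency
RUN CGO_ENABLED=0 go build -o /hello hello.go

FROM scratch
COPY --from=build /hello /
CMD ["/hello"]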
You can use Docker’s reserved, minimal image, scratch, as a starting point for building containers. Using the scratch “image” signals to the build process that you want the next command in the Dockerfile to be the first filesystem layer in your image.
While scratch appears in Docker’s repository on the hub, you can’t pull it, run it, or tag any image with the name scratch. Instead, you can refer to it in your Dockerfile. For example, to create a minimal container using scratch:
Using this as a base image, you can create your own custom image that contains, for example, only the Node runtime and nothing more; for that, you can try scratch-node.
FROM node as builder
WORKDIR /app
COPY package.json package-lock.json index.js ./
RUN npm install --prod
FROM astefanutti/scratch-node
COPY --from=builder /app /
ENTRYPOINT ["node", "index.js"]
I have followed these tutorials to build a Docker image for my Spring Boot application, which uses Maven as its build tool.
I am using a boot2docker VM on top of a Windows 10 machine, cloning my project to the VM from my Bitbucket repository.
https://spring.io/guides/gs/spring-boot-docker/
https://www.callicoder.com/spring-boot-docker-example/
I understand the instructions given, but I failed to build a proper Docker image. Here are the things I tried.
Use the Spotify Maven plugin for the Dockerfile. Try to run ./mvnw to build the JAR as well as the Docker image. But I don't have Java installed in boot2docker, so the Maven wrapper ./mvnw cannot be run.
I tried to build the JAR through the Dockerfile, which is based on the openjdk:8-jdk-alpine image. I added a RUN ./mvnw package instruction to the Dockerfile, then ran docker build -t <my_project> . to build the Docker image.
It fails at the RUN instruction, claiming /bin/sh: mvnw: not found
The command '/bin/sh -c mvnw package' returned a non-zero code: 127
My Dockerfile, located in the same directory as mvnw:
FROM openjdk:8-jdk-alpine
MAINTAINER myname
VOLUME /tmp
RUN ./mvnw package
ARG JAR_FILE=target/myproject-0.0.1-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
For 1, I need to have Java installed in the OS where the Docker engine resides. But I think that's not good practice, because it lowers portability.
For 2, first, I don't know how to run ./mvnw successfully in the Dockerfile. Second, I'm not sure whether it is good practice to build the Spring Boot JAR through the Dockerfile, because I don't see any "Docker for Spring Boot" tutorial telling you to do so.
So, what is the best practice to solve my situation? I'm new to Docker. Comments and answers are appreciated!
You can install Maven and run the compile directly in the build. Typically this would be a multi-stage build to avoid including the entire JDK in your pushed image:
FROM openjdk:8-jdk-alpine as build
RUN apk add --no-cache maven
WORKDIR /java
COPY . /java
RUN mvn package -Dmaven.test.skip=true
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/java/target/myproject-0.0.1-SNAPSHOT.jar"]
The above is a stripped down version of a rework from the same example that I've done in the past. You may need to adjust filenames in your entrypoint, but the key steps are to install maven and run it inside your build.
From your second example I think you are misunderstanding how Docker builds images. When Docker executes RUN ./mvnw package, the file mvnw must exist in the file system of the image being built, which means you should have an instruction like COPY mvnw . in a previous step - that will copy the file from your local filesystem into the image.
You will likely need to copy the entire project structure inside the image, before calling ./mvnw, as the response from @BMitch suggests.
Also, as @BMitch said, to generate a small-sized image it's normally recommended to use a multi-stage build, where the first stage installs every dependency but the final image has only your JAR.
You could try something like below:
# First stage: build fat JAR
# Select base image.
# (The "AS builder" gives a name to the stage that we will need later)
# (I think it's better to use a slim image with Maven already installed instead
# of ./mvnw. Otherwise you might need to give execution rights to your file
# with instructions like "RUN chmod +x mvnw".)
FROM maven:3.6.3-openjdk-8-slim AS builder
# Set your preferred working directory
# (This tells the image what the "current" directory is for the rest of the build)
WORKDIR /opt/app
# Copy everything from you current local directory into the working directory of the image
COPY . .
# Compile, test and package
# (-e gives more information in case of errors)
# (I prefer to also run unit tests at this point. This may not be possible if your tests
# depend on other technologies that you don't wish to install at this point.)
RUN mvn -e clean verify
###
# Second stage: final image containing only the JAR file
# The base image for the final result can be as small as Alpine with a JRE
FROM openjdk:8-jre-alpine
# Once again, the current directory as seen by your image
WORKDIR /opt/app
# Get artifacts from the previous stage and copy them to the new image.
# (If you are confident the only JAR in "target/" is your package, you can avoid using
# the full name of the JAR and instead use something like "*.jar", to avoid updating
# the Dockerfile when the version of your project changes.)
COPY --from=builder /opt/app/target/*.jar ./
# Expose whichever port you use in the Spring application
EXPOSE 8080
# Define the application to run when the Docker container is created.
# Either ENTRYPOINT or CMD.
# (Optionally, you could define a file "entrypoint.sh" that can have a more complex
# startup logic.)
# (Setting "java.security.egd" when running Spring applications is good for security
# reasons.)
ENTRYPOINT java -Djava.security.egd=file:/dev/./urandom -jar /opt/app/*.jar