Azure DevOps Pipeline Dockerfile COPY --from clause - docker

Problem
Working with Azure DevOps, we use a Dockerfile to build and statically serve an Angular application:
Dockerfile
FROM node:12.14-alpine AS build
WORKDIR /usr/etc/app
COPY *.json ./
RUN npm install
COPY . .
RUN npm run build -- -c stage
FROM node:alpine AS runtime
WORKDIR /app
RUN yarn add express
COPY --from=build /usr/etc/app/dist/production ./dist
COPY --from=build /usr/etc/app/server.js .
ENV NODE_ENV=production
EXPOSE 8080
ENTRYPOINT ["node", "server.js"]
Locally, the image builds as expected. However, running this Dockerfile (or a similar one) on the pipeline gives the following output:
Pipeline Output
Starting: Build frontend image
==============================================================================
Task : Docker
Description : Build, tag, push, or run Docker images, or run a Docker command
Version : 1.187.2
Author : Microsoft Corporation
Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/build/docker
==============================================================================
/usr/bin/docker pull =build /usr/etc/app/server.js .
invalid reference format
/usr/bin/docker inspect =build /usr/etc/app/server.js .
Error: No such object: =build /usr/etc/app/server.js .
[]
/usr/bin/docker build -f /home/***/YYY/app/myagent/_work/1/s/Frontend/Dockerfile --label com.azure.dev.image.system.teamfoundationcollectionuri=XXXX --label com.azure.dev.image.build.sourceversion=6440c30bb386************d370f2bc6387 --label com.azure.dev.image.system.teamfoundationcollectionuri=
Sending build context to Docker daemon 508.4kB
Step 1/18 : FROM node:12.14-alpine AS build
...
# normal build until finish, successful
(note the duplicate teamfoundationcollectionuri labelling, but this is another issue)
Questions
We don't understand:
how and why the first command is constructed (/usr/bin/docker pull =build /usr/etc/app/server.js .)
how and why the second command is constructed (/usr/bin/docker inspect =build /usr/etc/app/server.js .)
why the Docker agent does not recognize the --from clause at first, yet still builds successfully (and correctly)
why the Docker agent warns about an invalid reference format but then goes on to recognise every single instruction correctly.
By the way, all these errors also happen when building the .NET backend (with a similar Dockerfile).
We now DO understand that this problem only happens with task version 1.187.2 (or 0.187.2, see link below), but not with the previous 1.181.0 (resp. 0.181.0).
Additional Sources
All we could find about this problem is an old issue thread from 2018 that has been archived by Microsoft. The only link is via the IP address, with no valid certificate. The user has the exact same problem, but the thread was closed. Interestingly enough, the minor and patch version numbers are identical to ours.

Came across this question while searching for an answer to the same issue. I have spent the last few hours digging through source code for the Docker task and I think I can answer your questions.
It appears that the Docker task tries to parse the Dockerfile to determine the base image, and there is (was) a bug in the task: it looked for any line containing FROM, and so incorrectly picked up the --from=build from the COPY --from line.
It then passes that base image to docker pull and docker inspect prior to calling docker build. The first two commands fail because they're being passed garbage, but the third (docker build) reads the dockerfile correctly and does a pull anyway, so it succeeds.
It looks like this was fixed on 2021-08-17 to only parse lines that start with FROM, so I assume it will make it to DevOps agents soon.
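To illustrate the bug (a sketch in shell, not the task's actual source): a naive substring match for FROM also catches the COPY --from lines, whereas a match anchored to the start of the line finds only real FROM instructions.
# naive match: any line containing "from" also picks up the COPY --from lines
grep -i 'from' Dockerfile
# FROM node:12.14-alpine AS build
# FROM node:alpine AS runtime
# COPY --from=build /usr/etc/app/dist/production ./dist
# COPY --from=build /usr/etc/app/server.js .

# anchored match: only genuine FROM instructions
grep -iE '^[[:space:]]*FROM[[:space:]]' Dockerfile
# FROM node:12.14-alpine AS build
# FROM node:alpine AS runtime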

Related

commit version number in meta.json to git repo when building docker image

I have an application with React as the front end and Node as the back end. In the React public folder we have a meta.json which holds the version number; every time we run npm run build, the version number in that file is updated. We use this method to make sure the website always displays the new release version: we also update the version number in the database, and if the two don't match, the website automatically loads the new version.
We are in the process of shifting to Kubernetes, and the problem I now have is that we have a Dockerfile for React with the following steps:
FROM node:12.18.3 AS build
ENV CI=false
ENV WDS_SOCKET_PORT=0
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install --production
COPY . .
RUN npm run build:development
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY --from=build /app/nginx-custom.conf /etc/nginx/conf.d/default.conf
We are using this Dockerfile in Azure Pipelines to build an image, pushing that Docker image to Azure Container Registry, and using kubectl rollout restart to pull that image and restart the deployment in AKS. After npm run build runs in the Dockerfile, my meta.json file will have the updated version; I want to commit and push that changed file to the Azure repo, so that the next time the pipeline runs it will have the updated version number.
I have done a POC on this item but was not able to find any easy-to-follow steps. I came across the repo https://github.com/ShadowApex/docker-git-push but am not clear on how to execute it properly; any help would be greatly appreciated.
Instead of adding Git into the Docker image (which would only add extra layers to it), you can do something like this: once your image build has completed, copy the JSON out of the Docker image and push it from the CI machine to Git, or to a bucket, or wherever you want to manage it. The command you can use is:
docker create --name container_name image_name
Docker create will create the new container without running it.
The docker container create (or shorthand: docker create) command creates a new container from the specified image, without starting it.
When creating a container, the docker daemon creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT.
This is similar to docker run -d except the container is never started.
So once the container filesystem exists, run a command to copy the file from the container to the CI machine; it's as simple as that. The Docker copy command:
docker cp container_name:/app/build/meta.json .
Now you have the file on the CI machine, and you can upload it to Git, a bucket, or anywhere else.
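Put together, the whole flow on the CI machine might look like this (the image name, container path, and Git details are placeholders for your setup):
# build the image as usual
docker build -t myregistry.azurecr.io/frontend:latest .
# create (but don't start) a container so its filesystem can be read
docker create --name meta-extract myregistry.azurecr.io/frontend:latest
# copy the freshly generated meta.json out of the container
docker cp meta-extract:/app/build/meta.json .
docker rm meta-extract
# commit the updated file back to the repo
git add meta.json
git commit -m "Update release version in meta.json"
git push origin master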

How to start docker build at a certain stage in a multi-stage build?

Our application takes a very long time to compile. Any change to the Dockerfile triggers a full recompile (which takes a very long time). The Dockerfile is a multi-stage build. I'm trying to work on the second stage. Is there any way to tell docker build to begin at the second stage?
FROM debian:latest AS builder
# 10-20 mins worth of stuff here
FROM alpine:latest AS runner
WORKDIR /
COPY --from=builder /work/myapp.zip .
RUN unzip myapp.zip -d /myapp
# and more stuff that I'm working on here
Is there some way to do docker build --begin-with runner?
Docker build resumes from the last changed stage, thanks to the build cache. This related question might help: rebuild docker image from specific step
Actually, the Docker build cache should handle such situations automatically. However, this implementation has its limits: it may not give you exactly what you want, but it may be close.
Check out https://www.baeldung.com/linux/docker-build-cache
You can use --target [STAGE-NAME] with the build command, as described in the documentation, e.g.
docker build -f Dockerfile --target runner -t imagename:1.0 .
This stops the build at the runner stage; stages it depends on (here, builder) are still built, but unchanged ones come from the cache.
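With BuildKit enabled, Docker additionally skips any stage the requested target doesn't depend on, so the same command can do even less work:
DOCKER_BUILDKIT=1 docker build -f Dockerfile --target runner -t imagename:1.0 .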

How can I get the output of the echo command?

I have a Dockerfile with a RUN instruction like this:
RUN echo "$PWD"
When generating the Docker image I only get this in console:
Step 6/17 : RUN echo "$PWD"
---> Using cache
---> 0e27924953b9
How can I get the output of the echo command in my console?
Each instruction in the Dockerfile creates a new layer in the resulting filesystem. Unless a modification is made to your Dockerfile that hasn't previously been encountered, Docker optimizes the build by reusing existing layers that it has built before. You can see these intermediate images with the command docker images --all.
This means that Docker only needs to build from the first changed instruction in the Dockerfile onwards, saving much time on repeated builds of a well-crafted Dockerfile. You've already built this layer in a previous build, so it's being skipped and taken from the cache.
docker build --no-cache .
should prevent the build process from using cached layers.
Change the final path parameter above to suit your environment.
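Note that on newer Docker versions, which build with BuildKit, the step output is collapsed even on a cache miss; adding --progress=plain (combined with --no-cache to force the RUN to re-execute) prints it in full:
DOCKER_BUILDKIT=1 docker build --no-cache --progress=plain .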

How to build a Docker image for Spring Boot application without the JAR at hand

I have followed these tutorials to build a Docker image for my Spring Boot application, which uses Maven as the build tool.
I am using a boot2docker VM on top of a Windows 10 machine, cloning my project to the VM from my Bitbucket repository.
https://spring.io/guides/gs/spring-boot-docker/
https://www.callicoder.com/spring-boot-docker-example/
I understand the instructions, but I failed to build a proper Docker image. Here are the things I tried:
Use the Spotify Maven plugin for the Dockerfile. Try to run ./mvnw to build the JAR as well as the Docker image. But I don't have Java installed in boot2docker, so the Maven wrapper ./mvnw cannot be run.
I tried to build the JAR through the Dockerfile, which is based on the openjdk:8-jdk-alpine image. I added a RUN ./mvnw package instruction in the Dockerfile, then ran docker build -t <my_project> . to build the Docker image.
It fails at the RUN instruction, claiming /bin/sh: mvnw: not found
The command '/bin/sh -c mvnw package' returned a non-zero code: 127
My Dockerfile, located in the same directory as mvnw:
FROM openjdk:8-jdk-alpine
MAINTAINER myname
VOLUME /tmp
RUN ./mvnw package
ARG JAR_FILE=target/myproject-0.0.1-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
For 1, I need to have Java installed in the OS where the Docker engine resides, but I think that's not good practice because it lowers portability.
For 2, first, I don't know how to run ./mvnw successfully in a Dockerfile. Second, I'm not sure whether it is good practice to build the Spring Boot JAR through the Dockerfile, because I don't see any "Docker for Spring Boot" tutorial telling you to do so.
So, what is the best practice to solve my situation? I'm new to Docker. Comments and answers are appreciated!
You can install maven and run the compile directly in the build. Typically this would be a multi-stage build to avoid including the entire jdk in your pushed image:
FROM openjdk:8-jdk-alpine as build
RUN apk add --no-cache maven
WORKDIR /java
COPY . /java
RUN mvn package -Dmaven.test.skip=true
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/java/target/myproject-0.0.1-SNAPSHOT.jar"]
The above is a stripped down version of a rework from the same example that I've done in the past. You may need to adjust filenames in your entrypoint, but the key steps are to install maven and run it inside your build.
From your second example I think you are misunderstanding how Docker builds images. When Docker executes RUN ./mvnw package, the file mvnw must exist in the file system of the image being built, which means you should have an instruction like COPY mvnw . in a previous step - that will copy the file from your local filesystem into the image.
You will likely need to copy the entire project structure into the image before calling ./mvnw, as the response from @BMitch suggests.
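For example, a single-stage variant that does this could look like the sketch below (assuming the standard Maven wrapper layout; paths are illustrative):
FROM openjdk:8-jdk-alpine
WORKDIR /app
# copy the wrapper script, its support files, and the project sources
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src
# the wrapper may need execute permission depending on how it was checked in
RUN chmod +x mvnw && ./mvnw package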
Also, as @BMitch said, to generate a small image it's normally recommended to use a multi-stage build, where the first stage installs every dependency but the final image has only your JAR.
You could try something like below:
# First stage: build fat JAR
# Select base image.
# (The "AS builder" gives a name to the stage that we will need later)
# (I think it's better to use a slim image with Maven already installed instead
# of ./mvnw. Otherwise you may need to give execution rights to your file
# with an instruction like "RUN chmod +x mvnw".)
FROM maven:3.6.3-openjdk-8-slim AS builder
# Set your preferred working directory
# (This tells the image what the "current" directory is for the rest of the build)
WORKDIR /opt/app
# Copy everything from you current local directory into the working directory of the image
COPY . .
# Compile, test and package
# (-e gives more information in case of errors)
# (I prefer to also run unit tests at this point. This may not be possible if your tests
# depend on other technologies that you don't wish to install at this point.)
RUN mvn -e clean verify
###
# Second stage: final image containing only the packaged JAR
# The base image for the final result can be as small as Alpine with a JRE
FROM openjdk:8-jre-alpine
# Once again, the current directory as seen by your image
WORKDIR /opt/app
# Get artifacts from the previous stage and copy them to the new image.
# (If you are confident the only JAR in "target/" is your package, you can avoid
# using the full name of the JAR and instead use something like "*.jar", so you
# don't have to update the Dockerfile when the version of your project changes.)
COPY --from=builder /opt/app/target/*.jar ./
# Expose whichever port you use in the Spring application
EXPOSE 8080
# Define the application to run when the Docker container is created.
# Either ENTRYPOINT or CMD.
# (Optionally, you could define a file "entrypoint.sh" that can have a more complex
# startup logic.)
# (Setting "java.security.egd" when running Spring applications is good for security
# reasons.)
ENTRYPOINT java -Djava.security.egd=file:/dev/./urandom -jar /opt/app/*.jar
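A quick way to try the result (the image tag is a placeholder):
docker build -t myproject:latest .
docker run --rm -p 8080:8080 myproject:latest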

Source files are updated, but CMD does not reflect

I'm new to Docker and am trying to dockerize an app I have. Here is the Dockerfile I am using:
FROM golang:1.10
WORKDIR /go/src/github.com/myuser/pkg
ADD . .
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
RUN dep ensure
CMD ["go", "run", "cmd/pkg/main.go"]
The issue I am running into is that I will update source files on my local machine with some log statements, rebuild the image, and try running it in a container. However, the CMD (go run cmd/pkg/main.go) will not reflect the changes I made.
I looked into the container filesystem and I see that the source files are updated and match what I have locally. But when I run go run cmd/pkg/main.go within the container, I don't see the log statements I added.
I've tried using the --no-cache option when building the image, but that doesn't seem to help. Is this a problem with the golang image, or my dockerfile setup?
UPDATE: I have found the issue. It is related to using dep for vendoring. The vendor folder had outdated files for my package because dep ensure was pulling them from GitHub instead of using the local copies. I will be moving to Go 1.11, which supports Go modules, to fix this.
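For reference, the migration described in the update might look like this (run from the project root; the module path is illustrative):
# initialise Go modules (Go 1.11+) and resolve dependencies
go mod init github.com/myuser/pkg
go mod tidy
# the dep artifacts are no longer needed
rm -rf vendor/ Gopkg.toml Gopkg.lock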
I see several things:
According to your Dockerfile
Maybe you need a dep init before dep ensure
Probably you need to check whether the main.go path is correct.
According to Docker philosophy
In my humble opinion, you should create the image with docker build -t <your_image_name> . (executed where your Dockerfile is), but without the CMD line.
I would then execute your go run command at container start: docker run -d <your_image_name> go run cmd/pkg/main.go, or whatever your command is.
If something goes wrong, you can check exited containers with docker ps -a, and furthermore check the logs with docker logs <your_CONTAINER_name/id>.
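Putting that together (the image and container names are placeholders):
docker build -t mypkg .
docker run -d --name mypkg-run mypkg go run cmd/pkg/main.go
docker logs -f mypkg-run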
Another way to check the logs is to access the container using bash and execute go run manually:
docker run -ti <your_image_name> bash
# go run cmd/pkg/main.go
