Docker build outputs not working in Bitbucket pipelines - docker

I'm trying to use Docker's custom build outputs (docker build -o) to export some files into the build environment as artifacts, but even though the build appears to succeed, I can't find the files anywhere on the build agent.
For simplicity, here's a cut-down version of a Dockerfile that uses a scratch image as the final stage, containing a single message file.
FROM debian as build
RUN echo "Hello World" > message
FROM scratch as final
COPY --from=build message .
And here's an example of the script run for our CI pipeline within bitbucket-pipelines.yml:
image: atlassian/default-image:2
pipelines:
  default:
    - step:
        name: Build
        script:
          - docker build -o build-output .
          - ls
          - find / -name "build-output" -type d
        services:
          - docker
Here's the log output from Bitbucket pipelines.
Images used:
build : docker.io/atlassian/default-image@sha256:d8ae266b47fce4078de5d193032220e9e1fb88106d89505a120dfe41cb592a7b
+ docker build -o build-output .
Sending build context to Docker daemon 65.02kB
Step 1/4 : FROM debian as build
latest: Pulling from library/debian
627b765e08d1: Pulling fs layer
627b765e08d1: Verifying Checksum
627b765e08d1: Download complete
627b765e08d1: Pull complete
Digest: sha256:cc58a29c333ee594f7624d968123429b26916face46169304f07580644dde6b2
Status: Downloaded newer image for debian:latest
---> 0980b84bde89
Step 2/4 : RUN echo "Hello World" > message
---> Running in dfcb5471840d
Removing intermediate container dfcb5471840d
---> 245eb3333d6c
Step 3/4 : FROM scratch as final
--->
Step 4/4 : COPY --from=build message .
---> a8805c31d962
Successfully built a8805c31d962
+ ls
Dockerfile
bitbucket-pipelines.yml
+ find / -name "build-output" -type d
As you can see the build-output directory is nowhere to be seen! 👻
Here's a public build with the whole example - https://bitbucket.org/kev_bite/dockerbuildoutput/addon/pipelines/home#!/results/2

Related

docker /bin/sh: 1: .: Can't open /path

I have an sbt project, projectA, under /home/demo/projectA. My Dockerfile resides in /home/demo/ (for some reason I don't want it to be inside projectA), so the hierarchy looks like this:
/home/demo
    Dockerfile
    projectA
I am trying to run the sbt command during the image build. Here are the contents of my Dockerfile:
FROM hseeberger/scala-sbt:11.0.2_2.12.8_1.2.8 as stripecommon
MAINTAINER sara <sarawaheed3191@gmail.com>
WORKDIR /aa
RUN \
. /home/demo/projectA sbt
I am getting this error when building the image
:~/home/demo$ docker build -t testapp .
Sending build context to Docker daemon 1.297GB
Step 1/4 : FROM hseeberger/scala-sbt:11.0.2_2.12.8_1.2.8 as stripecommon
---> 349a7e4f4029
Step 2/4 : MAINTAINER sara <sarawaheed3191@gmail.com>
---> Using cache
---> 8603662d3730
Step 3/4 : WORKDIR /aa
---> Using cache
---> f07ec5bb4d34
Step 4/4 : RUN . /home/demo/projectA sbt
---> Running in 7509ee45f622
/bin/sh: 1: .: Can't open /home/demo/projectA
The command '/bin/sh -c . /home/demo/projectA sbt' returned a non-zero code: 2
What is the right way to do this? I'm a beginner with Docker, so any help will be appreciated.
You need to make sure that projectA exists inside the container. For this, either pull the code from GitHub or copy it in using the COPY or ADD instruction. After that you can build it with sbt.
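For example, a minimal sketch (assuming you build with the context set to /home/demo so that projectA can be copied in; the sbt compile target and /app workdir are illustrative, not from the original question):

```dockerfile
FROM hseeberger/scala-sbt:11.0.2_2.12.8_1.2.8 as stripecommon
WORKDIR /app
# Copy the project into the image; this requires the build context to be
# /home/demo, e.g.  docker build -t testapp /home/demo
COPY projectA /app
RUN sbt compile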

Docker step results to log file - Jenkins

The Dockerfile contains several steps to build and create a docker image.
E.g. RUN dotnet restore, RUN dotnet build and RUN dotnet publish.
Is it possible to export the results/logging of each step to a separate file which can be displayed/formatted in several Jenkins stages?
You can export the Jenkins build to a log file using this plugin: https://github.com/cboylan/jenkins-log-console-log.
But if you want to view each step's log in your Jenkins console log, try this approach:
Create a job that builds your Docker image from a bash script, and run that script from Jenkins.
docker build --compress --no-cache --build-arg DOCKER_ENV=staging --build-arg DOCKER_REPO="${docker_name}" -t "${docker_name}:${docker_tag}" .
If you run this command from Jenkins, or from a bash script, you will see each step's logs as shown below. If you're looking for something more, let me know.
Building in workspace /var/lib/jenkins/workspace/testlog
[testlog] $ /bin/sh -xe /tmp/jenkins8370164159405243093.sh
+ cd /opt/containers/
+ ./scripts/abode_docker.sh build alpine base
verifying docker name: alpine
Docker name verified
verify_config retval= 0
comparing props
LIST: alpine:3.7
Now each step will be displayed under
http://jenkins.domain.com/job/testlog/1/console
Step 1/5 : FROM alpine:3.7
Step 2/5 : COPY id_rsa /root/.ssh/id_rsa
---> 6645bd2838c9
Step 3/5 : COPY supervisord.conf /etc/supervisord.conf
---> 635e37d9503e
.....
Step 5/5 : ONBUILD RUN ls /usr/share/zoneinfo && cp /usr/share/zoneinfo/Europe/Brussels /etc/localtime && echo "US/Eastern" > /etc/timezone && date
---> Running in 7b8517d90264
Removing intermediate container 7b8517d90264
---> 3ead0f40b7b4
Successfully built 3ead0f40b7b4
Successfully tagged alpine:3.7
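The answer above streams everything into one console log; the per-step split the question asks about could be scripted around the classic builder's "Step N/M :" markers. A sketch (the sample log, awk program, and step-logs directory name are illustrative; in a real job you would pipe the output of docker build into the awk command instead of reading a saved log):

```shell
# Write a sample classic-builder log to a file (stand-in for `docker build` output).
mkdir -p step-logs
cat <<'EOF' > build.log
Step 1/2 : FROM alpine:3.7
 ---> 6645bd2838c9
Step 2/2 : RUN echo "Hello"
 ---> Running in 7b8517d90264
Hello
 ---> 3ead0f40b7b4
Successfully built 3ead0f40b7b4
EOF

# Every line from "Step N/M" onward is appended to step-logs/step-N.log,
# so each build step ends up in its own file.
awk '/^Step [0-9]+\//{n++} n>0{print > ("step-logs/step-" n ".log")}' build.log

ls step-logs
```

Each resulting file can then be archived or shown in a separate Jenkins stage.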

How to bust the cache for the FROM line of a Dockerfile

I have a Dockerfile like this:
FROM python:2.7
RUN echo "Hello World"
When I build this the first time with docker build -f Dockerfile -t test ., or build it with the --no-cache option, I get this output:
Sending build context to Docker daemon 40.66MB
Step 1/2 : FROM python:2.7
---> 6c76e39e7cfe
Step 2/2 : RUN echo "Hello World"
---> Running in 5b5b88e5ebce
Hello World
Removing intermediate container 5b5b88e5ebce
---> a23687d914c2
Successfully built a23687d914c2
My echo command executes.
If I run it again without busting the cache, I get this:
Sending build context to Docker daemon 40.66MB
Step 1/2 : FROM python:2.7
---> 6c76e39e7cfe
Step 2/2 : RUN echo "Hello World"
---> Using cache
---> a23687d914c2
Successfully built a23687d914c2
Successfully tagged test-requirements:latest
Cache is used for Step 2/2, and Hello World is not executed. I could get it to execute again by using --no-cache. However, each time, even when I am using --no-cache it uses a cached python:2.7 base image (although, unlike when the echo command is cached, it does not say ---> Using cache).
How do I bust the cache for the FROM python:2.7 line? I know I can do FROM python:latest, but that also seems to just cache whatever the latest version is the first time you build the Dockerfile.
If I understood the context correctly, you can use --pull with docker build to get the latest base image -
$ docker build -f Dockerfile.test -t test . --pull
So using both --no-cache and --pull will create a completely fresh image from the Dockerfile -
$ docker build -f Dockerfile.test -t test . --pull --no-cache
Issue - https://github.com/moby/moby/issues/4238
FROM pulls an image from the registry (DockerHub in this case).
After the image is pulled to produce your build, you will see it if you run docker images.
You may remove it by running docker rmi python:2.7.

Provide two Docker images from the same Dockerfile

I am setting up an automated build from which I would like to produce 2 images.
The use-case is in building and distributing a library:
- one image with the dependencies which will be reused for building and testing on Travis
- one image to provide the built software libs
Basically, I need to be able to push an image of the container at a certain point (before building) and one later (after building and installing).
Is this possible? I did not find anything relevant in Dockerfile docs.
You can do that using Docker multi-stage builds. Have two Dockerfiles:
Dockerfile
FROM alpine
RUN apk update && apk add gcc
RUN echo "This is a test" > /tmp/builtfile
Dockerfile-prod
FROM myapp:testing as source
FROM alpine
COPY --from=source /tmp/builtfile /tmp/builtfile
RUN cat /tmp/builtfile
build.sh
docker build -t myapp:testing .
docker build -t myapp:production -f Dockerfile-prod .
To explain: we build the image with its dependencies first. Then, in Dockerfile-prod, we include a FROM of our previously built image and copy the built file into the production image.
Truncated output from my build
vagrant#vagrant:~/so$ ./build.sh
Step 1/3 : FROM alpine
Step 2/3 : RUN apk update && apk add gcc
Step 3/3 : RUN echo "This is a test" > /tmp/builtfile
Successfully tagged myapp:testing
Step 1/4 : FROM myapp:testing as source
Step 2/4 : FROM alpine
Step 3/4 : COPY --from=source /tmp/builtfile /tmp/builtfile
Step 4/4 : RUN cat /tmp/builtfile
This is a test
Successfully tagged myapp:production
For more information refer to https://docs.docker.com/engine/userguide/eng-image/multistage-build/#name-your-build-stages
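Since Docker 17.05 you can also keep both stages in a single Dockerfile and select one at build time with docker build --target (a sketch; the stage names testing and production are illustrative):

```dockerfile
# Single Dockerfile containing both stages
FROM alpine AS testing
RUN apk update && apk add gcc
RUN echo "This is a test" > /tmp/builtfile

FROM alpine AS production
COPY --from=testing /tmp/builtfile /tmp/builtfile
```

Then build each image from the same file:
docker build --target testing -t myapp:testing .
docker build --target production -t myapp:production .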

How to flatten a Docker image?

I made a Docker container which is fairly large. When I commit the container to create an image, the image is about 7.8 GB. But when I export the container (not save the image!) to a tarball and re-import it, the image is only 3 GB. Of course the history is lost, but that's OK for me, since the image is "done" in my opinion and ready for deployment.
How can I flatten an image/container without exporting it to the disk and importing it again? And: Is it a wise idea to do that or am I missing some important point?
Now that Docker has released the multi-stage builds in 17.05, you can reformat your build to look like this:
FROM buildimage as build
# your existing build steps here
FROM scratch
COPY --from=build / /
CMD ["/your/start/script"]
The result will be your build environment layers are cached on the build server, but only a flattened copy will exist in the resulting image that you tag and push.
Note, you would typically reformulate this to have a complex build environment and only copy over a few directories. Here's an example with Go that makes a single-binary image from source code with a single build command, without installing Go on the host or compiling outside of Docker:
$ cat Dockerfile
ARG GOLANG_VER=1.8
FROM golang:${GOLANG_VER} as builder
WORKDIR /go/src/app
COPY . .
RUN go-wrapper download
RUN go-wrapper install
FROM scratch
COPY --from=builder /go/bin/app /app
CMD ["/app"]
The Go file is a simple hello world:
$ cat hello.go
package main
import "fmt"
func main() {
fmt.Printf("Hello, world.\n")
}
The build creates both environments, the build environment and the scratch one, and then tags the scratch one:
$ docker build -t test-multi-hello .
Sending build context to Docker daemon 4.096kB
Step 1/9 : ARG GOLANG_VER=1.8
--->
Step 2/9 : FROM golang:${GOLANG_VER} as builder
---> a0c61f0b0796
Step 3/9 : WORKDIR /go/src/app
---> Using cache
---> af5177aae437
Step 4/9 : COPY . .
---> Using cache
---> 976490d44468
Step 5/9 : RUN go-wrapper download
---> Using cache
---> e31ac3ce83c3
Step 6/9 : RUN go-wrapper install
---> Using cache
---> 2630f482fe78
Step 7/9 : FROM scratch
--->
Step 8/9 : COPY --from=builder /go/bin/app /app
---> Using cache
---> 5645db256412
Step 9/9 : CMD /app
---> Using cache
---> 8d428d6f7113
Successfully built 8d428d6f7113
Successfully tagged test-multi-hello:latest
Looking at the images, only the single binary is in the image being shipped, while the build environment is over 700MB:
$ docker images | grep 2630f482fe78
<none> <none> 2630f482fe78 6 days ago 700MB
$ docker images | grep 8d428d6f7113
test-multi-hello latest 8d428d6f7113 6 days ago 1.56MB
And yes, it runs:
$ docker run --rm test-multi-hello
Hello, world.
From Docker 1.13 onward, you can use the --squash flag.
Before version 1.13:
To my knowledge, you cannot do this using the Docker API. docker export and docker import are designed for this scenario, as you yourself already mention.
If you don't want to save to disk, you could probably pipe the output stream of export into the input stream of import. I have not tested this, but try:
docker export red_panda | docker import - exampleimagelocal:new
Take a look at docker-squash
Install with:
pip install docker-squash
Then, if you have an image, you can squash all of its layers into one with:
docker-squash -f <nr_layers_to_squash> -t new_image:tag existing_image:tag
A quick one-liner that I find useful to squash all layers:
docker-squash -f $(($(docker history $IMAGE_NAME | wc -l | xargs)-1)) -t ${IMAGE_NAME}:squashed $IMAGE_NAME
Build the image with the --squash flag:
https://docs.docker.com/engine/reference/commandline/build/#squash-an-images-layers---squash-experimental
Also consider mopping up unneeded files, such as the apt cache:
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
