Where to put the compiled Go binary? - docker

I am trying to build my first Dockerfile for a Go application and use DroneCI for the build pipeline.
The DroneCI configuration looks as follows:
kind: pipeline
type: docker
name: Build auto git tagger

steps:
- name: test and build
  image: golang
  commands:
  - go mod download
  - go test ./test
  - go build ./cmd/git-tagger
- name: Build docker image
  image: plugins/docker
  pull: if-not-exists
  settings:
    username:
    password:
    repo:
    dockerfile: ./build/ci/Dockerfile
    registry:
    auto_tag: true

trigger:
  branch:
  - master
I have followed the structure convention from https://github.com/golang-standards/project-layout.
The Dockerfile looks as follows so far:
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
The next step is to copy the Go application binary into the container, and here is the question: where should the compiled binary go? At the moment, it ends up in the project folder.

You can specify the output directory and file name with the go build -o flag (note that the flag has to come before the package path). For example:
go build -o ./build/package/foo ./cmd/git-tagger
Then edit your Dockerfile:
- load the binary you've built with COPY or ADD
- run it with ENTRYPOINT or CMD
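A minimal sketch of how the Dockerfile could continue, assuming the binary is built with go build -o ./build/package/git-tagger ./cmd/git-tagger and the Docker build context is the project root (both assumptions, not stated in the question):
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# copy the binary produced by the pipeline's build step into the image
# (build it with CGO_ENABLED=0 so it runs on Alpine)
COPY ./build/package/git-tagger /root/git-tagger
ENTRYPOINT ["/root/git-tagger"]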
P.S. You specified the Dockerfile path as ./build/ci/Dockerfile in your config, but it is in the package dir on the screenshot. Also keep in mind that the repository you linked is just somebody's personal opinion; Go doesn't force any particular structure on you, so it comes down to your company's style standards or your own preferences. It is therefore not terribly important where you put the binary.

Related

How to set different ENV variable when building and deploying Docker Image to Cloud Run?

I have a backend service that I'll need to deploy to Google Cloud Run.
From Google's tutorial on Cloud Run, we get that:
First you need to build your image and send it to Cloud Build.
gcloud builds submit --tag gcr.io/PROJECT-ID/helloworld
Only then you deploy it to Cloud Run:
gcloud run deploy --image gcr.io/PROJECT-ID/helloworld --platform managed
I get the sequence above. But I'll be deploying this service to 2 different environments: TEST and PROD.
So I need a SERVER_ENV variable that should be "PROD" in my production environment, and of course it should be "TEST" in my test environment. This is so my server (an Express server that will be run from the container) knows which database to connect to.
But the problem is that I only have a single Dockerfile:
FROM node:12-slim
ENV SERVER_ENV=PROD
WORKDIR /
COPY ./package.json ./package.json
COPY ./distApp ./distApp
COPY ./distService ./distService
COPY ./public ./public
RUN npm install
ENTRYPOINT npm start
So how can I set different ENV variables while following the build & deploy sequence above? Is there an option for the gcloud builds submit command that lets me override something? Or should I use a different Dockerfile? Anybody got other ideas?
AN IDEA:
Maybe use the Cloud Build configuration file?
cloudbuild.yaml
You can't achieve this without a cloudbuild.yaml file. The gcloud builds submit --tag ... command doesn't accept extra Docker parameters.
Here is an example configuration:
FROM node:12-slim
ARG SERVER_CONF=PROD
ENV SERVER_ENV=$SERVER_CONF
WORKDIR /
COPY ./package.json ./package.json
COPY ./distApp ./distApp
COPY ./distService ./distService
COPY ./public ./public
RUN npm install
ENTRYPOINT npm start
I created a build argument SERVER_CONF. Your ENV will take this value at build time; the default value is PROD.
Now your cloudbuild.yaml file:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--tag=gcr.io/PROJECT-ID/helloworld', '--build-arg=SERVER_CONF=$_SERVER_CONF', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/PROJECT-ID/helloworld']
substitutions:
  _SERVER_CONF: PROD
Use substitution variables to change the environment. Note that you can also set a default value here, which overrides your Dockerfile default. Take care of this!
You can also set the image tag as a substitution variable if you want, as sketched below.
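For example, a variation of the build step above where the image tag also comes from a substitution; _IMAGE_TAG is a hypothetical name used for illustration, not part of the original setup:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--tag=gcr.io/PROJECT-ID/helloworld:$_IMAGE_TAG', '--build-arg=SERVER_CONF=$_SERVER_CONF', '.']
substitutions:
  _SERVER_CONF: PROD
  _IMAGE_TAG: latest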
Finally, here is how to call your Cloud Build:
# With the default server conf (no substitution variables, the file default is used)
gcloud builds submit
# With a specific server conf
gcloud builds submit --substitutions=_SERVER_CONF=TEST
I think that what you are trying to achieve is possible using the ARG instruction in the Dockerfile.
I would set its default value to TEST and then pass the build arg according to the environment you are building for.
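A minimal sketch of how that could look with a plain docker build, assuming the Dockerfile declares ARG SERVER_ENV=TEST and sets ENV SERVER_ENV=$SERVER_ENV (the image name is taken from the question):
# default (TEST) build
docker build -t gcr.io/PROJECT-ID/helloworld .
# production build, overriding the default build arg
docker build --build-arg SERVER_ENV=PROD -t gcr.io/PROJECT-ID/helloworld .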
More docs:
how to use it with Docker Compose - https://docs.docker.com/compose/compose-file/#args
additional documentation - https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact

How to use a script of a Docker container from CI pipeline

Newbie in Docker & Docker containers over here.
I'm trying to figure out how I can run a script that lives inside the image from my Bitbucket Pipelines process.
Some context about where I am and what I already know
In a Bitbucket Pipelines step you can use any image to run that specific step. What I have already tried, and what works without problems, is using an image like alpine/node so I can run npm commands in my pipeline script:
definitions:
  steps:
    - step: &runNodeCommands
        image: alpine/node
        name: "Node commands"
        script:
          - npm --version

pipelines:
  branches:
    master:
      - step: *runNodeCommands
This means that each push to the master branch will run a build that, using the alpine/node image, can run npm commands like npm --version and install packages.
What I've done
Now I'm working with a custom image where I'm installing a few node packages (like eslint) to run commands, e.g. eslint file1.js file2.js
Great!
What I'm trying to do but don't know how
I have a local bash script awesomeScript.sh with some input params in my repository, so my bitbucket-pipelines.yml file looks like:
definitions:
  steps:
    - step: &runCommands
        image: my-user/my-container-with-eslint
        name: "Running awesome script"
        script:
          - ./awesomeScript.sh -a $PARAM1 -e $PARAM2

pipelines:
  branches:
    master:
      - step: *runCommands
I'm using the same awesomeScript.sh in different repositories, and I want to move that functionality into my Docker container and get rid of the script in each repository.
How can I build my Dockerfile so that I can run that script "anywhere" I use the Docker image?
PS:
I've been thinking about building a node module and installing it in the Docker image, like the eslint module... but I would like to know if this approach is possible.
Thanks!
If you copy awesomeScript.sh to the my-container-with-eslint Docker image then you should be able to use it without needing the script in each repository.
Somewhere in the Dockerfile for my-container-with-eslint you can copy the script file into the image:
COPY awesomeScript.sh /usr/local/bin/
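A minimal sketch of what the Dockerfile for my-container-with-eslint could look like; the node:12-alpine base and the global eslint install are assumptions about what the custom image already contains:
FROM node:12-alpine
# bash for the script, plus the lint tooling the pipeline needs
RUN apk add --no-cache bash && npm install -g eslint
# ship the shared script inside the image and make it executable
COPY awesomeScript.sh /usr/local/bin/awesomeScript
RUN chmod +x /usr/local/bin/awesomeScript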
Then in Bitbucket-Pipelines:
definitions:
  steps:
    - step: &runCommands
        image: my-user/my-container-with-eslint
        name: "Running awesome script"
        script:
          - awesomeScript -a $PARAM1 -e $PARAM2

pipelines:
  branches:
    master:
      - step: *runCommands
As peterevans said, if you copy the script into your Docker image, you should be able to use it without needing the script in each repository.
In your Dockerfile, add the following line (you may use ADD instead of COPY):
COPY awesomeScript.sh /usr/local/bin/
In Bitbucket-Pipelines:
pipelines:
  branches:
    master:
      - step:
          image: <your user name>/<image name>
          name: "Run script from the image"
          script:
            - awesomeScript -a $PARAM1 -e $PARAM2

How to ignore folders to send in docker build context

I am facing an issue with a large Docker build context because of my project structure. In my root directory I have a lib folder for common code and a folder for each microservice. Now I want the build for microservice1 to include only the lib folder and to ignore the other microservices.
I am running the docker build command in the root folder, because running the command in the microservice folder gives the error Forbidden path outside the build context.
rootFolder
-- lib
-- microservice1/Dockerfile
-- microservice2/Dockerfile
-- microservice3/Dockerfile
I have two possible solutions but haven't tried them yet:
- add symlinks for lib in each microservice folder
- write a script that, for each docker build, copies the lib folder into the specific microservice folder and then runs docker build
I am going to try the above two solutions. Can anyone suggest a best practice?
You can create a .dockerignore file in your root directory and add
microservice1/
microservice2/
microservice3/
to it. Just like .gitignore does for tracked files, Docker will ignore these folders/files during the build.
Update
You can include a docker-compose.yml file in your root directory; see the docker-compose documentation for all the options you can use during the build process, such as setting environment variables, running a specific command, etc.
version: "3"
services:
microservice1:
build:
context: .
dockerfile: ./microservice1/Dockerfile
volumes:
- "./path/to/share:/path/to/mount/on/container"
ports:
- "<host>:<container>"
links:
- rootservice # defines a dns record in /etc/hosts to point to rootservice
microservice2:
build:
context: .
dockerfile: ./microservice2/Dockerfile
volumes:
- "./path/to/share:/path/to/mount/on/container"
ports:
- "<host>:<container>"
links:
- rootservice # defines a dns record in /etc/hosts to point to rootservice
- microservice1
rootservice:
build:
context: .
dockerfile: ./Dockerfile
volumes:
- "./path/to/share:/path/to/mount/on/container"
ports:
- "<host>:<container>"
depends_on:
- microservice1
- microservice2
ports:
- "<host1>:<container1>"
- "<host2>:<container2>"
This will be your build recipe for your microservices; you can now run docker-compose build to build all your images.
If the only tool you have is Docker, there aren't very many choices. The key problem is that there is only one .dockerignore file per build context. That means you always have to use your project root directory as the Docker context directory (including every service's sources), but you can tell Docker which specific Dockerfile within that to use. (Note that all COPY directives will be relative to rootFolder in this case.)
docker build rootFolder -f microservice1/Dockerfile -t micro/service1:20190831.01
In many languages there is a way to package up the library (C .a, .h, and .so files; Java .jar files; Python wheels; ...). If your language supports that, another option is to build the library, then copy (not symlink) the library into each service's build tree. Using Python's wheel format as an example:
pip wheel ./lib
cp microlib.whl microservice1
docker build microservice1 -t micro/service1:20190831.01
# Dockerfile needs to
# RUN pip install ./microlib.whl
Another useful variant on this is a manual multi-stage build. You can have lib/Dockerfile pick some base image, and then install the library into that base image. Then each service's Dockerfile starts FROM the library image, and has it preinstalled. Using a C library as an example:
# I am lib/Dockerfile
# Build stage
FROM ubuntu:18.04 AS build
RUN apt-get update && apt-get install -y build-essential
WORKDIR /src
COPY ./ ./
RUN ./configure --prefix=/usr/local && make
# This is a typical pattern implemented by GNU Autoconf:
# it actually writes files into /src/out/usr/local/...
RUN make install DESTDIR=/src/out
# Install stage -- service images are based on this
FROM ubuntu:18.04
COPY --from=build /src/out /
RUN ldconfig
# I am microservice1/Dockerfile
ARG VERSION=latest
FROM micro/lib:${VERSION}
# From the base image, there are already
# /usr/local/include/microlib.h and /usr/local/lib/libmicro.so
COPY ...
RUN gcc ... -lmicro
CMD ...
There is also usually an option (again, depending on your language and its packaging system) to upload your built library to some server, possibly one you're running yourself. (A Python pip requirements.txt file can contain an arbitrary HTTP URL for a wheel, for example.) If you do this then you can just declare your library as an ordinary dependency, and this problem goes away.
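As a sketch, a requirements.txt entry for that case could point straight at the uploaded wheel; the URL below is purely hypothetical:
# shared library published to a package server you control
https://packages.example.internal/wheels/microlib-1.0-py3-none-any.whl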
Which of these works better for you depends on your language and runtime, and how much automation of multiple coordinated docker build commands you're willing to do.

Cloud Build, maven package, no target folder

- name: 'gcr.io/cloud-builders/mvn'
  args: ['clean',
         'package',
         '-Ddockerfile.skip',
         '-DskipTests']
- name: 'gcr.io/cloud-builders/mvn'
  args: ['dockerfile:build',
         '-Ddockerfile.skip',
         '-DskipTests']
When I run the two steps above locally, I do get the target folder with a docker folder and the image-name file in it.
It fails on this step:
...
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - -c
  - |
    docker push $(cat /workspace/target/docker/image-name)
cat: /workspace/target/docker/image-name: No such file or directory
I tried target/docker and app/target/docker.
In my Dockerfile:
...
WORKDIR /app
...
ADD target/${JAR_FILE} app.jar
...
Question: how can I see the target folder, and how do I make
docker push $(cat /workspace/target/docker/image-name)
work?
Similar to the answer here: Cloud Build fails to build the simple build step with maven
how to see target folder
There isn't currently a way to inspect the remote workspace for the target folder, but you can debug with cloud-build-local and write the workspace out locally:
https://cloud.google.com/cloud-build/docs/build-debug-locally
https://github.com/GoogleCloudPlatform/cloud-build-local
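A sketch of how that could look, assuming your config file is named cloudbuild.yaml (double-check the flags against the cloud-build-local docs above):
# run the build locally and copy the final /workspace contents to ./debug-workspace
cloud-build-local --config=cloudbuild.yaml --dryrun=false --write-workspace=./debug-workspace .
# then check whether the file the push step expects was actually produced
ls ./debug-workspace/target/docker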
Also make sure that target/ or .jar files are not being ignored by gcloudignore or gitignore:
https://cloud.google.com/sdk/gcloud/reference/topic/gcloudignore
https://github.com/GoogleCloudPlatform/cloud-builders/issues/236
I also wonder whether the docker step is simply not picking up what is being produced by the dockerfile plugin; does dockerfile:push work?
steps:
- name: 'gcr.io/cloud-builders/mvn'
  args: ['dockerfile:build']
- name: 'gcr.io/cloud-builders/mvn'
  args: ['dockerfile:push']

Artifact caching for multistage docker builds

I have a Dockerfile like this:
# build-home
FROM node:10 AS build-home
WORKDIR /usr/src/app
COPY /home/package.json /home/yarn.lock /usr/src/app/
RUN yarn install
COPY ./home ./
RUN yarn build
# build-dashboard
FROM node:10 AS build-dashboard
WORKDIR /usr/src/app
COPY /dashboard/package.json /dashboard/yarn.lock /usr/src/app/
RUN yarn install
COPY ./dashboard ./
RUN yarn build
# run
FROM nginx
EXPOSE 80
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build-home /usr/src/app/dist /usr/share/nginx/html/home
COPY --from=build-dashboard /usr/src/app/dist /usr/share/nginx/html/dashboard
Here I build two React applications and then the build artifacts are put into nginx. To improve build performance, I need to cache the dist folder in the build-home and build-dashboard build stages.
For this I created a volume in docker-compose.yml:
...
web:
  container_name: web
  build:
    context: ./web
  volumes:
    - ./web-build-cache:/usr/src/app
  ports:
    - 80:80
  depends_on:
    - api
I've stopped at this stage because I don't understand how to attach the volume created by docker-compose first to the build-home stage and then to the build-dashboard stage.
Maybe I should create two volumes and attach one to each of the build stages, but how do I do that?
UPDATE:
Initial build.
Home application:
Install modules: 100.91s
Build app: 39.51s
Dashboard application:
Install modules: 100.91s
Build app: 50.38s
Overall time:
real 8m14.322s
user 0m0.560s
sys 0m0.373s
Second build (without code or dependencies change):
Home application:
Install modules: Using cache
Build app: Using cache
Dashboard application:
Install modules: Using cache
Build app: Using cache
Overall time:
real 0m2.933s
user 0m0.309s
sys 0m0.427s
Third build (with small change in code in first app):
Home application:
Install modules: Using cache
Build app: 50.04s
Dashboard application:
Install modules: Using cache
Build app: Using cache
Overall time:
real 0m58.216s
user 0m0.340s
sys 0m0.445s
Initial build of home application without Docker: 89.69s
real 1m30.111s
user 2m6.148s
sys 2m17.094s
Second build of home application without Docker, the dist folder exists on disk (without code or dependencies change): 18.16s
real 0m18.594s
user 0m20.940s
sys 0m2.155s
Third build of home application without Docker, the dist folder exists on disk (with small change in code): 20.44s
real 0m20.886s
user 0m22.472s
sys 0m2.607s
In the Docker container, the third build of the application takes roughly twice as long. This shows that if the result of the first build is already on disk, subsequent builds complete faster. In the Docker container, every build after a code change takes as long as the first one, because there is no dist folder from the previous build.
If you're using multi-stage builds, then there's a catch with the Docker cache: the final image doesn't contain the layers of the intermediate build stages. By using --target and --cache-from together you can save those layers and reuse them on rebuild.
You need something like:
docker build \
  --target build-home \
  --cache-from build-home:latest \
  -t build-home:latest .

docker build \
  --target build-dashboard \
  --cache-from build-dashboard:latest \
  -t build-dashboard:latest .

docker build \
  --cache-from build-dashboard:latest \
  --cache-from build-home:latest \
  -t my-image:latest .
You can find more details at
https://andrewlock.net/caching-docker-layers-on-serverless-build-hosts-with-multi-stage-builds---target,-and---cache-from/
You can't use volumes during an image build, and in any case Docker already does the caching you're asking for. If you leave your Dockerfile as-is and don't try to mount your code as volumes in the docker-compose.yml, you should get caching of the built Javascript files across rebuilds as you expect.
When you run docker build, Docker looks at each step in turn. If the input to the step hasn't changed, the step itself hasn't changed, and any files that are being added haven't changed, then Docker will just reuse the result of running that step previously. In your Dockerfile, if you only change the nginx config, it will skip over all of the Javascript build steps and reuse their result from the previous time around.
(The other relevant technique, which you already have, is to build applications in two steps: first copy in files like package.json and yarn.lock that name dependencies, and install dependencies; then copy in and build your application. Since the "install dependencies" step is frequently time-consuming and the dependencies change relatively infrequently, you want to encourage Docker to reuse the last build's node_modules directory.)
