How to use a Dockerfile in Gitlab CI

Using gitlab-ci for my node/react app, I'm trying to use phusion/passenger-nodejs as the base docker image
I can specify this easily in .gitlab-ci.yml:
image: phusion/passenger-nodejs:latest

variables:
  HOME: /root

cache:
  paths:
    - node_modules/

stages:
  - build
  - test
  - deploy

set_environment:
  stage: build
  script:
    - npm install
  tags:
    - docker

test_node:
  stage: test
  script:
    - npm install
    - npm test
  tags:
    - docker
However, Phusion Passenger expects you to make configuration changes, e.g. enabling Python support or using their special init process, in the Dockerfile.
#FROM phusion/passenger-ruby24:<VERSION>
#FROM phusion/passenger-jruby91:<VERSION>
FROM phusion/passenger-nodejs:<VERSION>
#FROM phusion/passenger-customizable:<VERSION>
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# If you're using the 'customizable' variant, you need to explicitly opt-in
# for features.
#
# N.B. these images are based on https://github.com/phusion/baseimage-docker,
# so anything it provides is also automatically on board in the images below
# (e.g. older versions of Ruby, Node, Python).
#
# Uncomment the features you want:
#
# Ruby support
#RUN /pd_build/ruby-2.0.*.sh
#RUN /pd_build/ruby-2.1.*.sh
#RUN /pd_build/ruby-2.2.*.sh
#RUN /pd_build/ruby-2.3.*.sh
#RUN /pd_build/ruby-2.4.*.sh
#RUN /pd_build/jruby-9.1.*.sh
# Python support.
RUN /pd_build/python.sh
# Node.js and Meteor standalone support.
# (not needed if you already have the above Ruby support)
RUN /pd_build/nodejs.sh
# ...put your own build instructions here...
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Is there a way to use a Dockerfile with gitlab-ci? Is there a good workaround other than apt-get install and adding shell scripts?

Yes, create a second Gitlab repository where you place your Dockerfile. There you add a .gitlab-ci.yml file with a script command that builds your modified image and pushes it to your private registry or the Gitlab embedded Docker registry, e.g.:
script:
  - docker build . -t myregistry:5000/mymodified
  - docker push myregistry:5000/mymodified
Inside your other Gitlab repository, change the image: line accordingly:
image: myregistry:5000/mymodified
Information on the Gitlab embedded Docker registry can be found in the Gitlab documentation.
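For illustration, a minimal sketch of what the image-building repository's .gitlab-ci.yml could look like, assuming a runner that can run Docker (e.g. via a docker:dind service) and using GitLab's predefined registry variables; the job name is a placeholder:

build_image:
  image: docker:latest
  services:
    - docker:dind
  script:
    # log in to the project's embedded registry using the predefined CI variables
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # build the modified base image from the Dockerfile in this repository
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"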

Related

cloud build pass secret env to dockerfile

I am using Google Cloud Build to build a Docker image and deploy it to Cloud Run. The module has private dependencies on GitHub. In the cloudbuild.yaml file I can access secret keys, for example the GitHub token, but I don't know the correct and secure way to pass this token to the Dockerfile.
I was following this official guide, but it only works in the cloudbuild.yaml scope and not in the Dockerfile: Accessing GitHub from a build via SSH keys
cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args: ["build", "-t", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA", "."]
  - name: gcr.io/cloud-builders/docker
    args: ["push", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA"]
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args: [
      "run", "deploy", "$REPO_NAME",
      "--image", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA",
      "--platform", "managed",
      "--region", "us-east1",
      "--allow-unauthenticated",
      "--use-http2",
    ]
images:
  - gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/GITHUB_USER/versions/1
      env: "GITHUB_USER"
    - versionName: projects/$PROJECT_ID/secrets/GITHUB_TOKEN/versions/1
      env: "GITHUB_TOKEN"
Dockerfile
# [START cloudrun_grpc_dockerfile]
# [START run_grpc_dockerfile]
FROM golang:buster as builder
# Create and change to the app directory.
WORKDIR /app
# Create /root/.netrc cred github
RUN echo machine github.com >> /root/.netrc
RUN echo login "GITHUB_USER" >> /root/.netrc
RUN echo password "GITHUB_PASSWORD" >> /root/.netrc
# Config Github, this create file /root/.gitconfig
RUN git config --global url."ssh://git@github.com/".insteadOf "https://github.com/"
# GOPRIVATE
RUN go env -w GOPRIVATE=github.com/org/repo
# Do I need to remove the /root/.netrc file? I do not want this information to be propagated and seen by third parties.
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
# Expecting to copy go.mod and if present go.sum.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
# RUN go build -mod=readonly -v -o server ./cmd/server
RUN go build -mod=readonly -v -o server
# Use the official Debian slim image for a lean production container.
# https://hub.docker.com/_/debian
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM debian:buster-slim
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /server
# Run the web service on container startup.
CMD ["/server"]
# [END run_grpc_dockerfile]
# [END cloudrun_grpc_dockerfile]
After trying for two days I have not found a solution; the simplest thing I could do was to generate the vendor folder, commit it to the repository, and avoid go mod download.
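For reference, the vendoring workaround described above boils down to something like this, run locally before committing (the build command change is an assumption about how the Dockerfile would then consume it):

go mod vendor                                    # writes all dependencies into ./vendor
git add vendor && git commit -m "vendor dependencies"
# and in the Dockerfile, build against the vendor directory instead of downloading:
# RUN go build -mod=vendor -v -o server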
You have several ways to do this.
With Docker, when you run a build, you run it in an isolated environment (that's the principle of isolation), so you don't have access to your environment variables from inside the build process.
To solve that, you can use build args and pass your secret values through those parameters.
But there is a trap: you have to use bash code, not the built-in step args, in Cloud Build. Let me show you:
# Doesn't work
- name: gcr.io/cloud-builders/docker
  secretEnv: ["GITHUB_USER", "GITHUB_TOKEN"]
  args: ["build", "-t", "gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA", "--build-arg", "GITHUB_USER=$GITHUB_USER", "--build-arg", "GITHUB_TOKEN=$GITHUB_TOKEN", "."]

# Working version
- name: gcr.io/cloud-builders/docker
  secretEnv: ["GITHUB_USER", "GITHUB_TOKEN"]
  entrypoint: bash
  args:
    - -c
    - |
      docker build -t gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA --build-arg GITHUB_USER=$$GITHUB_USER --build-arg GITHUB_TOKEN=$$GITHUB_TOKEN .
You can also perform the actions outside of the Dockerfile. It's roughly the same thing: load a container, perform the operation, load another container and continue.
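For illustration, a hedged sketch of the Dockerfile side of that build-arg approach; the ARG names mirror the secrets above, and removing the .netrc file in the same RUN layer is an assumption about how you might keep the credentials out of the builder stage's layers (the final stage copies only the binary anyway):

FROM golang:buster as builder
WORKDIR /app

# Build args supplied via --build-arg (empty unless passed in).
ARG GITHUB_USER
ARG GITHUB_TOKEN

COPY go.* ./

# Write the credentials, download modules, then delete the credentials
# within the same layer so they are not left behind in this stage.
RUN printf 'machine github.com\nlogin %s\npassword %s\n' "$GITHUB_USER" "$GITHUB_TOKEN" > /root/.netrc && \
    go env -w GOPRIVATE=github.com/org/repo && \
    go mod download && \
    rm /root/.netrc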

How to install docker-compose along with openjdk in gitlab-ci file?

I have a spring boot application I want to test via .gitlab-ci.yml.
It's set up already like this:
image: openjdk:12

# services:
#   - docker:dind

stages:
  - build

before_script:
  # - apk add --update python py-pip python-dev && pip install docker-compose
  # - docker version
  # - docker-compose version
  - chmod +x mvnw

build:
  stage: build
  script:
    # - docker-compose up -d
    - ./mvnw package
  artifacts:
    paths:
      - target/rest-SNAPSHOT.jar
The commented-out portions are from the answer to Run docker-compose build in .gitlab-ci.yml, which I noticed uses a completely different Docker image.
Obviously I need Java installed to run my Spring Boot application, so does that mean Docker is just not an option?
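For illustration only, a rough sketch of one way to combine the two: keep the JDK image and add a Docker-in-Docker service. It assumes the runner allows Docker-in-Docker, that curl is available in the image, and it pins an arbitrary docker-compose release; adjust for your base image and runner setup:

image: openjdk:12

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""

stages:
  - build

before_script:
  # fetch a static docker-compose binary instead of using apk/apt,
  # since the package manager differs between base images
  - curl -fsSL "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose
  - chmod +x mvnw

build:
  stage: build
  script:
    - docker-compose up -d
    - ./mvnw package
  artifacts:
    paths:
      - target/rest-SNAPSHOT.jar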

Share Kaniko Cache for Multi Stage Docker Builds with CloudBuild

I am working on a CloudBuild script that builds a multistage Docker image for integration testing. To optimize the build script I opted to use Kaniko. The relevant portions of the Dockerfile and cloudbuild.yaml files are available below.
cloudbuild.yaml
steps:
  # Build BASE image
  - name: gcr.io/kaniko-project/executor:v0.17.1
    id: buildinstaller
    args:
      - --destination=gcr.io/$PROJECT_ID/<MY_REPO>-installer:$BRANCH_NAME
      - --destination=gcr.io/$PROJECT_ID/<MY_REPO>-installer:$SHORT_SHA
      - --cache=true
      - --cache-ttl=24h
      - --cache-repo=gcr.io/$PROJECT_ID/<MY_REPO>/cache
      - --target=installer
  # Build TEST image
  - name: gcr.io/kaniko-project/executor:v0.17.1
    id: buildtest
    args:
      - --destination=gcr.io/$PROJECT_ID/<MY_REPO>-test:$BRANCH_NAME
      - --destination=gcr.io/$PROJECT_ID/<MY_REPO>-test:$SHORT_SHA
      - --cache=true
      - --cache-ttl=24h
      - --cache-repo=gcr.io/$PROJECT_ID/<MY_REPO>/cache
      - --target=test-image
    waitFor:
      - buildinstaller
  # --- REMOVED SOME CODE FOR BREVITY ---
  # Build PRODUCTION image
  - name: gcr.io/kaniko-project/executor:v0.17.1
    id: build
    args:
      - --destination=gcr.io/$PROJECT_ID/<MY_REPO>:$BRANCH_NAME
      - --destination=gcr.io/$PROJECT_ID/<MY_REPO>:$SHORT_SHA
      - --destination=gcr.io/$PROJECT_ID/<MY_REPO>:latest
      - --cache=true
      - --cache-ttl=24h
      - --cache-dir=/cache
      - --target=production-image
    waitFor:
      - test # TODO: This will run after tests which were not included here for brevity
images:
  - gcr.io/$PROJECT_ID/<MY_REPO>
Dockerfile
FROM ruby:2.5-alpine AS installer
# Expose port
EXPOSE 3000
# Set desired port
ENV PORT 3000
# set the app directory var
ENV APP_HOME /app
RUN mkdir -p ${APP_HOME}
WORKDIR ${APP_HOME}
# Install necessary packages
RUN apk add --update --no-cache \
build-base curl less libressl-dev zlib-dev git \
mariadb-dev tzdata imagemagick libxslt-dev \
bash nodejs
# Copy gemfiles to be able to bundle install
COPY Gemfile* ./
#############################
# STAGE 1.5: Test build #
#############################
FROM installer AS test-image
# Set environment
ENV RAILS_ENV test
# Install gems to /bundle
RUN bundle install --deployment --jobs $(nproc) --without development local_gems
# Add app files
ADD . .
RUN bundle install --with local_gems
#############################
# STAGE 2: Production build #
#############################
FROM installer AS production-image
# Set environment
ENV RAILS_ENV production
# Install gems to /bundle
RUN bundle install --deployment --jobs $(nproc) --without development test local_gems
# Add app files
ADD . .
RUN bundle install --with local_gems
# Precompile assets
RUN DB_ADAPTER=nulldb bundle exec rake assets:precompile assets:clean
# Puma start command
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
Since my Docker image is a multi-stage build with 2 separate end stages that share a common base build, I want to share the cache between the common portion and the other two. To accomplish this, I set all builds to share the same cache repository - --cache-repo=gcr.io/$PROJECT_ID/<MY_REPO>/cache. It has worked in all my tests thus far. However, I have been unable to ascertain if this is best practice or if another manner of caching a base image would be recommended. Is this an acceptable implementation?
I have come across Kaniko-warmer but I have been unable to use it for my situation.
Before mentioning any best practices on how to cache your base image, there are some general best practices for optimizing the performance of your build. Since you already use Kaniko and you cache the image in your repository, I believe your implementation already follows those best practices.
The only suggestion I would make is to use Google Cloud Storage to reuse the results of your previous builds. If your build takes a long time, and the files it produces are small and quick to copy to and from Cloud Storage, this would speed up your build even more.
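A hedged sketch of that idea, with a placeholder bucket name and output directory; the first step restores results from a previous build (ignoring failures on the first run) and the last step saves them back:

steps:
  # restore previously produced files, if any (placeholder bucket/path)
  - name: gcr.io/cloud-builders/gsutil
    entrypoint: bash
    args: ['-c', 'gsutil -m cp -r gs://my-build-cache-bucket/output . || true']

  # ... your existing build steps ...

  # save the produced files for the next build
  - name: gcr.io/cloud-builders/gsutil
    args: ['-m', 'cp', '-r', 'output', 'gs://my-build-cache-bucket/output']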
Furthermore, there are some best practices stated in the following article regarding the optimization of your build cache. I believe the most important of them is to:
"position the build steps that change often at the bottom of the Dockerfile. If you put them at the top, Docker cannot use its build cache for the other build steps that are changing less often. Because a new Docker image is usually built for each new version of your source code, add the source code to the image as late as possible in the Dockerfile".
Finally, another thing I would take into consideration is the cache expiration time.
Keep in mind that it must be configured appropriately, so that you do not miss updates to your dependencies but also do not run builds that gain nothing from the cache.
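As a small illustration of the layer-ordering advice quoted above (a hypothetical trimmed-down stage, not the question's actual Dockerfile):

FROM ruby:2.5-alpine
WORKDIR /app

# Rarely-changing steps first, so their layers stay cached:
COPY Gemfile Gemfile.lock ./
RUN bundle install --deployment --without development test

# Frequently-changing source code as late as possible:
COPY . .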
More links you may consider useful (bear in mind that these are not Google sources):
Docker documentation about Multi-stage Builds
Using Multi-Stage Builds to Simplify And Standardize Build Processes

Missing installed dependencies when docker image is used

Here is my Dockerfile
FROM node:10
RUN apt-get -qq update && apt-get -qq -y install bzip2
RUN yarn global add @bluebase/cli && bluebase plugins:add @bluebase/cli-expo && bluebase plugins:add @bluebase/cli-web
RUN bluebase plugins
When the docker file is built it installs all dependencies, and the last command RUN bluebase plugins outputs the list of plugins installed. But when this image is pushed and used in github actions, bluebase is available globally but no plugins are installed. What am I doing wrong?
Github Workflow
name: Development CI
on:
  push:
    # Sequence of patterns matched against refs/heads
    branches:
      - '*'       # Push events on all branches
      - '*/*'
      - '!master'  # Exclude master
      - '!next'    # Exclude next
      - '!alpha'   # Exclude alpha
      - '!beta'    # Exclude beta

jobs:
  web-deploy:
    container:
      image: hashimsohail/bluebase-image
    name: Deploy Web
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Check BlueBase
        run: bluebase # Outputs the list of commands available with bluebase
      - name: Check BlueBase Plugins
        run: bluebase plugins # Outputs no plugins installed
This was a tricky problem! Here is the solution that worked for me. I'll try and explain why below.
jobs:
  web-deploy:
    container:
      image: hashimsohail/bluebase-image
    name: Deploy Web
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Check BlueBase
        run: bluebase
      - name: Check BlueBase Plugins
        run: HOME=/root bluebase plugins
      - name: Check web plugin
        run: HOME=/root bluebase web:build --help
Background
Firstly the Docker image. The command bluebase plugins:add seems to be very dependent on the $HOME environment variable. Your Docker image is built as the root user, so $HOME is /root. The bluebase plugins:add command installs plugin dependencies at $HOME/.cache/@bluebase so they end up at /root/.cache/@bluebase.
Now the jobs.<id>.container feature. When your container is run there is some rather complicated Docker networking and volume mounts that take place. One of those mounts is -v "/home/runner/work/_temp/_github_home":"/github/home". This mounts local files from the host, including a copy of your checked out repository, into the container. Then it changes $HOME to point to /github/home.
Problem
The reason bluebase plugins doesn't work is because it depends on $HOME pointing to /root but now GitHub Actions has changed it to /github/home.
Solutions
A solution I tried was to install the plugins at /github/home instead of /root in the Docker image.
FROM node:10
RUN apt-get -qq update && apt-get -qq -y install bzip2
RUN mkdir -p /github/home
ENV HOME /github/home
RUN yarn global add @bluebase/cli && bluebase plugins:add @bluebase/cli-expo && bluebase plugins:add @bluebase/cli-web
RUN bluebase plugins
The problem with this is that the volume mount that GitHub Actions creates overwrites the /github/home directory. So then I tried a few tricks like symlinks or moving the .cache/@bluebase directory around to avoid it being clobbered by the mount. None of those worked.
So the only solution seemed to be changing $HOME back to /root. This should NOT be done permanently in the workflow because GitHub Actions depends on HOME=/github/home to work correctly. So the solution is to set it temporarily for each command.
HOME=/root bluebase web:build --help
Takeaway
The main takeaway from this is that any tooling pre-built in a container that relies on $HOME pointing to a specific location may not work correctly when used in the jobs.<container_id>.container syntax.
I do not think the issue is with the image; it's easy to confirm on a local image, and you will see that the plugin is available in the Docker image.
Just try to run
docker build -t plugintest .
#then run the image on local system to verify plugin
docker run -it --rm --entrypoint "/bin/sh" plugintest -c "bluebase plugins"
Seems like the issue is with your YML config file.
image: hashimsohail/bluebase-image
name: Deploy Web
runs-on: ubuntu-latest
The line runs-on: ubuntu-latest does not make sense to me; I think it should be
runs-on: hashimsohail/bluebase-image.

How to conditionally update a CI/CD job image?

I just got into the (wonderful) world of CI/CD and have working pipelines. They are not optimal, though.
The application is a dockerized website:
the source needs to be compiled by webpack and end up in dist
this dist directory is copied to a docker container
which is then remotely built and deployed
My current setup is quite naïve (I added some comments to show why I believe the various elements are needed/useful):
# I start with a small image
image: alpine

# before the job I need to have npm and docker
# the problem: I need one in one job, and the second one in the other
# I do not need both on both jobs but do not see how to split them
before_script:
  - apk add --update npm
  - apk add docker
  - npm install
  - npm install webpack -g

stages:
  - create_dist
  - build_container
  - stop_container
  - deploy_container

# the dist directory is preserved for the other job which will make use of it
create_dist:
  stage: create_dist
  script: npm run build
  artifacts:
    paths:
      - dist

# the following three jobs are remote and need to be daisy chained
build_container:
  stage: build_container
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .

stop_container:
  stage: stop_container
  script: docker -H tcp://eu13:51515 stop widgets-sentinels
  allow_failure: true

deploy_container:
  stage: deploy_container
  script: docker -H tcp://eu13:51515 run --rm -p 8880:8888 --name widgets-sentinels -d widgets-sentinels
This setup works, but npm and docker are installed in both jobs. This is not needed and slows down the deployment. Is there a way to state that such and such packages need to be added for specific jobs (and not globally to all of them)?
To make it clear: this is not a show stopper (and in reality not likely to be an issue at all), but I fear that my approach to this kind of job automation is incorrect.
You don't necessarily need to use the same image for all jobs. Let me show you part of one of our pipelines that does a similar thing, just with Composer for PHP instead of npm:
cache:
  paths:
    - vendor/

build:composer:
  image: registry.example.com/base-images/php-composer:latest # use our custom base image where only composer is installed to build the dependencies
  stage: build dependencies
  script:
    - php composer.phar install --no-scripts
  artifacts:
    paths:
      - vendor/
  only:
    changes:
      - composer.{json,lock,phar} # build the vendor folder only when relevant files change, otherwise use the cached folder from the s3 bucket (configured in the runner config)

build:api:
  image: docker:18 # use a docker image to build the actual application image
  stage: build api
  dependencies:
    - build:composer # reference dependency dir
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
The composer base image contains all necessary packages to run composer, so in your case you'd create a base image for npm:
FROM alpine:latest
RUN apk add --update npm
Then, use this image in your create_dist stage, and use image: docker:latest as the image in the other stages.
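Applied to the pipeline from the question, that could look roughly like this (the npm base image name is a placeholder for whatever you push to your registry):

create_dist:
  image: registry.example.com/base-images/npm:latest # placeholder: custom image with npm preinstalled
  stage: create_dist
  script:
    - npm install
    - npm install webpack -g
    - npm run build
  artifacts:
    paths:
      - dist

build_container:
  image: docker:latest
  stage: build_container
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .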
As well as referencing different images for different jobs, you can also try GitLab YAML anchors, which provide reusable templates for the jobs:
.install-npm-template: &npm-template
  before_script:
    - apk add --update npm
    - npm install
    - npm install webpack -g

.install-docker-template: &docker-template
  before_script:
    - apk add docker

create_dist:
  <<: *npm-template
  stage: create_dist
  script: npm run build
  ...

deploy_container:
  <<: *docker-template
  stage: deploy_container
  ...
Try a multi-stage build: you can create intermediate temporary images and copy the generated content into the final Docker image. Also, npm should be part of its own Docker image: create one npm image and use it in the final Docker image as the builder image.
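A rough sketch of that multi-stage idea, with placeholder image names (the nginx runtime image is an assumption; use whatever serves your dist directory):

# Builder stage: npm and webpack exist only here
FROM node:lts-alpine AS builder
WORKDIR /src
COPY package*.json ./
RUN npm install && npm install -g webpack
COPY . .
RUN npm run build                     # produces /src/dist

# Final stage: only the generated content is copied in
FROM nginx:alpine
COPY --from=builder /src/dist /usr/share/nginx/html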
