How to conditionally update a CI/CD job image? - docker

I just got into the (wonderful) world of CI/CD and have working pipelines. They are not optimal, though.
The application is a dockerized website:
the source needs to be compiled by webpack and end up in dist
this dist directory is copied to a docker container
which is then remotely built and deployed
My current setup is quite naïve (I added some comments to show why I believe the various elements are needed/useful):
# I start with a small image
image: alpine

# before the job I need to have npm and docker
# the problem: I need one in one job, and the second one in the other
# I do not need both on both jobs but do not see how to split them
before_script:
  - apk add --update npm
  - apk add docker
  - npm install
  - npm install webpack -g

stages:
  - create_dist
  - build_container
  - stop_container
  - deploy_container

# the dist directory is preserved for the other job which will make use of it
create_dist:
  stage: create_dist
  script: npm run build
  artifacts:
    paths:
      - dist

# the following three jobs are remote and need to be daisy chained
build_container:
  stage: build_container
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .

stop_container:
  stage: stop_container
  script: docker -H tcp://eu13:51515 stop widgets-sentinels
  allow_failure: true

deploy_container:
  stage: deploy_container
  script: docker -H tcp://eu13:51515 run --rm -p 8880:8888 --name widgets-sentinels -d widgets-sentinels
This setup works, but npm and docker are installed in both jobs. This is not needed and slows down the deployment. Is there a way to state that such and such packages need to be added for specific jobs (and not globally to all of them)?
To make it clear: this is not a showstopper (and in reality not likely to be an issue at all), but I fear that my approach to this kind of job automation is incorrect.

You don't necessarily need to use the same image for all jobs. Let me show you one of our pipelines (partially) which does a similar thing, just with composer for php instead of npm:
cache:
  paths:
    - vendor/

build:composer:
  image: registry.example.com/base-images/php-composer:latest # our custom base image with only composer installed, used to build the dependencies
  stage: build dependencies
  script:
    - php composer.phar install --no-scripts
  artifacts:
    paths:
      - vendor/
  only:
    changes:
      - composer.{json,lock,phar} # build the vendor folder only when relevant files change, otherwise use the cached folder from the s3 bucket (configured in the runner config)

build:api:
  image: docker:18 # use the docker image to build the actual application image
  stage: build api
  dependencies:
    - build:composer # reference dependency dir
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
The composer base image contains all necessary packages to run composer, so in your case you'd create a base image for npm:
FROM alpine:latest
RUN apk add --update npm
Then use this image in your create_dist job, and use image: docker:latest in the other jobs.
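Applied to the pipeline from the question, that split could look roughly like this; npm-base is a placeholder name for the alpine + npm image built above, and the stop/deploy jobs would use docker:latest in the same way as build_container:
stages:
  - create_dist
  - build_container
  - stop_container
  - deploy_container

create_dist:
  image: registry.example.com/base-images/npm-base:latest  # placeholder: the alpine + npm base image from the Dockerfile above
  stage: create_dist
  before_script:
    - npm install
    - npm install webpack -g
  script: npm run build
  artifacts:
    paths:
      - dist

build_container:
  image: docker:latest  # only the docker CLI is needed here, no npm
  stage: build_container
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .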

As well as referencing different images for different jobs, you may also try GitLab YAML anchors, which provide reusable templates for the jobs:
.install-npm-template: &npm-template
  before_script:
    - apk add --update npm
    - npm install
    - npm install webpack -g

.install-docker-template: &docker-template
  before_script:
    - apk add docker

create_dist:
  <<: *npm-template
  stage: create_dist
  script: npm run build
  ...

deploy_container:
  <<: *docker-template
  stage: deploy_container
  ...
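On recent GitLab versions the same reuse can also be expressed with the extends keyword instead of YAML anchors, which some people find easier to read; a minimal sketch for the npm template:
.npm-template:
  before_script:
    - apk add --update npm
    - npm install
    - npm install webpack -g

create_dist:
  extends: .npm-template
  stage: create_dist
  script: npm run build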

Try a multi-stage build: you can use intermediate temporary images and copy the generated content into the final docker image. Also, npm should be part of a docker image: create an npm image and use it in the final docker image as the builder image.
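A minimal sketch of that multi-stage approach, assuming the webpack build writes to dist and the final image only has to serve the static files (the node base image and the nginx target are assumptions, not something from the question):
# builder stage: install dependencies and run the webpack build
FROM node:lts-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build            # writes the compiled site to /app/dist

# final stage: only the generated content is copied in
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html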

Related

Gitlab dependency on assets from another task in the same stage without making them downloadable

Summary of Problem
I have a task in gitlab that requires an npm build to run.
This build then generates the static folder that is needed for my docker build for the server, which copies the generated files into the build. I think I can use artifacts and depends_on to make the second task wait on the npm build and get the files it needs, but this makes the artifacts downloadable from the UI, which is not desirable. I found a gitlab issue that seems stale and unlikely to ever go anywhere. Is there any other method I can use?
Dependency build
build-web:
  stage: build
  image: node:17.6.0-slim
  before_script:
    - set -euo pipefail
    - set -x
    - cd web
    - npm install
    - npm run check || true
    - npm run lint || true
  script:
    - npm run build
Server build
build-server:
  stage: build
  tags:
    - shell
  before_script:
    - echo Building server image with tag $CI_COMMIT_REF_NAME
  script:
    - DOCKER_BUILDKIT=1 BUILDKIT_INLINE_CACHE=1 docker build --tag "server:$CI_COMMIT_REF_NAME" -f ./deployment/server/Dockerfile .
Relevant Dockerfile lines
COPY ./api .
COPY ./api/web ./web
Notes/edits
I host my own runners. I use a shell executor for docker build instead of dind
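For reference, the artifact hand-off described in the question would look roughly like this; the artifacts block, the needs keyword and the web/dist path are assumptions about the wiring, and expire_in only limits how long the artifact is kept, it does not hide it from the UI:
build-web:
  stage: build
  image: node:17.6.0-slim
  script:
    - cd web && npm run build
  artifacts:
    paths:
      - web/dist/          # assumed output directory of the npm build
    expire_in: 1 hour      # short-lived, but still downloadable while it exists

build-server:
  stage: build
  needs:
    - job: build-web
      artifacts: true      # pulls the generated files into this job's workspace
  script:
    - docker build --tag "server:$CI_COMMIT_REF_NAME" -f ./deployment/server/Dockerfile .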

Pass docker image between jobs in gitlab runner

I'm trying to build an image in one job and push it to AWS ECR in another; since the steps are different, I'm trying to pass the image as an artifact:
.gitlab-ci.yml:
stages:
  - build
  - push

build_image:
  stage: build
  image: docker
  services:
    - docker:19.03.12-dind
  script:
    # building docker image....
    - mkdir image
    - docker save apis_server > image/apis_server.tar
  artifacts:
    paths:
      - image

push_image:
  stage: push
  image: docker
  services:
    - docker:19.03.12-dind
  before_script:
    - apk add --no-cache python3 py3-pip && pip3 install --upgrade pip && pip3 install --no-cache-dir awscli
  script:
    - ls
    - docker load -i image/apis_server.tar
    - docker images
    # ecr auth and push to repo...
I get the following warning in the pipeline:
Uploading artifacts for successful job
Uploading artifacts...
WARNING: image: no matching files. Ensure that the artifact path is relative to the working directory
The second job fails with the following message:
$ docker load -i image/apis_server.tar
open image/apis_server.tar: no such file or directory
This approach is based on the answer provided here
For your question: use the full directory path for the artifacts.
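A sketch of what that could look like for the build job, assuming the tar file is written under the project checkout; $CI_PROJECT_DIR is GitLab's built-in variable for that directory, and artifacts:paths entries are resolved relative to it:
build_image:
  stage: build
  image: docker
  services:
    - docker:19.03.12-dind
  script:
    - mkdir -p "$CI_PROJECT_DIR/image"
    - docker save apis_server > "$CI_PROJECT_DIR/image/apis_server.tar"
  artifacts:
    paths:
      - image/             # relative to $CI_PROJECT_DIR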
I have some recommendations to speed up your pipeline. If you always install the same packages in your pipeline, build a docker image that already contains those requirements and use that image in your pipeline instead.
If you need to move an image somewhere else, I recommend using Docker Hub or a self-hosted docker registry; it is more efficient, because a registry push/pull only transfers the changed layers, whereas the save/load approach you are using transfers all of the layers.

How to use a script of a Docker container from CI pipeline

Newbie in Docker & Docker containers over here.
I'm trying to figure out how I can run a script that lives inside the image from my Bitbucket Pipelines process.
Some context about where I am and some knowledge
In a Bitbucket Pipelines step you can add any image to run in that specific step. What I already tried, and what works without problems, is for example pulling an image like alpine/node so I can run npm commands in my pipeline script:
definitions:
  steps:
    - step: &runNodeCommands
        image: alpine/node
        name: "Node commands"
        script:
          - npm --version

pipelines:
  branches:
    master:
      - step: *runNodeCommands
This means that each push on the master branch will run a build where, using the alpine/node image, we can run npm commands like npm --version and install packages.
What I've done
Now I'm working with a custom container where I install a few node packages (like eslint) to run commands, e.g. eslint file1.js file2.js
Great!
What I'm trying but don't know how to
I have a local bash script awesomeScript.sh with some input params in my repository, so my bitbucket-pipelines.yml file looks like:
definitions:
  steps:
    - step: &runCommands
        image: my-user/my-container-with-eslint
        name: "Running awesome script"
        script:
          - ./awesomeScript.sh -a $PARAM1 -e $PARAM2

pipelines:
  branches:
    master:
      - step: *runCommands
I'm using the same awesomeScript.sh in different repositories, and I want to move that functionality into my Docker container and get rid of the script in each repository.
How can I build my Dockerfile to be able to run that script "anywhere" where I use the docker image?
PS:
I've been thinking of building a node module and installing it in the Docker image like the eslint module... but I would like to know if this is possible.
Thanks!
If you copy awesomeScript.sh to the my-container-with-eslint Docker image then you should be able to use it without needing the script in each repository.
Somewhere in the Dockerfile for my-container-with-eslint you can copy the script file into the image:
COPY awesomeScript.sh /usr/local/bin/
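A minimal Dockerfile sketch of that, where the FROM line and the eslint install are placeholders for whatever the image already uses, and the rename/chmod step is an assumption so the script can be called as awesomeScript from the PATH:
FROM node:lts-alpine                            # placeholder: whatever base my-container-with-eslint is built from
RUN npm install -g eslint                       # the tooling the image already provides
COPY awesomeScript.sh /usr/local/bin/awesomeScript
RUN chmod +x /usr/local/bin/awesomeScript       # executable and on the PATH, so "awesomeScript" works in the pipeline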
Then in Bitbucket-Pipelines:
definitions:
  steps:
    - step: &runCommands
        image: my-user/my-container-with-eslint
        name: "Running awesome script"
        script:
          - awesomeScript -a $PARAM1 -e $PARAM2

pipelines:
  branches:
    master:
      - step: *runCommands
As peterevans said, if you copy the script to your docker image, then you should be able to use it without needing the script in each repository.
In your Dockerfile add the following line:
COPY awesomeScript.sh /usr/local/bin/ # you may use ADD too
In Bitbucket-Pipelines:
pipelines:
  branches:
    master:
      - step:
          image: <your user name>/<image name>
          name: "Run script from the image"
          script:
            - awesomeScript -a $PARAM1 -e $PARAM2

Gitlab CI with Docker and NPM

I'm trying to set up a basic pipeline in Gitlab that does the following:
run the test command, compile the client, and deploy the application using docker-compose.
The problem comes when I'm trying to use npm install.
My .gitlab-ci.yml file looks like:
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

stages:
  - test
  - build
  - deploy

build:
  stage: build
  script:
    - cd packages/public/client/
    - npm install --only=production
    - npm run build

test:
  stage: test
  only:
    - develop
    - production
  script:
    - echo run tests in this section

step-deploy-production:
  stage: deploy
  only:
    - production
  script:
    - docker-compose up -d --build
  environment: production
  when: manual
And the error is:
Skipping Git submodules setup
$ cd packages/public/client/
$ npm install --only=production
bash: line 69: npm: command not found
ERROR: Job failed: exit status 1
I'm using the latest docker image, so I'm wondering whether I can define a new service in my build stage or whether I should use a different image for the whole process?
Thanks
A new service will not help you, you'll need to use a different image.
You can use a node image just for your build stage, like this:
build:
  image: node:8
  stage: build
  script:
    - cd packages/public/client/
    - npm install --only=production
    - npm run build
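If you also want to avoid starting docker:dind for jobs that never call docker, the image and services can be scoped per job as well; a sketch for the deploy job, unchanged except that the docker bits are declared locally instead of globally:
step-deploy-production:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind          # only this job talks to the docker daemon
  only:
    - production
  script:
    - docker-compose up -d --build
  environment: production
  when: manual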

Docker in bitbucket pipelines

I am creating a node.js (angular/cli) application and want to run the tests in a docker container. I successfully created a docker image and uploaded it to Docker Hub, where an automated build runs after each commit.
The problem is: I don't know what to put in the bitbucket-pipelines.yml file.
I want it to take the newly built container from Docker Hub after each commit and run the tests against it.
Also, I want to minimize the bitbucket build time.
bitbucket-pipelines.yml
pipelines:
  default:
    - step:
        image: [Enter your docker image here, as on hub.docker.com]
        script:
          - npm install
          - npm install -g @angular/cli
          - ng build --prod
          - ls -ltr
          - pwd
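If the goal is to run the tests on every commit and keep the build fast, a test command and Bitbucket's predefined node cache could be added to the step; the image name, the npm test script and the ChromeHeadless flags below are assumptions about the project setup:
pipelines:
  default:
    - step:
        image: my-dockerhub-user/angular-ci:latest   # assumed: the image built on Docker Hub, with Chrome available for headless tests
        caches:
          - node                                     # reuses node_modules between builds to cut npm install time
        script:
          - npm install
          - npm install -g @angular/cli
          - npm test -- --watch=false --browsers=ChromeHeadless
          - ng build --prod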
