GitHub Actions: run multiple jobs in the same Docker container

I'm learning to use GitHub Actions to run multiple jobs with Docker, and this is what I have so far:
The GitHub Actions yml file is shown below. There are 2 jobs: job0 builds an image from Dockerfile0 and job1 builds an image from Dockerfile1.
# .github/workflows/main.yml
name: docker CI
on: push
jobs:
  job0:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and Run
        run: docker build . --file Dockerfile0 --tag job0
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and Run
        run: docker build . --file Dockerfile1 --tag job1
Dockerfile0 and Dockerfile1 share basically the same content, except for the argument in the last line:
FROM ubuntu:20.04
ADD . /docker_ci
RUN apt-get update -y
RUN apt-get install -y ... ...
WORKDIR /docker_ci
RUN python3 script.py <arg>
I wonder: can I build a Docker image in the first job, and then have multiple other jobs execute commands inside the image built by that first job? That way I wouldn't have to keep multiple Dockerfiles and could save some image-building time.
I would prefer to build my image locally from a Dockerfile, so I hope to avoid pulling a container from Docker Hub.
The documentation on runs for Docker container actions looks relevant, but I have trouble finding an example that runs such an action locally (without publishing it).

It definitely sounds like you should not build two different images - not for CI, and not for local development purposes (if it matters).
From the details you have provided, I would consider the following approach:
Define a Dockerfile with an ENTRYPOINT which is the lowest common denominator for your needs (it can be bash or python script.py).
In GitHub Actions, have a single job with multiple steps - one for building the image, and the others for running it with arguments.
For example:
FROM ubuntu
RUN apt-get update && apt-get install -y python3
WORKDIR /app
COPY script.py .
ENTRYPOINT ["python3", "script.py"]
An image built from this Dockerfile can be run with any arguments, which are passed on to the script.py entrypoint:
$ docker run --rm -it imagename some arguments
A sample GitHub Actions config might look like this:
jobs:
  jobname:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the image
        run: docker build --tag job .
      - name: Test 1
        run: docker run --rm job arg1
      - name: Test 2
        run: docker run --rm job arg2
If you insist on separating these into different jobs, your easiest option is still to rebuild the image in each job (from a single shared Dockerfile): sharing a Docker image built in one job with another job is a more complicated task that I would recommend avoiding.
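For completeness, if the jobs really must stay separate, one workaround is to save the built image as a tar archive and pass it between jobs as a workflow artifact. A rough sketch, assuming actions/upload-artifact and actions/download-artifact; the job and artifact names are illustrative:
jobs:
  build-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the image
        run: docker build --tag job .
      - name: Save the image to a tar archive
        run: docker save --output /tmp/job.tar job
      - name: Upload the image artifact
        uses: actions/upload-artifact@v3
        with:
          name: docker-image
          path: /tmp/job.tar
  run-arg1:
    needs: build-image
    runs-on: ubuntu-latest
    steps:
      - name: Download the image artifact
        uses: actions/download-artifact@v3
        with:
          name: docker-image
          path: /tmp
      - name: Load and run the image
        run: |
          docker load --input /tmp/job.tar
          docker run --rm job arg1
Whether this is faster than simply rebuilding from one shared Dockerfile depends on the image size and the artifact upload/download time, which is why the single-job approach above is usually the simpler choice.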


Cloud Build, Container Registry, Cloud Run: Run tests without exposing env var

Cloud Build does the following:
Build the image from the Dockerfile (see Dockerfile below)
Push the image to Container Registry
Update the service in Cloud Run
My issue is the following:
As I'm running my tests at build time, I need my MONGODB_URI secret at build time, but I've read that using --build-arg is not a safe way to pass secrets.
I could run npm install and npm run test in the Cloud Build container instead, but that would make the build longer, as I'd have to run npm install twice.
Is there a way to run npm install only once without exposing secrets?
Dockerfile
FROM node:16
COPY . ./
WORKDIR /
RUN npm install
ARG env
ARG mongodb_uri
ENV ENVIRONMENT=$env
ENV DB_REMOTE_PROD=$mongodb_uri
RUN npm run test
RUN npm run build
CMD ["node", "./build/index.js"]
And my cloudbuild.yaml config:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - >-
        docker build --no-cache -t
        $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
        --build-arg ENVIRONMENT=staging --build-arg mongodb_uri=$$MONGODB_URI -f
        Dockerfile .
    id: Build
    entrypoint: bash
    secretEnv:
      - MONGODB_URI
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - '-c'
      - >-
        gcloud run services update $_SERVICE_NAME --platform=managed
        --image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID
        --region=$_DEPLOY_REGION --update-env-vars=ENVIRONMENT=staging
        --update-env-vars=DB_REMOTE_PROD=$$MONGODB_URI --quiet
    id: Deploy
    entrypoint: bash
    secretEnv:
      - MONGODB_URI
images:
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
  logging: CLOUD_LOGGING_ONLY
substitutions:
  _TRIGGER_ID: 44b16efe-0219-41af-b32b-9b98438728c3
  _GCR_HOSTNAME: eu.gcr.io
  _PLATFORM: managed
  _SERVICE_NAME: app-staging
  _DEPLOY_REGION: europe-southwest1
availableSecrets:
  secretManager:
    - versionName: projects/PROJECT_ID/secrets/mongodb_app_staging/versions/1
      env: MONGODB_URI
With my method, the secret is not only exposed in the container (since I pass it as a build arg), but also to all users with access to Cloud Run...
Your container build step is not so bad. You provide your secret to your container, but it is not visible in the build config; only a reference is passed.
You can hardly do better. One alternative would be to access the secret directly inside the Dockerfile, but I'm not sure the security gain is worth the added Dockerfile complexity. (If you want more details, let me know and I can share a sample.)
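For reference, a hedged sketch of what that could look like with a Docker BuildKit secret mount, so the value is available to a single RUN instruction without ending up in a layer or a build arg; it assumes the tests read DB_REMOTE_PROD from the environment:
# syntax=docker/dockerfile:1
FROM node:16
WORKDIR /app
COPY . ./
RUN npm install
ARG env
ENV ENVIRONMENT=$env
# The secret is mounted at /run/secrets/mongodb_uri for this instruction only
# and is not persisted in any image layer
RUN --mount=type=secret,id=mongodb_uri \
    DB_REMOTE_PROD="$(cat /run/secrets/mongodb_uri)" npm run test
RUN npm run build
CMD ["node", "./build/index.js"]
and in the Build step of cloudbuild.yaml (entrypoint: bash, with MONGODB_URI still listed in secretEnv; $$ is Cloud Build's escaping for a literal $):
echo -n "$$MONGODB_URI" > /tmp/mongodb_uri
DOCKER_BUILDKIT=1 docker build \
  --secret id=mongodb_uri,src=/tmp/mongodb_uri \
  -t $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA .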
About Cloud Run, you are absolutely right: exposing a secret in plain text in an environment variable is not acceptable.
For that, you can use the Cloud Run integration with Secret Manager. Instead of doing this
gcloud run services update $_SERVICE_NAME --platform=managed
--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
--labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID
--region=$_DEPLOY_REGION --update-env-vars=ENVIRONMENT=staging
--update-env-vars=DB_REMOTE_PROD=$$MONGODB_URI --quiet
You can do this (and you no longer need to set the secret as an env var of your Cloud Build deploy step):
gcloud run services update $_SERVICE_NAME --platform=managed
--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
--labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID
--region=$_DEPLOY_REGION --update-env-vars=ENVIRONMENT=staging
--update-secrets=DB_REMOTE_PROD=mongodb_app_staging:1 --quiet
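Note that for --update-secrets to work, the Cloud Run service's runtime service account needs permission to read the secret. Something along these lines, where SERVICE_ACCOUNT_EMAIL is a placeholder for your runtime service account:
gcloud secrets add-iam-policy-binding mongodb_app_staging \
  --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
  --role="roles/secretmanager.secretAccessor"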

GitHub Actions, MkDocs and Docker containers not playing nicely

I am having trouble getting MkDocs to work within a container run by GitHub Actions on commit.
Hi all,
I have been trying to get my Python code documentation up on GitHub. I have managed to do this via GitHub Actions running
mkdocs gh-deploy --force
using the below GitHub action workflow:
name: ci
on:
  push:
    branches:
      - master
      - main
permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: 3.x
      - run: pip install mkdocs
      - run: pip install mkdocs-material
      - run: pip install mkdocstrings[python]
      - run: mkdocs gh-deploy --force --config-file './docs/mkdocs.yml'
The issue with this is that mkdocstrings did not work, so no source code was shown on the webpage. I have therefore made a Docker container that has access, via a volume bind, to the .github folder on my local computer.
Dockerfile:
FROM ubuntu:20.04
# This stops apt-get from asking for a geographical location
ARG DEBIAN_FRONTEND=noninteractive
WORKDIR /
COPY requirements.txt /
# TODO: #1 Maybe should not use update (as this can change environment from update to update)
RUN apt-get update -y
RUN apt-get install -y python3.10 python3-pip git-all expect
RUN pip install -r requirements.txt
Docker compose:
version: "3.9"
services:
  mkdocs:
    build: .
    container_name: mkdocs
    ports:
      - 8000:8000
    env_file:
      - ../.env
    volumes:
      - ../:/project
    working_dir: /project/docs
    command: sh -c "./gh-deploy.sh"
This works when I run the Docker container on my computer, but of course when it runs as a workflow on GitHub Actions it does not have access to a .github folder. The GitHub Action is:
name: dockerMkdocs
on:
  push:
    branches:
      - master
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      GH_user: ${{ secrets.GH_user }}
      GH_token: ${{ secrets.GH_token }}
    steps:
      - uses: actions/checkout@v2
      - name: Build the Docker image and run
        run: docker compose --file ./docs/Docker-compose_GA.yml up
Does anyone know how mkdocs knows it is running in a GitHub Action in the first example above, but then does not have access to the same "environment" when running inside a Docker container? If I could answer this, I could get mkdocs gh-deploy --force to work within GitHub Actions and speed up CI/CD.
My GitHub repo is at: https://github.com/healthENV/healthENVsandbox
Many thanks
I think you have two options:
1. Run the entire job inside of a container
In that case, the checkout action will fetch your repository and the script you run can find the necessary files, because all steps in the job are executed inside the container (see the sketch after these two options).
2. Mount the $GITHUB_WORKSPACE folder
Mount the folder with the checked-out repo into the container. You already mount a folder as the project folder, but it seems that is not the correct one. You can run a check to see what the current folder is before you run docker compose (and maybe an extra one inside the script as well).
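For option 1, a minimal sketch of the workflow, assuming the image has been pushed somewhere the runner can pull it; the image name is illustrative, not the repo's actual image:
name: dockerMkdocs
on:
  push:
    branches:
      - master
      - main
permissions:
  contents: write
jobs:
  build:
    runs-on: ubuntu-latest
    # Every step below, including the checkout, runs inside this container
    container:
      image: ghcr.io/healthenv/mkdocs-builder:latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy the docs
        run: mkdocs gh-deploy --force --config-file ./docs/mkdocs.yml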

Run docker commands from gitlab-ci

I have this gitlab-ci file:
services:
  - docker:18.09.7-dind
variables:
  SONAR_TOKEN: "$PROJECT_SONAR_TOKEN"
  GIT_DEPTH: 0
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_DRIVER: overlay2
sonarqube-check:
  image: maven:latest
  stage: test
  before_script:
    - "docker version"
    - "mkdir $PWD/.m2"
    - "cp -f /cache/settings.xml $PWD/.m2/settings.xml"
  script:
    - mvn $MAVEN_CLI_OPTS clean verify sonar:sonar -Dsonar.qualitygate.wait=true -Dsonar.login=$SONAR_TOKEN -Dsonar.projectKey="project-key"
  after_script:
    - "rm -rf $PWD/.m2"
  allow_failure: false
  only:
    - merge_requests
For some reason the docker-in-docker service does not make the docker binary available (the docker version command in before_script fails):
/bin/bash: line 111: docker: command not found
I'm wondering if there is a way of doing this inside the gitlab-ci file, because I need to run Docker for the tests. Is there an image that contains both the Maven and Docker binaries, or will I have to create my own Docker image?
It has to be all in one stage; I cannot divide it into two stages (or at least I don't know how to compile with Maven in one stage and run the tests with a Docker image in another).
Thank you!
As you correctly pointed out, you need the mvn and docker binaries in the image you are using for that GitLab CI job.
The quickest win is probably to install Docker into your maven:latest build image at run time, in the before_script section:
before_script:
  - apt-get update && apt-get install -y docker.io
  - docker version
If that's slowing down your job too much you might want to build your own custom docker image that contains both Maven and Docker.
Also have a look at the article about dind on GitLab if you end up moving to Docker 19.03+.
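If you go the custom-image route, a minimal sketch of such an image; the base tag is an assumption (any Debian-based Maven tag should work the same way):
# Custom CI image with both Maven and the Docker CLI
FROM maven:3.8-openjdk-17
RUN apt-get update && \
    apt-get install -y --no-install-recommends docker.io && \
    rm -rf /var/lib/apt/lists/*
Point the job's image: at this custom image and keep the docker:dind service and the DOCKER_HOST variable as they are.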

How to use a script from a Docker container in a CI pipeline

Newbie in Docker & Docker containers over here.
I'm trying to figure out how I can run a script that lives inside an image from my Bitbucket Pipelines process.
Some context about where I am and what I know:
In a Bitbucket Pipelines step you can pick any image to run for that specific step. What I have already tried, and what works without problems, is using an image like alpine/node so I can run npm commands in my pipeline script:
definitions:
  steps:
    - step: &runNodeCommands
        image: alpine/node
        name: "Node commands"
        script:
          - npm --version
pipelines:
  branches:
    master:
      - step: *runNodeCommands
This means that each push to the master branch will run a build where, using the alpine/node image, we can run npm commands like npm --version and install packages.
What I've done
Now I'm working with a custom container in which I install a few node packages (like eslint) to run commands, e.g. eslint file1.js file2.js
Great!
What I'm trying but don't know how to
I have a local bash script awesomeScript.sh with some input params in my repository, so my bitbucket-pipelines.yml file looks like:
definitions:
  steps:
    - step: &runCommands
        image: my-user/my-container-with-eslint
        name: "Running awesome script"
        script:
          - ./awesomeScript.sh -a $PARAM1 -e $PARAM2
pipelines:
  branches:
    master:
      - step: *runCommands
I'm using the same awesomeScript.sh in different repositories, and I want to move that functionality into my Docker container and get rid of the script in each repository.
How can I build my Dockerfile so that I can run that script "anywhere" I use the Docker image?
PS:
I've been thinking of building a node module and installing it in the Docker image, like the eslint module... but I would like to know if this approach is possible.
Thanks!
If you copy awesomeScript.sh to the my-container-with-eslint Docker image then you should be able to use it without needing the script in each repository.
Somewhere in the Dockerfile for my-container-with-eslint you can copy the script file into the image:
COPY awesomeScript.sh /usr/local/bin/awesomeScript
Then in Bitbucket-Pipelines:
definitions:
  steps:
    - step: &runCommands
        image: my-user/my-container-with-eslint
        name: "Running awesome script"
        script:
          - awesomeScript -a $PARAM1 -e $PARAM2
pipelines:
  branches:
    master:
      - step: *runCommands
As peterevans said, if you copy the script into your Docker image, you can use it without needing the script in each repository.
In your Dockerfile add the following line:
COPY awesomeScript.sh /usr/local/bin/awesomeScript # you may use ADD too
In Bitbucket-Pipelines:
pipelines:
  branches:
    master:
      - step:
          image: <your user name>/<image name>
          name: "Run script from the image"
          script:
            - awesomeScript -a $PARAM1 -e $PARAM2
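For reference, a minimal Dockerfile sketch for such an image; the base image and the globally installed eslint are assumptions (if the script needs bash rather than sh, install it in the image as well):
FROM node:16-alpine
# Tools the pipeline needs, e.g. eslint
RUN npm install -g eslint
# Bake the shared script into the image, dropping the extension so the
# pipeline can call it simply as `awesomeScript`
COPY awesomeScript.sh /usr/local/bin/awesomeScript
RUN chmod +x /usr/local/bin/awesomeScript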

How to conditionally update a CI/CD job image?

I just got into the (wonderful) world of CI/CD and have working pipelines. They are not optimal, though.
The application is a dockerized website:
the source needs to be compiled by webpack and end up in dist
this dist directory is copied to a docker container
which is then remotely built and deployed
My current setup is quite naïve (I added some comments to show why I believe the various elements are needed/useful):
# I start with a small image
image: alpine

# before the job I need to have npm and docker
# the problem: I need one in one job, and the second one in the other
# I do not need both on both jobs but do not see how to split them
before_script:
  - apk add --update npm
  - apk add docker
  - npm install
  - npm install webpack -g

stages:
  - create_dist
  - build_container
  - stop_container
  - deploy_container

# the dist directory is preserved for the other job which will make use of it
create_dist:
  stage: create_dist
  script: npm run build
  artifacts:
    paths:
      - dist

# the following three jobs are remote and need to be daisy chained
build_container:
  stage: build_container
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .

stop_container:
  stage: stop_container
  script: docker -H tcp://eu13:51515 stop widgets-sentinels
  allow_failure: true

deploy_container:
  stage: deploy_container
  script: docker -H tcp://eu13:51515 run --rm -p 8880:8888 --name widgets-sentinels -d widgets-sentinels
This setup works, but npm and docker are installed in every job. That is not needed and slows down the deployment. Is there a way to state that such and such packages need to be added for specific jobs (and not globally for all of them)?
To make it clear: this is not a showstopper (and in reality not likely to be an issue at all), but I fear that my approach to this kind of job automation is incorrect.
You don't necessarily need to use the same image for all jobs. Let me show you one of our pipelines (partially) which does a similar thing, just with composer for php instead of npm:
cache:
  paths:
    - vendor/

build:composer:
  image: registry.example.com/base-images/php-composer:latest # use our custom base image, where only composer is installed, to build the dependencies
  stage: build dependencies
  script:
    - php composer.phar install --no-scripts
  artifacts:
    paths:
      - vendor/
  only:
    changes:
      - composer.{json,lock,phar} # build the vendor folder only when relevant files change, otherwise use the cached folder from the s3 bucket (configured in the runner config)

build:api:
  image: docker:18 # use a docker image to build the actual application image
  stage: build api
  dependencies:
    - build:composer # reference dependency dir
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
The composer base image contains all necessary packages to run composer, so in your case you'd create a base image for npm:
FROM alpine:latest
RUN apk add --update npm
Then, use this image in your create_dist stage and use image: docker:latest as the image in the other stages.
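Applied to the pipeline in the question, that could look roughly like this; the npm base image name is an assumption, and the remaining docker jobs follow the same pattern as build_container:
stages:
  - create_dist
  - build_container
  - stop_container
  - deploy_container

create_dist:
  stage: create_dist
  image: registry.example.com/base-images/npm:latest # custom image with npm preinstalled
  before_script:
    - npm install
    - npm install webpack -g
  script: npm run build
  artifacts:
    paths:
      - dist

build_container:
  stage: build_container
  image: docker:latest
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .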
As well as referencing different images for different jobs, you may also try GitLab YAML anchors, which provide reusable templates for the jobs:
.install-npm-template: &npm-template
  before_script:
    - apk add --update npm
    - npm install
    - npm install webpack -g

.install-docker-template: &docker-template
  before_script:
    - apk add docker

create_dist:
  <<: *npm-template
  stage: create_dist
  script: npm run build
  ...

deploy_container:
  <<: *docker-template
  stage: deploy_container
  ...
Try a multi-stage build: you can use intermediate temporary images and copy only the generated content into the final Docker image. In other words, npm should only be part of a builder image; create one npm image and use it as the builder stage for the final Docker image.
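Something like this multi-stage Dockerfile sketch, which builds dist in an npm stage and copies only the result into the final image; the base images and the nginx serving stage are assumptions about how the site is served:
# Build stage: has node/npm, produces dist/
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Final stage: only the built assets, no npm needed
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html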
