GitHub Actions workflow fails when running steps in a container - docker

I've just started setting up a GitHub Actions workflow for one of my projects. I attempted to run the workflow steps inside a container with this workflow definition:
name: TMT-Charts-CI
on:
  push:
    branches:
      - master
      - actions-ci
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://alpine/helm:2.13.0
    steps:
      - name: Checkout Code
        uses: actions/checkout@v1
      - name: Validate and Upload Chart to Chart Museum
        run: |
          echo "Hello, world!"
          export PAGER=$(git diff-tree --no-commit-id --name-only -r HEAD)
          echo "Changed Components are => $PAGER"
          export COMPONENT="NOTSET"
          for CHANGE in $PAGER; do ENV_DIR=${CHANGE%%/*}; done
          for CHANGE in $PAGER; do if [[ "$CHANGE" != .* ]] && [[ "$ENV_DIR" == "${CHANGE%%/*}" ]]; then export COMPONENT="$CHANGE"; elif [[ "$CHANGE" == .* ]]; then echo "Not a Valid Dir for Helm Chart"; else echo "Only one component per PR should be changed" && exit 1; fi; done
          if [ "$COMPONENT" == "NOTSET" ]; then echo "No component is changed!" && exit 1; fi
          echo "Initializing Component => $COMPONENT"
          echo $COMPONENT | cut -f1 -d"/"
          export COMPONENT_DIR="${COMPONENT%%/*}"
          echo "Changed Dir => $COMPONENT_DIR"
          cd $COMPONENT_DIR
          echo "Install Helm and Upload Chart If Exists"
          curl -L https://git.io/get_helm.sh | bash
          helm init --client-only
But the workflow fails, stating that the container stopped immediately.
I have tried many images, including the "alpine:3.8" image described in the official documentation, but the container still stops.
According to Workflow syntax for GitHub Actions, the container section is "A container to run any steps in a job that don't already specify a container." My assumption was that the container would be started and the steps would run inside that Docker container.

We can achieve this by building a custom Docker image. The GitHub runner stops the running container right after its entrypoint command exits, so I made a Docker image whose entrypoint keeps the container alive, and the container no longer dies after starting.
Here is the custom Dockerfile (https://github.com/rizwan937/Helm-Image)
You can publish this image to Docker Hub and use it in the workflow file like:
container:
  image: docker://rizwan937/helm
You can add such an entrypoint to any Docker image so that it stays alive for the remaining steps to execute.
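As an illustration only (the real Dockerfile lives in the linked repository), a minimal keep-alive image could look like the sketch below; the base image tag and the tail command are assumptions, not taken from that repo:
FROM alpine/helm:2.13.0
# Override the default helm entrypoint with a command that never exits,
# so the container stays up and the runner can execute the job steps inside it.
ENTRYPOINT ["tail", "-f", "/dev/null"]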
This is a temporary solution; if anyone has a better one, let me know.

Related

How should I format a unit test inside a multistage docker-image.yml

My goal is to export data from a unit test inside a multistage Docker container. I have a docker create, docker cp, and docker rm that work in my terminal, but when I added them to my docker-image.yml it fails to run and displays the error "Error: Process completed with exit code .". Also, I added the unit-test code for a GitHub action that can't be accessed since the build fails.
- name: Build the Docker image
  run: |
    echo "${{ env.app_version }}"
    echo "${{ github.run_number }}"
    BUILD_NUMBER=${{ github.run_number }}
    VERSION_NUMBER=${{ env.app_version }}
    FULL_VERSION=${VERSION_NUMBER}.${BUILD_NUMBER}
    docker build . --file Dockerfile --tag placeholder/${SERVICE_NAME}:${FULL_VERSION} --build-arg BUILD_NUMBER=${BUILD_NUMBER}
    docker tag placeholder/${SERVICE_NAME}:${FULL_VERSION} placeholder/${SERVICE_NAME}:latest
    echo "full_version=$FULL_VERSION" >> $GITHUB_ENV
    docker create --name unit_test test-export
    docker cp unit_test:/app/surefire-reports extracted
    docker rm unit_test
# Runs a set of commands using the runner's shell
- name: Run a multi-line script
  run: |
    echo Add other actions to build,
    echo test and deploy your project.
    ls -lath target/surefire-reports/
- name: Publish Unit Test Results
  # You may pin to the exact commit or the version.
  # uses: EnricoMi/publish-unit-test-result-action@4a00ba50806e7658e5005bb91acdb3274714595a
  uses: EnricoMi/publish-unit-test-result-action@v1.31
  with:
    files: target/surefire-reports/*.xml

Build and Run Docker Container in Jenkins

I need to run a Docker container in Jenkins so that installed libraries like pycodestyle can be used in the following steps.
I successfully built the Docker container (from the Dockerfile).
How do I access the container so that I can use it in the next step? (Please look for the >> << code in the Build step below.)
Thanks
stage('Build') {
    // Install python libraries from requirements.txt (Check Dockerfile for more detail)
    sh "docker login -u '${DOCKER_USR}' -p '${DOCKER_PSW}' ${DOCKER_REGISTRY}"
    sh "docker build \
        --tag '${DOCKER_REGISTRY}/${DOCKER_TAG}:latest' \
        --build-arg HTTPS_PROXY=${PIP_PROXY} ."
    >> sh "docker run -ti ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest sh" <<<
  }
}
stage('Linting') {
    sh '''
        awd=$(pwd)
        echo '===== Linting START ====='
        for file in $(find . -name '*.py'); do
            filename=$(basename $file)
            if [[ ${file:(-3)} == ".py" ]] && [[ $filename = *"test"* ]] ; then
                echo "perform PEP8 lint (python pylint blah) for $filename"
                cd $awd && cd $(dirname "${file}") && pycodestyle "${filename}"
            fi
        done
        echo '===== Linting END ====='
    '''
}
You need to mount the workspace of your Jenkins job (containing your Python project) as a volume (see the "docker run -v" option) into your container and then run the "next step" build step inside this container. You can do this by providing a shell script as part of your project's source code that performs the "next step", or by writing this script in a previous build stage.
It would be something like this:
sh "chmod +x build.sh"
sh "docker run -v $WORKSPACE:/workspace ${DOCKER_REGISTRY}/${DOCKER_TAG}:latest /workspace/build.sh"
build.sh is an executable script that is part of your project's workspace and performs the "next step".
$WORKSPACE is the folder used by your Jenkins job (normally /var/jenkins_home/jobs//workspace); it is provided by Jenkins as a build variable.
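For illustration, build.sh could be a minimal sketch along these lines (the pycodestyle loop is only an assumed example of the "next step", mirroring the Linting stage above):
#!/bin/sh
# Runs inside the container; /workspace is the mounted Jenkins workspace.
set -e
cd /workspace
echo '===== Linting START ====='
# Lint every Python test file found in the workspace
find . -name '*test*.py' | while read -r file; do
    echo "perform PEP8 lint for $file"
    pycodestyle "$file"
done
echo '===== Linting END ====='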
Please note: this solution requires that the Docker daemon is running on the same host as Jenkins! Otherwise the workspace will not be available to your container.
Another solution would be to run Jenkins as a Docker container, so you can easily share the Jenkins home/workspaces with the containers you run within your build jobs, as described here:
Running Jenkins tests in Docker containers build from dockerfile in codebase
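As a rough sketch of that setup (the image name and mount paths are common defaults, not taken from the linked answer), Jenkins itself could be started like this:
docker run -d --name jenkins \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts
Build containers started by your jobs can then mount the same jenkins_home volume, so the job workspace stays visible to them.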

Comparing local and remote image built in Docker

I am trying to write a script for simple tagging of Docker images based on the contents of the Dockerfile, basically something like "auto-versioning".
The current process is:
Check the latest version in Docker repository (I am using AWS ECR)
Get the digest for that image
Build image from Dockerfile locally
Compare digests from the remote image and local image
Now here is the problem: the locally built image doesn't have the RepoDigest that I want to compare against, because it hasn't been pushed to the repository yet.
Here's the error:
Template parsing error: template: :1:2: executing "" at <index .RepoDigests 0>: error calling index: index out of range: 0
The other approach I could think of is pulling the remote image, building the local one, and comparing layers: if the layers are identical, no action; if they are different, it is a new version and I can issue a new tag and push the image. I am not so sure the layers are reliable for this purpose, though.
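A rough sketch of that layer-comparison idea (variable names reuse those from the script below; the remote tag is pulled first):
# Pull the latest remote tag and build the local candidate
docker pull "$image:$latest"
docker build -t "$image:build" "$1"
# Compare the layer diff IDs of both images
remoteLayers=$(docker inspect --format '{{json .RootFS.Layers}}' "$image:$latest")
localLayers=$(docker inspect --format '{{json .RootFS.Layers}}' "$image:build")
if [ "$remoteLayers" = "$localLayers" ]; then
    echo "Images are identical, no new version needed"
fi
Keep in mind that rebuilding from the same Dockerfile does not always produce byte-identical layers (file timestamps can differ), so this check may report a difference even when nothing meaningful changed.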
Another possible approach would be building the image with some temporary tag, e.g. pointer, pushing it anyway, and in case the digest is identical to the latest version, not issuing a new version and stopping there. That would mean there would always be a pointer tag somewhere in the repository. (I am also thinking that this could be a definition of the latest tag?)
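And a sketch of that "pointer" tag idea, reusing the same aws ecr / jq calls as the script below (the pointer tag name is just an example):
docker build -t "$image:pointer" "$1"
docker push "$image:pointer"
pointerDigest=$(aws ecr describe-images --repository-name "${repository}/$1" \
    --image-ids imageTag=pointer | jq -r '.imageDetails[0].imageDigest')
latestDigest=$(aws ecr describe-images --repository-name "${repository}/$1" \
    --image-ids imageTag=latest | jq -r '.imageDetails[0].imageDigest')
if [ "$pointerDigest" = "$latestDigest" ]; then
    echo "No new version needed"
    exit 0
fi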
This is the script that I am using for building the images:
#!/usr/bin/env bash
repository=myrepo
path=mypath.dkr.ecr.ohio-1.amazonaws.com/${repository}/
set -e
set -o pipefail
if [[ $# -gt 0 ]]; then
    if [[ -d "$1" ]]; then
        latest=$(aws ecr describe-images --repository-name ${repository}/$1 --output text --query 'sort_by(imageDetails,& imagePushedAt)[*].imageTags[*]' | tr '\t' '\n' | grep -e '^[0-9]$' | tail -1 ) || true
        if [[ -z "$latest" ]]; then
            latest=0
        fi
    else
        echo "$1 is not a directory"
        exit 1
    fi
else
    echo "Provide build directory"
    exit 1
fi
image="$path$1"
temporaryImage="$image:build"
echo "Building $image..."
docker build -t ${temporaryImage} $1
if [[ ${latest} -gt 0 ]]; then
    latestDigest=$(aws ecr describe-images --repository-name ${repository}/$1 --image-ids "imageTag=${latest}" | jq -r '.imageDetails[0].imageDigest')
    buildDigest=$(docker inspect --format='{{index .RepoDigests 0}}' ${temporaryImage})
    if [[ "$image@$latestDigest" == "$buildDigest" ]]; then
        echo "The desired version of the image is already present in the remote repository"
        exit 1
    fi
    version=$((latest+1))
else
    version=1
fi
versionedImage="$image:$version"
latestImage="$image:latest"
devImage="$image:dev"
devVersion="$image:$version-dev"
docker tag ${temporaryImage} ${versionedImage}
docker tag ${versionedImage} ${latestImage}
docker push ${versionedImage}
docker push ${latestImage}
echo "Image '$versionedImage' pushed successfully!"
docker build -t ${devImage} $1/dev/
docker tag ${devImage} ${devVersion}
docker push ${devImage}
docker push ${devVersion}
echo "Development image '$devImage' pushed successfully!"

Docker: permission denied while trying to connect to Docker Daemon with local CircleCI build

I have a very simple config.yml:
version: 2
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
    steps:
      - checkout
      - run: node -e "console.log('Hello from NodeJS ' + process.version + '\!')"
      - run: yarn
      - setup_remote_docker
      - run: docker build .
All it does: boot a node image, test if node is running, do a yarn install and a docker build.
My dockerfile is nothing special; it has a COPY and ENTRYPOINT.
When I run circleci build on my MacBook Air using Docker Native, I get the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix://[...]
If I change the docker build . command to: sudo docker build ., everything works as planned, locally, with circleci build.
However, pushing this change to CircleCI will result in an error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
So, to summarize: using sudo works, locally, but not on CircleCI itself. Not using sudo works on CircleCI, but not locally.
Is this something the CircleCI staff has to fix, or is there something I can do?
For reference, I have posted this question on the CircleCI forums as well.
I've created a workaround for myself.
In the very first step of the config.yml, I run this command:
if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
  echo "This is a local build. Enabling sudo for docker"
  echo sudo > ~/sudo
else
  echo "This is not a local build. Disabling sudo for docker"
  touch ~/sudo
fi
Afterwards, you can do this:
eval `cat ~/sudo` docker build .
Explanation:
The first snippet checks if the CircleCI-provided environment variable CIRCLE_SHELL_ENV contains localbuild. This is only true when running circleci build on your local machine.
If true, it creates a file called sudo with contents sudo in the home directory.
If false, it creates a file called sudo with NO contents in the home directory.
The second snippet reads the ~/sudo file and executes its contents together with the arguments you give afterwards. If the ~/sudo file contains "sudo", the command in this example becomes sudo docker build .; if it contains nothing, it becomes docker build . with a leading space, which is ignored.
This way, both the local (circleci build) builds and remote builds will work.
To iterate on the answer of Jeff Huijsmans,
an alternative version is to use a Bash variable for docker:
- run:
    name: Set up docker
    command: |
      if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
        echo "export docker='sudo docker'" >> $BASH_ENV
      else
        echo "export docker='docker'" >> $BASH_ENV
      fi
Then you can use it in your config
- run:
    name: Verify docker
    command: $docker --version
You can see this in action in my test for my Dotfiles repository
Documentation about environment variables in CircleCI
You might also solve your issue by running the docker image as root. Specify user: root under the image parameter:
...
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
        user: root
    steps:
      - checkout
      ...
...

Run sonarqube scanner with gitlab ci

I am trying to put together a CI environment for a .NET application using the following stack (just the relevant ones):
Debian + mono
Docker
Gitlab CI
Gitlab-multi-runner (as a docker container)
Sonarqube + Postgres
I've used docker-compose to create the containers for Sonarqube and Postgres; both are running and working. I am sadly stuck on executing the Sonarqube analysis for my build run by the GitLab runner, and all the examples I found use Maven. I've tried to use sonar-scanner as well, with no luck so far.
Here are the contents of my gitlab-ci.yml:
image: mono:latest
cache:
  paths:
    - ./src/T_GitLabCi/packages/
stages:
  - build
.shared: &restriction
  only:
    - master
  tags:
    - docker
build:
  <<: *restriction
  stage: build
  script:
    - nuget restore ./src/T_GitLabCi
    - MONO_IOMAP=case xbuild /t:Build /p:Configuration="Release" /p:Platform="Any CPU" ./src/T_GitLabCi/T_GitLabCi.sln
    - mono ./tools/NUnitConsoleRunner/nunit3-console.exe ./src/T_GitLabCi/T_GitLabCi.sln --work=./src/T_GitLabCi/test --config=Release
    - << EXECUTE SONAR ANALYSIS >>
I am definitely missing something here. Could somebody point me the right direction?
I have projects written in PHP but that shouldn't matter. Here's what I did.
I enabled a private registry hosted on my GitLab installation
In this registry I have a "sonar-scanner" image built from this Dockerfile (it's based on one of the images available on Docker hub):
FROM java:alpine
ENV SONAR_SCANNER_VERSION 2.8
RUN apk add --no-cache wget && \
    wget https://sonarsource.bintray.com/Distribution/sonar-scanner-cli/sonar-scanner-${SONAR_SCANNER_VERSION}.zip && \
    unzip sonar-scanner-${SONAR_SCANNER_VERSION} && \
    cd /usr/bin && ln -s /sonar-scanner-${SONAR_SCANNER_VERSION}/bin/sonar-scanner sonar-scanner && \
    apk del wget
COPY files/sonar-scanner-run.sh /usr/bin
and here's the files/sonar-scanner-run.sh file:
#!/bin/sh
URL="<YOUR SONARQUBE URL>"
USER="<SONARQUBE USER THAT CAN ACCESS THE PROJECTS>"
PASSWORD="<USER PASSWORD>"
if [ -z "$SONAR_PROJECT_KEY" ]; then
    echo "Undefined \"projectKey\"" && exit 1
else
    COMMAND="sonar-scanner -Dsonar.host.url=\"$URL\" -Dsonar.login=\"$USER\" -Dsonar.password=\"$PASSWORD\" -Dsonar.projectKey=\"$SONAR_PROJECT_KEY\""
    if [ ! -z "$SONAR_PROJECT_VERSION" ]; then
        COMMAND="$COMMAND -Dsonar.projectVersion=\"$SONAR_PROJECT_VERSION\""
    fi
    if [ ! -z "$SONAR_PROJECT_NAME" ]; then
        COMMAND="$COMMAND -Dsonar.projectName=\"$SONAR_PROJECT_NAME\""
    fi
    if [ ! -z "$CI_BUILD_REF" ]; then
        COMMAND="$COMMAND -Dsonar.gitlab.commit_sha=\"$CI_BUILD_REF\""
    fi
    if [ ! -z "$CI_BUILD_REF_NAME" ]; then
        COMMAND="$COMMAND -Dsonar.gitlab.ref_name=\"$CI_BUILD_REF_NAME\""
    fi
    if [ ! -z "$SONAR_BRANCH" ]; then
        COMMAND="$COMMAND -Dsonar.branch=\"$SONAR_BRANCH\""
    fi
    if [ ! -z "$SONAR_ANALYSIS_MODE" ]; then
        COMMAND="$COMMAND -Dsonar.analysis.mode=\"$SONAR_ANALYSIS_MODE\""
        if [ "$SONAR_ANALYSIS_MODE" = "preview" ]; then
            COMMAND="$COMMAND -Dsonar.issuesReport.console.enable=true"
        fi
    fi
    eval $COMMAND
fi
Now in my project in .gitlab-ci.yml I have something like this:
SonarQube:
  image: <PATH TO YOUR IMAGE ON YOUR REGISTRY>
  variables:
    SONAR_PROJECT_KEY: "<YOUR PROJECT KEY>"
    SONAR_PROJECT_NAME: "$CI_PROJECT_NAME"
    SONAR_PROJECT_VERSION: "$CI_BUILD_ID"
  script:
    - /usr/bin/sonar-scanner-run.sh
That's pretty much all. The above example of .gitlab-ci.yml is simplified, since I'm using different builds for master and other branches (like when: manual), and I use this plugin to get feedback in GitLab: https://gitlab.talanlabs.com/gabriel-allaigre/sonar-gitlab-plugin
Feel free to ask if you have any questions. It took me some time to put this all together the way I want it :) Actually I'm still fine-tuning it.
You need to install sonar-scanner first. You can find a port of sonar-scanner for almost any recent language; for example, for npm you don't have to use the Java executable directly.
I only had to do this:
npm install --save sonar-scanner
Then I needed to add this to my package.json:
"scripts": {
"sonar-scanner": "node_modules/sonar-scanner/bin/sonar-scanner"
}
This is my job in .gitlab-ci.yml:
job_testmaster:
  stage: test
  script:
    - PACKAGE_VERSION=$(node -p "require('./package.json').version")
    - echo sonar.projectVersion=${PACKAGE_VERSION} >> sonar-project.properties
    - npm run build
    - npm run sonar-scanner -- -Dsonar.login=${SONAR_LOGIN}
  only:
    - master
  tags:
    - docker
With this, I am able to start the sonar analysis, but I am not able to use the quality gates afterwards.
Hope this helps.
