Docker Hub and git submodules

I have a repository that uses git submodules, and I configured an automated build on Docker Hub. At the beginning of the build process, it looks like Docker Hub pulls the repository from the default branch (master), updates the submodules, and then checks out the particular branch (say, feature-a) that triggered the build. This works fine if the feature-a branch has exactly the same submodules as master, but if the submodules differ (say, one submodule is pulled from a different repo), the build fails.
Is there a way to make Docker Hub clone the correct branch directly?

You need to use hooks: https://docs.docker.com/docker-hub/builds/advanced/#custom-build-phase-hooks
TL;DR: Place this in hooks/post_checkout:
#!/bin/bash
# Docker hub does a recursive clone, then checks the branch out,
# so when a PR adds a submodule (or updates it), it fails.
git submodule update --init
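For context, Docker Hub expects the hooks folder to sit at the same directory level as the Dockerfile it builds, and the script has to be committed to the repository. A quick sketch of wiring it up, assuming the Dockerfile lives at the repository root:
mkdir -p hooks
# create hooks/post_checkout with the content shown above, then (to be safe) mark it executable:
chmod +x hooks/post_checkout
git add hooks/post_checkout
git commit -m "Add Docker Hub post_checkout hook"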

It might be failing because the submodule is private.
You can add a build environment variable SSH_PRIVATE and set it to a private key that has access to the private submodule repository.
A word of caution though… you may want to generate a different private key than the one you use for anything else and add that one to the private submodule's repo.
Edit: This is required even if your linked GitHub account has access to the repo, because you're most likely specifying the submodule URL as SSH-based (e.g. git@github.com:Account/repo.git).
Edit 2: Adding the docs: https://docs.docker.com/docker-hub/builds/#build-repositories-with-linked-private-submodules

The SSH_PRIVATE approach suggested by Clintm (and in the official documentation) doesn't work for us. As far as I understand, this is because the Docker Hub interface for setting environment variables doesn't allow entering a line break (!?).
I spent some time finding a workaround that matches our needs, and it works for us.
I'm leaving it here in case it helps some of you.
/hooks/post_checkout
#!/bin/bash
# The Docker documentation for setting up a private git submodule for the build does
# not work, since it's not possible to set an environment variable containing a line
# break in the Docker Hub interface. That makes it impossible to set SSH_PRIVATE as
# suggested here:
# https://docs.docker.com/docker-hub/builds/#build-repositories-with-linked-private-submodules
#
# To use the script below:
# - As suggested in the official doc, generate a keypair and push the public
# part to your source code provider account
# - In Docker Hub build configuration, set var SSH_PRIVATE_ESCAPED with the output of
# `awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' /path/to/the/private/key`
SSH_PRIVATE_FILE=~/git_id_rsa
echo "${SSH_PRIVATE_ESCAPED}" | awk '{gsub(/\\n/,"\n")}1' > "${SSH_PRIVATE_FILE}"
chmod 0400 "${SSH_PRIVATE_FILE}"
export GIT_SSH_COMMAND="ssh -i ${SSH_PRIVATE_FILE}"
git submodule update --init
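For completeness, here is a rough sketch of generating the keypair mentioned in the comments and producing the value for the SSH_PRIVATE_ESCAPED variable (the key file name git_id_rsa is just an example):
# generate a dedicated keypair for the private submodule (no passphrase)
ssh-keygen -t rsa -b 4096 -N "" -f git_id_rsa
# add git_id_rsa.pub to your source code provider, then paste the output
# of this command into the SSH_PRIVATE_ESCAPED build variable on Docker Hub
awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' git_id_rsa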

Related

Can't find SSH keys settings under travis project settings

My CI project depends on another private repo, so I followed the documentation to upload the private key using
➜ travis sshkey --upload ~/.ssh/id_travis_rsa --pro
Updating ssh key for Jeff-Tian/uni-sso with key from /Users/tianjef/.ssh/id_travis_rsa
Current SSH key: key for clone k8s-config
Finger print: 65:25:66:26:4d:5d:9f:ac:25:ba:ea:be:c4:d5:e3:5f
From the above output I double-checked the fingerprint and compared it to the GitHub SSH keys:
They match.
However, the Travis build still fails with:
(https://travis-ci.com/github/Jeff-Tian/uni-sso/builds/161350192)
$ git clone git@github.com:Jeff-Tian/k8s-config.git ${HOME}/k8s-config
Cloning into '/home/travis/k8s-config'...
Warning: Permanently added the RSA host key for IP address '140.82.114.4' to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
The command "git clone git#github.com:Jeff-Tian/k8s-config.git ${HOME}/k8s-config" failed and exited with 128 during .
I then checked the Travis settings, but I can't find the SSH keys settings pane.
Where did I go wrong? Is it a Travis CI bug?
It seems the SSH keys configuration is only available for private repos.
The issue here is that the main repo is public, but deploying it requires downloading a private repo. This scenario is not covered by the official documentation.
The workaround is to switch to cloning the private repo via HTTPS instead of SSH, so there is no need to upload SSH keys.
Set GH_TOKEN in the settings and write that token to a .netrc file; cloning the private repo over HTTPS then works:
.travis.yml:
- echo -e "machine github.com\n login $GH_TOKEN" > ~/.netrc
- git clone https://github.com/Jeff-Tian/k8s-config.git ${HOME}/k8s-config

Get latest tag in .gitlab-ci.yml for Docker build

I want to add a tag when building a Docker image. This is what I have so far, but I don't know how to get the latest tag of the repository being deployed.
docker build -t company/app .
My goal
docker build -t company/app:$LATEST_TAG_IN_REPO? .
Since you're looking for the "latest" git tag that is an ancestor of the commit currently being built, you probably want to use
git describe --tags --abbrev=0
to get it and use it like:
docker build -t company/app:$(git describe --tags --abbrev=0) .
Read here for the finer points on git describe
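To illustrate the difference (tag and hash values here are made up): --abbrev=0 gives just the nearest tag name, while omitting it appends the distance from that tag and the current commit:
git describe --tags --abbrev=0   # e.g. v1.2.3
git describe --tags              # e.g. v1.2.3-5-gdeadbee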
You can try using $CI_COMMIT_TAG or $CI_COMMIT_REF_NAME; these are among the predefined variables accessible during builds.
If you want to see all the environment variables available during a build, this should work as one of your jobs:
script:
- env
The predefined variable $CI_COMMIT_TAG is empty if no git tag points to the current commit in the pipeline. If you want either the current tag or the current commit SHA (as a fallback), you can use IMAGE_VERSION=${CI_COMMIT_TAG:-"$CI_COMMIT_SHORT_SHA"}
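Putting that fallback together with the build, the script section of a build job could then contain something along these lines (the image name company/app is taken from the question):
- IMAGE_VERSION=${CI_COMMIT_TAG:-"$CI_COMMIT_SHORT_SHA"}
- docker build -t company/app:"$IMAGE_VERSION" .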

How do you specify additional trusted files with the Jenkins Github Branch Source plugin?

I've created a Jenkins Multibranch Pipeline with the GitHub Branch Source plugin. The Jenkinsfile essentially just calls a Cake Build script (build.ps1, build.cake) that contains all the build/deploy logic. This allows me to move to another CI service easily.
Unfortunately, I cannot figure out how to add my Cake Build scripts as trusted files so that PRs from forks will pull those files from the source repo instead. The Trust setting of the Discover pull requests from forks behavior seems to indicate that there can be other trusted files besides the Jenkinsfile:
Nobody
Pull requests from forks will all be treated as untrusted. This means that where Jenkins requires a trusted file (e.g. Jenkinsfile) the contents of that file will be retrieved from the target branch on the origin repository and not from the pull request branch on the fork repository.
However, I cannot seem to find any documentation on adding other trusted files. The primary reason for this is to prevent a PR from a fork from accessing credentials from the Cake script. They wouldn't be able to change Jenkinsfile, but they could still change the Cake script to expose the credentials.
Is it actually possible to add other trusted files?
It seems like Jenkins does not support this. My solution is to check out the untrusted files manually from the base version instead. First, get the commit hash of the base version with:
def commit = sh(
    script: 'git rev-parse HEAD',
    returnStdout: true
).trim()
def base = sh(
    script: "git rev-list --parents -n 1 ${commit}",
    returnStdout: true
).trim().split('\\s+')[2]
git rev-list --parents -n 1 ${commit} returns the hash of the current commit, which is a merge commit created by Jenkins, followed by the hash of the latest commit of the PR and the hash of the latest commit of the target branch, separated by spaces (e.g. 05e9322574ea03003f87dcbb44f172e6fa62581f b3f6ef892af9c645f490106757d7d05df3a26060 069ffd55ae36414a51b4de166aef86966f9447a8). Hence, we grab the hash of the latest commit of the target branch with trim().split('\\s+')[2].
Now we can do sh "git checkout ${base} FILE" on any file that we don't trust from the PR.
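As a plain-shell sketch of the same idea applied to the Cake scripts from the question (build.ps1 and build.cake are the file names from the question; this assumes HEAD is the merge commit Jenkins created):
# the third field of rev-list --parents is the target branch tip, as explained above
base=$(git rev-list --parents -n 1 HEAD | awk '{print $3}')
git checkout "$base" -- build.ps1 build.cake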
This approach does not work if the PR has already been merged with the latest version of the target branch. So what I did is something like this:
// Revert untrusted files to the base version and back them up before we execute any
// untrusted code, so an attacker doesn't have a chance to inject malicious content.
def latest = sh(script: 'git rev-parse HEAD', returnStdout: true).trim()
sh "git checkout origin/${env.CHANGE_TARGET}"
def baseCompose = readFile('docker-compose.yml')
// switch back to latest commit
sh "git checkout ${latest}"
sh 'git clean -d -f -f -q -x'

Trigger step in Bitbucket pipelines

I have a CI pipeline in Bitbucket which is building, testing and deploying an application.
The thing is that after the deploy I want to run selenium tests.
The Selenium tests are in another repository in Bitbucket and they have their own pipeline.
Is there a trigger step in the Bitbucket pipeline to trigger a pipeline when a previous one has finished?
I do not want to do a fake push to the test repository to trigger those tests.
The most "correct" way I can think of doing this is to use the Bitbucket REST API to manually trigger a pipeline on the other repository, after your deployment completes.
There are several examples of how to create a pipeline here: https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Bworkspace%7D/%7Brepo_slug%7D/pipelines/#post
Copying the first example, here is how to trigger a pipeline for the latest commit on master:
$ curl -X POST -is -u username:password \
  -H 'Content-Type: application/json' \
  https://api.bitbucket.org/2.0/repositories/jeroendr/meat-demo2/pipelines/ \
  -d '
  {
    "target": {
      "ref_type": "branch",
      "type": "pipeline_ref_target",
      "ref_name": "master"
    }
  }'
According to their official documentation there is no "easy way" to do that, because jobs are isolated to the scope of one repository. Yet you can achieve your task in the following way:
create a Docker image with the minimum setup required to execute your tests inside it
upload it to Docker Hub (or some other registry if you have one)
use that Docker image in the last step of your pipeline, after the deploy, to execute the tests
Try out the official Bitbucket pipeline trigger pipe: https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/trigger-pipeline
You can run it in a step after the deploy step:
script:
  - pipe: atlassian/trigger-pipeline:4.1.7
    variables:
      BITBUCKET_USERNAME: $BITBUCKET_USERNAME
      BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
      REPOSITORY: 'your-awesome-repo'
      ACCOUNT: 'teams-in-space'
#BigGinDaHouse I did something more or less like you said.
My step is built on top of a Docker image with headless Chrome, npm and git.
I followed the steps below:
I set a private key for the remote repo as a variable in the original repo, encoded in base64 (see the documentation). The public key is set on the remote repo under the SSH Access option in the Bitbucket menu.
In the pipeline step I decode it into a file and change its permissions to 400.
I add this key inside the Docker container with ssh-add.
Then I am able to do a git clone followed by npm install and npm test.
NOTE: entry.sh is there because it starts the headless browser.
- step:
    image: kimy82/headless-selenium-npm-git
    script:
      - echo $key_in_env_variable_in_bitbucket | base64 --decode > priv_key
      - chmod 400 ./priv_key
      - eval `ssh-agent -s`
      - ssh-agent $(ssh-add priv_key; git clone git@bitbucket.org:project.git)
      - cd project
      - nohup bash /usr/bin/entry.sh >> out.log &
      - npm install
      - npm test
The top answers (this and this) are correct; they work.
Just adding that we found out (after a LOT of trial and error) that the user executing the pipeline must have WRITE permissions on the repo where the pipeline is invoked (even though their app password permissions were set to "WRITE" for repos and pipelines…).
Also, this works for executing pipelines in Bitbucket's cloud or on-premise, through local runners.
(Answering as I am lacking reputation for commenting)

Let Jenkins build project from a Mercurial commit

Is there a way to specify a hook in a single repository?
Right now we have specified the hook in the /etc/mercurial/hgrc file, but every time it builds twice, and it builds for each commit in each repository.
So we want to specify a build per repository.
This is how we implemented the hook:
[hooks]
changegroup = curl --silent http://jenkins:8080/job/ourProject/build
It's on a Ubuntu server.
Select the Poll SCM option under Build Triggers.
Make sure that schedule form is empty.
You should create this in the repository's .hg directory, e.g. /home/user/mercurial/.hg/hgrc, and add the hooks as:
[hooks]
commit.jenkins = wget -q http://localhost:8080/mercurial/notifyCommit?url=file:///home/user/mercurial > /dev/null
incoming.jenkins = wget -q http://localhost:8080/mercurial/notifyCommit?url=file:///home/user/mercurial > /dev/null
You should make sure that
Your Jenkins project doesn't poll
You use the proper notifyCommit URLs for your Mercurial hooks: https://wiki.jenkins-ci.org/display/JENKINS/Mercurial+Plugin
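If Jenkins clones the repository from a remote URL rather than a local path, the notifyCommit call looks the same, just with that URL as the url parameter (host and repository here are placeholders):
[hooks]
commit.jenkins = wget -q "http://jenkins:8080/mercurial/notifyCommit?url=https://hg.example.com/ourProject" > /dev/null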
OK, I found what I was looking for (I posted the bounty; my case is Mercurial with a specific branch).
In the main/origin repository, place a hook with your desired build script. The pregroupchange hook is there so the incoming changes are preserved. I have RhodeCode installed on the main repository, and it has its own hooks.
This way I still trigger Jenkins and still have the changes available after the trigger, for RhodeCode push notifications and other hooks.
[hooks]
pregroupchange = /path/to/script.extension
In the script, place your desired actions, including a trigger for Jenkins. Don't forget to enable, in Jenkins > Job > Configure > Build Triggers, the Trigger builds remotely checkbox and put your desired_token there (in my case: Mercurial).
Because you can't trigger only a specific branch in Mercurial, I look up the branch name as shown below. Also, to trigger from a remote script, you need to give anonymous users overall read permission in Jenkins, or create a specific user with credentials and put them into the trigger URL (see the variant after the example).
Bash example:
#!/bin/bash
BRANCH_NAME=`hg tip --template "{branch}"`
if [ "$BRANCH_NAME" = "branch_name" ]; then
curl --silent http://jenkins_domain:port/path/to/job?token=desired_token
fi
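If you go the credentials route instead of anonymous read access, the same call can carry a Jenkins user and API token in the URL, roughly like this (all values are placeholders):
curl --silent "http://jenkins_user:jenkins_api_token@jenkins_domain:port/path/to/job?token=desired_token"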
For the original question:
This way you execute only one build, for the desired branch. Hooks like this belong only in the main repository when you work with multiple clones and multiple developers. You may have your local hooks, but don't trigger Jenkins from every developer's local clone. Trigger Jenkins only from the main repository when a push arrives (commit, incoming, and changegroup). Your local hooks are for other things, like email, logs, configuration, etc.
