Azure DevOps Server (onprem) - container job - checkout not working - docker

I'm trying to run my build inside a container with Azure Pipelines on Azure DevOps Server (on-prem), following the official guide: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops-2019
I have a self-hosted Linux agent running Ubuntu 18.04.
My azure-pipelines.yml:
pool: linux-container-build
container: ubuntu:16.04
steps:
- script: whoami
The container initialization works fine and the container is created properly. Afterwards, the checkout step fails without much information.
Picture of the pipeline: (screenshot omitted)
The checkout step just logs this:
##[section]Starting: Checkout ***** to s
==============================================================================
Task : Get sources
Description : Get sources from a repository. Supports Git, TfsVC, and SVN repositories.
Version : 1.0.0
Author : Microsoft
Help : [More Information](https://go.microsoft.com/fwlink/?LinkId=798199)
==============================================================================
##[error]Collection was modified; enumeration operation may not execute.
##[section]Finishing: Checkout **** to s

I updated my task definition to:
- checkout: none
This skips the checkout step, and the 'whoami' step then succeeds with the proper output inside the container.
It seems I need Git inside my container? And probably other packages as well.
Can I somehow add git and all the other required applications to the _work folder or to externals, since those get mounted into the docker volume?
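One direction to explore, purely as a sketch (whether this resolves the 'Collection was modified' error on Azure DevOps Server 2019 is an assumption, and the image name below is a placeholder), is to bake Git and the rest of your toolchain into a custom image and reference that instead of the stock ubuntu:16.04:

# azure-pipelines.yml - sketch only; the image is hypothetical and would need git,
# plus whatever else the build requires, installed in it
pool: linux-container-build
container: myregistry.example.com/build-tools:ubuntu-16.04
steps:
- checkout: self
- script: whoami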

Related

circleci permission denied error when using machine executor

I would like to use the machine executor so that I can run some component tests with docker-compose. My workflow fails on the checkout step and throws this error:
Making checkout directory "/opt/my-app"
Error: mkdir /opt/my-app: permission denied
Here is the yaml for the component_test stage in my workflow:
component_test:
  machine: true
  working_directory: /opt/my-app
  steps:
    - checkout
If I use docker instead of the machine executor then I don't get any permission issues:
component_test:
  docker:
    - image: cimg/base:stable   # any Docker executor image; name illustrative
  working_directory: /opt/my-app
  steps:
    - checkout
But, I'd like to be able to use docker-compose and thus need to be able to run the machine executor. Has anyone seen a permission issue like this before?
You need to either change the working directory to something under /home/circleci or just omit it completely, as it's optional.
Right now the circleci user runs the checkout step, and it doesn't have permission to git clone into the working directory you chose.
Also, I wouldn't use machine: true as that is deprecated. Specify an image: https://circleci.com/docs/2.0/configuration-reference/#available-machine-images
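Putting both suggestions together, a minimal sketch (the machine image tag is just one of the images from the linked list, and the path under /home/circleci is an example):

component_test:
  machine:
    image: ubuntu-2004:current              # pick an image from the linked list
  working_directory: /home/circleci/my-app  # writable by the circleci user
  steps:
    - checkout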

Jenkins user not in passwd on dynamic jnlp slave in kubernetes

I am building a system to do C++ CMake builds, primarily. I have Jenkins firing up the dynamic pods, running shell scripts, etc., but I can't get it to check out the code. My Jenkinsfile launches a container that the actual compile is supposed to run in; that "sub" container is tuned to compile C++ code. I have Jenkins running scripts and such in that pod, but when I try
checkout scm
I'm getting errors saying
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --force --progress git@gitlab.com:mystuff/hello-world-cmake.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout:
stderr: No user exists for uid 1000080000
fatal: Could not read from remote repository.
My home folder is the standard /home/jenkins and the workspace folder is there, etc. But when I dump the /etc/passwd file, the jenkins user isn't listed in it.
What's the appropriate way to add the jenkins user to that file?
What image are you using for the Jenkins slave? Does it have a jenkins user? If it does, you need to specify this in the spec for your Jenkins slave:
spec:
  securityContext:
    runAsUser: 1000
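For context, here is roughly where that securityContext sits in a pod spec used by the Kubernetes plugin; a sketch only, assuming the agent image really contains a jenkins user with UID 1000 (the image names are illustrative):

apiVersion: v1
kind: Pod
spec:
  securityContext:
    runAsUser: 1000                                       # must match the UID of the jenkins user in the image
  containers:
    - name: jnlp
      image: jenkins/inbound-agent:latest                 # illustrative agent image
    - name: cpp-builder
      image: registry.example.com/cmake-builder:latest    # hypothetical "sub" build container
      command: ["cat"]                                    # keep the build container alive for the pipeline
      tty: true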
UPDATE:
You cannot run the default Jenkins image in OpenShift, because OpenShift runs containers as a random user. You should run Jenkins from the built-in "Jenkins Persistent" template. If you don't have this template and don't have a Jenkins image stream, you can try the openshift/jenkins-2-centos7 image. See details at:
https://github.com/openshift/jenkins/issues/168
https://github.com/openshift/jenkins

Trigger step in Bitbucket pipelines

I have a CI pipeline in Bitbucket which is building, testing and deploying an application.
The thing is that after the deploy I want to run Selenium tests.
The Selenium tests are in another repository in Bitbucket and have their own pipeline.
Is there a trigger step in the Bitbucket pipeline to trigger a pipeline when a previous one has finished?
I do not want to do a fake push to the test repository to trigger those tests.
The most "correct" way I can think of doing this is to use the Bitbucket REST API to manually trigger a pipeline on the other repository, after your deployment completes.
There are several examples of how to create a pipeline here: https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Bworkspace%7D/%7Brepo_slug%7D/pipelines/#post
Copy + pasting the first example. How to trigger a pipeline for the latest commit on master:
$ curl -X POST -is -u username:password \
  -H 'Content-Type: application/json' \
  https://api.bitbucket.org/2.0/repositories/jeroendr/meat-demo2/pipelines/ \
  -d '
  {
    "target": {
      "ref_type": "branch",
      "type": "pipeline_ref_target",
      "ref_name": "master"
    }
  }'
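If you go the REST API route, that call can simply be the last line of your deploy step's script; a sketch where the deploy command, repository path, and credential variables are all placeholders:

- step:
    name: Deploy and trigger Selenium tests
    script:
      - ./deploy.sh   # placeholder for your existing deploy
      - >
        curl -X POST -u $TRIGGER_USER:$TRIGGER_APP_PASSWORD
        -H 'Content-Type: application/json'
        https://api.bitbucket.org/2.0/repositories/myworkspace/selenium-tests/pipelines/
        -d '{"target": {"ref_type": "branch", "type": "pipeline_ref_target", "ref_name": "master"}}'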
According to their official documentation there is no "easy way" to do that, because jobs are isolated to the scope of one repository. Still, you can achieve your task in the following way:
create a docker image with the minimum setup required to execute your tests inside it
upload it to Docker Hub (or some other registry if you have one)
use that docker image in the last step of your pipeline, after the deploy, to execute the tests
Try the official Bitbucket pipeline trigger pipe: https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/trigger-pipeline
You can run it in the step after the deploy:
script:
  - pipe: atlassian/trigger-pipeline:4.1.7
    variables:
      BITBUCKET_USERNAME: $BITBUCKET_USERNAME
      BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
      REPOSITORY: 'your-awesome-repo'
      ACCOUNT: 'teams-in-space'
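For orientation, a sketch of where that pipe could sit in a complete bitbucket-pipelines.yml (branch name, step names, and the deploy command are placeholders):

pipelines:
  branches:
    master:
      - step:
          name: Build, test and deploy
          script:
            - ./deploy.sh                       # placeholder for your existing steps
      - step:
          name: Trigger Selenium pipeline
          script:
            - pipe: atlassian/trigger-pipeline:4.1.7
              variables:
                BITBUCKET_USERNAME: $BITBUCKET_USERNAME
                BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
                REPOSITORY: 'your-awesome-repo'
                ACCOUNT: 'teams-in-space'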
@BigGinDaHouse I did something more or less like you said.
My step is built on top of a docker image with headless Chrome, npm and git.
I followed the steps below:
I set a private key for the remote repo as a variable in the original repo, Base64-encoded (see the documentation). The matching public key is added to the remote repo under the SSH Access option in the Bitbucket menu.
In the pipeline step I decode it and write it to a file, and change its permissions to 400.
I add this key inside the docker image with ssh-add.
Then I am able to do a git clone followed by npm install and npm test.
NOTE: The entry.sh is because I am starting the headless browser.
- step:
    image: kimy82/headless-selenium-npm-git
    script:
      - echo $key_in_env_variable_in_bitbucket | base64 --decode > priv_key
      - chmod 400 ./priv_key
      - eval `ssh-agent -s`
      - ssh-agent $(ssh-add priv_key; git clone git@bitbucket.org:project.git)
      - cd project
      - nohup bash /usr/bin/entry.sh >> out.log &
      - npm install
      - npm test
Top answers (this and this) are correct, they work.
Just adding that we found out (after a LOT of trial and error) that the user executing the pipeline must have WRITE permissions on the repo where the pipeline is invoked (even though their app password permissions were already set to "WRITE" for repos and pipelines...).
Also, this works for executing pipelines in Bitbucket Cloud or on-premise, through local runners.
(Answering as I am lacking reputation for commenting)

How do I chain Jenkins pipelines from a checked out git repo?

I want to check out a git repo and then run its build, so I tried:
sh "git clone --depth 1 -b master git#github.com:user/repo.git"
build './repo'
but that yields:
ERROR: No item named ./repo found
I've tried using dir('repo'), but apparently that errors when you run it from within docker (because Kubernetes is stuck on an old version of docker that doesn't support this).
Any idea on how to run the build pipeline from the checked out repo?
The 'build' pipeline step expects a job name, not a pipeline folder with a Jenkinsfile in its root.
The correct way to do this is to set up a pipeline job with the Jenkinsfile, as described here ('In SCM' section), and call it by its job name from your pipeline.
Pipelines are not built for chaining unless you use shared libraries, where you put the pipeline code in a Groovy class or as a step, but that is a subject for a full article.

Jenkins Pipeline push Docker image

My Jenkins job is a Pipeline that runs in Docker:
node('docker') {
    // Git checkout
    git url: 'ssh://blah.blah:29411/test.git'
    // Build
    sh 'make'
    // Verify/Run
    sh './runme'
}
I'm working on the kernel, and my sources take a long time to fetch from Git (about 2 GB). I'm looking at how I can push the docker image so it can be reused for the next build and will already contain most of the sources. I probably need to do:
docker push blahdockergit.blah/myjenkinsslaveimage
but it should run outside of the container.
I found in the pipeline syntax reference that the following class can be used for building external jobs
