I have a pipeline in Azure DevOps that is executed on a Linux agent.
One task in this pipeline runs Parasoft static code analysis.
We are using a Docker container that includes Parasoft and executes a bash script to compile our sources and run the SCA.
My issue is that the next run of the pipeline fails because it is impossible to check out the code from git. The reason is that during the build with CMake, some directories are created as root.
Here is the related task of the pipeline:
- task: Bash@3
  displayName: "Run sca"
  inputs:
    targetType: 'inline'
    script: |
      docker run -t -v "$(System.DefaultWorkingDirectory):/host" --platform=linux/amd64 -w /host/ containerregistry.azurecr.io/diag/parasoft:2.8.0 /bin/bash -c "\
      cmake -S . -B build -DCMAKE_BUILD_TYPE=Debug && \
      cpptestcli -localsettings /host/parasoft.properties -config \"someconfig\" \
      -bdf /host/build/compile_commands.json -report /host/sca-report -include \"host/src/**/*.cpp\" \
      "
Here is the content of the folder after the docker run completed:
agent@host:~/az_agent/_work/1/s$ ll
total 100
drwxr-xr-x 16 agent agents 4096 Dec 23 16:14 ./
drwxr-xr-x 6 agent agents 4096 Dec 23 15:37 ../
drwxr-xr-x 7 root root 4096 Dec 23 15:38 build/
drwxr-xr-x 2 agent agents 4096 Dec 23 15:31 cmake/
-rw-r--r-- 1 agent agents 994 Dec 23 15:31 CMakeLists.txt
-rw-r--r-- 1 root root 4605 Dec 23 15:42 c++test_static_problems.txt
drwxr-xr-x 7 agent agents 4096 Dec 23 15:31 diagmaster/
drwxr-xr-x 5 agent agents 4096 Dec 22 12:47 doc/
drwxr-xr-x 2 agent agents 4096 Dec 23 15:31 docker/
drwxr-xr-x 5 agent agents 4096 Dec 23 15:31 etc/
drwxr-xr-x 6 agent agents 4096 Dec 23 15:31 gen/
drwxr-xr-x 8 agent agents 4096 Dec 23 15:37 .git/
-rw-r--r-- 1 agent agents 70 Dec 23 15:31 .gitignore
-rw-r--r-- 1 agent agents 1478 Dec 23 15:31 parasoft.properties
drwxr-xr-x 4 root root 4096 Dec 23 15:38 parasoft_workspace/
drwxr-xr-x 2 agent agents 4096 Dec 23 15:37 pipelines/
-rw-r--r-- 1 agent agents 723 Dec 23 15:31 README.md
drwxr-xr-x 5 agent agents 4096 Dec 23 15:31 src/
-rwxrwxrwx 1 agent agents 640 Dec 23 16:14 test.sh*
drwxr-xr-x 3 agent agents 4096 Dec 23 15:31 utils/
If I change the ownership manually, e.g. with the following command, the next execution of the pipeline works without issues.
sudo chown -R --reference=cmake *
Of course I don't want to run this command in my pipeline.
From your description, the issue is that the next pipeline run fails with an UnauthorizedAccessException because the previous run left behind folders created as root.
You could give ownership back to the build agent with something like this:
chown -R agent:agents *
That should resolve the issue.
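If you would rather not touch ownership from the agent at all, the chown can also be folded into the inline script so it runs inside the container, where the process already is root. This is only a sketch of that idea: it assumes Azure DevOps leaves the unknown $(id -u)/$(id -g) macros untouched so the agent's shell expands them before docker runs, and that handing the whole /host mount back to the agent is acceptable.

# Sketch: same docker run as in the task above, but with a trailing chown
# inside the container so ownership is handed back before it exits.
# $(id -u) and $(id -g) are expanded by the agent's shell on the host, so the
# container receives the agent's numeric UID/GID.
docker run -t -v "$(System.DefaultWorkingDirectory):/host" --platform=linux/amd64 -w /host/ containerregistry.azurecr.io/diag/parasoft:2.8.0 /bin/bash -c "\
cmake -S . -B build -DCMAKE_BUILD_TYPE=Debug && \
cpptestcli -localsettings /host/parasoft.properties -config \"someconfig\" \
-bdf /host/build/compile_commands.json -report /host/sca-report -include \"host/src/**/*.cpp\"; \
chown -R $(id -u):$(id -g) /host"

Note that with the trailing ';' the step's exit code becomes that of the chown, so a failed analysis would need to be detected some other way. An alternative that avoids the ownership problem entirely is to start the container as the agent user with --user $(id -u):$(id -g), provided the Parasoft tooling in the image does not require root.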
Related
I have a cloudbuild.yaml file that looks like this:
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: [ "-m", "rsync", "-r", "gs://${_BUCKET}/maven-repository", "/cache/.m2" ]
  volumes:
  - path: '/cache/.m2'
    name: 'm2_cache'
- name: docker/compose:debian-1.29.2
  entrypoint: bash
  args:
  - -c
  - |
    ./test.sh
  volumes:
  - path: '/cache/.m2'
    name: 'm2_cache'
timeout: 2700s
substitutions:
  _BUCKET: 'my-bucket'
In the first step we download our Maven settings.xml file from GCS. This file is crucial for the subsequent build steps since it contains the username/password for our Artifact Registry Maven repository (I've simplified this example; we don't actually store the credentials in settings.xml as plain text). Without these credentials, our Maven build won't run. Normally the script that we call in the second step starts several Docker containers and then runs our Maven tests, but I've replaced it with test.sh to show the problem more easily. The test.sh file is shown below:
#!/bin/bash
echo "### [Host] Contents in /cache/.m2"
ls -la /cache/.m2
mkdir ~/test
echo "Johan" > ~/test/ikk.txt
echo "### [Host] Contents in ~/test"
ls -la ~/test
docker run --rm -v /cache/.m2:/cache/.m2 -v ~/test:/root/test -w /usr/src/somewhere ubuntu bash -c 'echo "### [Docker] Contents in /cache/.m2" && ls -la /cache/.m2 && echo "### [Docker] Contents in /root/test" && ls -la /root/test'
I.e. we try to mount two volumes into the ubuntu container that we start in the test.sh file. I list the contents of two directories both outside (### [Host]) and inside (### [Docker]) the ubuntu container. Here's the relevant output of running this in Cloud Build:
### [Host] Contents in /cache/.m2
total 16
drwxr-xr-x 2 root root 4096 Sep 15 08:55 .
drwxr-xr-x 3 root root 4096 Sep 15 08:55 ..
-rw-r--r-- 1 root root 8063 Sep 13 11:03 settings.xml
### [Host] Contents in ~/test
total 12
drwxr-xr-x 2 root root 4096 Sep 15 08:55 .
drwxr-xr-x 6 root root 4096 Sep 15 08:55 ..
-rw-r--r-- 1 root root 6 Sep 15 08:55 ikk.txt
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
Digest: sha256:20fa2d7bb4de7723f542be5923b06c4d704370f0390e4ae9e1c833c8785644c1
Status: Downloaded newer image for ubuntu:latest
### [Docker] Contents in /cache/.m2
total 8
drwxr-xr-x 2 root root 4096 Sep 15 08:55 .
drwxr-xr-x 3 root root 4096 Sep 15 08:55 ..
### [Docker] Contents in /root/test
total 8
drwxr-xr-x 2 root root 4096 Sep 15 08:55 .
drwx------ 1 root root 4096 Sep 15 08:55 ..
As you can see, the volume mounts don't seem to work when I run the ubuntu container from the test.sh file in Cloud Build (the contents of /root/test and /cache/.m2 are empty).
Running the test.sh locally on my machine yields the expected outcome:
### [Host] Contents in /cache/.m2
total 40
drwxr-xr-x 7 johan staff 224 Mar 15 2022 .
drwxr-x---+ 87 johan staff 2784 Sep 15 10:58 ..
-rw-r--r-- 1 johan staff 2344 Sep 14 11:37 copy_reference_file.log
drwxr-xr-x 221 johan staff 7072 Sep 14 10:52 repository
-rw-r--r-- 1 johan staff 327 Nov 24 2021 settings-docker.xml
-rw-r--r--# 1 johan staff 9842 Mar 15 2022 settings.xml
drwxr-xr-x 3 johan staff 96 Nov 19 2021 wrapper
### [Host] Contents in ~/test
total 8
drwxr-xr-x# 3 johan staff 96 Sep 15 10:53 .
drwxr-xr-x# 135 johan staff 4320 Sep 15 10:49 ..
-rw-r--r-- 1 johan staff 6 Sep 15 10:58 ikk.txt
### [Docker] Contents in /cache/.m2
total 24
drwxr-xr-x 7 root root 224 Mar 15 2022 .
drwxr-xr-x 3 root root 4096 Sep 15 08:58 ..
-rw-r--r-- 1 root root 2344 Sep 14 09:37 copy_reference_file.log
drwxr-xr-x 221 root root 7072 Sep 14 08:52 repository
-rw-r--r-- 1 root root 327 Nov 24 2021 settings-docker.xml
-rw-r--r-- 1 root root 9842 Mar 15 2022 settings.xml
drwxr-xr-x 3 root root 96 Nov 19 2021 wrapper
### [Docker] Contents in /root/test
total 8
drwxr-xr-x 3 root root 96 Sep 15 08:53 .
drwx------ 1 root root 4096 Sep 15 08:58 ..
-rw-r--r-- 1 root root 6 Sep 15 08:58 ikk.txt
Here you can see that the volumes are mounted correctly and I can access the files inside the ubuntu container.
How can I mount volumes inside a container in Cloud Build?
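One likely explanation (the snippet below is only an illustrative probe, not from the build above): because the build step talks to the VM's shared Docker daemon, the -v paths in the nested docker run are resolved on the Cloud Build VM, not inside the step's own container, which would explain why both directories show up empty. A quick way to confirm that from inside the step:

# Probe whether the nested bind mount points at this step's filesystem or at
# the VM's. If the marker is missing inside the nested container, the -v path
# was taken from the daemon's host, not from this build step.
echo "marker-from-step" > /cache/.m2/marker.txt
docker run --rm -v /cache/.m2:/cache/.m2 ubuntu bash -c \
  'cat /cache/.m2/marker.txt 2>/dev/null || echo "marker not visible: -v resolved on the daemon host"'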
I am building a "custom" Jenkins agent Docker image (based on the public jenkins/agent:jdk8 image) using the Dockerfile below:
FROM jenkins/agent:jdk8
USER root
RUN apt-get -qq update \
&& apt-get -qq -y install \
curl
RUN curl -sSL https://get.docker.com/ | sh
RUN date > /home/jenkins/build-date-root.txt
USER jenkins
RUN date > build-date.txt
I build with:
docker build -t internal/jenkins-custom-agent docker
When I run the image on my local machine with
docker run -it --rm internal/jenkins-custom-agent bash
I can see the 2 added files as expected inside the container:
jenkins@local:~$ pwd
/home/jenkins
jenkins@local:~$ ls -la
total 48
drwxr-xr-x 1 jenkins jenkins 4096 Feb 20 11:35 .
drwxr-xr-x 1 root root 4096 Feb 15 20:28 ..
-rw-r--r-- 1 jenkins jenkins 220 Apr 18 2019 .bash_logout
-rw-r--r-- 1 jenkins jenkins 3526 Apr 18 2019 .bashrc
drwxr-xr-x 2 jenkins jenkins 4096 Feb 20 09:38 .gradle
drwxr-xr-x 2 jenkins jenkins 4096 Feb 15 20:28 .jenkins
-rw-r--r-- 1 jenkins jenkins 807 Apr 18 2019 .profile
drwxr-xr-x 2 jenkins jenkins 4096 Feb 15 20:28 agent
-rw-r--r-- 1 root root 29 Feb 20 09:38 build-date-root.txt
-rw-r--r-- 1 jenkins jenkins 29 Feb 20 09:38 build-date.txt
But when I run a container from the exact same image (pushed to our internal registry) in Jenkins running in Kubernetes, those files are not there:
11:18:30 + ls -la /home/jenkins
11:18:30 total 40
11:18:30 drwxrwxrwx 10 root root 4096 Feb 20 10:18 .
11:18:30 drwxr-xr-x 1 root root 4096 Feb 15 20:28 ..
11:18:30 drwxr-xr-x 3 jenkins jenkins 4096 Feb 20 10:18 .cache
11:18:30 drwxr-xr-x 3 jenkins jenkins 4096 Feb 20 10:18 .config
11:18:30 drwxr-xr-x 2 jenkins jenkins 4096 Feb 15 20:28 .jenkins
11:18:30 drwx------ 2 jenkins jenkins 4096 Feb 20 10:18 .ssh
11:18:30 drwxr-xr-x 2 jenkins jenkins 4096 Feb 15 20:28 agent
11:18:30 drwxr-xr-x 3 jenkins jenkins 4096 Feb 20 10:18 caches
11:18:30 drwxr-xr-x 4 jenkins jenkins 4096 Feb 20 10:18 remoting
11:18:30 drwxr-xr-x 3 jenkins jenkins 4096 Feb 20 10:18 workspace
I suspect it has something to do with how it is started in Kubernetes/Jenkins and the cat command, but I don't understand how that can prevent files that I know are in the image from suddenly disappearing.
Is it not possible to "extend" jenkins/agent:jdk8 and add my own custom files/folders etc?
UPDATE:
I found this:
https://support.cloudbees.com/hc/en-us/articles/360031223512-What-you-need-to-know-when-using-Kaniko-from-Kubernetes-Jenkins-Agents?mobile_site=true&page=7
And based on that I have now changed to using
workingDir: /tmp/jenkins
for the two containers and now I can find the files in the image:
16:23:50 + ls -la /home/jenkins
16:23:50 total 40
16:23:50 drwxr-xr-x 1 jenkins jenkins 4096 Feb 20 09:38 .
16:23:50 drwxr-xr-x 1 root root 4096 Feb 15 20:28 ..
16:23:50 -rw-r--r-- 1 jenkins jenkins 220 Apr 18 2019 .bash_logout
16:23:50 -rw-r--r-- 1 jenkins jenkins 3526 Apr 18 2019 .bashrc
16:23:50 drwxr-xr-x 2 jenkins jenkins 4096 Feb 20 09:38 .gradle
16:23:50 drwxr-xr-x 2 jenkins jenkins 4096 Feb 15 20:28 .jenkins
16:23:50 -rw-r--r-- 1 jenkins jenkins 807 Apr 18 2019 .profile
16:23:50 drwxr-xr-x 2 jenkins jenkins 4096 Feb 15 20:28 agent
16:23:50 -rw-r--r-- 1 root root 29 Feb 20 09:38 build-date-root.txt
16:23:50 -rw-r--r-- 1 jenkins jenkins 29 Feb 20 09:38 build-date.txt
They do "explain" that in the article, but it is too low-level for me to understand.
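The short version seems to be that the Kubernetes plugin mounts an emptyDir workspace volume at each container's workingDir (historically /home/jenkins), and that mount shadows whatever the image put at that path; moving workingDir elsewhere keeps the image's home directory visible. A pod template along those lines (an illustrative sketch, not the actual configuration used here):

# Illustrative pod template: the workspace emptyDir follows workingDir, so
# pointing it at /tmp/jenkins leaves /home/jenkins from the image untouched.
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: registry.example.com/internal/jenkins-custom-agent
    workingDir: /tmp/jenkins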
I do not understand why it works locally (Windows 10) when it does not work on my Jenkins, which uses Docker on Unix.
withMaven(globalMavenSettingsConfig: 'empty-global-settings', mavenSettingsConfig: Constants.CONFIG_SETTINGS_ID) {
    sh "pwd"
    sh "ls -lrt"
    sh "ls -lrt /home/jenkins/workspace/myProject"
    sh "ls -lrt /home/jenkins/workspace/myProject/règles"
    sh "$MVN_CMD deploy -X"
}
and my result is:
+ ls -lrt /home/jenkins/workspace/myProject
total 8
drwxr-xr-x. 2 jenkins root 125 Jul 15 21:32 bom
-rw-r--r--. 1 jenkins root 657 Jul 15 21:32 assembly.xml
drwxr-xr-x. 3 jenkins root 27 Jul 15 21:32 ressources
-rw-r--r--. 1 jenkins root 1492 Jul 15 21:32 pom.xml
drwxr-xr-x. 2 jenkins root 57 Jul 15 21:32 data
drwxr-xr-x. 3 jenkins root 23 Jul 15 21:32 règles
drwxr-xr-x. 3 jenkins root 82 Jul 15 21:32 target
[Pipeline] sh
+ ls -lrt $'/home/jenkins/workspace/myProject/r\303\250gles'
total 0
The listing shows total 0, but my folder règles is not empty.
EDIT:
I tried with mavenOpts:
withMaven(globalMavenSettingsConfig: 'empty-global-settings', mavenSettingsConfig: Constants.CONFIG_SETTINGS_ID, mavenOpts: '-Dfile.encoding=UTF-8') {
    ...
}
I changed my Docker image and it works.
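For reference, the usual cause of this symptom is an agent image without a UTF-8 locale, so the shell and the JVM mangle the accented directory name règles. A hedged sketch of the kind of Dockerfile change that typically fixes it (the base image name is a placeholder; the exact change made here is not stated):

# Illustrative only: give the agent image a UTF-8 locale so accented
# file names such as "règles" are handled correctly by sh and the JVM.
FROM your-jenkins-agent-base-image
ENV LANG=C.UTF-8 \
    LC_ALL=C.UTF-8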
I have a Docker image (https://github.com/carnellj/spmia-chapter1) which does not find its CMD ./run.sh executable although it is there in the file system.
I was able to run /bin/sh in the container, and I can ls -l:
D:\Dokumente\ws\spring-microservices\spmia-chapter1 (master)
λ docker run -i -t johncarnell/tmx-simple-service:chapter1 /bin/sh
/ # ls -l
total 56
drwxr-xr-x 2 root root 4096 Mar 3 11:20 bin
drwxr-xr-x 5 root root 360 Apr 22 07:10 dev
drwxr-xr-x 1 root root 4096 Apr 22 07:10 etc
drwxr-xr-x 2 root root 4096 Mar 3 11:20 home
drwxr-xr-x 1 root root 4096 Apr 22 06:01 lib
drwxr-xr-x 5 root root 4096 Mar 3 11:20 media
drwxr-xr-x 2 root root 4096 Mar 3 11:20 mnt
dr-xr-xr-x 123 root root 0 Apr 22 07:10 proc
drwx------ 1 root root 4096 Apr 22 07:10 root
drwxr-xr-x 2 root root 4096 Mar 3 11:20 run
-rwxr-xr-x 1 root root 245 Apr 22 06:50 run.sh
drwxr-xr-x 2 root root 4096 Mar 3 11:20 sbin
drwxr-xr-x 2 root root 4096 Mar 3 11:20 srv
dr-xr-xr-x 13 root root 0 Apr 22 07:10 sys
drwxrwxrwt 2 root root 4096 Mar 3 11:20 tmp
drwxr-xr-x 1 root root 4096 Mar 7 01:04 usr
drwxr-xr-x 1 root root 4096 Mar 7 01:04 var
/ # ./run.sh
/bin/sh: ./run.sh: not found
/ # ls run.sh
run.sh
/bin/sh does not find ./run.sh although it is there in the file system, as proven by ls run.sh. Also, cat shows the content of run.sh:
/ # cat run.sh
#!/bin/sh
echo "********************************************************"
echo "Starting simple-service "
echo "********************************************************"
java -jar /usr/local/simple-service/simple-service-0.0.1-SNAPSHOT.jar
When I run vi from sh and copy the content of run.sh into a new file myrun.sh and make myrun.sh executable, I can execute ./myrun.sh and the spring service starts.
What is going on here? Why would sh not see an executable which is there in the filesystem? Executables from PATH or executables which I add manually run fine.
I am running Docker on Windows 10.
OK, the reason is that run.sh ends up with Windows line endings inside the Docker image if you check the sources out with automatic LF->CRLF conversion. The shell then reports "not found" because the shebang is read as "#!/bin/sh\r" and no interpreter exists at that path. One possible solution is to tell git not to convert line endings.
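One way to make that stick for every clone, instead of relying on each developer's or agent's autocrlf setting, is a .gitattributes rule that forces LF for shell scripts. A minimal sketch:

# Force LF line endings for shell scripts regardless of core.autocrlf,
# then re-normalize the files already in the working tree (Git 2.16+).
printf '*.sh text eol=lf\n' >> .gitattributes
git add --renormalize .
git commit -m "Force LF line endings for shell scripts"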
I am trying to set up Jenkins HA with one active and one cold standby node on Ubuntu-14.
I was looking at this question:
How to setup Jenkins with HA?
I see that I just need to replicate the contents of /var/lib/jenkins, which is my $JENKINS_HOME.
# ls -alh /var/lib/jenkins
drwxrwxr-x 3 jenkins jenkins 4.0K Oct 16 19:45 .bundle
drwxr-xr-x 3 jenkins jenkins 4.0K Oct 26 14:54 .cache
-rw-r--r-- 1 jenkins jenkins 2.4K Oct 24 21:09 config.xml
-rw-r--r-- 1 jenkins jenkins 950 Oct 16 20:34 credentials.xml
drwxr-xr-x 3 jenkins jenkins 4.0K Oct 16 19:53 .groovy
-rw-r--r-- 1 jenkins jenkins 159 Oct 16 20:02 hudson.model.UpdateCenter.xml
-rw-r--r-- 1 jenkins jenkins 370 Oct 16 19:52 hudson.plugins.git.GitTool.xml
-rw------- 1 jenkins jenkins 1.7K Oct 16 19:40 identity.key.enc
drwxr-xr-x 3 jenkins jenkins 4.0K Oct 16 19:40 .java
-rw-r--r-- 1 jenkins jenkins 6 Oct 16 20:02 jenkins.install.InstallUtil.lastExecVersion
-rw-r--r-- 1 jenkins jenkins 6 Oct 16 19:54 jenkins.install.UpgradeWizard.state
drwxr-xr-x 5 jenkins jenkins 4.0K Oct 26 14:43 jobs
drwxr-xr-x 3 jenkins jenkins 4.0K Oct 16 19:40 logs
-rw-r--r-- 1 jenkins jenkins 907 Oct 16 20:02 nodeMonitors.xml
drwxr-xr-x 2 jenkins jenkins 4.0K Oct 16 19:40 nodes
-rw-r--r-- 1 jenkins jenkins 56 Nov 4 19:57 .owner
drwxr-xr-x 81 jenkins jenkins 12K Oct 16 19:59 plugins
drwxr-xr-x 5 jenkins jenkins 4.0K Oct 16 20:43 .puppetlabs
-rw-r--r-- 1 jenkins jenkins 129 Oct 16 20:02 queue.xml.bak
-rw-r--r-- 1 jenkins jenkins 64 Oct 16 19:40 secret.key
-rw-r--r-- 1 jenkins jenkins 0 Oct 16 19:40 secret.key.not-so-secret
drwx------ 4 jenkins jenkins 4.0K Oct 16 20:43 secrets
drwxr-xr-x 2 jenkins jenkins 4.0K Nov 4 20:02 updates
drwxr-xr-x 2 jenkins jenkins 4.0K Oct 16 19:40 userContent
drwxr-xr-x 4 jenkins jenkins 4.0K Oct 17 13:09 users
drwxr-xr-x 2 jenkins jenkins 4.0K Oct 16 19:53 workflow-libs
drwxr-xr-x 2 jenkins jenkins 4.0K Oct 28 15:26 workspace
Should I replicate all of the above items? If not, then which ones should I sync? Any gotchas or anything else I need to know?
Thanks!
I have tested this scenario and chose to replicate everything with the following exclusions. Initial testing after this seems to indicate success with plugins, jobs, credentials, etc.
rsync -e "ssh -o StrictHostKeyChecking=no" -rvh --delete --exclude '.bash_history' \
    --exclude 'logs' --exclude '.ssh' --exclude '.viminfo' \
    --exclude '.cache' ./ jenkins@JENKINS-STANDBY-NODE:/var/lib/jenkins/
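To keep the standby reasonably current, the same command can be run on a schedule from the active node; a sketch of a cron entry, assuming the rsync above is wrapped in a script such as /usr/local/bin/sync-jenkins-home.sh (a name chosen only for illustration):

# Illustrative crontab entry on the active node: push $JENKINS_HOME to the
# standby every 15 minutes and keep a log of each run.
*/15 * * * * /usr/local/bin/sync-jenkins-home.sh >> /var/log/jenkins-sync.log 2>&1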