Why can't Jenkins copy these files?

I am new to Jenkins. I am trying to get it to build a project and then copy the artifacts to a particular location. I believe I have the permissions set up correctly, but I get "permission denied" errors when that build step is run.
The code is written in Clojure. Jenkins correctly pulls the code from GitHub and then runs "lein uberjar". The files are successfully created.
I have an "Execute shell" as a build step in Jenkins. The shell command is:
cp /var/lib/jenkins/jobs/api/workspace/target/*.jar /home/jenkins/api
If I ssh to the server, and sudo to root, and then su to "jenkins", I can run the above command and it works perfectly. However, when Jenkins does this, the output contains:
+ cp /var/lib/jenkins/jobs/api/workspace/target/instaphoto-0.1-standalone.jar /var/lib/jenkins/jobs/api/workspace/target/instaphoto-0.1.jar /home/jenkins/api
cp: cannot create regular file ‘/home/jenkins/api/instaphoto-0.1-standalone.jar’: Permission denied
cp: cannot create regular file ‘/home/jenkins/api/instaphoto-0.1.jar’: Permission denied
Build step 'Execute shell' marked build as failure
Sending e-mails to: developer@sunflowerforce.com
Finished: FAILURE
If I do this:
groups jenkins
I see:
jenkins : jenkins root run-server-software
and if I do this:
cd /home/jenkins
ls -al
drwxrwx--- 4 jenkins run-server-software 4096 Jul 30 20:21 .
drwxr-xr-x 8 root root 4096 Jul 30 18:59 ..
drwxrwxr-x 2 sunflower run-server-software 4096 Jul 31 01:06 api
drwxrwxr-x 10 sunflower run-server-software 4096 Jul 30 19:59 nlp
So jenkins belongs to the group run-server-software, and that group has read, write and execute permission on folders such as api. So why do I get "permission denied"?
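One thing that might help narrow it down (a hedged suggestion, not something from the original post): a running process only has the supplementary groups it was started with, so if the Jenkins daemon was launched before jenkins was added to run-server-software, the build step will not have that group even though groups jenkins shows it. A quick check is to add something like this to the "Execute shell" step:
# diagnostic only: show who the build step actually runs as
whoami
id                                  # if run-server-software is missing here, the daemon predates the group change
ls -ld /home/jenkins /home/jenkins/api
If the group is missing from the id output, restarting Jenkins so it picks up the new group membership (for example sudo systemctl restart jenkins, or sudo service jenkins restart on older systems) is a common fix.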

Related

docker-compose.yml not found - error on build

I created a new VM with Ubuntu 22.04 and installed Docker.
When I create a docker-compose file and run the build, the following errors occur:
pilati@ubuntu-web-containers:/var/www/mysql$ ls -la
total 12
drwxr-xr-x 2 root root 4096 Oct 1 16:42 .
drwxr-xr-x 4 root root 4096 Oct 1 16:40 ..
-rwxrwxrwx 1 root root 473 Oct 1 16:42 docker-compose.yml
pilati@ubuntu-web-containers:/var/www/mysql$ sudo docker-compose build
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml, compose.yml, compose.yaml
pilati@ubuntu-web-containers:/var/www/mysql$ sudo docker-compose -f /var/www/mysql/docker-compose.yml build
ERROR: .FileNotFoundError: [Errno 2] No such file or directory: '/var/www/mysql/docker-compose.yml'
pilati@ubuntu-web-containers:/var/www/mysql$
I reinstalled the VM from scratch and nothing works.
Any way to solve this problem?
Put your compose file in your home folder; it should work from there. The cause is that you installed Docker with snap: install it from the official site instead.
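A rough sketch of what that could look like on Ubuntu 22.04 (commands are illustrative and assume Docker really was installed via snap):
sudo snap remove docker                                 # remove the snap-packaged Docker
curl -fsSL https://get.docker.com -o get-docker.sh      # Docker's official convenience script
sudo sh get-docker.sh
sudo docker compose version                             # should work if the compose plugin was installed with it
The snap-packaged Docker is confined and cannot read files under paths like /var/www, which is why the compose file "disappears" even though ls shows it.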

Google Cloud Build Error: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1

Note: There is a similar post regarding this issue but it involves a CI/CD workflow and a considerably more complicated Dockerfile. The solutions presented do not seem to apply to my situation.
Per Google documentation I am attempting to build an image by running gcloud run deploy in the directory where the files mentioned in my Dockerfile are located. The Dockerfile appears as:
FROM python:3.9-alpine
WORKDIR /app
COPY main.py /app/main.py
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt
CMD ["python3", "main.py"]
I receive a message that the build failed, and when checking the logs I see the following:
starting build "..."
FETCHSOURCE
Fetching storage object: gs://my-app_cloudbuild/source/....
Copying gs://my-app_cloudbuild/source/...
/ [0 files][ 0.0 B/ 1.5 KiB]
/ [1 files][ 1.5 KiB/ 1.5 KiB]
Operation completed over 1 objects/1.5 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
Can anyone explain the reason for this error? I suspect it has to do with how files are copied into the image, but I was able to build and run this container without problems on my local machine. Any idea why it fails in Cloud Build?
Running ls -la in the directory where I ran gcloud run deploy returns:
drwxr-xr-x 9 user staff 288 May 20 16:04 .
drwxr-xr-x 6 user staff 192 May 20 13:35 ..
drwxr-xr-x 14 user staff 448 May 20 16:06 .git
-rw-r--r-- 1 user staff 27 May 20 15:06 .gitignore
-rw-r--r-- 1 user staff 424 May 20 16:54 Dockerfile
-rw-r--r-- 1 user staff 3041 May 20 15:55 main.py
-rw-r--r-- 1 user staff 144 May 19 09:42 requirements.txt
drwxr-xr-x 6 user staff 192 May 19 09:09 venv
Contents of .gitignore:
Dockerfile
venv
*.gz
*.tar
*.pem
Full console output when attempting two-step build (see comments):
user@users-MacBook-Pro TwitterBotAQI % gcloud builds submit --tag gcr.io/missoula-aqi/aqi
Creating temporary tarball archive of 2 file(s) totalling 3.1 KiB before compression.
Some files were not included in the source upload.
Check the gcloud log [/Users/user/.config/gcloud/logs/2022.05.20/18.40.53.921436.log] to see which files and the contents of the
default gcloudignore file used (see `$ gcloud topic gcloudignore` to learn
more).
Uploading tarball of [.] to [gs://missoula-aqi_cloudbuild/source/1653093653.998995-48d4ba15b3274455a21e16b7abc7d65b.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/missoula-aqi/locations/global/builds/0c22d976-171e-4e7b-92d8-ec91704d6d52].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/0c22d976-171e-4e7b-92d8-ec91704d6d52?project=468471228522].
------------------------------------------------------------------------------------ REMOTE BUILD OUTPUT -------------------------------------------------------------------------------------
starting build "0c22d976-171e-4e7b-92d8-ec91704d6d52"
FETCHSOURCE
Fetching storage object: gs://missoula-aqi_cloudbuild/source/1653093653.998995-48d4ba15b3274455a21e16b7abc7d65b.tgz#1653093655000531
Copying gs://missoula-aqi_cloudbuild/source/1653093653.998995-48d4ba15b3274455a21e16b7abc7d65b.tgz#1653093655000531...
/ [1 files][ 1.5 KiB/ 1.5 KiB]
Operation completed over 1 objects/1.5 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
BUILD FAILURE: Build step failure: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
ERROR: (gcloud.builds.submit) build 0c22d976-171e-4e7b-92d8-ec91704d6d52 completed with status "FAILURE"
I had added Dockerfile to .gitignore as it contained API keys stored as environment variables. Removing Dockerfile from .gitignore resolved the issue.
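If you need to keep Dockerfile in .gitignore but still upload it to Cloud Build, an alternative worth trying is a .gcloudignore file: when one is present, gcloud builds submit uses it instead of .gitignore. A hypothetical version matching the question's layout:
# .gcloudignore (illustrative): controls what gcloud uploads to Cloud Build
.gcloudignore
.git
.gitignore
venv
*.gz
*.tar
*.pem
# Dockerfile is intentionally not listed, so it is included in the upload
That said, baking API keys into a Dockerfile means they end up in the image; Cloud Run environment variables or Secret Manager are usually a safer place for them.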

Docker volume mounts not working in Azure DevOps Pipeline

Docker volume mounts are not working in my Azure DevOps pipeline; please find my code below.
I tried two approaches to run my Docker container in the pipeline (see both below). In each case the volume comes up empty: the mount does not happen. I'm not sure what mistake I'm making here, and I would really appreciate help fixing it.
I would like to mount run.sh, test.sh and test.txt under /test.
In entrypoint.sh I list all the files inside the container, but the listing comes back empty: run.sh, test.sh and test.txt are not mounted.
I've been struggling with this for the last two days without any resolution.
This is my folder structure:
my-app/
├─ test/
│ ├─ run.sh
│ ├─ test.sh
│ ├─ test.txt
├─ azure-pipeline.yml
test.sh
#!/bin/bash
rootPath=$1
echo "Root path: $rootPath"
./run.sh $rootPath
run.sh
#!/bin/bash
echo "starting run script"
NAME="testApp"
IMAGE="sample/test-app:1.1"
ROOTPATH=$1
echo "$ROOTPATH"
# Finally run
docker stop $NAME > /dev/null 2>&1
docker rm $NAME > /dev/null 2>&1
docker run --name $NAME -i -v $ROOTPATH:/test -w /test $IMAGE
azure-pipeline.yml (Approach 1)
trigger:
- none
jobs:
- job: test
  pool:
    name: my-Linux-agents
  displayName: Run tests
  steps:
  - task: Bash@3
    displayName: Docker Prune
    inputs:
      targetType: inline
      script: |
        docker system prune -f -a
  - task: Docker@2
    displayName: Docker Login
    inputs:
      containerRegistry: myRegistry w/ asdf
      command: login
  - task: Bash@3
    displayName: Execute Sample Java
    inputs:
      targetType: filePath
      filePath: 'test/test.sh'
      arguments: '$PWD'
      workingDirectory: test
azure-pipeline.yml (Approach 2)
trigger:
- none
jobs:
- job: test
  pool:
    name: my-Linux-agents
  displayName: Run tests
  steps:
  - task: Bash@3
    displayName: Docker Prune
    inputs:
      targetType: inline
      script: |
        docker system prune -f -a
  - task: Docker@2
    displayName: Docker Login
    inputs:
      containerRegistry: myRegistry w/ asdf
      command: login
  - bash: |
      echo "Executing docker run command"
      echo $(Build.SourcesDirectory)
      echo $PWD
      docker run --name testApp -i -v $(Build.SourcesDirectory):/test -w /test sample/test-app:1.1
My Docker Image - files
Dockerfile
FROM alpine:3.12
COPY entrypoint.sh /
RUN echo "hello"
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh
#!/bin/sh
echo "START Running Docker"
echo "Listing Files"
ls -la
TL;DR
Docker volume mounts work in Azure DevOps Pipelines (at least on the Microsoft-hosted agents).
As can be seen below, both of the approaches described in the OP work with the ubuntu-latest agent pool. If self-hosted agents are used in the my-Linux-agents pool, the problem is likely with them rather than with the Dockerfile or pipeline config shared in the original post.
Please see the full working demo with git repo and Pipeline.
EDIT: If the self-hosted pipeline agents run as Docker containers, the problem may be that the inner container references a path that exists only in the outer container, not on the host. For more details on how to mount volumes when launching a Docker container from another Docker container, please see the section on Mounting volumes using Docker within a Docker container in the Azure Pipelines docs, or this answer from Mounting docker run in Azure Pipeline job.
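A minimal sketch of that scenario (paths and image names are made up for illustration): when the agent itself runs in a container, the -v source path is resolved on the host, so the agent's work directory needs to exist at the same path on host and agent, for example:
# start the self-hosted agent container so host and container paths line up (illustrative)
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/azp/_work:/opt/azp/_work \
  my-registry/azp-agent:latest
# inside a pipeline step, $(Build.SourcesDirectory) then points under /opt/azp/_work,
# which is a valid path on the host too, so this mount is no longer empty:
docker run --rm -v $(Build.SourcesDirectory)/test:/test -w /test sample/test-app:1.1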
Test setup
I have modified the azure-pipelines.yml to create a fully self-contained example.
The following changes were made to the azure-pipelines.yml in the OP:
Setting the agent pool to ubuntu-latest rather than my-Linux-agents
Changing the Docker login step to a docker build step that actually builds the image as part of the pipeline (not necessary, but it makes the pipeline independent of a custom image registry)
Adding a step that lists all files in the repo recursively, with their permissions (so that we can easily verify that all files in the /test folder are readable by all)
Adding the steps that run the containers from both approaches to the same pipeline, so that both are demonstrated in one run
The azure-pipelines.yml now looks like this:
trigger:
- none
jobs:
- job: test
  pool:
    vmImage: 'ubuntu-latest'
  displayName: Run tests
  steps:
  - task: Docker@2
    displayName: Build Docker Image
    inputs:
      repository: sample/test-app
      command: build
      Dockerfile: image/Dockerfile
      tags: '1.1'
  - bash: |
      echo $(Build.SourcesDirectory)
      ls -lrtR
    displayName: List files
  - bash: |
      echo "Executing docker run command"
      docker run --name testApp -i -v $(Build.SourcesDirectory)/test:/test -w /test sample/test-app:1.1
    displayName: Run docker inline in pipeline
  - task: Bash@3
    displayName: Run test.sh
    inputs:
      targetType: filePath
      filePath: 'test/test.sh'
      arguments: '$PWD'
      workingDirectory: test
The rest of the files look the same; the full test setup can be found here.
Test Results
When running the pipeline the following output is obtained (the full pipeline run can be found here):
List Files
/home/vsts/work/1/s
.:
total 16
drwxr-xr-x 2 vsts docker 4096 Sep 18 16:51 test
drwxr-xr-x 2 vsts docker 4096 Sep 18 16:51 image
-rw-r--r-- 1 vsts docker 849 Sep 18 16:51 azure-pipelines.yml
-rw-r--r-- 1 vsts docker 198 Sep 18 16:51 README.md
./test:
total 8
-rw-r--r-- 1 vsts docker 0 Sep 18 16:51 test.txt
-rwxr-xr-x 1 vsts docker 72 Sep 18 16:51 test.sh
-rwxr-xr-x 1 vsts docker 258 Sep 18 16:51 run.sh
./image:
total 8
-rwxr-xr-x 1 vsts docker 67 Sep 18 16:51 entrypoint.sh
-rwxr-xr-x 1 vsts docker 87 Sep 18 16:51 Dockerfile
Run docker inline in pipeline
Executing docker run command
START Running Docker
Listing Files
total 16
drwxr-xr-x 2 1001 121 4096 Sep 18 16:51 .
drwxr-xr-x 1 root root 4096 Sep 18 16:51 ..
-rwxr-xr-x 1 1001 121 258 Sep 18 16:51 run.sh
-rwxr-xr-x 1 1001 121 72 Sep 18 16:51 test.sh
-rw-r--r-- 1 1001 121 0 Sep 18 16:51 test.txt
Run test.sh
Root path: /home/vsts/work/1/s/test
starting run script
/home/vsts/work/1/s/test
START Running Docker
Listing Files
total 16
drwxr-xr-x 2 1001 121 4096 Sep 18 16:51 .
drwxr-xr-x 1 root root 4096 Sep 18 16:51 ..
-rwxr-xr-x 1 1001 121 258 Sep 18 16:51 run.sh
-rwxr-xr-x 1 1001 121 72 Sep 18 16:51 test.sh
-rw-r--r-- 1 1001 121 0 Sep 18 16:51 test.txt
I believe the solution is relatively simple; the idea is derived from here.
Give this a try.
In the pipeline:
pool:
  vmImage: 'ubuntu-latest'
In the Dockerfile:
VOLUME ["/test"]
RUN ls -la >> my.log
and then check the log.

docker build Error checking context: 'can't stat '\\?\C:\Users\username\AppData\Local\Application Data''

docker build fails on Windows 10.
Docker installed successfully, but building an image with the command below:
docker build -t drtuts:latest .
produces the issue below. Kindly let me know if anyone has resolved the same issue.
The problem is that the current user is not the owner of the directory.
I got the same problem in Ubuntu, this line solves the issue:
Ubuntu
sudo chown -R $USER <path-to-folder>
Source: Change folder permissions and ownership
Windows
This link shows how to do the same in Windows:
Take Ownership of a File / Folder through Command Prompt in Windows 10
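For reference, the Windows-side equivalent usually boils down to something like the following (run in an elevated command prompt; the path is only an example):
takeown /F "C:\path\to\folder" /R /D Y
icacls "C:\path\to\folder" /grant %USERNAME%:F /T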
Just create a new directory and enter it:
$ mkdir dockerfiles
$ cd dockerfiles
Create your file in that directory:
$ touch Dockerfile
Edit it and add the commands with vi:
$ vi Dockerfile
Finally run it:
$ docker build -t tag .
Explanation of the problem
When you run the docker build command, the Docker client gathers all files that need to be sent to the Docker daemon, so it can build the image. This 'package' of files is called the context.
What files and directories are added to the context?
The context contains all files and subdirectories in the directory that you pass to the docker build command. For example, when you call docker build -t img:tag dir_to_build, the context will be everything inside dir_to_build.
If the Docker client does not have sufficient permissions to read some of the files in the context, you get the error checking context: can't stat '<FILENAME>' error.
There are two solutions to this problem:
Move your Dockerfile, together with the files that are needed to build your image, to a separate directory separate_dir that contains no other files/subdirectories. When you now call docker build -t img:tag separate_dir, the context will only contain the files that are actually required for building the image. (If the error still persists, you need to change the permissions on your files so that the Docker client can access them.)
Exclude files from the context using a .dockerignore file. Most of the time, this is probably what you want to be doing.
From the official Docker documentation:
Before the docker CLI sends the context to the docker daemon, it looks for a file named .dockerignore in the root directory of the context. If this file exists, the CLI modifies the context to exclude files and directories that match patterns in it.
To answer the question
I would create a .dockerignore file in the same directory as the Dockerfile: ~/.dockerignore with the following contents:
# By default, ignore everything
*
# Add exception for the directories you actually want to include in the context
!project-source-code
!project-config-dir
# source files
!*.py
!*.sh
Further reading:
Official Docker documentation
I found this blog very helpful
Docker grants read and write rights only to the owner of the file, and sometimes the error is thrown if the user trying to build is different from the owner.
You could create a docker group and add the users there.
On Debian that would be as follows:
sudo groupadd docker
sudo usermod -aG docker $USER
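Note that the group change only applies to new login sessions, so it may be necessary to log out and back in, or to start a shell with the group already applied, before retrying the build:
newgrp docker                   # or log out and log back in
docker build -t drtuts:latest .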
I was also getting the same error message on Windows 10 Home version.
I resolved it by the following steps:
Create a directory called 'dockerfiles' under C:\Users\(username)
Keep the Dockerfile (without any extension) under the newly created directory, as mentioned in step (1).
Now run the command (from the C:\Users\(username) directory):
docker build -t <image-name> ./dockerfiles
It worked like a breeze!
I was having the same issue, but working on Linux:
$ docker build -t foramontano/openldap-schema:0.1.0 --rm .
error checking context: 'can't stat '/home/isidoronm/foramontano/openldap_docker/.docker/config/cn=config''.
I was able to solve the problem by including the directory referred to in the log (/home/isidoronm/foramontano/openldap_docker/.docker) in the .dockerignore file located in the directory containing my Dockerfile (/home/isidoronm/foramontano/openldap_docker).
isidoronm@vps811154:~/foramontano/openldap_docker$ ls -al
total 48
drwxrwxr-x 5 isidoronm isidoronm 4096 Jun 16 18:04 .
drwxrwxr-x 9 isidoronm isidoronm 4096 Jun 15 17:01 ..
drwxrwxr-x 4 isidoronm isidoronm 4096 Jun 16 17:08 .docker
-rw-rw-r-- 1 isidoronm isidoronm 43 Jun 13 17:25 .dockerignore
-rw-rw-r-- 1 isidoronm isidoronm 214 Jun 9 22:04 .env
drwxrwxr-x 8 isidoronm isidoronm 4096 Jun 13 17:37 .git
-rw-rw-r-- 1 isidoronm isidoronm 5 Jun 13 17:25 .gitignore
-rw-rw-r-- 1 isidoronm isidoronm 408 Jun 16 18:03 Dockerfile
-rw-rw-r-- 1 isidoronm isidoronm 1106 Jun 16 17:10 Makefile
-rw-rw-r-- 1 isidoronm isidoronm 18 Jun 13 17:36 README.md
-rw-rw-r-- 1 isidoronm isidoronm 1877 Jun 12 12:11 docker-compose.yaml
drwxrwxr-x 3 isidoronm isidoronm 4096 Jun 13 10:51 service
Maybe something similar is valid on Windows 10.
I got the same problem in Ubuntu; I just added sudo before docker build.
Here are my steps on Windows 10 that worked. Open Command Prompt Window as Administrator:
cd c:\users\ashok\Documents
mkdir dockerfiles
cd dockerfiles
touch dockerfile
notepad dockerfile # Write/paste here the contents of dockerfile
docker build -t jupyternotebook -f dockerfile .
This problem is caused by a permission issue.
I recommend checking the permissions of the user in question on the files referenced by the Dockerfile; nothing is wrong with the path itself.
I faced a similar situation on Windows 10 and was able to resolve it using the following approach:
Navigate to the directory where your Dockerfile is stored from your CLI (I used PowerShell).
Run the docker build . command
It seems like you are using the bash shell on Windows 10; when I tried it, docker wasn't even recognized as a command (you can check with docker --version).
On Windows 10:
Open the command line:
c:\users\User>
mkdir dockerfiles
cd dockerfiles
notepad Dockerfile.txt
After Notepad opens, type your Docker commands and then save your Dockerfile.
Back at the command line, build it ("python-hello-world" is used as an example tag):
docker image build -t python-hello-world -f ./Dockerfile.txt .
I am writing this because it will be helpful to people who experiment with AppArmor.
I also got this problem on my Ubuntu machine. It happened because I had run the "aa-genprof docker" command to scan the "docker" command and create an AppArmor profile. The scan created the file usr.bin.docker in the "/etc/apparmor.d" directory, which added an AppArmor profile for the docker command. After removing that file and rebooting the machine, docker ran perfectly again.
If you arrive here with this issue on a Mac, for me I just had to cd back out of the directory containing the Dockerfile, then cd back in and rerun the command and it worked.
I had the same issue:
error checking context: 'no permission to read from '\?\C:\Users\userprofile \AppData\Local\AMD\DxCache\36068d231d1a87cd8b4bf677054f884d500b364064779e16..bin''.
It kept raising this issue; there was no problem with permissions, and running the command with different switches didn't help either.
I created a dockerfiles folder, put the Dockerfile there, and ran the command from that folder path:
C:\Users\ShaikhNaushad\dockerfiles> docker build -t naushad/debian .
And it worked.
I faced the same issue while trying to build a docker image from a Dockerfile placed in /Volumes/work. It worked fine when I created a folder dockerfiles within /Volumes/work and placed my Dockerfile in that.
Error? The folder or directory that Docker (or the current user) is trying to modify has a different owner (remember that only the owners/creators of particular files have the explicit right to modify them).
Or it could be that the file has already been created, and the error is thrown when a new create statement for a similar file is issued.
Solution? Revert ownership of that particular folder and its contents to the current user. This can be done in two ways:
Delete the file with sudo rm and recreate it yourself, so that you become its owner/creator. Do this only if the data in the file can be rebuilt or is not crucial.
Change the file permissions for that file; Linux users can follow this tutorial: https://www.tomshardware.com/how-to/change-file-directory-permissions-linux
In my case I keep data on a separate hard disk and the system (Linux Mint) on an SSD. The problem was solved when I moved the files to the system disk (the SSD).
PS: I had used the command sudo chown -R $USER <path-to-folder> and it did not solve the problem.

jenkins unable to run " /build-tools/17.0.0/aapt "

Does anyone know why Jenkins is unable to run this file and gives a "no such file or directory" error?
The file is there, and it has the following permissions:
root@ott-ci-01:/usr/local/lib/android-sdk-linux/build-tools/17.0.0# ls -la /usr/local/lib/android-sdk-linux/build-tools/17.0.0/aapt
-rwxr-xr-x 1 root root 1122758 Jun 17 10:07 /usr/local/lib/android-sdk-linux/build-tools/17.0.0/aapt
Any ideas?
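A guess worth checking (not confirmed by the question): aapt from build-tools 17.x is a 32-bit binary, and on a 64-bit host without 32-bit support the kernel reports "No such file or directory" for the missing ELF loader even though the file itself exists. Something like this would confirm it:
file /usr/local/lib/android-sdk-linux/build-tools/17.0.0/aapt
# "ELF 32-bit LSB executable" on a 64-bit host points at missing 32-bit libraries;
# on Debian/Ubuntu the usual remedy is along the lines of:
# sudo dpkg --add-architecture i386 && sudo apt-get update
# sudo apt-get install libc6:i386 libstdc++6:i386 zlib1g:i386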
