Jenkins pipeline docker-compose build
My Jenkins pipeline build script runs:
docker-compose build --no-cache
and fails with:
PermissionError: [Errno 13] Permission denied: '/var/lib/jenkins/workspace
This was a permissions issue: the Jenkins user did not own the workspace. The following command fixed it:
chown -R jenkins /var/lib/jenkins/workspace
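Before (or after) running the chown, it can help to confirm which account the Jenkins daemon actually runs as and who currently owns the workspace. A sketch, assuming a standard package install under /var/lib/jenkins:

```shell
# Which user is the Jenkins daemon running as?
ps aux | grep '[j]enkins'
# Who currently owns the workspace tree?
ls -ld /var/lib/jenkins/workspace
# Hand the tree back to the jenkins user, then re-run the build
sudo chown -R jenkins /var/lib/jenkins/workspace
```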
When I point the Docker storage directory at NFS (/data), following
https://www.ibm.com/docs/en/cloud-private/3.2.x?topic=pyci-specifying-default-docker-storage-directory-by-using-bind-mount
sudo rm -rf /var/lib/docker
sudo mkdir /var/lib/docker
sudo mkdir /data/docker
sudo mount --rbind /data/docker /var/lib/docker
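Note that a mount --rbind done by hand does not persist across reboots; the same bind can also be recorded in /etc/fstab (a sketch, assuming the same paths as above):

```
/data/docker  /var/lib/docker  none  rbind  0  0
```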
Docker then constantly runs into problems like the following.
When I try to reinstall torch from a Dockerfile:
Installing collected packages: torch, pillow, torchvision, torchaudio
Attempting uninstall: torch
Found existing installation: torch 1.9.0a0+c3d40fd
Uninstalling torch-1.9.0a0+c3d40fd:
Successfully uninstalled torch-1.9.0a0+c3d40fd
ERROR: Could not install packages due to an OSError: [Errno 16] Device or resource busy: 'Modules_CUDA_fix'
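The failure happens while pip deletes the old installation's files: NFS "silly-renames" files that are still held open (the .nfsXXXX files), so the directory cannot be removed and the unlink fails with EBUSY. One commonly used workaround, a sketch, is to skip the uninstall step entirely with pip's `--ignore-installed` flag:

```
pip install --ignore-installed torch
```

The new files then overwrite the old installation instead of pip trying to delete it first.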
When I try to use multiprocessing in a Docker container:
File "/root/anaconda3/envs/xi-tts-ml/lib/python3.8/multiprocessing/util.py", line 300, in _run_finalizers
finalizer()
File "/root/anaconda3/envs/xi-tts-ml/lib/python3.8/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/root/anaconda3/envs/xi-tts-ml/lib/python3.8/multiprocessing/util.py", line 133, in _remove_temp_dir
rmtree(tempdir)
File "/root/anaconda3/envs/xi-tts-ml/lib/python3.8/shutil.py", line 718, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/root/anaconda3/envs/xi-tts-ml/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd
onerror(os.unlink, fullname, sys.exc_info())
File "/root/anaconda3/envs/xi-tts-ml/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd
os.unlink(entry.name, dir_fd=topfd)
OSError: [Errno 16] Device or resource busy: '.nfs00000001c05e719f0004e474'
Is there a way to solve this without unmounting the NFS share or stopping the Docker containers?
I am working on a project that uses Rasa. When I run sudo docker-compose up, I get the following error:
Starting ask-my-doctor_rasa_1 ... done
Attaching to ask-my-doctor_ngrok_1, ask-my-doctor_rasa_1
rasa_1 | bash: line 15: /app/credentials.yml: Permission denied
rasa_1 | bash: line 16: /app/train_logs.txt: Permission denied
rasa_1 | bash: line 17: /app/run_actions_logs.txt: Permission denied
rasa_1 | 2021-06-07 14:02:52 DEBUG rasa.telemetry - Could not read telemetry settings from configuration file: Configuration 'metrics' key not found.
rasa_1 | 2021-06-07 14:02:52 WARNING rasa.utils.common - Failed to write global config. Error: [Errno 13] Permission denied: '/.config'. Skipping.
rasa_1 | 2021-06-07 14:02:53 DEBUG rasa.cli.run - 'models' not found. Using default location 'models' instead.
rasa_1 | Traceback (most recent call last):
rasa_1 | File "/opt/venv/bin/rasa", line 8, in <module>
rasa_1 | sys.exit(main())
rasa_1 | File "/opt/venv/lib/python3.8/site-packages/rasa/__main__.py", line 117, in main
rasa_1 | cmdline_arguments.func(cmdline_arguments)
rasa_1 | File "/opt/venv/lib/python3.8/site-packages/rasa/cli/run.py", line 118, in run
rasa_1 | args.model = _validate_model_path(args.model, "model", DEFAULT_MODELS_PATH)
rasa_1 | File "/opt/venv/lib/python3.8/site-packages/rasa/cli/run.py", line 71, in _validate_model_path
rasa_1 | os.makedirs(default, exist_ok=True)
rasa_1 | File "/usr/lib/python3.8/os.py", line 223, in makedirs
rasa_1 | mkdir(name, mode)
rasa_1 | PermissionError: [Errno 13] Permission denied: 'models'
ask-my-doctor_rasa_1 exited with code 1
I have also tried keeping the container running, logging in, and creating a file just to check; there as well I get the "permission denied" message.
How do I solve this permission issue?
Any help would be appreciated.
When using the docker-compose method to run Rasa, several folders will be mounted into the containers.
I suspect that the group and permissions of the mounted directories are not correct.
Can you please try this:
# Go to root folder of your deployment folder.
# Default is /etc/rasa, but from your error image, I see it is different.
cd ~/Project/ask-my-doctor
# Set group & permissions for everything in that folder
sudo chgrp -R root * && sudo chmod -R 770 *
# Correct group & permissions for the database
sudo chown -R 1001 db && sudo chmod -R 750 db
A detailed explanation of these steps can be found in the Rasa docs for Linux/macOS users:
The Rasa containers are following Docker’s best practices and are not running as root user. Hence, please make sure that the root group has read and write access to the following directories and their content:
/etc/rasa/credentials.yml
/etc/rasa/endpoints.yml
/etc/rasa/environments.yml
/etc/rasa/auth
/etc/rasa/certs
/etc/rasa/credentials
/etc/rasa/models
/etc/rasa/logs
To set the permissions and group for everything in /etc/rasa you can use this command, but make sure to correct it for the /etc/rasa/db directory as described in the next step:
sudo chgrp -R root /etc/rasa/* && sudo chmod -R 770 /etc/rasa/*
If you are mounting different or extra directories, please adapt their permissions accordingly.
Postgres Database Storage
Configure persistent Postgres database storage
On Linux, a local directory is used for persistent Postgres database storage.
You must set the correct owner and permissions of the database persistence directory using this command:
sudo chown -R 1001 /etc/rasa/db && sudo chmod -R 750 /etc/rasa/db
I'm running Ansible in a container and getting:
ansible-playbook --version
Unhandled error:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/ansible/utils/path.py", line 85, in makedirs_safe
os.makedirs(b_rpath, mode)
File "/usr/lib/python3.8/os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/usr/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: b'/.ansible'
and more errors including
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/ansible-playbook", line 62, in <module>
import ansible.constants as C
File "/usr/local/lib/python3.8/dist-packages/ansible/constants.py", line 174, in <module>
config = ConfigManager()
File "/usr/local/lib/python3.8/dist-packages/ansible/config/manager.py", line 291, in __init__
self.update_config_data()
File "/usr/local/lib/python3.8/dist-packages/ansible/config/manager.py", line 571, in update_config_data
raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
ansible.errors.AnsibleError: Invalid settings supplied for DEFAULT_LOCAL_TMP: Unable to create local directories(/.ansible/tmp): [Errno 13] Permission denied: b'/.ansible'
This is the Dockerfile I'm using:
FROM ubuntu
ENV ANSIBLE_VERSION 2.9.9
# Install Ansible.
RUN apt-get update && apt-get install -y curl unzip ca-certificates python3 python3-pip \
&& pip3 install ansible==${ANSIBLE_VERSION} \
&& apt-get clean all
# Define default command.
CMD ["/usr/bin/bash"]
This works locally. But it does not inside a docker container in EKS.
Any idea what's wrong?
I was having the same problem. I am running Jenkins in a docker container. I tried three different GitHub ansible images. None of that mattered. What worked was changing this ...
stage('Execute AD Hoc Ansible.') {
    steps {
        script {
            sh """
                ansible ${PATTERN} -i ${INVENTORY} -l "${LIMIT}" -m ${MODULE} -a ${DASH_A} ${EXTRA_PARAMS}
            """
        }
    }
}
... to this ...
stage('Execute AD Hoc Ansible.') {
    steps {
        script {
            env.DEFAULT_LOCAL_TMP = env.WORKSPACE_TMP
            env.HOME = env.WORKSPACE
            sh """
                ansible ${PATTERN} -i ${INVENTORY} -l "${LIMIT}" -m ${MODULE} -a ${DASH_A} ${EXTRA_PARAMS}
            """
        }
    }
}
Note I had to set env vars with these lines:
env.DEFAULT_LOCAL_TMP = env.WORKSPACE_TMP
env.HOME = env.WORKSPACE
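Equivalently, outside of a Jenkinsfile you can point Ansible's HOME and local temp directory at any writable location through environment variables; `ANSIBLE_LOCAL_TEMP` is the documented environment variable behind the `DEFAULT_LOCAL_TMP` setting. A minimal sketch, using a throwaway directory to stand in for a writable workspace:

```shell
# Create a guaranteed-writable sandbox and point Ansible's paths at it,
# so it no longer tries to create /.ansible at the filesystem root
export HOME="$(mktemp -d)"
export ANSIBLE_LOCAL_TEMP="$HOME/.ansible/tmp"
mkdir -p "$ANSIBLE_LOCAL_TEMP"
# Ansible would now create its temp files here instead of /.ansible/tmp
test -w "$ANSIBLE_LOCAL_TEMP" && echo "local tmp is writable"
```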
Following this thread, I have solved it successfully.
https://stackoverflow.com/a/35180089/17758190
I edited ansible.cfg and set:
remote_tmp = /tmp/.ansible/tmp
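For reference, `remote_tmp` lives in the `[defaults]` section of ansible.cfg; the local counterpart (the setting behind the `/.ansible` error in this question) is `local_tmp`. A sketch:

```ini
[defaults]
# temp dir created on managed hosts
remote_tmp = /tmp/.ansible/tmp
# temp dir created on the controller (the path that failed with Errno 13 here)
local_tmp = /tmp/.ansible/tmp
```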
Hi, I am working on a CI/CD implementation on OpenShift 3.9. I have a Jenkins pod running in OpenShift. I am running Selenium scripts in Jenkins, and below is the error I am getting, about a missing package:
Running TestSuite
/var/lib/jenkins/jobs/Pipeline/workspace/src/test/resources/chromedriver: error while loading shared libraries: libgconf-2.so.4: cannot open shared object file: No such file or directory
Nov 21, 2018 8:25:36 AM org.openqa.selenium.os.OsProcess checkForError
SEVERE: org.apache.commons.exec.ExecuteException: Process exited with an error: 127 (Exit value: 127)
Tests run: 8, Failures: 1, Errors: 0, Skipped: 7, Time elapsed: 21.9 sec <<< FAILURE! - in TestSuite
BrowserSettings(SecurityCheckList) Time elapsed: 21.273 sec <<< FAILURE!
org.openqa.selenium.WebDriverException: Timed out waiting for driver server to start.
Build info: version: '3.9.1', revision: '63f7b50', time: '2018-02-07T22:25:02.294Z'
System info: host: 'jenkins-1-7zgld', ip: '10.131.0.32', os.name: 'Linux', os.arch: 'i386', os.version: '3.10.0-957.el7.x86_64', java.version: '1.8.0_181'
Driver info: driver.version: ChromeDriver
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at com.google.common.util.concurrent.SimpleTimeLimiter.callWithTimeout(SimpleTimeLimiter.java:148)
at org.openqa.selenium.net.UrlChecker.waitUntilAvailable(UrlChecker.java:75)
at org.openqa.selenium.remote.service.DriverService.waitUntilAvailable(DriverService.java:187)
at org.openqa.selenium.remote.service.DriverService.start(DriverService.java:178)
at org.openqa.selenium.remote.service.DriverCommandExecutor.execute(DriverCommandExecutor.java:79)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:601)
at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:219)
To fix that, I want to install libgconf-2-4 in my Jenkins container with the command below:
yum install libgconf-2-4
When I try to install it, the following error comes up in my Jenkins container:
sh-4.2$ yum install libgconf2-4
Loaded plugins: ovl, product-id, search-disabled-repos, subscription-manager
[Errno 13] Permission denied: '/etc/pki/entitlement-host'
ovl: Error while doing RPMdb copy-up:
[Errno 13] Permission denied: '/var/lib/rpm/.dbenv.lock'
You need to be root to perform this command.
When I go to the specified location and try to change the permissions with chmod 777 .dbenv.lock:
sh-4.2$ cd /var/lib/rpm/
sh-4.2$ ls -latr
total 19560
-rw-r--r--. 1 root root 0 Aug 9 18:21 .dbenv.lock
it throws this error:
sh-4.2$ chmod 777 .dbenv.lock
chmod: changing permissions of ‘.dbenv.lock’: Operation not permitted
My question is: how do I get into the Jenkins pod as the root user in OpenShift and install the libgconf-2-4 package via yum install libgconf-2-4?
It seems you should customize the Jenkins image as follows. [0]
Create the Dockerfile.
FROM registry.access.redhat.com/openshift3/jenkins-2-rhel7
USER 0
RUN yum -y install libgconf2-4 && yum clean all -y
USER 1001
Build the image from the Dockerfile, giving it a local tag:
docker build -t jenkins-2-rhel7-custom .
Log in to the internal registry of OpenShift to push the image, using your session token as the password:
docker login -u admin -p $(oc whoami -t) docker-registry.default.svc:5000
Retag it to match the OpenShift image format and your tag policy:
docker tag jenkins-2-rhel7-custom docker-registry.default.svc:5000/openshift/jenkins-2-rhel7-custom
Push the image.
docker push docker-registry.default.svc:5000/openshift/jenkins-2-rhel7-custom
Edit your deploymentConfig
oc edit dc/jenkins
...
containers:
...
image: "openshift/jenkins-2-rhel7-custom"
...
I hope it helps you. :^)
[0]General Container Image Guidelines
You can use USER root in your Dockerfile; that will solve your problem.
I've created an MLOPS project on BlueData 4.0 and mounted the Project Repo (NFS) folder. I created the NFS service on CentOS 7.x as below:
sudo yum -y install nfs-utils
sudo mkdir /nfsroot
echo '/nfsroot *(rw,no_root_squash,no_subtree_check)' | sudo tee /etc/exports
sudo exportfs -r
sudo systemctl enable nfs-server.service
sudo systemctl start nfs-server.service
I'm now trying to access a data set stored in the NFS Project Repo, but I'm receiving the following error:
PermissionError: [Errno 13] Permission denied: '/bd-fs-mnt/path/data.csv'
Any idea how I can fix this?
It appears the project repo is created with root as the owner and no write permission at the group level.
To fix it, you need to:
Create a notebook cluster.
Open a Jupyter terminal.
Run sudo chmod -R 777 /bd-fs-mnt/nfsrepo (this only works if you created the cluster as tenant admin; as a regular user you don't have sudo permission).
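If making the whole tree world-writable is too broad, a narrower alternative (a sketch, assuming the notebook user itself should own the data) is to change ownership instead of permissions:

```shell
# Give the tree to the current notebook user rather than opening it to everyone;
# u+rwX adds execute only where it makes sense (directories and existing executables)
sudo chown -R "$(id -u):$(id -g)" /bd-fs-mnt/nfsrepo
sudo chmod -R u+rwX /bd-fs-mnt/nfsrepo
```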