TL;DR
Does anyone know how to solve the "Failed asserting that ownership of "/var/opt/gitlab/git-data" was git" error?
Background:
I want to set up the GitLab Docker image on Windows Server 2012 R2 running Docker Toolbox, version 17.04.0-ce, build 4845c56.
Issue/Question
I can't get the shared folder to work properly on the D drive of the server. I read that I needed to add the folder to the VirtualBox VM, which I did via the Settings/Shared Folders menu in the VirtualBox GUI. I gave it the name "gitlab", pointed it at the path "D:\data\gitlab", checked auto-mount and make permanent, and set it to full access.
I started the docker machine and ran "docker-machine ssh $machine-name". I noticed that there was no /media directory, so I created a folder in the home directory (/home/docker/gitlab) and then mounted the shared folder using the following command, which I found in several forums:
sudo mount -t vboxsf gitlab /home/docker/gitlab
At this point I can add files to the Windows host directory or the Docker VM and it seems to work fine and the test files show up.
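(Side note: this mount does not survive a restart of the docker machine, because most of the Toolbox VM's filesystem is ephemeral. A workaround I've seen suggested is to re-create the mount from /var/lib/boot2docker/bootlocal.sh, which boot2docker runs at boot. A minimal sketch, assuming the standard boot2docker VM used by Docker Toolbox and its default docker user uid/gid:)
# Persist the vboxsf mount across docker-machine restarts.
# uid=1000/gid=50 are assumed to be the docker user and staff group in the Toolbox VM.
sudo tee /var/lib/boot2docker/bootlocal.sh >/dev/null <<'EOF'
#!/bin/sh
mkdir -p /home/docker/gitlab
mount -t vboxsf -o uid=1000,gid=50 gitlab /home/docker/gitlab
EOF
sudo chmod +x /var/lib/boot2docker/bootlocal.sh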
Now when I spin up the GitLab Docker image, I use the following command, modified from their documentation:
docker run --detach --hostname gitlab.example.com --publish 80:80 --name gitlab --volume /home/docker/gitlab:/etc/gitlab:Z --volume /home/docker/gitlab/logs:/var/log/gitlab:Z --volume /home/docker/gitlab/data:/var/opt/gitlab:Z gitlab/gitlab-ce
I know it is writing to the shared drive, because all of these files are generated, but the container crashes after a few seconds and I get the following error log.
Error Log:
Thank you for using GitLab Docker Image!
Current version: gitlab-ce=9.3.6-ce.0
Configure GitLab for your system by editing /etc/gitlab/gitlab.rb file
And restart this container to reload settings.
To do it use docker exec:
docker exec -it gitlab vim /etc/gitlab/gitlab.rb
docker restart gitlab
For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md
If this container fails to start due to permission problems try to fix it by executing:
docker exec -it gitlab update-permissions
docker restart gitlab
Installing gitlab.rb config...
Generating ssh_host_rsa_key...
Generating public/private rsa key pair.
Your identification has been saved in /etc/gitlab/ssh_host_rsa_key.
Your public key has been saved in /etc/gitlab/ssh_host_rsa_key.pub.
The key fingerprint is:
SHA256:GyFlf9tl7ZuEbuE+dwZUYiyahdsRzpC1T7kwyUvoD+o root@gitlab.example.com
The key's randomart image is:
+---[RSA 2048]----+
| o .+oo |
| o .o*+o+.o|
| . . o*#+oo+|
| . o+o.Oo= |
| S o o++..|
| + oo + o|
| o .+ + |
| . o. .o|
| E .o..|
+----[SHA256]-----+
Generating ssh_host_ecdsa_key...
Generating public/private ecdsa key pair.
Your identification has been saved in /etc/gitlab/ssh_host_ecdsa_key.
Your public key has been saved in /etc/gitlab/ssh_host_ecdsa_key.pub.
The key fingerprint is:
SHA256:Kb99jG8EtMuTSdIuqBT3GLeD1D0wwTEcQhKgVJUlBjs root@gitlab.example.com
The key's randomart image is:
+---[ECDSA 256]---+
| .o+=*=+=+ |
|.. oo..=.. |
|. E . * . |
| o + +.B |
| +.BS* * |
| . +o= B . |
| . . .o = |
| . o. + |
| . .+. |
+----[SHA256]-----+
Generating ssh_host_ed25519_key...
Generating public/private ed25519 key pair.
Your identification has been saved in /etc/gitlab/ssh_host_ed25519_key.
Your public key has been saved in /etc/gitlab/ssh_host_ed25519_key.pub.
The key fingerprint is:
SHA256:lVxpu0UoyNPWVY6D9c+m/bUTyvKP6vuR4cTOYwQ0j+U root@gitlab.example.com
The key's randomart image is:
+--[ED25519 256]--+
| . o +.=o..|
| +.=o#o.+ |
| o+=.Eo o|
| . + .o.|
| S B +|
| B o= |
| .Oo +|
| ..o+.+|
| .+*+.oo|
+----[SHA256]-----+
Preparing services...
Starting services...
Configuring GitLab package...
/opt/gitlab/embedded/bin/runsvdir-start: line 24: ulimit: pending signals: cannot modify limit: Operation not permitted
/opt/gitlab/embedded/bin/runsvdir-start: line 34: ulimit: max user processes: cannot modify limit: Operation not permitted
/opt/gitlab/embedded/bin/runsvdir-start: line 37: /proc/sys/fs/file-max: Read-only file system
Configuring GitLab...
================================================================================
Error executing action `run` on resource 'ruby_block[directory resource: /var/opt/gitlab/git-data]'
================================================================================
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Failed asserting that ownership of "/var/opt/gitlab/git-data" was git
---- Begin output of set -x && [ "$(stat --printf='%U' $(readlink -f /var/opt/gitlab/git-data))" = 'git' ] ----
STDOUT:
STDERR: + readlink -f /var/opt/gitlab/git-data
+ stat --printf=%U /var/opt/gitlab/git-data
+ [ UNKNOWN = git ]
---- End output of set -x && [ "$(stat --printf='%U' $(readlink -f /var/opt/gitlab/git-data))" = 'git' ] ----
Ran set -x && [ "$(stat --printf='%U' $(readlink -f /var/opt/gitlab/git-data))" = 'git' ] returned 1
Cookbook Trace:
---------------
/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/libraries/storage_directory_helper.rb:124:in `validate_command'
/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/libraries/storage_directory_helper.rb:112:in `block in validate'
/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/libraries/storage_directory_helper.rb:111:in `each_index'
/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/libraries/storage_directory_helper.rb:111:in `validate'
/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/libraries/storage_directory_helper.rb:87:in `validate!'
/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/definitions/storage_directory.rb:35:in `block (3 levels) in from_file'
Resource Declaration:
---------------------
# In /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/definitions/storage_directory.rb
26: ruby_block "directory resource: #{params[:path]}" do
27: block do
28: # Ensure the directory exists
29: storage_helper.ensure_directory_exists(params[:path])
30:
31: # Ensure the permissions are set
32: storage_helper.ensure_permissions_set(params[:path])
33:
34: # Error out if we have not achieved the target permissions
35: storage_helper.validate!(params[:path])
36: end
37: not_if { storage_helper.validate(params[:path]) }
38: end
39: end
Compiled Resource:
------------------
# Declared in /opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/definitions/storage_directory.rb:26:in `block in from_file'
ruby_block("directory resource: /var/opt/gitlab/git-data") do
params {:path=>"/var/opt/gitlab/git-data", :owner=>"git", :group=>nil, :mode=>"0700", :name=>"/var/opt/gitlab/git-data"}
action [:run]
retries 0
retry_delay 2
default_guard_interpreter :default
block_name "directory resource: /var/opt/gitlab/git-data"
declared_type :ruby_block
cookbook_name "gitlab"
recipe_name "gitlab-shell"
block #<Proc:0x000000054a99a8@/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/definitions/storage_directory.rb:27>
not_if { #code block }
end
Platform:
---------
x86_64-linux
Does anyone know how to solve the "Failed asserting that ownership of "/var/opt/gitlab/git-data" was git" error? I'm still somewhat new to Docker/setting up Gitlab, so it's very possible I could have overlooked something simple. I've spent several hours Googling this, and it seems that others also have a lot of issues getting shared folders to work from Windows using the Docker Toolbox, so hopefully this will help others as well.
Background
One solution (maybe not the best) for those of us stuck in a world without native Docker is to use a vdi drive plus a shared folder. The vdi drive can live on any drive we want (which is important if you don't want to use the C drive) and allows the GitLab container to chown anything it wants, so this is where we'll store the persistent volumes. The downside is that a vdi is not as transparent as a simple shared folder, so for backups a shared folder makes things a little bit easier and more transparent.
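(A quick way to see the underlying problem, from inside the docker machine: check who actually owns the git-data directory on the shared folder. On vboxsf the ownership is fixed by the mount options, so it will never be the git user the container expects, which is exactly what GitLab's check asserts. A minimal check, using the paths from the question above:)
docker-machine ssh <docker machine name>
stat -c '%U:%G %a' /home/docker/gitlab/data/git-data
# On a vboxsf share this reports the mount owner (typically docker or root),
# not git, regardless of what the container tries to chown it to.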
Disclaimer
I'm not an expert on any of this, so please use caution and take what I say with a grain of salt.
Steps to perform
Create a new vdi drive and shared folder on any drive you'd like
Turn off the docker machine you want to use for GitLab
In VirtualBox, go into the settings of your docker machine, then Storage, click the Add Hard Disk icon, then Create new disk
Select VDI (VirtualBox Disk Image) and click Next
Select Dynamically allocated and click Next
Choose the name and location for the vdi by clicking the folder icon with the green caret symbol, then select the maximum size the vdi can grow to, and click Create
Still in the settings menu, switch to Shared Folders and click the Add new shared folder icon
Create a gitlabbackups share pointing wherever you want and select Auto-mount and Make Permanent
Now partition and format the drive
Start/enter the docker machine (either use VBox window or docker-machine ssh <your docker machine name> from cmd prompt)
Run fdisk -l to list the available drives, and if you've only mounted the one extra vdi drive, you should see something like /dev/sdb
The next steps are irreversible, so perform them at your own discretion: run fdisk /dev/sdb, then n for a new partition, p for primary, 1 for the partition number, and finally w to write the partition table
Now format the new partition (you might need sudo as well): mkfs.ext4 /dev/sdb1
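Put together, the commands inside the docker machine look roughly like this (a sketch; fdisk defaults accepted, and the /mnt/sdb1 mount point matches the compose file and startup script below):
sudo fdisk /dev/sdb        # n (new), p (primary), 1, accept defaults, w (write)
sudo mkfs.ext4 /dev/sdb1   # format the new partition
sudo mkdir -p /mnt/sdb1    # create the mount point used below
sudo mount /dev/sdb1 /mnt/sdb1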
Run docker with persistent volumes on second vdi and backups in shared folder
Sample Dockerfile:
FROM gitlab/gitlab-ce:latest
RUN apt-get update
RUN apt-get install -y cron
# Add a cron job to backup everyday
RUN echo "0 5 * * * /opt/gitlab/bin/gitlab-rake gitlab:backup:create STRATEGY=copy CRON=1" | crontab -
# For an unknown reason, the cron job won't actually run unless cron is restarted
CMD service cron restart && \
/assets/wrapper
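Once a container built from this image is running, the cron entry can be sanity-checked from the host (container name assumed; see the note on naming below):
docker exec -it <gitlab container name> crontab -l
# should print the backup line added in the Dockerfile above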
Sample docker-compose.yml:
version: "3.0"
services:
gitlab:
build: .
restart: always
ports:
- "80:80"
volumes:
# These volumes are on the vdi we created above
- "/mnt/sdb1/etc/gitlab/:/etc/gitlab"
- "/mnt/sdb1/var/log/gitlab:/var/log/gitlab"
- "/mnt/sdb1/var/opt/gitlab:/var/opt/gitlab"
# This volume sits in the shared folder defined above
- "/gitlabbackups:/var/opt/gitlab/backups"
cap_add:
# These seem to be necessary for the mounted drive to work properly
# https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
- SYS_ADMIN
- DAC_READ_SEARCH
Because there seems to be an issue with auto-mounting the vdi, use a startup script. For example (assuming you used the D drive; replace anything inside <...> as needed), a sample run.bat:
@cd /d D:\<path to docker-compose.yml, assuming it's on the D drive>
@docker-machine start <docker machine name>
@FOR /f "tokens=*" %%i IN ('docker-machine env <docker machine name>') DO @%%i
@docker-machine ssh <docker machine name> sudo mount /dev/sdb1 /mnt/sdb1
@docker-compose build
@docker-compose up -d
@REM If the docker machine was completely off, running only 'docker-compose up -d' will
@REM not mount the volumes properly. Stopping and restarting the container results in
@REM the volumes mounting properly.
@docker stop <gitlab container name>
@docker start <gitlab container name>
@pause
Note: the GitLab container name can be found by running docker-compose up once and then docker ps -a to check it, but it usually follows the convention <directory the compose file is in>_<service name in the compose file, e.g. gitlab here>_1
Assuming all went well and you changed the values inside the <...>'s above for your situation, you should be able to run the batch file and have GitLab up and running in such a way that it stores everything on the alternate drive: persistent working files in the vdi (to get around the VBox POSIX limitations) and backups transparently stored in the shared folder.
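To confirm the shared-folder volume is doing its job before waiting for the nightly cron, a backup can be triggered by hand (a sketch; container name as described above):
docker exec -t <gitlab container name> /opt/gitlab/bin/gitlab-rake gitlab:backup:create STRATEGY=copy
# the resulting *_gitlab_backup.tar should appear in the Windows folder backing the gitlabbackups share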
Hope this helps other poor souls that don't have access to native docker yet.
Related
I'm trying to install Pi-hole on a Kubernetes cluster (running on Docker) via Helm. I'm following this guide to do so. Everything seems to go smoothly, and the deployment completes:
NAME: pihole
LAST DEPLOYED: Wed Sep 30 22:22:15 2020
NAMESPACE: pihole
STATUS: deployed
REVISION: 1
TEST SUITE: None
But the Pi-hole pod never reaches the ready state; it just restarts after a couple of minutes. Upon inspecting the pod I see:
lastState:
  terminated:
    containerID: docker://16e2a318b460d4d5aebd502175fb688fc150993940181827a506c086e2cb326a
    exitCode: 0
    finishedAt: "2020-09-30T22:01:55Z"
    reason: Completed
    startedAt: "2020-09-30T21:59:17Z"
How do I prevent this from continually restarting once it's complete?
Here is the output of kubectl logs <POD_NAME>:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 01-resolver-resolv: applying...
[fix-attrs.d] 01-resolver-resolv: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 20-start.sh: executing...
::: Starting docker specific checks & setup for docker pihole/pihole
[✓] Update local cache of available packages
[i] Existing PHP installation detected : PHP version 7.0.33-0+deb9u8
[i] Installing configs from /etc/.pihole...
[i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
[✓] Copying 01-pihole.conf to /etc/dnsmasq.d/01-pihole.conf
chown: cannot access '': No such file or directory
chmod: cannot access '': No such file or directory
chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
::: Pre existing WEBPASSWORD found
Using default DNS servers: 8.8.8.8 & 8.8.4.4
DNSMasq binding to default interface: eth0
Added ENV to php:
"PHP_ERROR_LOG" => "/var/log/lighttpd/error.log",
"ServerIP" => "0.0.0.0",
"VIRTUAL_HOST" => "pi.hole",
Using IPv4 and IPv6
::: Preexisting ad list /etc/pihole/adlists.list detected ((exiting setup_blocklists early))
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
https://mirror1.malwaredomains.com/files/justdomains
::: Testing pihole-FTL DNS: FTL started!
::: Testing lighttpd config: Syntax OK
::: All config checks passed, cleared for startup ...
::: Docker start setup complete
[✗] DNS resolution is currently unavailable
You are not alone with this issue.
Resolution is here - chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
This happens for me as well. I used that same tutorial to set up my cluster. If you are using a persistent volume as well, open an SSH connection to your drive and run these two commands.
ls -l ----> this shows the owner and group of each file; they should all be www-data. If they aren't, run the next command.
sudo chown -R www-data:www-data pihole from the /mnt/ssd directory described in the tutorial. This will also allow you to add more whitelists/blacklists/adlists from the web portal.
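(After fixing the ownership, it is worth watching the pod come back up to confirm it now reaches the ready state; the namespace comes from the helm output above:)
kubectl -n pihole get pods -w
# the pihole pod should settle at READY 1/1 instead of restarting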
My Symfony application runs perfectly fine on my development machine and on several servers, except for one where it doesn't read the .env file.
The .env file contains this variable:
root@719c10cf645d:/app# grep application_version .env
application_version=3.0.3-64-g642ed7f
The shell environment doesn't:
root@719c10cf645d:/app# env | grep application_version
root@719c10cf645d:/app# set | grep application_version
Symfony doesn't read the .env file:
root@719c10cf645d:/app# ./bin/console >/dev/null
[WARNING] Some commands could not be registered:
In EnvVarProcessor.php line 131:
Environment variable not found: "application_version".
[WARNING] Some commands could not be registered:
In EnvVarProcessor.php line 131:
Environment variable not found: "application_version".
On another server I have:
root@4535704c813a:/app# env | grep application_version
root@4535704c813a:/app# set | grep application_version
root@4535704c813a:/app# grep application_version .env
application_version=3.0.3-64-g642ed7f
root@4535704c813a:/app# ./bin/console >/dev/null
root@4535704c813a:/app#
I'm using the same docker version (19.03.5, build 633a0ea838) on both servers, same OS (Debian), and same docker images (different registry, but same SHA1). So everything should be perfectly identical.
What could be wrong here?
The environments on the two servers were different, and my config/bootstrap.php file didn't load .env files on one machine.
The bottom line is that I didn't follow these updates: https://symfony.com/doc/current/configuration/dot-env-changes.html
I don't know how my app survived without these changes for one year.
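(A quick way to tell whether config/bootstrap.php on a given server actually loads the .env file, run from /app as in the prompts above; just a diagnostic, not part of the fix:)
php -r 'require "config/bootstrap.php"; var_dump($_SERVER["application_version"] ?? null);'
# prints the version string where .env is loaded, NULL on the broken server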
A few days ago I started a little side project: Dockerizing my Hugo build on my Windows 10 machine. The Hugo container itself, which runs as a Linux container, was the easy part and seems to work, at least judging by the console output:
$ docker run --rm -it -p 1313:1313/tcp hugo:latest
Building sites …
Replace Autoprefixer browsers option to Browserslist config.
Use browserslist key in package.json or .browserslistrc file.
Using browsers option cause some error. Browserslist config
can be used for Babel, Autoprefixer, postcss-normalize and other tools.
If you really need to use option, rename it to overrideBrowserslist.
Learn more at:
https://github.com/browserslist/browserslist#readme
https://twitter.com/browserslist
WARN 2019/11/23 14:05:35 found no layout file for "HTML" for "section": You should create a template file which matches Hugo Layouts Lookup Rules for this combination.
                   | DE | EN
+------------------+----+----+
  Pages            |  9 |  7
  Paginator pages  |  0 |  0
  Non-page files   |  0 |  0
  Static files     | 25 | 25
  Processed images |  0 |  0
  Aliases          |  1 |  0
  Sitemaps         |  2 |  1
  Cleaned          |  0 |  0
Total in 680 ms
Watching for changes in /app/{assets,content,i18n,layouts,static}
Watching for config changes in /app/config.yaml
Environment: "development"
Serving pages from memory
Running in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRender
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Press Ctrl+C to stop
The Dockerfile that I run looks like this:
FROM node:13-alpine
ENV VERSION 0.59.1
EXPOSE 1313
RUN apk add --no-cache git openssl py-pygments libc6-compat g++ curl
RUN curl -L https://github.com/gohugoio/hugo/releases/download/v${VERSION}/hugo_extended_${VERSION}_Linux-64bit.tar.gz | tar -xz \
&& cp hugo /usr/bin/hugo \
&& apk del curl \
&& hugo version
WORKDIR /app
COPY assets assets
COPY content content
COPY i18n i18n
COPY layouts layouts
COPY static static
COPY package.json package.json
COPY postcss.config.js postcss.config.js
COPY config.yaml config.yaml
RUN yarn
CMD [ "hugo", "server", "--buildDrafts","--watch" ]
The hard part for me now is connecting to the running Hugo server from the browser on my host system (Windows 10 Pro).
I basically tried everything: localhost:1313 & http://172.17.0.2:1313/ (the container IP I get by running docker inspect <container ID>), with firewall enabled and disabled, but nothing seems to work.
To verify that it should work I ran hugo server --buildDrafts --watch directly on my host system and can access the server just fine. I also invested several hours in reading up on the issue, but none of the solutions seem to work in my case.
How can I solve this issue?
Here's your problem:
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Hugo is binding to the loopback address (127.0.0.1) inside the container. It does this by default because hugo serve is meant strictly as a development tool, not for actually serving pages in production. In order to avoid any security issues, it defaults to binding to the loopback interface so that you can only connect to it from the local machine.
Unfortunately, in the context of a container, localhost means "this container". So with Hugo bound to 127.0.0.1 inside a container you'll never be able to connect to it.
The solution is to provide a different bind address using the --bind option. You probably want to modify your Dockerfile so that it looks like:
CMD [ "hugo", "server", "--buildDrafts", "--watch", "--bind", "0.0.0.0" ]
This will cause hugo to bind to "all interfaces" inside the container, which should result in it working as you expect.
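If you want to test this without rebuilding the image first, the same flag can also be passed by overriding the command at run time (a sketch using the image tag from the question):
docker run --rm -it -p 1313:1313/tcp hugo:latest \
  hugo server --buildDrafts --watch --bind 0.0.0.0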
I am trying to scan BLE devices with bluepy. My scan.py code is --
from bluepy.btle import Scanner, DefaultDelegate

class ScanDelegate(DefaultDelegate):
    def __init__(self):
        DefaultDelegate.__init__(self)

    def handleDiscovery(self, dev, isNewDev, isNewData):
        if isNewDev:
            print "Discovered device", dev.addr
        elif isNewData:
            print "Received new data from", dev.addr

# prepare scanner
scanner = Scanner().withDelegate(ScanDelegate())

# scan for 5 seconds
devices = scanner.scan(5.0)

for dev in devices:
    print "Device %s (%s), RSSI=%d dB" % (dev.addr, dev.addrType, dev.rssi)
    for (adtype, desc, value) in dev.getScanData():
        print "  %s = %s" % (desc, value)
According to the documentation (mentioned at the very end as a note) --
(1) LE scanning must be run as root
That means we need to run the script with sudo. I run it as --
sudo python scan.py
Basically, bluepy-helper requires sudo to scan. To run the code without sudo, the capabilities have to be set on bluepy-helper. Following that solution, I did --
sudo setcap 'cap_net_raw,cap_net_admin+eip' /usr/local/lib/python2.7/site-packages/bluepy/bluepy-helper
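(The result can be verified with getcap from libcap, which should list the capabilities now attached to the helper binary, something like cap_net_admin,cap_net_raw+eip:)
getcap /usr/local/lib/python2.7/site-packages/bluepy/bluepy-helper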
From the terminal, the scan code now runs without sudo, like --
python scan.py
Finally, I made a Dockerfile --
FROM arm32v7/python:2.7.15-jessie
WORKDIR /usr/app/gfi_ble
COPY . /usr/app/gfi_ble
RUN chmod 755 ./setcap_for_bluepy_helper.sh
RUN pip install -r requirements.txt
CMD ["./setcap_for_bluepy_helper.sh", "--", "python", "src/scan.py"]
The content of the setcap_for_bluepy_helper.sh is --
#!/bin/bash
cmd="$#"
>&2 setcap 'cap_net_raw,cap_net_admin+eip' /usr/local/lib/python2.7/site-packages/bluepy/bluepy-helper
exec $cmd
The image is created successfully, but when I run the container I get an error like --
Creating con_gfi_ble ... done
Attaching to con_gfi_ble
con_gfi_ble | 2019-01-12 23:06:24+0000 [-] Unhandled Error
con_gfi_ble | Traceback (most recent call last):
con_gfi_ble | File "/usr/app/gfi_ble/src/scan.py", line 17, in new_devices
con_gfi_ble | devices = scanner.scan(5.0)
con_gfi_ble | File "/usr/local/lib/python2.7/site-packages/bluepy/btle.py", line 852, in scan
con_gfi_ble | self.start(passive=passive)
con_gfi_ble | File "/usr/local/lib/python2.7/site-packages/bluepy/btle.py", line 789, in start
con_gfi_ble | self._startHelper(iface=self.iface)
con_gfi_ble | File "/usr/local/lib/python2.7/site-packages/bluepy/btle.py", line 284, in _startHelper
con_gfi_ble | preexec_fn = preexec_function)
con_gfi_ble | File "/usr/local/lib/python2.7/subprocess.py", line 394, in __init__
con_gfi_ble | errread, errwrite)
con_gfi_ble | File "/usr/local/lib/python2.7/subprocess.py", line 1047, in _execute_child
con_gfi_ble | raise child_exception
con_gfi_ble | exceptions.OSError: [Errno 1] Operation not permitted
con_gfi_ble |
Question: What does exceptions.OSError: [Errno 1] Operation not permitted mean?
My code runs fine when I run it from the terminal. What's wrong with the container? Any ideas?
Docker containers run with reduced capabilities. This prevents root inside a container from escaping the container by running kernel commands without namespaces, and accessing parts of the host outside of the container, like raw network interfaces or physical devices. You need to add capabilities to the container externally if you need them, but understand this reduces the security provided by docker's default settings.
From docker run, this looks like:
docker run --cap-add=NET_ADMIN --cap-add=NET_RAW ...
https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
In a compose file, this looks like:
version: '2'
services:
  app:
    image: your_image
    cap_add:
      - NET_ADMIN
      - NET_RAW
Ref: https://docs.docker.com/compose/compose-file/
This will not work with swarm mode. Work is ongoing for adding the ability to run commands with added capabilities within swarm mode. There are ugly workarounds if you need this.
Note that you should not be running sudo inside of a container. Doing so means everything has access to promote itself to root and defeats the purpose of running anything as a user. Instead you should start the container as root and drop to a regular user as soon as possible, which is a one way operation.
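You can confirm the added capabilities actually reach the container with capsh (it ships with libcap2-bin and may need to be installed in your image first; image name taken from the compose example above):
docker run --rm --cap-add=NET_ADMIN --cap-add=NET_RAW your_image \
  sh -c 'capsh --print | grep -i cap_net'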
I am using Docker 1.13 community edition on a CentOS 7 x64 machine. When I was following a Docker Compose sample from Docker official tutorial, all things were OK until I added these lines to the docker-compose.yml file:
volumes:
  - .:/code
After adding it, I faced the following error:
can't open file 'app.py': [Errno 13] Permission denied
It seems that the problem is due to an SELinux restriction. Using this post, I ran the following command:
su -c "setenforce 0"
to solve the problem temporarily, but running this command:
chcon -Rt svirt_sandbox_file_t /path/to/volume
couldn't help me.
Finally I found the correct rule to add to SELinux:
# ausearch -c 'python' --raw | audit2allow -M my-python
# semodule -i my-python.pp
I found it when I opened the SELinux Alert Browser and clicked the 'Details' button on the row related to this error. The more detailed information from SELinux:
SELinux is preventing /usr/local/bin/python3.4 from read access on the
file app.py.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that python3.4 should be allowed read access on the
app.py file by default. Then you should report this as a bug. You can
generate a local policy module to allow this access. Do allow this
access for now by executing:
ausearch -c 'python' --raw | audit2allow -M my-python
semodule -i my-python.pp
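After loading the module, it is worth switching SELinux back to enforcing (it was set to permissive earlier with setenforce 0) and re-running the compose project to confirm the fix; the service name is whatever your docker-compose.yml defines (the official tutorial's web service is assumed here):
su -c "setenforce 1"
docker-compose up -d
docker-compose logs web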