Permission denied inside Docker container on shared directory

I'm new to Docker, so I might not have all the terminology right. Inside the container I'm getting a permission denied error on a directory shared with the host. The uid:gid appear to match on both sides and the host-side permissions are 777. The container is not meant to run in the background.
I'm using the container to run a large series of untrusted programs, one at a time, each needing the same initial conditions, so I don't think it's feasible to copy everything into the Docker image at build time. The approach that seemed best was to copy the programs one at a time into a temp directory on the host and share that directory with a fresh container for each run. I also need to collect the output of each container-run program and keep it on the host so I can compare how the outputs differ.
I have looked at the following questions/answers:
Docker: Copying files from Docker container to host
How to fix docker: Got permission denied issue - successfully used to make docker run as someone other than root
How do I add a user when I'm using Alpine as a base image? and Setting up a new user - used to create the user and group
I am:
running docker as an ordinary user uid 1000, gid 1000, also belonging to the group docker
setting permissions on the shared directory host side to be 777 with uid:gid as 1000:1000 which is the same as the user
setting the uid and gid inside the container to match uid and gid from the host
using the Dockerfile to create a uid and gid each of 1000
I read here that "If the first argument begins with a / or ~/, you're creating a bind mount. Remove that, and you're naming the volume." So I tried both. The bind-mount version seems to have the correct uid:gid but gives permission denied; the named-volume version comes out as root:root.
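To make sure I was comparing like with like, here are the two forms boiled down to minimal commands (a sketch using plain alpine rather than my own image):
# bind mount: the first field is an absolute host path
docker run --rm -v /var/tmp/host:/var/tmp/container:rw alpine ls -ld /var/tmp/container
# named volume: the first field is a bare name, the storage is managed by Docker
docker run --rm -v host:/var/tmp/container:rw alpine ls -ld /var/tmp/container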
As a newbie it's hard to know what information to share so here's everything I think might be useful:
Docker command attempt 1
[osboxes@osboxes tmp]$ pwd
/var/tmp
[osboxes@osboxes tmp]$ whoami
osboxes
[osboxes@osboxes tmp]$ grep osboxes /etc/passwd
osboxes:x:1000:1000:osboxes.org:/home/osboxes:/bin/bash
[osboxes@osboxes tmp]$ groups
osboxes wheel vboxsf docker
[osboxes@osboxes tmp]$ grep osboxes /etc/group
wheel:x:10:osboxes
osboxes:x:1000:osboxes
vboxsf:x:981:osboxes
docker:x:1001:osboxes
[osboxes@osboxes tmp]$ ls -al
total 2
drwxrwxrwt. 11 root root 4096 Dec 31 12:13 .
drwxr-xr-x. 21 root root 4096 Jul 5 05:00 ..
drwxr-xr-x. 2 abrt abrt 6 Jul 5 05:00 abrt
drwxrwxrwx. 2 osboxes osboxes 6 Dec 31 12:13 host
continues...
[osboxes@osboxes tmp]$ docker run --rm -v /var/tmp/host:/var/tmp/container:rw \
--user appuser:appgroup --workdir /var/tmp/container \
-it alpine_bash_jdk11 /bin/bash
bash-5.0$ pwd
/var/tmp/container
bash-5.0$ ls -al
ls: can't open '.': Permission denied
total 0
bash-5.0$ ls -al ..
total 0
drwxrwxrwt 1 root root 23 Dec 31 12:51 .
drwxr-xr-x 1 root root 17 Dec 16 10:31 ..
drwxrwxrwx 2 appuser appgroup 6 Dec 31 12:13 container
bash-5.0$ whoami
appuser
bash-5.0$ groups
appgroup
bash-5.0$ grep appuser /etc/passwd
appuser:x:1000:1000:Linux User,,,:/home/appuser:/sbin/nologin
bash-5.0$ grep appuser /etc/group
appgroup:x:1000:appuser
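Since the host is CentOS, I also wondered whether SELinux labels, rather than classic permissions, could be the blocker. That's only a guess; the commands below are a diagnostic sketch, not output from the runs above:
getenforce               # "Enforcing" means SELinux could be denying access
ls -ldZ /var/tmp/host    # shows the SELinux context on the shared directory
# if labels turn out to be the problem, adding :z to the -v option relabels the
# content for container use, e.g. -v /var/tmp/host:/var/tmp/container:rw,z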
Docker command attempt 2
Everything as before, except for removing the qualified path to the host's /var/tmp/host directory:
docker run --rm -v host:/var/tmp/container:rw \
--user appuser:appgroup --workdir /var/tmp/container \
-it alpine_bash_jdk11 /bin/bash
bash-5.0$ pwd
/var/tmp/container
bash-5.0$ ls -al
total 0
drwxr-xr-x 2 root root 6 Dec 31 12:13 .
drwxrwxrwt 1 root root 23 Dec 31 13:03 ..
bash-5.0$ ls -al ..
total 0
drwxrwxrwt 1 root root 23 Dec 31 13:03 .
drwxr-xr-x 1 root root 17 Dec 16 10:31 ..
drwxr-xr-x 2 root root 6 Dec 31 12:13 container
bash-5.0$ whoami
appuser
bash-5.0$ groups
appgroup
bash-5.0$ echo hello from container > container.msg.txt
bash: container.msg.txt: Permission denied
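To see what the named volume actually looks like host side, I believe (a sketch, assuming Docker's default data root) you can do:
docker volume inspect host --format '{{ .Mountpoint }}'
# typically /var/lib/docker/volumes/host/_data; inspect it as root
sudo ls -ldn /var/lib/docker/volumes/host/_data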
Docker build command
as user osboxes
docker build -t alpine_bash_jdk11 .
Dockerfile
FROM alpine:latest
RUN apk --no-cache update
RUN apk add --no-cache bash
RUN apk --no-cache add openjdk11 --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community
ENV JAVA_HOME="/usr/lib/jvm/default-jvm"
ENV PATH=$PATH:${JAVA_HOME}/bin
RUN addgroup -g 1000 -S appgroup && adduser -S appuser -G appgroup -u 1000
USER appuser
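One sanity check I could add here (a sketch, not output I actually captured): confirm the numeric IDs baked into the image really do match the host user.
id -u; id -g                           # on the host: expect 1000 and 1000
docker run --rm alpine_bash_jdk11 id   # in the image: expect uid=1000(appuser) gid=1000(appgroup)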
I haven't used Docker Compose because I'm still getting my head around basic Docker.
Virtual Machine which is the Docker Host
CentOS 7.2003 from osboxes.org, organization's decision, not mine
Linux osboxes 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
I did a yum update, then yum-installed everything needed for the VirtualBox guest additions, which are working OK
Docker version 1.13.1, build 0be3e21/1.13.1
Physical Host
Windows 10 64-bit
VirtualBox 6.1.4r136177
both these are the organization's decisions

tl;dr: had old version of docker due to wrong install command
The answer: install docker-ce instead of docker. Depending on your system that might be
sudo apt-get install -y docker-ce
or
sudo yum -y install docker-ce
instead of sudo apt-get install -y docker
or
sudo yum -y install docker
Solution: update Docker
Having found this article, I could see that I had the wrong version of Docker installed. I had reasonably assumed the correct install command was
sudo yum install -y docker
but it should have been docker-ce. Before installing the right package I had to run
yum erase -y docker docker-common
Now I have Docker version 20.10.1, build 831ebea
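For CentOS 7 the full switch-over looked roughly like this (reconstructed from Docker's install docs rather than my shell history, so treat it as a sketch):
sudo yum erase -y docker docker-common
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker
docker --version   # should now report 20.10.x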

Related

Change rights of `/etc/passwd` in Dockerfile

I am building a Podman image for OpenShift and have an issue with permissions.
Following this guide, and because OpenShift assigns a random UID with GID "0" when running containers, I chmod /etc/passwd to make it group-writable so that my OpenShift user can be added to it at runtime; but it seems that I can't always chmod that /etc/passwd file.
Minimal Dockerfile:
FROM ubuntu:20.04
USER root
RUN ls -l /etc/passwd
RUN chmod g+w /etc/passwd
RUN ls -l /etc/passwd
When building (sudo podman build . -t test-image), I see the correct rights on /etc/passwd:
STEP 1: FROM ubuntu:20.04
STEP 2: USER root
--> Using cache 2cb48e4f8eed907e057017011e20ddc47ec8f152bb7afe34ecf6be23413cd08b
STEP 3: RUN ls -l /etc/passwd
-rw-r--r--. 1 root root 926 Sep 21 16:48 /etc/passwd
79f60398d64ece1b413fd904681ad5fd0725bae4b283abce7b7a9ea017d4ebe7
STEP 4: RUN chmod g+w /etc/passwd
6ed13401ab693dbc362ec90b9c3767c1cd2244074ca4e92bd1b31b72b3e80868
STEP 5: RUN ls -l /etc/passwd
-rw-rw-r--. 1 root root 926 Sep 21 16:48 /etc/passwd
STEP 6: COMMIT test
d667b2c05818b5af3c9db85a5e7179c8e7b1c21281e31c34a871e97268716f39
When I run my container as the root user, there is no issue:
[user@server]$ sudo podman run -it test-image /bin/bash
root@295ab7b72a4d:/# ls -l /etc/passwd
-rw-rw-r--. 1 root root 926 Sep 21 16:48 /etc/passwd
But when I run with a random user, /etc/passwd rights aren't changed anymore...
[user@server]$ sudo podman run -it -u 1000701200:0 test-image /bin/bash
1000701200@a63d9bf9f53a:/$ ls -l /etc/passwd
-rw-r--r--. 1 root root 977 Oct 19 08:51 /etc/passwd
Why do the rights on /etc/passwd depend on the user who runs the container?
Thanks in advance.
EDIT 1: This is not the case with a random file I create; that works as expected. So there is something special about the /etc/passwd file.
The Podman version was the cause: updating to a 3.x version solved the issue.
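A quick way to check which side of that boundary you are on (a sketch; the exact package manager depends on your distribution):
podman --version            # anything below 3.x is worth upgrading
sudo dnf update -y podman   # on RHEL/CentOS/Fedora-style systems
podman --version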

docker volume permission issue

I am trying to launch an app, deployed on WildFly 18 in a Docker container, which connects to a PostgreSQL installation on my host. During container creation, I also map the container's WildFly log directory to a directory on my local ("host") machine via a named volume, created using the docker volume create command.
The issue is, I get a "permission denied" error when the app runs and the container tries to create log files inside the mapped volume.
My Dockerfile contents are as below:
FROM jboss/base-jdk:8
ENV WILDFLY_VERSION 18.0.1.Final
ENV WILDFLY_SHA1=ef0372589a0f08c36b15360fe7291721a7e3f7d9
ENV JBOSS_HOME /opt/jboss/wildfly
USER root
RUN cd $HOME \
&& curl -O https://download.jboss.org/wildfly/$WILDFLY_VERSION/wildfly-$WILDFLY_VERSION.tar.gz \
&& sha1sum wildfly-$WILDFLY_VERSION.tar.gz | grep $WILDFLY_SHA1 \
&& tar xf wildfly-$WILDFLY_VERSION.tar.gz \
&& mv $HOME/wildfly-$WILDFLY_VERSION $JBOSS_HOME \
&& rm wildfly-$WILDFLY_VERSION.tar.gz
COPY ./bin $JBOSS_HOME/bin
COPY ./standalone/configuration/* $JBOSS_HOME/standalone/configuration/
COPY ./modules/com $JBOSS_HOME/modules/com
COPY ./modules/system/layers/base/org/ $JBOSS_HOME/modules/system/layers/base/org/
COPY ./standalone/waffle_resource $JBOSS_HOME/standalone/waffle_resource
COPY ./standalone/waffle_resource/waffle.ear $JBOSS_HOME/standalone/deployments/
COPY ./standalone/waffle_resource/waffle-war.ear $JBOSS_HOME/standalone/deployments/
RUN chown -R jboss:jboss ${JBOSS_HOME} && chmod -R g+rw ${JBOSS_HOME}
ENV LAUNCH_JBOSS_IN_BACKGROUND true
USER jboss
EXPOSE 8989 9990
WORKDIR $JBOSS_HOME/bin
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]
As you can see above, I am using the jboss user inside the container to kick off WildFly.
The commands used to create an image and run a container and also to create a volume are as below:
docker image build -t viaduct/wildfly .
docker volume create viaduct-wildfly-logs
docker run -d -v viaduct-wildfly-logs:/opt/jboss/wildfly/standalone/log --network=host \
-e "DB_DBNAME=dbname" \
-e "DB_PORT=5432" \
-e "DB_USER=xyz" \
-e "DB_PASS=" \
-e "DB_HOST=127.0.0.1" \
--name petes viaduct/wildfly
I verified the permissions within the container and in my local "host" directory created by the docker volume create command. Also, it's worth noting that I am running WildFly as the jboss user.
The container's permissions are as below:
[jboss@localhost /]$ ll /opt/jboss/wildfly/standalone/
total 4
drwxrwxr-x 1 jboss jboss 62 Sep 18 00:24 configuration
drwxr-xr-x 6 jboss jboss 84 Sep 18 00:23 data
drwxrwxr-x 1 jboss jboss 64 Sep 18 00:24 deployments
drwxrwxr-x 1 jboss jboss 17 Nov 15 2019 lib
*drwxr-xr-x 2 root root 6 Sep 17 23:48 log*
drwxrwxr-x 1 jboss jboss 4096 Sep 18 00:24 tmp
drwxrwxr-x 1 jboss jboss 98 Sep 18 00:23 waffle_resource
[jboss@localhost /]$ exit
and the local volume permissions are as below:
[root@localhost xyz]# cd /var/lib/docker/volumes/
[root@localhost volumes]# ll
drwxrwsr-x 3 root root 19 Sep 18 11:48 viaduct-wildfly-logs
The docker volume create command creates a directory on my local machine as below:
/var/lib/docker/volumes/viaduct-wildfly-logs/_data
and the default permissions of each subdirectory are as follows, which I assume are kept restrictive for security reasons:
drwx--x--x 14 root root 182 Sep 14 09:32 docker
drwx------ 7 root root 285 Sep 18 11:48 volumes
drwxrwsr-x 3 root root 19 Sep 18 11:48 viaduct-wildfly-logs
To start with, please let me know whether my strategy is correct.
Secondly, what is the best way to fix the permission issue?
You need to create a user with the same UID/GID as in the container and give that user permission on the host directory backing this volume.
The server runs as the jboss user, which has its uid/gid set to 1000 (see the image documentation).
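In practice that could look like the following (a sketch, assuming Docker's default volume root and the uid/gid 1000 mentioned above):
docker volume inspect viaduct-wildfly-logs --format '{{ .Mountpoint }}'
# usually /var/lib/docker/volumes/viaduct-wildfly-logs/_data
sudo chown -R 1000:1000 /var/lib/docker/volumes/viaduct-wildfly-logs/_data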

Docker - Can mount an NFS share into a container but not a sub-directory of it

I have an NFS share with the following properties:
Mounted on my host on /nfs/external_disk
Owner user is test_user with UID 1234
Group is test_group with GID 2222
Permissions is 750
I have a small Dockerfile with the following content
ARG tag=lts
FROM jenkins/jenkins:${tag}
USER root
# Create a new user and new group that matches what is on the host.
ARG username=test_user
ARG groupname=test_group
ARG uid=1234
ARG gid=2222
RUN groupadd -g ${gid} ${groupname} && \
mkdir -p /users && \
useradd -l -m -u ${uid} -g ${groupname} -s /bin/bash -d /users/${username} ${username}
USER ${username}
After building the image (named custom_jenkins), when I run the following command the container starts properly and I see the original Jenkins home contents copied to the share.
docker run -td --rm -v /nfs/external_disk:/var/jenkins_home custom_jenkins
However if I want to mount a sub-directory of the NFS share, say ${NFS_SHARE}/jenkins_home, then I get an error:
docker run -td --rm -v /nfs/external_disk/jenkins_home:/var/jenkins_home custom_jenkins
docker: Error response from daemon: error while creating mount source path '/nfs/external_disk/jenkins_home': mkdir /nfs/external_disk/jenkins_home: permission denied.
Now even if I attempt to create the sub-directory myself before starting the container, I still get the same error. Even when I set the permissions of the sub-directory to be 777.
Note that I am running as test_user which has the same UID/GID as in the container and it actually owns the NFS share.
I have a feeling that when docker attempts to create a sub-directory, it attempts to create it as some different user (e.g. the "docker" user) which causes it to fail while creating the folder since it has no access inside the share.
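One way I can think of to test that hunch (only a guess, and it assumes the export uses root_squash, which would map the daemon's root to an anonymous user):
# can root on the Docker host even traverse the 750 share?
sudo ls -ld /nfs/external_disk/jenkins_home
sudo touch /nfs/external_disk/jenkins_home/.probe && echo "daemon-side writes OK"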
Can anyone help? Thanks in advance.
I tried to reproduce this and it works just fine for me, so perhaps I am missing some constraint. Hope this helps anyway. Note, at step 6, the owner and group of the file that I created from the container; this might answer one of your questions.
Step 1: I created a NFS share somewhere in my LAN
Step 2: I mounted the share on the machine that's running the docker engine
sudo mount 192.168.0.xxx:/i-data/b4024d5b/nfs/NFS /mnt/nsa320/
neo@neo-desktop:nsa320$ mount | grep NFS
192.168.0.xxx:/i-data/b4024d5b/nfs/NFS on /mnt/nsa320 type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.xxx,mountvers=3,mountport=3775,mountproto=udp,local_lock=none,addr=192.168.0.xxx)
Step 3: I created some sample files and a sub-directory:
neo@neo-desktop:nsa320$ ls -la /mnt/nsa320/
total 12
drwxrwxrwx 3 root root 4096 Jul 21 22:54 .
drwxr-xr-x 3 root root 4096 Jul 21 22:41 ..
-rw-r--r-- 1 neo neo 0 Jul 21 22:45 dummyFile
-rw-r--r-- 1 root root 0 Jul 21 22:53 fileCreatedFromContainer << THIS WAS CREATED FROM A CONTAINER THAT WAS NOT LAUNCHED WITH THE --user OPTION
drwxr-xr-x 2 neo neo 4096 Jul 21 22:54 subfolder
Step 4: Launched a dummy container and mounted the sub-directory (1000 is the UID of the user neo on my OS):
docker run -d -v /mnt/nsa320/subfolder:/var/externalMount --user 1000 alpine tail -f /dev/null
Step 5: Connected to the container to check the mount (I can read and write in the subfolder located on the NFS share):
neo@neo-desktop:nsa320$ docker exec -ti ded1dc79773e sh
/ $ ls /var/externalMount/
fileInSubfolder
/ $ touch /var/externalMount/fileInSubfolderCreatedFromContainer
Step 6: Back on the host, checking who owns the file that I created from the container:
neo@neo-desktop:nsa320$ ls -la /mnt/nsa320/subfolder/
total 8
drwxr-xr-x 2 neo neo 4096 Jul 21 23:23 .
drwxrwxrwx 3 root root 4096 Jul 21 22:54 ..
-rw-r--r-- 1 neo neo 0 Jul 21 22:54 fileInSubfolder
-rw-r--r-- 1 neo root 0 Jul 21 23:23 fileInSubfolderCreatedFromContainer
Maybe off-topic: whoami executed in the container returns just the UID:
$ whoami
whoami: unknown uid 1000
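If that bothers you, one common workaround (a sketch, not something I needed here) is to give the container a matching passwd entry, for example by bind-mounting the host's file read-only:
docker run --rm --user 1000 -v /etc/passwd:/etc/passwd:ro alpine whoami
# prints the host's account name for uid 1000 instead of "unknown uid"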

Docker: created files disappear between layers

Running Docker version 17.06.0-ce, build 02c1d87, I have a dockerfile that looks like this:
FROM maven:3.5.2-jdk-8-alpine as builder
RUN chmod -R 777 /root/.m2 &&\
mkdir -p /root/.m2/repository/com/foo/bar &&\
echo "Text" > /root/.m2/repository/com/foo/bar/baz.txt &&\
ls -R -a -l /root/.m2/repository/com/foo
RUN ls -R -a -l /root/.m2/repository/com/foo
The first RUN command successfully creates a file, but the second command can't find it:
Step 1/46 : FROM maven:3.5.2-jdk-8-alpine as builder
---> 293423a981a7
Step 2/46 : RUN chmod -R 777 /root/.m2 && mkdir -p /root/.m2/repository/com/foo/bar && echo "Text" > /root/.m2/repository/com/foo/bar/baz.txt && ls -R -a -l /root/.m2/repository/com/foo
---> Running in a1c0fd142856
/root/.m2/repository/com/foo:
total 12
drwxr-xr-x 3 root root 4096 Nov 30 13:32 .
drwxr-xr-x 3 root root 4096 Nov 30 13:32 ..
drwxr-xr-x 2 root root 4096 Nov 30 13:32 bar
/root/.m2/repository/com/foo/bar:
total 12
drwxr-xr-x 2 root root 4096 Nov 30 13:32 .
drwxr-xr-x 3 root root 4096 Nov 30 13:32 ..
-rw-r--r-- 1 root root 5 Nov 30 13:32 baz.txt
---> b997ccbfd5b0
Step 3/46 : RUN ls -R -a -l /root/.m2/repository/com/foo
---> Running in 603671c87ecc
ls: /root/.m2/repository/com/foo: No such file or directory
The command '/bin/sh -c ls -R -a -l /root/.m2/repository/com/foo' returned a non-zero code: 1
What's going on? (NB. this is a toy example, but there is a real issue in that JARs installed into the Maven repository seem to disappear between layers.)
The upstream maven image defines this directory as a volume. Once an image does this, you cannot reliably make changes to that directory in the image.
From their Dockerfile:
ARG USER_HOME_DIR="/root"
...
VOLUME "$USER_HOME_DIR/.m2"
The Dockerfile documentation describes this behavior:
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
Your options are to:
Use another directory for your build
Request that the upstream image removes this VOLUME definition
Build your own image without this definition (it's fairly easy to fork their repo and do your own build)
For more details, you can see an old blog post by me about this behavior and the problems it creates.
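If you want to see the behaviour in isolation, here is a small repro sketch. It uses the classic builder; BuildKit treats build-time writes to a VOLUME path differently, so the DOCKER_BUILDKIT=0 matters:
cat > Dockerfile.volcheck <<'EOF'
FROM alpine:3.18
VOLUME /data
RUN echo "Text" > /data/baz.txt   # written into the declared volume...
RUN ls -al /data                  # ...and gone again in the next layer
EOF
DOCKER_BUILDKIT=0 docker build -f Dockerfile.volcheck -t volcheck .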

File ownership after docker cp

How can I control which user owns the files I copy in and out of a container?
The docker cp command says this about file ownership:
The cp command behaves like the Unix cp -a command in that directories are copied recursively with permissions preserved if possible. Ownership is set to the user and primary group at the destination. For example, files copied to a container are created with UID:GID of the root user. Files copied to the local machine are created with the UID:GID of the user which invoked the docker cp command. However, if you specify the -a option, docker cp sets the ownership to the user and primary group at the source.
It says that files copied to a container are created as the root user, but that's not what I see. I create two files owned by user ids 1005 and 1006, and those owners are translated into the container's user namespace. The -a option seems to make no difference when I copy the files into the container.
$ sudo chown 1005:1005 test.txt
$ ls -l test.txt
-rw-r--r-- 1 1005 1005 29 Oct 6 12:43 test.txt
$ docker volume create sandbox1
sandbox1
$ docker run --name run1 -vsandbox1:/data alpine echo OK
OK
$ docker cp test.txt run1:/data/test1005.txt
$ docker cp -a test.txt run1:/data/test1005a.txt
$ sudo chown 1006:1006 test.txt
$ docker cp test.txt run1:/data/test1006.txt
$ docker cp -a test.txt run1:/data/test1006a.txt
$ docker run --rm -vsandbox1:/data alpine ls -l /data
total 16
-rw-r--r-- 1 1005 1005 29 Oct 6 19:43 test1005.txt
-rw-r--r-- 1 1005 1005 29 Oct 6 19:43 test1005a.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 19:43 test1006.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 19:43 test1006a.txt
When I copy files out of the container, they are always owned by me. Again, the -a option seems to do nothing.
$ docker run --rm -vsandbox1:/data alpine cp /data/test1006.txt /data/test1007.txt
$ docker run --rm -vsandbox1:/data alpine chown 1007:1007 /data/test1007.txt
$ docker cp run1:/data/test1006.txt .
$ docker cp run1:/data/test1007.txt .
$ docker cp -a run1:/data/test1006.txt test1006a.txt
$ docker cp -a run1:/data/test1007.txt test1007a.txt
$ ls -l test*.txt
-rw-r--r-- 1 don don 29 Oct 6 12:43 test1006a.txt
-rw-r--r-- 1 don don 29 Oct 6 12:43 test1006.txt
-rw-r--r-- 1 don don 29 Oct 6 12:47 test1007a.txt
-rw-r--r-- 1 don don 29 Oct 6 12:47 test1007.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 12:43 test.txt
$
You can also change the ownership by logging into the container as the root user:
docker exec -it --user root <container-id> /bin/bash
chown -R <username>:<groupname> <folder/file>
In addition to @Don Kirkby's answer, let me provide a similar example in bash/shell script for the case that you want to copy something into a container while applying different ownership and permissions than those of the original file.
Let's create a new container from a small image that will keep running by itself:
docker run -d --name nginx nginx:alpine
Now we'll create a new file, owned by the current user and with default permissions:
touch foo.bar
ls -ahl foo.bar
>> -rw-rw-r-- 1 my-user my-group 0 Sep 21 16:45 foo.bar
Copying this file into the container will set ownership and group to the UID of my user and preserve the permissions:
docker cp foo.bar nginx:/foo.bar
docker exec nginx sh -c 'ls -ahl /foo.bar'
>> -rw-rw-r-- 1 4098 4098 0 Sep 21 14:45 /foo.bar
Using a little tar work-around, however, I can change the ownership and permissions that are applied inside of the container.
tar -cf - foo.bar --mode u=+r,g=-rwx,o=-rwx --owner root --group root | docker cp - nginx:/
docker exec nginx sh -c 'ls -ahl /foo.bar'
>> -r-------- 1 root root 0 Sep 21 14:45 /foo.bar
tar options explained:
c creates a new archive instead of unpacking one.
f - will write to stdout instead of a file.
foo.bar is the input file to be packed.
--mode specifies the permissions for the target. Similar to chmod, they can be given in symbolic notation or as an octal number.
--owner sets the new owner of the file.
--group sets the new group of the file.
docker cp - reads the file that is to be copied into the container from stdin.
This approach is useful when a file needs to be copied into a created container before it starts, such that docker exec is not an option (which can only operate on running containers).
Just a one-liner (similar to @ramu's answer), using root to make the call:
docker exec -u 0 -it <container-id> chown node:node /home/node/myfile
In order to get complete control of file ownership, I used the tar stream feature of docker cp:
If - is specified for either the SRC_PATH or DEST_PATH, you can also stream a tar archive from STDIN or to STDOUT.
I launch the docker cp process, then stream a tar file to or from the process. As the tar entries go past, I can adjust the ownership and permissions however I like.
Here's a simple example in Python that copies all the files from /outputs in the sandbox1 container to the current directory, excludes the current directory so its permissions don't get changed, and forces all the files to have read/write permissions for the user.
from subprocess import Popen, PIPE, CalledProcessError
import tarfile


def main():
    export_args = ['sudo', 'docker', 'cp', 'sandbox1:/outputs/.', '-']
    exporter = Popen(export_args, stdout=PIPE)
    tar_file = tarfile.open(fileobj=exporter.stdout, mode='r|')
    tar_file.extractall('.', members=exclude_root(tar_file))
    exporter.wait()
    if exporter.returncode:
        raise CalledProcessError(exporter.returncode, export_args)


def exclude_root(tarinfos):
    print('\nOutputs:')
    for tarinfo in tarinfos:
        if tarinfo.name != '.':
            assert tarinfo.name.startswith('./'), tarinfo.name
            print(tarinfo.name[2:])
            tarinfo.mode |= 0o600
            yield tarinfo


main()
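A rough shell equivalent of the export side, for anyone who would rather skip the Python (it assumes GNU tar on the host; --no-same-owner and --no-same-permissions approximate the ownership and mode fix-ups above):
sudo docker cp sandbox1:/outputs/. - | tar -xf - --no-same-owner --no-same-permissions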
