I start a Docker container like this:
`docker run --rm \
-e "http_proxy=${http_proxy}" \
-e "https_proxy=${https_proxy}" \
-e "GOPATH=/usr/src/myapp/.go" \
-v "${PWD}":/usr/src/myapp \
-v "${PWD}/build/foo/bin":"/foo" \
-w /usr/src/myapp \
golang:1.8 /bin/sh -c "ls -l /usr/src/myapp && ls -l /usr/src/myapp/build/foo/bin && cp /usr/src/myapp/build/foo/bin/foo /bin/ && make bin_build"`
On one of my machines it works fine, but when it runs from Jenkins, it produces strange output:
`ls: cannot access /usr/src/myapp/bar.go: Permission denied
total 0
-?????????? ? ? ? ? ? bar.go`
I suspect that some user access setting messes up the picture, but I have not been able to find the culprit or a solution yet. If anyone has bumped into a similar issue before, I would appreciate their help.
It turned out that the Jenkins server was actually running CentOS, where one does not simply attach a volume to Docker... Using the following command, however, did the trick:
sudo chcon -Rt svirt_sandbox_file_t /host/folder/you/want/to/attach
Solution found in the following articles:
Permission denied on accessing host directory in docker
https://www.projectatomic.io/blog/2015/06/using-volumes-with-docker-can-cause-problems-with-selinux/
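As an aside, Docker can also apply the SELinux label for you at mount time via the z/Z volume options, avoiding the manual chcon. A minimal sketch based on the command above (z shares the label between containers, Z keeps it private to one container):
docker run --rm \
-v "${PWD}":/usr/src/myapp:z \
-w /usr/src/myapp \
golang:1.8 make bin_build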
I'm trying to create a docker container that will let me run firefox, so I can eventually use a jupyter notebook. Right now, although I have successfully installed firefox, I cannot get a window to open.
Following instructions from running-gui-apps-within-docker, I created an image (i.e. "sample") with Firefox and then tried to run it using
$ docker run -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --net=host sample
When I did so, I got the following error:
root@machine:~# firefox
No protocol specified
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :1
Using man docker run to understand the flags, I was not able to find the --net flag, though I did see a --network flag. However, replacing --net with --network didn't change anything. How do I specify a protocol that will let me create an image from whose containers I will be able to run firefox?
PS - For what it's worth, when I check the value of DISPLAY, I get the predictable:
~# echo $DISPLAY
:1
I have been running firefox inside docker for quite some time, so this is possible. With regard to the security aspects, I think the following are the relevant parts:
Building
The build needs to match up uid/gid values with the user that is running the container. I do this with UID and GID build args:
Dockerfile
...
FROM fedora:35 as runtime
ENV DISPLAY=:0
# uid and gid in container needs to match host owner of
# /tmp/.docker.xauth, so they must be passed as build arguments.
ARG UID
ARG GID
RUN \
groupadd -g ${GID} firefox && \
useradd --create-home --uid ${UID} --gid ${GID} --comment="Firefox User" firefox && \
true
...
ENTRYPOINT [ "/entrypoint.sh" ]
Makefile
build:
docker pull $$(awk '/^FROM/{print $$2}' Dockerfile | sort -u)
docker build \
-t $(USER)/firefox:latest \
-t $(USER)/firefox:`date +%Y-%m-%d_%H-%M` \
--build-arg UID=`id -u` \
--build-arg GID=`id -g` \
.
entrypoint.sh
#!/bin/sh
# Assumes you have run
# pactl load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1 auth-anonymous=1
# on the host system.
PULSE_SERVER=tcp:127.0.0.1:4713
export PULSE_SERVER
if [ "$1" = /bin/bash ]
then
exec "$#"
fi
exec /usr/local/bin/su-exec firefox:firefox \
/usr/bin/xterm \
-geometry 160x15 \
/usr/bin/firefox --no-remote "$@"
So I am running firefox as a dedicated non-root user, and I wrap it in xterm so that the container does not die if firefox accidentally exits or if you want to restart it. It is a bit annoying having all these extra xterm windows, but I have not found any other way to prevent accidental loss of the .mozilla directory contents (mapping it out to a volume would prevent running multiple independent docker instances, which I definitely want; also, from a privacy point of view, not dragging along a long history is something I want). Whenever I do want to save something, I make a copy of the .mozilla directory and save it on the host computer (and restore it later in a new container).
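For reference, saving and restoring the profile can be done with docker cp; a sketch, where firefox-old and firefox-new are hypothetical container names:
# copy the profile out of the container to the host
docker cp firefox-old:/home/firefox/.mozilla ./mozilla-backup
# later, copy the directory contents into a fresh container
docker cp ./mozilla-backup/. firefox-new:/home/firefox/.mozilla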
Running
run.sh
#!/bin/bash
export XSOCK=/tmp/.X11-unix
export XAUTH=/tmp/.docker.xauth
touch ${XAUTH}
xauth nlist ${DISPLAY} | sed -e 's/^..../ffff/' | uniq | xauth -f ${XAUTH} nmerge -
DISPLAY2=$(echo $DISPLAY | sed s/localhost//)
if [ $DISPLAY2 != $DISPLAY ]
then
export DISPLAY=$DISPLAY2
xauth nlist ${DISPLAY} | sed -e 's/^..../ffff/' | uniq | xauth -f ${XAUTH} nmerge -
fi
ARGS=$(echo "$@" | sed 's/[^a-zA-Z0-9_.-]//g')
docker run -ti --rm \
--user root \
--name firefox-"$ARGS" \
--network=host \
--memory "16g" --shm-size "1g" \
--mount "type=bind,target=/home/firefox/Downloads,src=$HOME/firefox_downloads" \
-v ${XSOCK}:${XSOCK} \
-v ${XAUTH}:${XAUTH} \
-e XAUTHORITY=${XAUTH} \
-e DISPLAY=${DISPLAY} \
${USER}/firefox "$@"
With this you can, for instance, run ./run.sh https://stackoverflow.com/ and get a container named firefox-httpsstackoverflow.com. If you then want to log into your bank completely isolated from all other firefox instances (protected by operating-system process boundaries, not just some internal browser separation), you run ./run.sh https://yourbank.example.com/.
Try running xhost + on your docker host to allow connections to the X server.
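Be aware that a bare xhost + disables X server access control for all hosts; a narrower grant is safer, for example:
# allow only local (non-network) connections to the X server
xhost +local:
# revoke the grant again when done
xhost -local: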
Is there a way to run Podman inside Podman, similar to the way you can run Docker inside Docker?
Here is a snippet of my Dockerfile which is strongly based on another question:
FROM debian:10.6
RUN apt update && apt upgrade -qqy && \
apt install -qqy iptables bridge-utils \
qemu-kvm libvirt-daemon libvirt-clients virtinst libvirt-daemon-system \
cpu-checker kmod && \
apt -qqy install curl sudo gnupg2 && \
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list && \
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/Release.key | sudo apt-key add - && \
apt update && \
apt -qqy install podman
Now trying some tests:
$ podman run -it my/test bash -c "podman --storage-driver=vfs info"
... (long output; this works fine)
$ podman run -it my/test bash -c "podman --storage-driver=vfs images"
ERRO[0000] unable to write system event: "write unixgram @000ec->/run/systemd/journal/socket: sendmsg: no such file or directory"
REPOSITORY TAG IMAGE ID CREATED SIZE
$ podman run -it my/test bash -c "podman --storage-driver=vfs run docker.io/library/hello-world"
ERRO[0000] unable to write system event: "write unixgram @000ef->/run/systemd/journal/socket: sendmsg: no such file or directory"
Trying to pull docker.io/library/hello-world...
Getting image source signatures
Copying blob 0e03bdcc26d7 done
Copying config bf756fb1ae done
Writing manifest to image destination
Storing signatures
ERRO[0003] unable to write pod event: "write unixgram @000ef->/run/systemd/journal/socket: sendmsg: no such file or directory"
ERRO[0003] Error preparing container 66692b7ff496775499d405d538769a078f2794549955cf2409fcbcbf87f42e94: error creating network namespace for container 66692b7ff496775499d405d538769a078f2794549955cf2409fcbcbf87f42e94: mount --make-rshared /var/run/netns failed: "operation not permitted"
Error: failed to mount shm tmpfs "/var/lib/containers/storage/vfs-containers/66692b7ff496775499d405d538769a078f2794549955cf2409fcbcbf87f42e94/userdata/shm": operation not permitted
I've also tried a suggestion from the other question, passing --cgroup-manager=cgroupfs, but without success:
$ podman run -it my/test bash -c "podman --storage-driver=vfs --cgroup-manager=cgroupfs run docker.io/library/hello-world"
Trying to pull docker.io/library/hello-world...
Getting image source signatures
Copying blob 0e03bdcc26d7 done
Copying config bf756fb1ae done
Writing manifest to image destination
Storing signatures
ERRO[0003] unable to write pod event: "write unixgram @000f3->/run/systemd/journal/socket: sendmsg: no such file or directory"
ERRO[0003] Error preparing container c3fff4d8161903aaebd6f89f3b3c06b55038e11e07b6b561dc6576ca675747a3: error creating network namespace for container c3fff4d8161903aaebd6f89f3b3c06b55038e11e07b6b561dc6576ca675747a3: mount --make-rshared /var/run/netns failed: "operation not permitted"
Error: failed to mount shm tmpfs "/var/lib/containers/storage/vfs-containers/c3fff4d8161903aaebd6f89f3b3c06b55038e11e07b6b561dc6576ca675747a3/userdata/shm": operation not permitted
Seems like some network configuration is needed. I found the project below, which suggests that some tweaking of the network configuration might be necessary, but I don't know the context of that or whether it applies here.
https://github.com/joshkunz/qemu-docker
EDIT: I've just discovered /var/run/podman.sock, but that didn't help either:
$ sudo podman run -it -v /run/podman/podman.sock:/run/podman/podman.sock my/test bash -c "podman --storage-driver=vfs --cgroup-manager=cgroupfs run docker.io/library/hello-world"
Trying to pull my/test...
denied: requested access to the resource is denied
Trying to pull my:test...
unauthorized: access to the requested resource is not authorized
Error: unable to pull my/test: 2 errors occurred:
* Error initializing source docker://my/test: Error reading manifest latest in docker.io/my/test: errors:
denied: requested access to the resource is denied
unauthorized: authentication required
* Error initializing source docker://quay.io/my/test:latest: Error reading manifest latest in quay.io/my/test: unauthorized: access to the requested resource is not authorized
Seems like root cannot see the images I've created under my user.
Any ideas? Thanks.
Assume we would like to run ls / in a docker.io/library/alpine container.
Standard Podman
podman run --rm docker.io/library/alpine ls /
Podman in Podman
Let's run ls / in a docker.io/library/alpine container, but this time we run podman in a quay.io/podman/stable container.
Update June 2021
A GitHub issue comment shows an example of how to run Podman in Podman as a non-root user both on the host and in the outer container. Slightly modified it would look like this:
podman \
run \
--rm \
--security-opt label=disable \
--user podman \
quay.io/podman/stable \
podman \
run \
--rm \
docker.io/library/alpine \
ls /
Here is a full example:
$ podman --version
podman version 3.2.1
$ cat /etc/fedora-release
Fedora release 34 (Thirty Four)
$ uname -r
5.12.11-300.fc34.x86_64
$ podman \
run \
--rm \
--security-opt label=disable \
--user podman \
quay.io/podman/stable \
podman \
run \
--rm \
docker.io/library/alpine \
ls /
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b
Copying config sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83
Writing manifest to image destination
Storing signatures
bin
dev
etc
home
lib
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
$
To avoid repeatedly downloading the inner container image,
create a volume
podman volume create mystorage
and add the command-line option
-v mystorage:/home/podman/.local/share/containers:rw
to the outer Podman command. In other words
podman \
run \
-v mystorage:/home/podman/.local/share/containers:rw \
--rm \
--security-opt label=disable \
--user podman \
quay.io/podman/stable \
podman \
run \
--rm \
docker.io/library/alpine \
ls /
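To confirm the cache is being used, you can list the inner images the same way; a sketch following the same pattern as above:
podman run -v mystorage:/home/podman/.local/share/containers:rw --rm --security-opt label=disable --user podman quay.io/podman/stable podman images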
Podman in Podman (outdated answer)
(This is the old, outdated answer from Dec 2020. I'll probably remove it once it's clear that the method described here is outdated.)
Let's run ls / in a docker.io/library/alpine container, but this time we run podman in a quay.io/podman/stable container.
The command will look like this:
podman \
run \
--privileged \
--rm \
--ulimit host \
-v /dev/fuse:/dev/fuse:rw \
-v ./mycontainers:/var/lib/containers:rw \
quay.io/podman/stable \
podman \
run \
--rm \
--user 0 \
docker.io/library/alpine ls
(The directory ./mycontainers is used here for container storage.)
Here is a full example
$ podman --version
podman version 2.1.1
$ mkdir mycontainers
$ podman run --privileged --rm --ulimit host -v /dev/fuse:/dev/fuse:rw -v ./mycontainers:/var/lib/containers:rw quay.io/podman/stable podman run --rm --user 0 docker.io/library/alpine ls | head -5
Trying to pull docker.io/library/alpine...
Getting image source signatures
Copying blob sha256:188c0c94c7c576fff0792aca7ec73d67a2f7f4cb3a6e53a84559337260b36964
Copying config sha256:d6e46aa2470df1d32034c6707c8041158b652f38d2a9ae3d7ad7e7532d22ebe0
Writing manifest to image destination
Storing signatures
bin
dev
etc
home
lib
$ podman run --privileged --rm --ulimit host -v /dev/fuse:/dev/fuse:rw -v ./mycontainers:/var/lib/containers:rw quay.io/podman/stable podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/library/alpine latest d6e46aa2470d 4 days ago 5.85 MB
If you would leave out -v ./mycontainers:/var/lib/containers:rw you might see the slightly confusing error message
Error: executable file `ls` not found in $PATH: No such file or directory: OCI runtime command not found error
References:
How to use Podman inside of a container Red Hat blog post from July 2021.
discussion.fedoraproject.org (discussion about not found in $PATH)
github comment (that gives advice about the correct way to run Podman in Podman)
I'm trying to use a Docker container to build a project that uses rust, and I'm trying to build as my user. I have a Dockerfile that installs rust in $HOME/.cargo, and then I docker run the container, mapping the sources from $HOME/<some/subdirs/to/project> on the host to the same subfolder in the container. The Dockerfile looks like this:
FROM ubuntu:16.04
ARG RUST_VERSION
RUN \
export DEBIAN_FRONTEND=noninteractive && \
apt-get update && \
# install library dependencies
apt-get install [... a bunch of stuff ...] && \
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION && \
echo 'source $HOME/.cargo/env' >> $HOME/.bashrc && \
echo apt-get DONE
The build container is run something like this:
docker run -i -t -d --net host --privileged \
-v /mnt:/mnt -v /dev:/dev \
--volume /home/stefan/<path/to/project>:/home/stefan/<path/to/project>:rw \
--workdir /home/stefan/<path/to/project> \
--name <container-name> \
-v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -v /etc/shadow:/etc/shadow:ro \
-u 1000 <image-name>
And then I try to exec into it and run the build script, but it can't find rust or $HOME/.cargo:
docker exec -it <container-name> bash
$ ls ~/.cargo
ls: cannot access '/home/stefan/.cargo': No such file or directory
It looks like the /home/stefan/<path/to/project> volume is masking the contents of /home/stefan in the container. Is this expected? Is there a workaround possible to be able to map the source code from a folder under $HOME on the host, but keep $HOME from the container?
I'm on Ubuntu 18.04, docker 19.03.12, on x86-64.
The Dockerfile expands variables at build time, when the user is root, so your host user doesn't exist inside the image. Try changing $HOME to /root:
echo 'source /root/.cargo/env' >> /root/.bashrc && \
I'll post this as an answer, since I seem to have figured it out.
When the Dockerfile is expanded, $HOME is /root, and the user is root. I couldn't find a way to reliably introduce my user in the build step / Dockerfile. I tried something like:
ARG BUILD_USER
ARG BUILD_GROUP
RUN mkdir /home/$BUILD_USER
ENV HOME=/home/$BUILD_USER
USER $BUILD_USER:$BUILD_GROUP
RUN \
echo "HOME is $HOME" && \
[...]
But didn't get very far, because inside the container, the user doesn't exist:
unable to find user stefan: no matching entries in passwd file
So what I ended up doing was to docker run as my user, and run the rust install from there - that is, from the script that does the actual build.
I also realized why writing to /home/$USER doesn't work: there is no /home/$USER in the container; mapping /etc/passwd and /etc/group into the container teaches it about the user, but does not create any home directory. I could have mapped $HOME from the host, but then the container would control the rust versions on the host and would not be as self-contained. I also ended up needing to install rust in a non-standard location, since I don't have a writable $HOME in the container: I had to set CARGO_HOME and RUSTUP_HOME to do that.
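For reference, a minimal sketch of such a relocated install, run from the build script inside the container (the paths are illustrative, not the ones from my setup):
# install rust under the writable, mounted project tree instead of $HOME
export CARGO_HOME=$PWD/.cargo
export RUSTUP_HOME=$PWD/.rustup
curl https://sh.rustup.rs -sSf | sh -s -- -y --no-modify-path --default-toolchain $RUST_VERSION
export PATH="$CARGO_HOME/bin:$PATH"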
I used a docker image to run a program on our school's server using this command.
docker run -t -i -v /target/new_directory 990210oliver/mycc.docker:v1 /bin/bash
After I ran it, it created a directory on my account called new_directory. Now I don't have permission to delete or modify the files.
How do I remove this directory?
I also had this problem.
After:
docker run --name jenkins -p 8080:8080 -v $HOME/jenkins:/var/jenkins_home jenkins jenkins
I couldn't remove files in $HOME/jenkins.
Ricardo Branco's answer didn't work for me because chown gave me:
chown: changing ownership of '/var/jenkins_home': Operation not permitted
Solution:
exec /bin/bash into the container as the root user:
docker exec -it --privileged --user root container_id /bin/bash
then:
cd /var/jenkins_home/ && rm -r * .*
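Note that the .* glob also matches . and .., so rm will print harmless errors for those. An equivalent that avoids this (a sketch using find, which both GNU and busybox support):
find /var/jenkins_home -mindepth 1 -delete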
I made @siulkilulki's answer into one line:
docker exec --privileged --user root <CONTAINER_ID> chown -R "$(id -u):$(id -g)" <TARGET_DIR>
Note that the container must be running.
Change the owner of all the files in the directory to your user ID from within the container running as root, then exit the container and remove the directory.
docker run --rm -v /target/new_directory 990210oliver/mycc.docker:v1 chown -R $(id -u):$(id -g) /target/new_directory
exit
rm -rf $HOME/new_directory
I had the same problem. I am using Ubuntu 18.04. I ran the following and was then able to delete the files locally. I have an app dir inside my docker project dir.
cd to your docker project dir, then:
sudo chown -R $(whoami):$(whoami) app/
docker run -v {absolute path to dir with the file}:/to_delete -it ubuntu /bin/bash
Then just:
$ cd to_delete
$ rm -rf <file/dir>
Here is a solution that does not require --privileged.
Game Plan
Determine the UIDs of all offending files created by previous docker runs. Use docker to find them, since the in-container UID is not the same as the host UID. An offending file is any file not owned by the container user root, which maps to the current user running docker.
Run a container using each discovered UID and delete the offending files (or chown them).
Code
# Assumes that the current dir is the volume
# find files owned by docker-internal UIDs (not root) on the mounted volume:
BAD_FILE_UIDS=$(docker run --rm -v $(pwd):/build alpine sh -c 'find /build -mindepth 1 -not -user root | xargs stat -c "%u" | sort -u')
if [ -n "${BAD_FILE_UIDS}" ] ; then
  for uid in $BAD_FILE_UIDS ; do
    echo "Cleaning up files owned by $uid using docker"
    docker run --rm -v $(pwd):/build --user $uid:0 alpine find /build -mindepth 1 -user $uid -delete
  done
fi
You can change the -delete to -exec chown SOME_USER {} \; to chown the files instead, as sketched below.
The above works well for use in CI as post-build cleanup.
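For example, the chown variant could look like this; a sketch that reclaims the files for the current host user ($(id -u) and $(id -g) expand on the host, and find runs as container root, so the chown is permitted):
docker run --rm -v $(pwd):/build alpine sh -c "find /build -mindepth 1 -not -user root -exec chown $(id -u):$(id -g) {} +"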
Try this:
docker stop $CONTAINER_NAME
docker rm -v $CONTAINER_NAME
I guess this should remove the mounted dir. If it doesn't, do this explicitly:
sudo rm -rf /target/new_directory
I have containers for multiple Atlassian products: JIRA, Bitbucket and Confluence. When I want to access the running containers, I usually use:
docker exec -it -u root ${DOCKER_CONTAINER} bash
With this command I'm able to get access as usual, but after running a script to extract and compress log files, I can't access that one container anymore.
Excerpt from the 'clean up script'
This is the first point of failure; the script runs once a week (scheduled by Jenkins).
docker cp ${CLEAN_UP_SCRIPT} ${DOCKER_CONTAINER}:/tmp/${CLEAN_UP_SCRIPT}
if [ $? -eq 0 ]; then
docker exec -it -u root ${DOCKER_CONTAINER} bash -c "cd ${LOG_DIR} && /tmp/compressOldLogs.sh ${ARCHIVE_FILE}"
fi
When the script executes these two lines towards the Bitbucket container the result is:
unable to find user root: no matching entries in passwd file
It's failing on the 'docker cp' command, but only towards the Bitbucket container. After the script has run, the container is inaccessible with both the 'bitbucket' user (defined in the Dockerfile) and 'root'.
I was able to copy /etc/passwd out of the container, and it contains all of the users as expected. When trying to access by uid, I get the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: process_linux.go:75: starting setns process caused "fork/exec /proc/self/exe: no such file or directory"
Dockerfile for Bitbucket image:
FROM java:openjdk-8-jre
ENV BITBUCKET_HOME /var/atlassian/application-data/bitbucket
ENV BITBUCKET_INSTALL_DIR /opt/atlassian/bitbucket
ENV BITBUCKET_VERSION 4.12.0
ENV DOWNLOAD_URL https://downloads.atlassian.com/software/stash/downloads/atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz
ARG user=bitbucket
ARG group=bitbucket
ARG uid=1000
ARG gid=1000
RUN mkdir -p $(dirname $BITBUCKET_HOME) \
&& groupadd -g ${gid} ${group} \
&& useradd -d "$BITBUCKET_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN mkdir -p ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_HOME}/shared \
&& chmod -R 700 ${BITBUCKET_HOME} \
&& chown -R ${user}:${group} ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_INSTALL_DIR}/conf/Catalina \
&& curl -L --silent ${DOWNLOAD_URL} | tar -xz --strip=1 -C "$BITBUCKET_INSTALL_DIR" \
&& chmod -R 700 ${BITBUCKET_INSTALL_DIR}/ \
&& chown -R ${user}:${group} ${BITBUCKET_INSTALL_DIR}/
${BITBUCKET_INSTALL_DIR}/bin/setenv.sh
USER ${user}:${group}
EXPOSE 7990
EXPOSE 7999
WORKDIR $BITBUCKET_INSTALL_DIR
CMD ["bin/start-bitbucket.sh", "-fg"]
Additional info:
Docker version 1.12.0, build 8eab29e
docker-compose version 1.8.0, build f3628c7
All containers are running at all times; even Bitbucket works as usual after the issue occurs
The issue disappears after a restart of the container
You can use this command to access the container as the root user:
docker exec -u 0 -i -t {container_name_or_hash} /bin/bash
Try debugging with that. I think the script may remove or disable the root user.
This issue is caused by a Docker engine bug, but it is tracked privately and Docker is only asking users to restart the engine!
The bug appears to be more than two years old!
https://success.docker.com/article/ucp-health-checks-fail-unable-to-find-user-nobody-no-matching-entries-in-passwd-file-observed
https://forums.docker.com/t/unable-to-find-user-root-no-matching-entries-in-passwd-file/26545/7
... what can I say, someone is doing their best to get more funding.
It's a long-standing issue, replicated from my old version 1.10.3 up to at least 1.17.
As mentioned by @sorin, the docker forum says running docker stop and then docker start fixes the problem, but that is hardly a long-term solution...
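In command form, that workaround is simply:
docker stop {container_name_or_hash} && docker start {container_name_or_hash}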
The docker exec -u 0 -i -t {container_name_or_hash} /bin/bash solution, also mentioned in the same forum post here by @ObranZoltan, might work for you, but does not work for many. See my output below:
$ sudo docker exec -u 0 -it berserk_nobel /bin/bash
exec: "/bin/bash": stat /bin/bash: input/output error