I'm trying to create a docker container that will let me run firefox, so I can eventually use a jupyter notebook. Right now, although I have successfully installed firefox, I cannot get a window to open.
Following the instructions from running-gui-apps-within-docker, I created an image (named "sample") with Firefox and then tried to run it using
$ docker run -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --net=host sample
When I did so, I got the following error:
root@machine:~# firefox
No protocol specified
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :1
Using man docker run to understand the flags, I was not able to find the --net flag, though I did see a --network flag. However, replacing --net with --network didn't change anything. How do I specify a protocol that will let me create an image whose containers can run firefox?
PS - For what it's worth, when I check the value of DISPLAY, I get the predictable:
~# echo $DISPLAY
:1
I have been running firefox inside docker for quite some time, so this is possible. With regards to the security aspects, I think the following are the relevant parts:
Building
The build needs to match up uid/gid values with the user that is running the container. I do this with UID and GID build args:
Dockerfile
...
FROM fedora:35 as runtime
ENV DISPLAY=:0
# uid and gid in container needs to match host owner of
# /tmp/.docker.xauth, so they must be passed as build arguments.
ARG UID
ARG GID
RUN \
groupadd -g ${GID} firefox && \
useradd --create-home --uid ${UID} --gid ${GID} --comment="Firefox User" firefox && \
true
...
ENTRYPOINT [ "/entrypoint.sh" ]
Makefile
build:
docker pull $$(awk '/^FROM/{print $$2}' Dockerfile | sort -u)
docker build \
-t $(USER)/firefox:latest \
-t $(USER)/firefox:`date +%Y-%m-%d_%H-%M` \
--build-arg UID=`id -u` \
--build-arg GID=`id -g` \
.
entrypoint.sh
#!/bin/sh
# Assumes you have run
# pactl load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1 auth-anonymous=1
# on the host system.
PULSE_SERVER=tcp:127.0.0.1:4713
export PULSE_SERVER
if [ "$1" = /bin/bash ]
then
exec "$#"
fi
exec /usr/local/bin/su-exec firefox:firefox \
/usr/bin/xterm \
-geometry 160x15 \
/usr/bin/firefox --no-remote "$@"
So I am running firefox as a dedicated non-root user, and I wrap it in xterm so that the container does not die if firefox accidentally exits, or if you want to restart it. It is a bit annoying having all these extra xterm windows, but I have not found any other way of preventing accidental loss of the .mozilla directory content. Mapping it out to a volume would prevent running multiple independent docker instances, which I definitely want, and from a privacy point of view I also prefer not dragging along a long history. Whenever I do want to save something, I copy the .mozilla directory out to the host computer (and restore it later in a new container), as sketched below.
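A minimal sketch of that save/restore step using docker cp (the container names here are hypothetical):
# Save the profile from a running container to the host.
docker cp firefox-somesite:/home/firefox/.mozilla ./mozilla-backup
# Later, restore its contents into a fresh container.
docker cp ./mozilla-backup/. firefox-newsite:/home/firefox/.mozilla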
Running
run.sh
#!/bin/bash
export XSOCK=/tmp/.X11-unix
export XAUTH=/tmp/.docker.xauth
touch ${XAUTH}
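# Copy the X cookie into the shared file, rewriting its family field to ffff (FamilyWild) so it is valid from inside the container.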
xauth nlist ${DISPLAY} | sed -e 's/^..../ffff/' | uniq | xauth -f ${XAUTH} nmerge -
DISPLAY2=$(echo $DISPLAY | sed s/localhost//)
if [ $DISPLAY2 != $DISPLAY ]
then
export DISPLAY=$DISPLAY2
xauth nlist ${DISPLAY} | sed -e 's/^..../ffff/' | uniq | xauth -f ${XAUTH} nmerge -
fi
ARGS=$(echo $* | sed 's/[^a-zA-Z0-9_.-]//g')
docker run -ti --rm \
--user root \
--name firefox-"$ARGS" \
--network=host \
--memory "16g" --shm-size "1g" \
--mount "type=bind,target=/home/firefox/Downloads,src=$HOME/firefox_downloads" \
-v ${XSOCK}:${XSOCK} \
-v ${XAUTH}:${XAUTH} \
-e XAUTHORITY=${XAUTH} \
-e DISPLAY=${DISPLAY} \
${USER}/firefox "$@"
With this you can for instance run ./run.sh https://stackoverflow.com/ and get a container named firefox-httpsstackoverflow.com. If you then want to log into your bank completely isolated from all other firefox instances (protected by operating system process boundaries, not just some internal browser separation) you run ./run.sh https://yourbank.example.com/.
Try running xhost + on your docker host to allow connections to the X server.
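Note that a bare xhost + disables X access control entirely; a slightly narrower variant (still permissive, shown here as a sketch) only opens the server to local connections:
# Allow local (non-network) connections to the X server.
xhost +local:
# Revoke it again when done.
xhost -local: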
Related
I am running docker "rootless" according to this guide: https://docs.docker.com/engine/security/rootless/
The user which actually runs docker is svc_test.
When I try to start a docker container that has directory mounts which don't exist, the docker daemon (i.e. the svc_test user) attempts to mkdir these directories, but fails with
docker: Error response from daemon: error while creating mount source path '/dir_path/dir_name': mkdir /dir_path/dir_name: permission denied.
When I (svc_test) then attempt to do mkdir /dir_path/dir_name myself, it succeeds without any issues.
What is going on here and why does this happen?
Clearly I am missing something, but I can't trace what is that exactly.
Update 1:
This is the specific docker cmd I use to run the container:
docker run -d --restart unless-stopped \
--name questdb \
-e QDB_METRICS_ENABLED=TRUE \
--network="host" \
-v /my_mounted_volume/questdb:/questdb \
-v /my_mounted_volume/questdb/public:/questdb/public \
-v /my_mounted_volume/questdb/conf:/questdb/conf \
-v /my_mounted_volume/questdb/db:/questdb/db \
-v /my_mounted_volume/questdb/log:/questdb/log \
questdb/questdb:6.5.2 /usr/bin/env QDB_PACKAGE=docker /app/bin/java \
-m io.questdb/io.questdb.ServerMain \
-d /questdb \
-f
For clarity: my final goal is to be able to run the docker container in question as the same user that runs my docker daemon (the svc_test user). That is how I stumbled on this problem.
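For what it's worth, pre-creating the mount sources before docker run sidesteps the daemon-side mkdir entirely (a sketch using the paths from the command above):
# Create all bind-mount source directories up front so the daemon never has to.
mkdir -p /my_mounted_volume/questdb/{public,conf,db,log}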
I'm trying to use a Docker container to build a project that uses rust; I'm trying to build as my user. I have a Dockerfile that installs rust in $HOME/.cargo, and then I docker run the container, mapping the sources from $HOME/<some/subdirs/to/project> on the host to the same subfolder in the container. The Dockerfile looks like this:
FROM ubuntu:16.04
ARG RUST_VERSION
RUN \
export DEBIAN_FRONTEND=noninteractive && \
apt-get update && \
# install library dependencies
apt-get install [... a bunch of stuff ...] && \
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION && \
echo 'source $HOME/.cargo/env' >> $HOME/.bashrc && \
echo apt-get DONE
The build container is run something like this:
docker run -i -t -d --net host --privileged \
-v /mnt:/mnt -v /dev:/dev \
--volume /home/stefan/<path/to/project>:/home/stefan/<path/to/project>:rw \
--workdir /home/stefan/<path/to/project> \
--name <container-name> \
-v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -v /etc/shadow:/etc/shadow:ro \
-u 1000 <image-name>
And then I try to exec into it and run the build script, but it can't find rust or $HOME/.cargo:
docker exec -it <container-name> bash
$ ls ~/.cargo
ls: cannot access '/home/stefan/.cargo': No such file or directory
It looks like the /home/stefan/<path/to/project> volume is masking the contents of /home/stefan in the container. Is this expected? Is there a workaround possible to be able to map the source code from a folder under $HOME on the host, but keep $HOME from the container?
I'm on Ubuntu 18.04, docker 19.03.12, on x86-64.
The Dockerfile expands the variable at image build time, where the user is root, so your host user's $HOME does not exist inside the container.
Try changing $HOME to /root:
echo 'source /root/.cargo/env' >> /root/.bashrc && \
I'll post this as an answer, since I seem to have figured it out.
When the Dockerfile is expanded, $HOME is /root, and the user is root. I couldn't find a way to reliably introduce my user in the build step / Dockerfile. I tried something like:
ARG BUILD_USER
ARG BUILD_GROUP
RUN mkdir /home/$BUILD_USER
ENV HOME=/home/$BUILD_USER
USER $BUILD_USER:$BUILD_GROUP
RUN \
echo "HOME is $HOME" && \
[...]
But didn't get very far, because inside the container, the user doesn't exist:
unable to find user stefan: no matching entries in passwd file
So what I ended up doing was to docker run as my user, and run the rust install from there - that is, from the script that does the actual build.
I also realized why writing to /home/$USER doesn't work: there is no /home/$USER in the container; mapping /etc/passwd and /etc/group into the container teaches it about the user, but does not create any directory. I could have mapped $HOME from the host, but then the container would control the rust versions on the host, and would not be that self-contained. I also ended up needing to install rust in a non-standard location, since I don't have a writable $HOME in the container: I had to set CARGO_HOME and RUSTUP_HOME to do that, as sketched below.
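A sketch of that non-standard install (the /opt/rust paths are illustrative assumptions, not the exact ones I used):
# rustup honours CARGO_HOME and RUSTUP_HOME, so point both at a writable location.
export CARGO_HOME=/opt/rust/cargo
export RUSTUP_HOME=/opt/rust/rustup
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION
# The installed toolchain binaries then live under $CARGO_HOME/bin.
export PATH="$CARGO_HOME/bin:$PATH"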
Earlier I used to run with the following command :
sudo docker run --pid=host -dit --restart unless-stopped --privileged -v /home/:/home/ --net=host ubuntu:latest bash -c "cd home && wget http://someurl.com/a.js -O /home/a.js && node a.js && tail -F anything"
This command would launch the container with a root user by default, so the wget http://someurl.com/a.js -O /home/a.js command used to work without any issues.
Now I want to get sound from the container. To make sure that the container and the host can play audio simultaneously, I used a pulseaudio socket; otherwise I would get a "device busy" error, as alsa captures the sound card.
Here is the new command I used to accomplish my requirement:
sudo docker run --pid=host -dit --restart unless-stopped --privileged --env PULSE_SERVER=unix:/tmp/pulseaudio.socket --env PULSE_COOKIE=/home/$USER/pulseaudio.cookie --volume /tmp/pulseaudio.socket:/tmp/pulseaudio.socket --volume /home/$USER/pulseaudio.client.conf:/etc/pulse/client.conf --user $(id -u):$(id -g) -v /home/:/home/ --net=host ubuntu:latest bash -c "cd home && wget http://someurl.com/a.js -O /home/a.js && node a.js && tail -F anything"
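For context, the host side of this setup needs pulseaudio to publish the unix socket that gets mounted above. A sketch of how that can be set up on the host (the module arguments are assumptions about my setup, adjust for yours):
# On the host: expose a pulseaudio native socket for containers to mount.
pactl load-module module-native-protocol-unix auth-anonymous=1 socket=/tmp/pulseaudio.socket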
The problem with pulseaudio is that it doesn't work when the user inside docker is root, hence I have to use --user $(id -u):$(id -g) in the run command.
Now, since the user is not root, the wget http://someurl.com/a.js -O /home/a.js command gives a permission denied error.
I want this wget command to execute whenever I start my container.
I want to run the container as non-root as well as be able to execute the wget command also.
Is there any workaround for this?
1- Execute the docker command with a non-root user
If this is your case and you don't want to run the docker command as root, follow this link.
Create a docker group and add your current user to it:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
2- Execute commands inside docker with a non-root user
If I'm right, you want to use a non-root user inside docker, not root!
The uid given to your user inside docker depends on the base docker image you are using, for example alpine or ubuntu:xenial, as mentioned in this article.
But you can simply change the user inside docker by adding a new user in your Dockerfile and using it, like this:
RUN adduser -D myuser
USER myuser
ENTRYPOINT ["sleep"]
CMD ["1000"]
Then, if you get a shell inside the container and execute the id command in it, you will see that the id of the user inside docker has changed, as shown below.
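For instance (image and container names are placeholders; the exact uid depends on the base image):
# Start a container from an image built with the snippet above; it just sleeps.
docker run -d --name uid-test <your-image>
# id executes as the USER set in the Dockerfile.
docker exec uid-test id
# Expected output along the lines of: uid=1000(myuser) gid=1000(myuser)
docker rm -f uid-test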
Update:
If you have a ready-to-use Dockerfile, first build it into an image (for example myimage), then create a new Dockerfile, for example named myDocker, and put the code below in it:
FROM myimage
RUN adduser -D myuser
USER myuser
ENTRYPOINT ["sleep"]
CMD ["1000"]
then save this file and build it (passing -f, since the file is not named Dockerfile):
$ docker build -f myDocker -t myNewDocker .
$ docker run myNewDocker <with your options>
I solved the issue by using a simple chmod 777 command.
First I executed the docker run command without the -c flag, the wget command, etc.:
sudo docker run --pid=host -dit --restart unless-stopped --privileged -v /home/:/home/ --net=host ubuntu:latest bash
Once the container was running, I entered the container as the root user using this command (container id as a placeholder):
sudo docker exec -it --user="root" <container-id> bash
Now once inside the container I executed the command chmod 777 /home/a.js
Then I committed a new image with the above changes.
Finally, I ran the new image with the wget command:
sudo docker run --pid=host -dit --restart unless-stopped --privileged -v /home/:/home/ --net=host ubuntunew:latest bash -c "cd home && wget http://someurl.com/a.js -O /home/a.js && node a.js && tail -F anything"
Now the wget works perfectly in non-root mode.
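As an aside, a narrower alternative to chmod 777 (a sketch; it assumes the same uid/gid that --user $(id -u):$(id -g) passes in, shown here as 1000:1000) is to hand the file to that user instead of opening it up to everyone:
# Inside the container as root: give the file to the uid/gid the container runs as.
chown 1000:1000 /home/a.js
chmod 644 /home/a.js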
Create the docker group.
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
On Linux, you can also run the following command to activate the changes to groups:
$ newgrp docker
Verify that you can run docker commands without sudo.
$ docker run hello-world
I am following this tutorial to run mssql in Docker. First the user pulls the image:
docker pull microsoft/mssql-server-linux
Second, he does the below:
export DIR=/var/lib/mssql
sudo mkdir $DIR
Finally, he runs:
docker run \
-d \
--name mssql \
-e 'ACCEPT_EULA=Y' \
-e 'SA_PASSWORD=' \
-p 1433:1433 \
-v $DIR:/var/opt/mssql \
microsoft/mssql-server-linux
The author explains the second step as below:
Create a directory on the host that will store data from the container and keep the value in an environment variable for convenience:
Ask:
What did the author mean by that, and what happens if we don't create the directory?
I tried searching for different terms like the below:
docker container default path
docker file system
but was not able to understand. Can someone shed some light on this?
So here is the thing. Consider the below code:
export DIR=/var/lib/mssql
sudo mkdir $DIR
I can rewrite it as
sudo mkdir /var/lib/mssql
But I will also have to change my RUN command to
docker run \
-d \
--name mssql \
-e 'ACCEPT_EULA=Y' \
-e 'SA_PASSWORD=' \
-p 1433:1433 \
-v /var/lib/mssql:/var/opt/mssql \
microsoft/mssql-server-linux
Now if you change the directory, you will have to update it in two places. That's why DIR was used.
If you remove the below from your docker run:
-v /var/lib/mssql:/var/opt/mssql \
then the data of your DB will be stored inside the container at /var/opt/mssql, and it will only exist for the lifetime of that container. The next time you create a fresh container, the DB will be blank.
That is why you map it to an outside directory on the host. When you launch a new container, that directory's content is made available inside it, and the DB keeps all the changes you made.
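A quick way to see this persistence in action (reusing the names from above):
# Remove the container completely; the database files survive on the host.
docker rm -f mssql
ls $DIR
# A new container started with the same -v $DIR:/var/opt/mssql picks them up again.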
I have containers for multiple Atlassian products; JIRA, Bitbucket and Confluence. When I'm trying to access the running containers I'm usually using:
docker exec -it -u root ${DOCKER_CONTAINER} bash
With this command I'm able to access as usual, but after running a script to extract and compress log files, I can't access that one container anymore.
Excerpt from the 'clean up script'
This is the first point of failure, and the script is running once each week (scheduled by Jenkins).
docker cp ${CLEAN_UP_SCRIPT} ${DOCKER_CONTAINER}:/tmp/${CLEAN_UP_SCRIPT}
if [ $? -eq 0 ]; then
docker exec -it -u root ${DOCKER_CONTAINER} bash -c "cd ${LOG_DIR} && /tmp/compressOldLogs.sh ${ARCHIVE_FILE}"
fi
When the script executes these two lines towards the Bitbucket container the result is:
unable to find user root: no matching entries in passwd file
It's failing on the 'docker cp' command, but only towards the Bitbucket container. After the script has run, the container is inaccessible with both the 'bitbucket' (defined in the Dockerfile) and 'root' users.
I was able to copy /etc/passwd out of the container, and it contains all of the users as expected. When trying to access by uid, I get the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: process_linux.go:75: starting setns process caused "fork/exec /proc/self/exe: no such file or directory"
Dockerfile for Bitbucket image:
FROM java:openjdk-8-jre
ENV BITBUCKET_HOME /var/atlassian/application-data/bitbucket
ENV BITBUCKET_INSTALL_DIR /opt/atlassian/bitbucket
ENV BITBUCKET_VERSION 4.12.0
ENV DOWNLOAD_URL https://downloads.atlassian.com/software/stash/downloads/atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz
ARG user=bitbucket
ARG group=bitbucket
ARG uid=1000
ARG gid=1000
RUN mkdir -p $(dirname $BITBUCKET_HOME) \
&& groupadd -g ${gid} ${group} \
&& useradd -d "$BITBUCKET_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN mkdir -p ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_HOME}/shared \
&& chmod -R 700 ${BITBUCKET_HOME} \
&& chown -R ${user}:${group} ${BITBUCKET_HOME} \
&& mkdir -p ${BITBUCKET_INSTALL_DIR}/conf/Catalina \
&& curl -L --silent ${DOWNLOAD_URL} | tar -xz --strip=1 -C "$BITBUCKET_INSTALL_DIR" \
&& chmod -R 700 ${BITBUCKET_INSTALL_DIR}/ \
&& chown -R ${user}:${group} ${BITBUCKET_INSTALL_DIR}/
${BITBUCKET_INSTALL_DIR}/bin/setenv.sh
USER ${user}:${group}
EXPOSE 7990
EXPOSE 7999
WORKDIR $BITBUCKET_INSTALL_DIR
CMD ["bin/start-bitbucket.sh", "-fg"]
Additional info:
Docker version 1.12.0, build 8eab29e
docker-compose version 1.8.0, build f3628c7
All containers are running at all times, and even Bitbucket works as usual after the issue occurs
The issue disappears after a restart of the container
You can use this command to access the container as the root user:
docker exec -u 0 -i -t {container_name_or_hash} /bin/bash
Try debugging with that. I think the script may remove or disable the root user.
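A quick way to test that theory (container name as a placeholder) is to dump the container's passwd database before and after the clean-up script runs and compare the two:
docker exec -u 0 {container_name_or_hash} cat /etc/passwd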
This issue is caused by a docker engine bug which is tracked privately; Docker is asking users to restart the engine!
It seems that the bug is likely more than two years old!
https://success.docker.com/article/ucp-health-checks-fail-unable-to-find-user-nobody-no-matching-entries-in-passwd-file-observed
https://forums.docker.com/t/unable-to-find-user-root-no-matching-entries-in-passwd-file/26545/7
... what can I say, someone is doing his best to get more funding.
It's a long-standing issue, replicated on my old version 1.10.3 up to at least 1.17.
As mentioned by @sorin, the docker forum says running docker stop and then docker start fixes the problem, but that is hardly a long-term solution...
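For completeness, that stop/start workaround is simply:
docker stop {container_name_or_hash}
docker start {container_name_or_hash}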
The docker exec -u 0 -i -t {container_name_or_hash} /bin/bash solution from the same forum post, mentioned here by @ObranZoltan, might work for you, but it does not work for many. See my output below:
$ sudo docker exec -u 0 -it berserk_nobel /bin/bash
exec: "/bin/bash": stat /bin/bash: input/output error