Executing a binary compiled with gcc 7.2.0+ASan fails in an Ubuntu 17.10 Docker container with the following error:
==5==HINT: LeakSanitizer does not work under ptrace (strace, gdb, etc)
LSan (which performs the leak checks) attaches to the program under test via ptrace. Under Docker this fails because the container does not have the required permission.
This can be fixed by running the Docker container with additional privileges, using either of these two options:
docker run .... --privileged
or, more specifically:
docker run .... --cap-add SYS_PTRACE
--cap-add SYS_PTRACE is the preferred option for CI and automation, as it grants only the ptrace capability rather than full privileges.
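For example, a CI invocation could look like the sketch below (the image name and the test binary path are placeholders):
# Grant only the ptrace capability so LSan can attach to the process under test
docker run --rm --cap-add SYS_PTRACE my-asan-image:latest ./asan_instrumented_binary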
I'm trying to create a CI/CD infrastructure using Jenkins. Considering recoverability, performance, and maintainability, I decided to run both Jenkins and its agents as Docker containers.
There are certain restrictions that I cannot work around:
Cannot build this setup in a Linux environment (IT policy)
Cannot use WSL2 on Windows (I don't know when the IT department will roll out the Windows update that supports WSL2)
Security is a very high priority
As far as I can see, a Docker-outside-of-Docker setup is the proper way to implement this. If I run the container as root using the command below, I can bind-mount the docker.sock file and Jenkins jobs can create containers from Dockerfiles as agents:
docker run --name dood `
-d -u root --restart on-failure `
-p "8080:8080" -p "50000:50000" `
-v //var/run/docker.sock:/var/run/docker.sock `
-v /usr/local/bin/docker:/usr/bin/docker `
jenkins/jenkins:lts
However, it doesn't work if the Jenkins container is run as a non-root user. Running it as root is not acceptable, as it creates a vulnerability. The suggested way is to run the container as a non-root user and add the "jenkins" user to the "docker" group:
groupadd docker
usermod -a -G docker jenkins
newgrp docker
Unfortunately, it doesn't work. A "Got permission denied..." error occurs when Jenkins jobs try to create agent containers. I restarted Docker Desktop and the container, but the result is the same. I am not sure, but a possible reason might be the Windows environment; this may work in a Linux environment.
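From what I have read, on a Linux host a common cause of this error is that the "docker" group inside the container does not have the same GID as the group that owns the mounted docker.sock. A rough sketch of checking and matching it (the GID 998 is only an example; Docker Desktop for Windows may expose the socket with different ownership inside its VM, which could explain why this does not help here):
# On the host: find the group that owns the socket
stat -c '%g' /var/run/docker.sock    # e.g. prints 998
# Inside the Jenkins container, as root: recreate the group with that GID and add jenkins to it
groupadd -g 998 docker
usermod -a -G docker jenkins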
As a final effort, I tried the solution described in a Stack Overflow topic. I noticed that the "setfacl" command does not work when Docker runs with the Hyper-V backend. If I switch to WSL2 on my demo PC, the commands below solve the problem:
gpasswd -a jenkins docker
apt-get install acl
setfacl -m user:jenkins:rw /var/run/docker.sock
Unfortunately, the target Windows environment does not support WSL2, so I cannot use this solution. Moreover, the setfacl change is not persistent, but that is another story.
An alternative solution might be activating the "Expose daemon on tcp://localhost:2375 without TLS" option. However, this is not acceptable from a security point of view, so I ruled it out.
I am curious whether it is even possible to implement a Docker-outside-of-Docker setup for Jenkins on Docker Desktop for Windows. Considering the restrictions listed above, I am open to alternative setups/solutions as well.
I am quite new to Docker and not very experienced with Jenkins, so if I use the wrong terminology or approach, please let me know.
My goal is to run arbitrary GUI applications from a Docker container using the host X server.
I tried http://wiki.ros.org/docker/Tutorials/GUI#The_simple_way - Step 1
I would run the Docker image using:
docker run --gpus all -it -p "8888:8888" -v "/home/gillian/Documents/deeplearning/:/deeplearning/" --env=DISPLAY=$DISPLAY --env=QT_X11_NO_MITSHM=1 --volume=/tmp/.X11-unix:/tmp/.X11-unix:rw pytorch
But when I tried to run xlogo or xclock from within the container, it would always fail with the error Error: Can't open display: :0
After spending the night trying to fix it, I added --net=host as an argument to docker run. Then I could run xclock and xlogo and they would display on my screen without any issues.
Why?
What can I do to run the Docker image without sacrificing network isolation (i.e., without --net=host)?
I am running Kubuntu 20.04.
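For reference, the workaround I have seen suggested most often (I have not verified it here, and it does loosen the X server's access control) is to explicitly allow local clients to connect to the X server and keep only the socket mount, without --net=host:
# On the host: allow local (non-network) clients such as containers to use the X server
xhost +local:
# Then run the container with the X socket mounted but without --net=host
docker run --gpus all -it -p "8888:8888" --env=DISPLAY=$DISPLAY --env=QT_X11_NO_MITSHM=1 --volume=/tmp/.X11-unix:/tmp/.X11-unix:rw pytorch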
I have a master container instance (Node.js) that runs some tasks in a temporary worker docker container.
The base image used is node:8-alpine and the entrypoint command runs as the node user (a non-root user).
I tried running my container with the following command:
docker run \
-v /tmp/box:/tmp/box \
-v /var/run/docker.sock:/var/run/docker.sock \
ifaisalalam/ide-taskmaster
But when the Node.js app tries to run a Docker container, a permission denied error is thrown - the app can't read the /var/run/docker.sock file.
Accessing this container through sh and running ls -lha /var/run/docker.sock, I see that the file is owned by root:412. That's why my node user can't run Docker containers.
The /var/run/docker.sock file on the host machine is owned by root:docker, so I guess the 412 inside the container is the docker group ID of the host machine.
I'd be glad if someone could provide a workaround to run Docker from a Docker container on Container-Optimized OS on GCE.
The source Git repository link of the image I'm trying to run is - https://github.com/ifaisalalam/ide-taskmaster
Adding the following command to the host machine's start-up script solves the problem:
sudo chmod 666 /var/run/docker.sock
I am just not sure if this would be a secure workaround for an app running in production.
EDIT:
This answer suggests another approach that might also work - https://stackoverflow.com/a/47272481/11826776
Also, you may read this article - https://denibertovic.com/posts/handling-permissions-with-docker-volumes/
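A sketch of a group-based alternative, similar in spirit to the approaches linked above and less permissive than chmod 666: pass the GID that owns the socket as a supplementary group of the container user. Whether this fits Container-Optimized OS's start-up flow is an assumption on my part:
# Give the container user the host's docker group as a supplementary group
docker run \
  -v /tmp/box:/tmp/box \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  ifaisalalam/ide-taskmaster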
The Docker container was created with the --privileged flag and has the /dev, /proc, and /sys folders mounted from the host Tegra TX2 board, so the container has the 'nvhost...' devices such as 'nvhost-gpu'.
However when I run the GStreamer pipeline which uses the 'nvcamerasrc' element I get 'Connecting to camera_daemon failed'.
ERROR nvcamerasrc gstnvcamerasrc.cpp:2411:gst_nvcamera_socket_connect:<camera_src> Connecting to camera_daemon failed
I manually copied the actual 'nvcamera-daemon' and 'nvcamera-daemon.service' files from /usr/sbin and /etc/systemd/system on the host to the same places in the container, but this has not made a difference.
So I'm just trying to use the nvcamera-daemon service (which nvcamerasrc requires) from a Docker container on the TX2 board rather than directly on the board. Has anyone had success with Tegra-Docker https://github.com/Technica-Corporation/Tegra-Docker or another method of doing this, perhaps one that doesn't require CUDA?
Check this:
sudo docker run --net=host --runtime nvidia --rm --ipc=host -v /tmp/.X11-unix/:/tmp/.X11-unix/ -v /tmp/argus_socket:/tmp/argus_socket --cap-add SYS_PTRACE -e DISPLAY=$DISPLAY -it nvcr.io/nvidia/l4t-base:r32.3.1
Note: the version is .3 now (src).
If you encounter an EGL connection error, try unset DISPLAY inside the Docker container
(which sounds contradictory to the aforementioned post, as on this thread).
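Once the container is up, a quick way to check camera access is a minimal GStreamer pipeline. This is only a sketch, assuming an onboard CSI camera and the nvarguscamerasrc element shipped with L4T R32.x (the older nvcamerasrc is not part of that release):
# Inside the container: capture from the CSI camera and render to the display
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! nvoverlaysink -e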
I'd like to package Selenium Grid Extras into a Docker image.
When this service runs outside a Docker container, it can reboot the OS it's running on. I wonder if I can set up the container so that it gets restarted by the Selenium Grid Extras service running inside it.
I am not familiar with Selenium Grid, but as a general idea: you could mount a folder from the host as a data volume, then let Selenium write information there, such as a flag file.
On the host, a scheduled task / cron job would check for this flag in the shared folder and, if it has a certain status, invoke a docker restart from there. A rough sketch of this idea follows below.
Not sure if there are more elegant solutions for this, but this is what came to mind ad hoc.
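A minimal sketch of that cron job, assuming the shared folder is mounted at /opt/selenium-shared on the host, the container is named selenium-grid-extras, and the flag file is named restart.flag (all names are placeholders):
#!/bin/bash
# check-restart.sh - run on the host via cron, e.g. once a minute:
# * * * * * /opt/selenium-shared/check-restart.sh
FLAG=/opt/selenium-shared/restart.flag
if [ -f "$FLAG" ]; then
  rm -f "$FLAG"                        # consume the flag so the restart happens only once
  docker restart selenium-grid-extras  # container name is a placeholder
fi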
Update:
I just found this on the Docker forum:
https://forums.docker.com/t/how-can-i-run-docker-command-inside-a-docker-container/337
I'm not sure about CoreOS but normally you can manage your host containers from within a container by mounting the Docker socket.
Such as
docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu:latest sh -c "apt-get update ; apt-get install docker.io -y ; bash"
or
https://registry.hub.docker.com/u/abh1nav/dockerui/