Docker - Any way to provide multiple USB device access (not using --privileged)?

Is there any way that a Docker container could be given access to multiple USB devices, say /dev/video0, /dev/video4 and /dev/ttyUSB4?
For a single device, it can be done with:
docker run -t -i --device=/dev/ttyUSB4 ubuntu bash
and for multiple devices:
docker run -t -i --privileged -v /dev/bus/usb:/dev/bus/usb ubuntu bash
But I need to know whether there is any way to provide access to specific devices alone, as in the former case, without using privileged mode.

The --device option can be repeated, so you can specify several devices:
docker run -ti --device=/dev/ttyUSB4 --device=/dev/video0 --device=/dev/video4 ubuntu bash
It is also possible in docker-compose:
docker-compose.yml
...
services:
  myservice:
    ...
    devices:
      - "/dev/ttyUSB4:/dev/ttyUSB4"
      - "/dev/video0:/dev/video0"
      - "/dev/video4:/dev/video4"
There's another possibility: granting a Linux capability such as FOWNER, but it's not recommended for production (it is dangerous in a similar way to privileged mode):
docker run -ti --cap-add=FOWNER ubuntu bash
Nevertheless, in Kubernetes, for example, that is not enough and you need privileged mode.

Related

Using rviz in a remote connection "Could not connect to any X display."

I am trying to work with rviz over a remote ssh connection. When I execute the command rosrun rviz rviz, this error appears:
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
qt.qpa.screen: QXcbConnection: Could not connect to display
Could not connect to any X display.
I already added the -X flag to the ssh connection (ssh -X myusername@host), but nothing changes.
I don't know what else to do, so any help would be welcomed.
I am working from a Mac (macOS Catalina); the remote machine is a workstation with Docker, and my image has Ubuntu 18.04 and ROS Melodic.
Thank you in advance.
EDIT:
I just tried to execute rviz locally on the workstation and the same error appears, so I suppose the ssh connection is not the problem. Could the problem be due to Docker or to the workstation (an Nvidia DGX Station)? Could it be a permission issue?
Thank you.
I don't currently know about Docker, but does the following work for you?
user@local $ export ROS_MASTER_URI=http://your_remote's_hostname:11311
user@local $ rosrun rviz rviz
See https://wiki.ros.org/ROS/NetworkSetup for the details and the IP configuration on both machines.
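As a rough sketch of the setup that wiki page describes (hostnames and IPs here are placeholders, not from this answer):
# on the workstation that runs roscore:
export ROS_MASTER_URI=http://workstation_hostname:11311
export ROS_IP=workstation_ip
# on the local machine that runs rviz:
export ROS_MASTER_URI=http://workstation_hostname:11311
export ROS_IP=local_ip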
Update: Here are some instructions about running GUI apps in Docker on a Mac that may be useful (in case you haven't seen them already).
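For reference, a minimal sketch of the usual macOS route (this assumes XQuartz is installed with "Allow connections from network clients" enabled in its preferences; the image and app names are placeholders):
open -a XQuartz                                   # start the X server on the Mac
xhost +localhost                                  # allow local clients (insecure on shared machines)
docker run -e DISPLAY=host.docker.internal:0 my_ros_image rviz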
I have a docker container with ROS that I use to run rviz and other UI apps (ROS's Qt-based apps do not work in KDE Neon).
The docker-compose.yml contains the following:
##############
version: "3.8"
services:
  ros:
    container_name: ros1
    network_mode: host
    # I created my own image, with my own user, etc.
    image: YOUR_IMAGE
    volumes:
      # you can ignore this line if you want (I'll explain below)
      # - /home/ichramm/devel/robots:/home/ichramm/devel/robots
      - /etc/localtime:/etc/localtime:ro
      - /tmp/.X11-unix:/tmp/.X11-unix:ro
      - /home/ichramm/.Xauthority:/home/ichramm/.Xauthority:ro
      - /run/user/1000:/run/user/1000:ro
      - /run/user/1000/bus:/run/user/1000/bus:ro
    command: /entrypoint.sh
    environment:
      USER: ichramm
      DISPLAY: ${DISPLAY}
      XDG_RUNTIME_DIR: /tmp/runtime-${USER}
      DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus
    devices:
      #- "/dev/ttyUSB0:/dev/ttyUSB0"
      #- "/dev/dri/card0:/dev/dri/card0"
      #- "/dev/dri/card1:/dev/dri/card1"
You should try to map the mounted volumes to your own system. I understand that you are on a Mac, which means this may not work for you.
This works for me, but I don't use ssh; I just use two scripts:
1.
❯ cat docker-run.sh
#!/bin/bash
docker exec -ti -w $(pwd) ros1 ./wrapper.sh $@
2.
❯ cat wrapper.sh
#!/bin/bash
export XDG_RUNTIME_DIR=/tmp/runtime-$USER
source env.sh
$@
In order for this to work you need the following:
Mount the working directory in the container (see commented line above)
Have a file env.sh which sources ROS's setup.bash and the workspace's devel/setup.bash (a minimal sketch follows below).
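A minimal env.sh sketch, assuming ROS Melodic and a catkin workspace under the mounted directory (the paths are illustrative):
#!/bin/bash
source /opt/ros/melodic/setup.bash
source /home/ichramm/devel/robots/devel/setup.bash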
Of course the experience with those scripts is limited, which is why I also enter the container directly using the following:
❯ cat enter-env.sh
#!/bin/bash
docker exec -ti -w /home/ichramm/devel/robots ros1 /bin/bash
This works only because the container's directory structure matches the host's. (Note that I only mount the development directory anyway.) I also added a user with the same name, UID and GID as on the host to prevent issues with file permissions.
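A hedged sketch of what adding such a user during the image build could look like (the name and IDs are taken from the compose file above; adjust them to your host):
groupadd --gid 1000 ichramm
useradd --create-home --uid 1000 --gid 1000 ichramm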
If you can't make it work, I suggest you turn to a VM. Just install Ubuntu 20.04 without a UI (you can disable it later with sudo systemctl set-default multi-user.target) and use SSH with X forwarding. I worked with that setup before switching to Docker, and I still have the VM in case something happens.
Update: Keep in mind that I am doing some potentially insecure things, like mounting .Xauthority. It works for me because no one else has access to my computer.

How to navigate to docker volumes folders on the host machine [duplicate]

I'm looking for the folder /var/lib/docker on my Mac after installing Docker for Mac.
With docker info I get
Containers: 5
...
Server Version: 1.12.0-rc4
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 339
Dirperm1 Supported: true
...
Name: moby
ID: LUOU:5UHI:JFNI:OQFT:BLKR:YJIC:HHE5:W4LP:YHVP:TT3V:4CB2:6TUS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
....
But I don't have a directory /var/lib/docker on my host.
I have checked /Users/myuser/Library/Containers/com.docker.docker/ but couldn't find anything there. Any idea where it is located?
As mentioned in the above answers, you will find it in:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
Once you get the tty running you can navigate to /var/lib/docker
As of 2021, here is how Mac users can get to the VM, and hence to the volumes.
There's a way Rocky Chen found to get inside the VM on a Mac. With it you can actually inspect the famous /var/lib/docker/volumes.
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
Let's examine the method:
-it stands for "keep STDIN open even if not attached" plus "allocate a pseudo-TTY".
--privileged "gives all capabilities to the container" and allows special cases like running Docker.
--pid=host uses the PID namespace of the host (here, the Docker VM).
debian is the image to use.
nsenter is a tool (available in the Debian image) for running programs in other namespaces.
-t 1 sets the target PID (PID 1, the VM's init process).
-m enters the target PID's mount namespace.
-u enters the target PID's UTS (UNIX Time-Sharing) namespace.
-n enters the target PID's network namespace.
-i enters the target PID's IPC namespace.
Once it runs, go to /var/lib/docker/volumes/ and you'll find your volumes.
The next question to address for me is:
How to take those volumes and back them up in the host?
I appreciate ideas in the comments!
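One common pattern (a hedged sketch, not from this answer) is to tar the volume's contents from a throwaway container and write the archive to a bind-mounted host directory; my_volume is a placeholder name:
docker run --rm -v my_volume:/source -v "$(pwd)":/backup alpine tar czf /backup/my_volume.tar.gz -C /source .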
UPDATE FOR VSCODE USERS
If you have installed the official Docker extension, the sun will shine for you.
Just inspect the volumes in Visual Studio Code, right-click the files you want locally, and download them. That easy!
2nd UPDATE
As of July 2021, Docker Desktop for Mac has announced that we will be able to access volumes directly from the GUI, but only for Pro and Team accounts.
The other answers here are outdated if you're using Docker for Mac.
Here's how I was able to get into the VM. Run the command:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
This is the default path, but you may need to first do:
cd ~/Library/Containers/com.docker.docker/Data/vms
and then ls to see which directory your VM is in and replace the "0" accordingly.
When you're in, you might just see a blank screen. Hit your "Enter" key.
This page explains that to exit from the VM you need to "Ctrl-a" then "d"
See this answer
When using Docker for Mac Application, it appears that the containers are stored within the VM located at:
~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
Just as @Dmitriy said:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
and you can use Ctrl-a + d to detach the screen,
and screen -dr to re-attach it (if you simply attach the screen again, the terminal text will be garbled).
Reference
Or, if you want to exit, use Ctrl-a + k, then choose y to kill the screen.
Somewhat of a zombie thread, but since I just found it, here is another solution that doesn't need screen and doesn't mess up the shell, etc.
The path listed by docker volume inspect <vol_name>
is the path on the Docker host (the VM), something like:
"Mountpoint": "/var/lib/docker/volumes/coap_service_db_data/_data"
The trailing _data directory corresponds to the volume you set up in the volumes: section of the service using it, e.g.:
volumes:
  - db_data:/var/lib/postgresql/data
(obviously your mileage will vary).
To get there on the Mac, the easiest method I have found is to start a small container and mount the root of the host at the /docker directory in the image; this gives you access to the volumes used on the host.
docker run --rm -it -v /:/docker alpine:edge
From this point you can cd to the volume (the host's filesystem is mounted under /docker):
cd /docker/var/lib/docker/volumes/coap_service_db_data/_data
I think the new version of Docker (my version is 20.10.5) uses a socket instead of a TTY to communicate with the virtual machine, so you can use the nc command instead of the screen command.
nc -U ~/Library/Containers/com.docker.docker/Data/debug-shell.sock
It looks like the new version of Docker for Mac has moved this to a UI element. Clicking the button labeled CLI will launch a terminal that you can use to browse the Docker filesystem.
Run:
docker run -it --privileged --pid=host debian nsenter -t 1 -a bash
ls /var/lib/docker
For macOS I use the following steps:
Log in to the Docker virtual machine (on macOS, Docker can only run inside a virtual machine; in my case a VirtualBox VM managed by docker-machine): docker-machine ssh
Once logged in, switch from the docker user to the super user: sudo -i
Now I'm able to check the /var/lib/docker directory.
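The same steps as a quick copy-paste sketch:
docker-machine ssh       # log in to the Docker VM
sudo -i                  # switch from the docker user to root
ls /var/lib/docker       # the directory is visible here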
I would say that the file:
/var/run/docker.sock
is actually at:
/Volumes/{DISKNAME}/var/run/docker.sock
If you run this, it should prove it, as long as you're running VirtualBox 5.2.8 or later, the share for /Volumes is set up to be auto-mounted and permanent, AND you generated the default docker-machine while on that version of VirtualBox:
#!/bin/bash
docker run -d --restart unless-stopped -p 9000:9000 \
-v /var/run/docker.sock:/var/run/docker.sock portainer/portainer \
--no-auth
Then, access Portainer at: 192.168.99.100:9000 or localhost:9000
This path comes from the Docker host (not from macOS).
Before the "Docker for Mac Application" era there was a VirtualBox VM named "default", and inside that VM the mentioned path exists (for sure). Now, in "Docker for Mac Application" times, there is a Docker.qcow2 image, which is a qemu-based VM.
To jump inside this VM, @mik-jagger's way is fine (but there are a few more ways).
Docker logs are not in /var/lib/docker on macOS.
macOS users can find the Docker logs at this path:
/Users/Barrack.Kenya/Library/Containers/com.docker.docker/Data/log/host
job_name: docker
static_configs:
  - targets:
      - docker
    labels:
      job: dockerlogs
      path: (Please put the path)
pipeline_stages:
  - docker: {}

Run GUI apps via Docker without XQuartz or VNC

As an evolution of "Can you run GUI apps in a Docker container?", is it possible to run GUI applications via Docker without other tools like VNC or X11/XQuartz?
In VirtualBox, you could pass the --type gui flag to launch a headed VM, and this doesn't require installing any additional software. Is anything like that possible via a Dockerfile or CLI arguments?
Docker doesn't provide a virtual video device and a place to render that video content in a window like a VM does.
It might be possible to run a container with --privileged and write to the Docker host's video devices. That would likely require a second video card that's not in use. The software Docker runs in the container would also need to support that video device and be able to write directly to it or to a framebuffer. This limits what could run in the container to something like an X server or Wayland compositor that draws a display to a device.
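Purely as a hedged illustration of that idea (the image and app names are hypothetical, and this is not a tested recipe): with --privileged the container sees the host's devices, so a framebuffer-capable app could in principle draw to /dev/fb0 or /dev/dri/card0 if one exists and is free.
docker run -it --privileged my_image my_framebuffer_app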
You could try the following, which worked in my case.
Check the local machine's display and its authentication:
[root@localhost ~]# echo $DISPLAY
[root@localhost ~]# xauth list $DISPLAY
localhost:15 MIT-MAGIC-COOKIE-1 cc2764a7313f243a95c22fe21f67d7b1
Copy the above authentication, attach to your existing container, and add the display authentication:
[root@apollo-server ~]# docker exec -it -e DISPLAY=$DISPLAY 3a19ab367e79 bash
root@3a19ab367e79:/# xauth add 192.168.10.10:15.0 MIT-MAGIC-COOKIE-1 cc2764a7313f243a95c22fe21f67d7b1
root@3a19ab367e79:/# firefox

In docker, how can I pass through all found nvidia devices to a container?

I'm netbooting CoreOS with Nvidia GPUs and want to pass all of the nvidia devices found on a machine through to a container.
How do I do this from the command line when some machines have more GPU cards than others?
i.e. I'd like to do something like:
docker run --name cuda_app --devices=/dev/nvidia*:/dev/nvidia* cuda_app
On some machines there could be /dev/nvidia0 through /dev/nvidia2, on others /dev/nvidia0 through /dev/nvidia8, for example.
You could generate the list of devices to expose using an inline bash loop:
docker run --name cuda_app $(for dev in /dev/nvidia*; do echo -n "--device $dev:$dev "; done) cuda_app

default 'docker run' options?

I am running VirtualBox images inside Docker containers, and this requires launching Docker with either
docker run -i -t --device=/dev/vboxdrv fommil/freeslick:base
or
docker run -i -t --privileged=true fommil/freeslick:base
Obviously, the former is preferable, but I have no control over the way the target script launches the docker instance (it is managed by a third party) other than turning on/off privileged mode.
Is there a way to set system defaults for docker run such that all images launched on a Linux box will use --device=/dev/vboxdrv?
Because --device is an "operator exclusive option", it can only be specified when invoking the docker run command. So no, there is no way to set a default for that option.
