Does Docker need a program to access peripherals?

Can't open the card!
I wrote a driver for a peripheral and can run my program on the host, but inside Docker the program cannot run. Does the container need some mechanism on the host to share the device, similar to how Docker needs a bridge to access the network card?
Thanks!

You need to expose the device to the container when you start it, using the --device option (and any supporting sockets via -v bind mounts) on your docker run command.
An extract from
https://blog.jessfraz.com/post/docker-containers-on-the-desktop/
for example: a Spotify docker image needs access to the sound device, so
# mounts the X11 socket, passes the display, and grants the sound device
$ docker run -it \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    --device /dev/snd \
    --name spotify \
    jess/spotify
and likewise the PulseAudio image used with Skype:
# exposes the PulseAudio port and grants the sound device
$ docker run -d \
    -v /etc/localtime:/etc/localtime \
    -p 4713:4713 \
    --device /dev/snd \
    --name pulseaudio \
    jess/pulseaudio
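For a custom peripheral like the one in the question, the same idea applies: pass the host device node through with --device. A minimal sketch, where /dev/mydevice, my-image and ./my-program are placeholders for your driver's device node, your image and your binary:

```shell
# --device gives the container access to the host device node, so the
# program can open it just as it does on the host.
docker run -it \
  --device /dev/mydevice:/dev/mydevice \
  my-image \
  ./my-program
```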

Related

gdb debugging in Docker fails

envs:
host: CentOS
docker: Ubuntu 16.04, nvidia-docker
program: C++ WebSocket
desc:
When I use gdb in Docker I can't use breakpoints; it just says: warning: error disabling address space randomization: operation not permitted. A lot of the resolutions to this question tell me to add --cap-add=SYS_PTRACE --security-opt seccomp=unconfined, so I did. Here is my launch script:
#!/bin/bash
SCRIPT_DIR=$(cd $(dirname "${BASH_SOURCE[0]}") && pwd)
PROJECT_ROOT="$( cd "${SCRIPT_DIR}/.." && pwd )"
echo "PROJECT_ROOT = ${PROJECT_ROOT}"
run_type=$1
docker_name=$2
sudo docker run \
--name=${docker_name} \
--privileged \
--network host \
-it --rm \
--cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
-v ${PROJECT_ROOT}/..:/home \
-v /ssd3:/ssd3 \
xxxx/xx/xxxx:xxxx \
bash
but when I restart the container and run gdb, the program is always killed, like below:
(gdb) r -c conf/a.json -p 8075
Starting program: /home/Service/bin/Service --args -c conf/a.json -p 8075
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Killed
I don't know where it goes wrong; does anyone have any ideas?
Try this
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined
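Since the script above already runs with --privileged and SYS_PTRACE, a plain Killed often points at the kernel OOM killer rather than at a missing capability. One way to check, assuming the container name that was passed to the launch script:

```shell
# Docker records whether the kernel OOM-killed the container's main process:
docker inspect --format '{{.State.OOMKilled}}' <container-name>

# On the host, the kernel log usually names the victim as well:
dmesg | grep -i 'killed process'
```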

Docker Bench for Security for image on Google Cloud

I have an image in Container Registry and deployed to App Engine flex.
How do I use Docker Bench for Security to check my containers' security?
You can't run Docker Bench against images while they are stored in the Google Cloud Container Registry.
You can run it locally with the following command:
docker run -it --net host --pid host --userns host --cap-add audit_control \
-e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
-v /etc:/etc:ro \
-v /usr/bin/docker-containerd:/usr/bin/docker-containerd:ro \
-v /usr/bin/docker-runc:/usr/bin/docker-runc:ro \
-v /usr/lib/systemd:/usr/lib/systemd:ro \
-v /var/lib:/var/lib:ro \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
--label docker_bench_security \
docker/docker-bench-security
For more information on Docker Bench usage, check the docker/docker-bench-security project page.
I think you should also be able to replicate this process with Cloud Build. You can check the documentation to see how to use it.
Cloud Build quickstart
Cloud Build config reference
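Docker Bench audits a Docker host and the containers running on it, so one workaround is to pull the image from Container Registry to a local Docker host, start it there, and then run the Docker Bench command above. A sketch, where gcr.io/PROJECT_ID/my-image:tag and bench-target are placeholders:

```shell
# Authenticate the local Docker client to Container Registry (assumes the
# gcloud CLI is installed), pull the image, and start it locally so the
# Docker Bench run above can include it in the audit.
gcloud auth configure-docker
docker pull gcr.io/PROJECT_ID/my-image:tag
docker run -d --name bench-target gcr.io/PROJECT_ID/my-image:tag
```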

Docker container as default application

I have Firefox Nightly running in a container, and I'm looking for a way to configure it as my default browser application (Ubuntu 18.04).
So my question is: how do I configure a Docker container as a default system application in Ubuntu?
My docker command is:
docker run -d --net=host -v ~/:/home/firefox -v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix:0 -v /dev/shm:/dev/shm --device /dev/snd \
--group-add 29 -e PULSE_SERVER=unix:/run/user/1000/pulse/native \
-v /run/user/1000/pulse/native:/run/user/1000/pulse/native \
firefox-nightly
I suppose I must create a new MIME file, but I'm not sure how to do it so that the container is created with all these parameters.
Thanks
One alternative is to create a new .desktop file (e.g. /usr/share/applications/firefox-docker.desktop).
I just copied the existing firefox.desktop and changed the Exec entry to the docker command (*).
Then use xdg-utils (**) to configure it as the default browser application:
xdg-settings set default-web-browser firefox-docker.desktop
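A minimal sketch of such a .desktop file; the Name, Exec and MimeType values here are assumptions to adapt to your setup (Exec points at the wrapper script from footnote (*)):

```
[Desktop Entry]
Type=Application
Name=Firefox Nightly (Docker)
Exec=docker-firefox %u
Categories=Network;WebBrowser;
MimeType=text/html;x-scheme-handler/http;x-scheme-handler/https;
```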
*: To keep the .desktop file cleaner, you can create an executable file on the system PATH (e.g. /usr/bin/docker-firefox):
xhost +
docker run --net=host -v ~/:/home/firefox -v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix:0 -v /dev/shm:/dev/shm --device /dev/snd \
--group-add 29 -e PULSE_SERVER=unix:/run/user/1000/pulse/native \
-v /run/user/1000/pulse/native:/run/user/1000/pulse/native \
firefox-nightly "$@"
Note the "$@" at the end, which forwards the arguments (e.g. the URL to open) into the container. Also make the file executable so it can be run as a normal application.
**: The link is from Arch documentation, but it works in Ubuntu as well.

Docker Swarm - equivalent docker commands

As far as I know, the Docker Swarm API is compatible with the official Docker API.
What are the equivalent Docker Swarm commands for the following docker commands?
docker ps -a
docker run --net=host --privileged=true \
-e DEVICE=$VETH_NAME -e SWARM_MANAGER_ADDR=$SWARM_MANAGER_ADDR -e SWARM_MANAGER_PORT=$SWARM_MANAGER_PORT \
-v conf_files:/etc/sur \
-v conf_files:/etc/sur/rules \
-v _log:/var/log/sur \
-d sur
The standalone swarm simply exposes a different host/port for you to connect to with the client (the client being the docker CLI). It relays the commands as appropriate from the manager to each node in the swarm. The easiest way to do that is to set $DOCKER_HOST to point at the port the manager is listening on:
# start your manager, the end of the command is your discovery method
docker run -d -P --restart=always --name swarm-manager swarm manager ...
# send all future commands to the manager
export DOCKER_HOST=$(docker port swarm-manager 2375)
# run any docker ps, docker run, etc commands on the Swarm
docker ps
docker run --net=host --privileged=true \
-e DEVICE=$VETH_NAME \
-e SWARM_MANAGER_ADDR=$SWARM_MANAGER_ADDR \
-e SWARM_MANAGER_PORT=$SWARM_MANAGER_PORT \
-v conf_files:/etc/sur \
-v conf_files:/etc/sur/rules \
-v _log:/var/log/sur \
-d sur
# return to running commands on the local docker host
unset DOCKER_HOST
If you needed those SWARM_MANAGER_ADDR/PORT values defined, those can come out of the docker port command. Otherwise, I'm not familiar with the "sur" image to know about the values you need to pass there.
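As an aside, the standalone swarm image has since been superseded by Docker's built-in swarm mode, where you address the manager directly and run containers as services. Rough command equivalents there (run on a manager node; <service-name> is a placeholder):

```shell
# Swarm-mode rough equivalents of the standalone-swarm workflow above:
docker node ls                     # list the nodes in the swarm
docker service ls                  # list services across the swarm
docker service ps <service-name>   # list the tasks of one service
```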

How to store my docker registry in the file system

I want to setup a private registry behind a nginx server. To do that I configured nginx with a basic auth and started a docker container like this:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/home/example/registry \
-p 5000:5000 \
registry
By doing that, I can log in to my registry and push/pull images... But if I stop the container and start it again, everything is lost. I would have expected my registry to be saved in /home/example/registry, but this is not the case. Can someone tell me what I missed?
I would have expected my registry to be saved in /home/example/registry but this is not the case
It is the case; it's just that the /home/example/registry directory is on the Docker container's file system, not the Docker host's file system.
If you run your container with one of your Docker host directories mounted as a volume in the container, it will achieve what you want:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-p 5000:5000 \
-v /home/example/registry:/registry \
registry
Just make sure that /home/example/registry exists on the Docker host side.
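If you would rather let Docker manage the storage location instead of binding a host path, a named volume (here registry-data, an arbitrary name) behaves the same way:

```shell
# Create a named volume and mount it at the registry's storage path;
# the data survives container restarts and removals.
docker volume create registry-data
docker run -d \
  -e STANDALONE=true \
  -e SETTINGS_FLAVOR=local \
  -e STORAGE_PATH=/registry \
  -p 5000:5000 \
  -v registry-data:/registry \
  registry
```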
