Pass flag to cAdvisor with docker

I am running cAdvisor with the following command, as instructed here:
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/dev/disk/:/dev/disk:ro \
--publish=8080:8080 \
--detach=true \
--name=cadvisor \
google/cadvisor:latest
I need to pass the following flag to cAdvisor as suggested in this answer:
--enable_load_reader=true
How do I pass that flag to cAdvisor?

The google/cadvisor image runs the cAdvisor binary as its entrypoint, so anything you append after the image name in the docker run ... command is passed straight to cAdvisor.
You may also want to add the --net host option to your docker run command, as noted here:
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/dev/disk/:/dev/disk:ro \
--publish=8080:8080 \
--detach=true \
--net host \
--name=cadvisor \
google/cadvisor:latest \
--enable_load_reader=true
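If you want to confirm that the flag actually reached cAdvisor, one quick check (a sketch; it assumes the container name from the command above) is to inspect the entrypoint and arguments of the running container:
docker inspect --format '{{.Path}} {{.Args}}' cadvisor
The output should list --enable_load_reader=true among the arguments.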

Related

How to open rviz, a Qt application, in vscode-remote?

I am trying to run a ROS project inside a vscode-remote container, where the images would be running on the current machine.
It needs to be able to communicate with other nodes outside of the containers and to use visualization tools like rviz, which uses the Qt library.
I installed nvidia-docker2 and was able to start the image on its own and start rviz.
However, when running the command in vscode-remote, some parameters don't seem to work.
This is the command I used to run my image from the CLI:
docker run -it --rm \
--name noetic_desktop \
--hostname noetic_desktop \
--device /dev/snd \
--env="DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-v `pwd`/../Commands/bin:/home/user/bin \
-v `pwd`/../ExampleCode:/home/user/ExampleCode \
-v `pwd`/../Projects/catkin_ws_src:/home/user/Projects/catkin_ws/src \
-v `pwd`/../Data:/home/user/Data \
-env="XAUTHORITY=$XAUTH" \
--gpus all \
noetic_image:latest \
bash
And this is the config I am using for the vscode-remote extension.
devcontainer.json:
{
"name": "Existing Dockerfile",
"context": "..",
"dockerFile": "../Dockerfile",
"runArgs": ["--env='DISPLAY'","--gpus all"],
"containerEnv": {
"QT_X11_NO_MITSHM": "1",
"XAUTHORITY": "${localEnv:XAUTH}"
}
}
However, when I try to open it in VS Code I get an unknown-flag error on the GPU argument:
Start: Run: docker run --sig-proxy=false -a STDOUT -a STDERR --mount type=bind,source=/home/crossing-laptop/Documents/Code/docker/ros-in-container,target=/workspaces/ros-in-container --mount type=volume,src=vscode,dst=/vscode -l devcontainer.local_folder=/home/crossing-laptop/Documents/Code/docker/ros-in-container -e QT_X11_NO_MITSHM=1 -e XAUTHORITY= --env='DISPLAY' --gpus all --entrypoint /bin/sh vsc-ros-in-container-ea1fa5d968381e26dee62839190e6131-uid -c echo Container started
unknown flag: --gpus all
For reproducibility, you can find the files at https://github.com/tomkimsour/ros-in-container.
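The log above shows docker receiving --gpus all as a single token. A hedged guess, assuming that is the cause: each runArgs element is passed to docker run as one separate argument, so the flag and its value would need to be split into two entries, for example:
"runArgs": ["--env=DISPLAY", "--gpus", "all"],
(The single quotes in "--env='DISPLAY'" are likewise passed literally to docker, so that variable may not be forwarded as intended either.)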

How to forward local-user's groups to the container?

My docker command is already fairly elaborate, but I still cannot see all of the local user's groups when I am inside the container. How can I do that?
From outside docker:
$ groups
<$USER> adm cdrom sudo dip video plugdev lpadmin sambashare docker
My docker-run command:
docker run -it \
--restart=on-failure:5 \
--name amr_sdk_docker \
--user "$(id --user):$(id --group)" \
--group-add "$(id --group)" \
--hostname "$(hostname)" \
--env "USER=$(whoami)" \
--env "DISPLAY=$DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--network=host \
--security-opt apparmor:unconfined \
--security-opt=no-new-privileges \
--pids-limit 128 \
--volume /tmp/.X11-unix:/tmp/.X11-unix \
--volume "${HOME}":/home/"$(whoami)":rw \
--volume "${HOME}"/.cache:/.cache:rw \
--volume /run/user:/run/user \
--volume /var/run/nscd/socket:/var/run/nscd/socket:ro \
--volume /etc/ssl/certs/:/etc/ssl/certs/:ro \
--volume /etc/ssh/:/etc/ssh/:ro \
--volume /usr/share/ca-certificates:/usr/share/ca-certificates:ro \
--volume /etc/passwd:/etc/passwd:ro \
--volume /etc/group:/etc/group:ro \
--volume /usr/local/share/ca-certificates:/usr/local/share/ca-certificates:ro \
--volume /dev:/dev \
--volume /lib/modules:/lib/modules \
--volume /tmp:/tmp:rw \
--privileged \
<image_name:tag>
And after the above command, from inside the container:
$ groups
<$USER>
I'm not sure I understand your problem exactly; however, you are currently passing only the effective group ID with --group-add, and that one is already taken care of by:
--user "$(id --user):$(id --group)"
What you might be missing is adding a --group-add argument for each of your user's supplementary group IDs, which you can list on the host with:
id --groups
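As a minimal sketch, assuming a POSIX-ish shell on the host (the loop is illustrative and the image name is the placeholder from your command above), the extra arguments could be generated like this:
docker run -it \
  --user "$(id --user):$(id --group)" \
  $(for gid in $(id --groups); do echo --group-add "$gid"; done) \
  <image_name:tag>
Note that group names will only resolve inside the container if /etc/group is visible there, which your --volume /etc/group:/etc/group:ro mount already takes care of.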

docker volume not found for configuration option

I am trying to run this docker command:
docker run --rm --name lighthouse -it \
-v $PWD/test-results/lighthouse:/home/chrome/reports \
-v $PWD/lighthouse:/lighthouse \
--cap-add=SYS_ADMIN femtopixel/google-lighthouse \
--config-path=/lighthouse/config/custom-config.js \
$full_url \
--output html \
--output json
But it is not picking up the --config-path argument; I assume I have the volume mapped wrong somehow.
I am trying to create a volume called lighthouse, but I get this error:
/usr/bin/entrypoint: 11: exec:
--config-path=/lighthouse/config/custom-config.js: not found
You should be passing the URL as the first parameter, I think. The error above shows the entrypoint trying to exec --config-path=... as if it were a command, so try putting the URL first and the lighthouse options after it:
docker run --rm --name lighthouse -it \
-v $PWD/test-results/lighthouse:/home/chrome/reports \
-v $PWD/lighthouse:/lighthouse \
--cap-add=SYS_ADMIN femtopixel/google-lighthouse \
$full_url \
--config-path=/lighthouse/config/custom-config.js \
--output html \
--output json

How to add the rabbitmq_delayed_message_exchange plugin to RabbitMQ running in Docker

I would like to add the "rabbitmq_delayed_message_exchange" plugin to my Docker installation.
I also want the plugin to still be there after I restart the RabbitMQ container.
The installation script I use is:
docker run -d -h docker01.docker \
--add-host=docker01.docker:192.168.1.11 \
--name rabbit \
-p "4370:4370" \
-p "5672:5672" \
-p "15672:15672" \
-p "25672:25672" \
-p "35197:35197" \
-e "RABBITMQ_USE_LONGNAME=true" \
-e "ERL_EPMD_PORT=4370" \
-e RABBITMQ_ERLANG_COOKIE="rabbitcookie" \
-e RABBITMQ_NODENAME="master" \
-e "RABBITMQ_LOGS=/var/log/rabbitmq/rabbit.log" \
-v /data/rabbitmq:/var/lib/rabbitmq \
-v /data/rabbitmq/logs:/var/log/rabbitmq \
rabbitmq:3.6.6-management
Is it possible to add that plugin to the installation above?
Thanks
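One common pattern, sketched below, is to bake the plugin into a derived image so that it survives the container being recreated; the plugin file name/version and the /plugins destination are assumptions on my part and may need adjusting for the 3.6.6 image:
# Dockerfile (sketch): the .ez file is downloaded beforehand from the plugin's releases page
FROM rabbitmq:3.6.6-management
# /plugins is an assumed destination; adjust to the image's plugins directory if it differs
COPY rabbitmq_delayed_message_exchange-0.0.1.ez /plugins/
RUN rabbitmq-plugins enable --offline rabbitmq_delayed_message_exchange
You would then build the image (for example docker build -t rabbitmq-delayed:3.6.6-management .) and substitute that name for rabbitmq:3.6.6-management in the docker run command above.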

external access to kubernetes

docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override="127.0.0.1" \
--address="0.0.0.0" \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged=true --v=2
A curl localhost:8080 confirms that the API is running.
But trying to access it with the host's IP, like curl dockerHostIp:8080, fails:
Failed to connect to ipOfDockerHost port 8080: Connection refused
How can I expose k8s to the outside? (The docker host is an Ubuntu server.)
As far as I understand, using --net=host should solve this problem, but it does not in this case.
When you start kubernetes with docker, you choose between two models:
--config=/etc/kubernetes/manifests
--config=/etc/kubernetes/manifests-multi.
If you look at these files, you will notice one difference: the --insecure-bind-address they pass to the API server.
When you use --config=/etc/kubernetes/manifests, you ask for local access only.
You should start with --config=/etc/kubernetes/manifests-multi.
Note that you will need to start etcd manually when you use --config=/etc/kubernetes/manifests-multi, and follow this post, as the Docker setup is not working at the moment.
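To see the difference yourself, one quick check (a sketch; <kubelet-container> is whatever name or ID docker assigned to the hyperkube container started above) is:
docker exec <kubelet-container> grep -r insecure-bind-address /etc/kubernetes/manifests /etc/kubernetes/manifests-multi
which should show the API server bound to 127.0.0.1 in manifests and to 0.0.0.0 in manifests-multi, i.e. only the latter is reachable via the host's IP.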
