How to open rviz, a Qt application, in vscode-remote? - docker

I am trying to run a ROS project inside a vscode-remote container, with the image running on the current machine.
It needs to be able to communicate with nodes outside the container and to use visualization tools like rviz, which relies on the Qt library.
I installed nvidia-docker2 and was able to start the image on its own and launch rviz.
However, when running the command through vscode-remote, some parameters don't seem to work.
This is the command I used to run my image from the CLI:
docker run -it --rm \
--name noetic_desktop \
--hostname noetic_desktop \
--device /dev/snd \
--env="DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-v `pwd`/../Commands/bin:/home/user/bin \
-v `pwd`/../ExampleCode:/home/user/ExampleCode \
-v `pwd`/../Projects/catkin_ws_src:/home/user/Projects/catkin_ws/src \
-v `pwd`/../Data:/home/user/Data \
-env="XAUTHORITY=$XAUTH" \
--gpus all \
noetic_image:latest \
bash
And this is the config I am using for the vscode-remote extension.
devcontainer.json:
{
"name": "Existing Dockerfile",
"context": "..",
"dockerFile": "../Dockerfile",
"runArgs": ["--env='DISPLAY'","--gpus all"],
"containerEnv": {
"QT_X11_NO_MITSHM": "1",
"XAUTHORITY": "${localEnv:XAUTH}"
}
}
However, when I try to open it in vscode, I get an unknown flag error on the GPU argument.
Start: Run: docker run --sig-proxy=false -a STDOUT -a STDERR --mount type=bind,source=/home/crossing-laptop/Documents/Code/docker/ros-in-container,target=/workspaces/ros-in-container --mount type=volume,src=vscode,dst=/vscode -l devcontainer.local_folder=/home/crossing-laptop/Documents/Code/docker/ros-in-container -e QT_X11_NO_MITSHM=1 -e XAUTHORITY= --env='DISPLAY' --gpus all --entrypoint /bin/sh vsc-ros-in-container-ea1fa5d968381e26dee62839190e6131-uid -c echo Container started
unknown flag: --gpus all
For reproducibility, you can find the files at https://github.com/tomkimsour/ros-in-container.
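A likely cause, based on my reading of the log above rather than anything stated in the question: each entry in runArgs is passed to docker as a single argument, so the string "--gpus all" reaches the CLI as one token and is rejected as an unknown flag. A minimal sketch of a devcontainer.json that splits the flags, assuming the rest of the file stays unchanged:
{
  "name": "Existing Dockerfile",
  "context": "..",
  "dockerFile": "../Dockerfile",
  // each array element becomes one argv token, so the flag and its value are separate entries;
  // the quotes around DISPLAY are also dropped so docker sees --env=DISPLAY, not --env='DISPLAY'
  "runArgs": ["--env=DISPLAY", "--gpus", "all"],
  "containerEnv": {
    "QT_X11_NO_MITSHM": "1",
    "XAUTHORITY": "${localEnv:XAUTH}"
  }
}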

Related

installing transmission on debian with docker: Missing container [duplicate]

This question already has answers here: Why docker container exits immediately (16 answers). Closed 9 months ago.
I am new to this. I have installed docker on my Raspberry Pi. I am trying to install transmission in docker. I use the following:
docker run --cap-add=NET_ADMIN -d \
--name=transmission \
-v /mnt/extDrive1:/data \
-v /etc/localtime:/etc/localtime:ro \
-e CREATE_TUN_DEVICE=true \
-e OPENVPN_PROVIDER=EXPRESSVPN \
-e OPENVPN_CONFIG=my_expressvpn_uk_-_london_udp \
-e OPENVPN_USERNAME=XXX \
-e OPENVPN_PASSWORD=XXX \
-e WEBPROXY_ENABLED=false \
-e LOCAL_NETWORK=192.168.0.0 \
--log-driver json-file \
--log-opt max-size=10m \
-p 9091:9091 \
haugene/transmission-openvpn
I go through the debugging steps at https://haugene.github.io/docker-transmission-openvpn/debug/.
All is fine until I get to the section 'Checking if Transmission is running'.
When I run docker ps, there are no containers in the list.
What have I done wrong? Ultimately, I am trying to access transmission through localhost:9091.
Edit: So I have made some progress, but I am still having issues:
docker start transmission temporarily populates the container ID.
docker exec -it <container-id> bash comes up with the following error:
Error response from daemon: Container XXXX is not running
It seems that the container is exiting because you are not running it in detached mode. Try this:
docker run -itd --cap-add=NET_ADMIN \
--name=transmission \
-v /mnt/extDrive1:/data \
-v /etc/localtime:/etc/localtime:ro \
-e CREATE_TUN_DEVICE=true \
-e OPENVPN_PROVIDER=EXPRESSVPN \
-e OPENVPN_CONFIG=my_expressvpn_uk_-_london_udp \
-e OPENVPN_USERNAME=XXX \
-e OPENVPN_PASSWORD=XXX \
-e WEBPROXY_ENABLED=false \
-e LOCAL_NETWORK=192.168.0.0 \
--log-driver json-file \
--log-opt max-size=10m \
-p 9091:9091 \
haugene/transmission-openvpn
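If the container still disappears from docker ps, a general debugging step (not from the original answer) is to check why it exited; the exit code and logs usually point at the OpenVPN configuration:
# list the container even after it has exited, along with its exit code
docker ps -a --filter name=transmission
# show the output the container produced before it stopped
docker logs transmission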

docker volume not found for configuration option

I am trying to run this docker command
docker run --rm --name lighthouse -it \
-v $PWD/test-results/lighthouse:/home/chrome/reports \
-v $PWD/lighthouse:/lighthouse \
--cap-add=SYS_ADMIN femtopixel/google-lighthouse \
--config-path=/lighthouse/config/custom-config.js \
$full_url \
--output html \
--output json
But it is not picking up the --config-path argument; somehow I have the volume mapped wrong.
I am trying to create a volume called lighthouse, but I get this error:
/usr/bin/entrypoint: 11: exec:
--config-path=/lighthouse/config/custom-config.js: not found
You should be sending the URL as the first parameter, I think:
docker run --rm --name lighthouse -it \
-v $PWD/test-results/lighthouse:/home/chrome/reports \
-v $PWD/lighthouse:/lighthouse \
--cap-add=SYS_ADMIN femtopixel/google-lighthouse \
$full_url \
--config-path=/lighthouse/config/custom-config.js \
--output html \
--output json
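The reasoning, which is my addition based on the error above: everything after the image name is handed to the image's entrypoint, which appears to treat its first argument as the thing to run, so the reordered command is roughly equivalent to invoking this inside the container (a sketch, assuming the entrypoint wraps the lighthouse CLI):
# inside the container, the arguments arrive in this order
lighthouse "$full_url" \
  --config-path=/lighthouse/config/custom-config.js \
  --output html \
  --output json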

Docker invalid reference format?

I am trying to use this docker command:
docker run --rm --name eosio -d -p 8888:8888 -p 9876:9876 \
-v \host_mntC\eosio\work:/work \
-v \host_mntC\eosio\data:/mnt/dev/data \
-v \host_mntC\eosio\config:/mnt/dev/config \
\host_mntC\eosio\contracts:/contracts eosio/test /bin/bash \
-c "nodeos -e -p eosio --plugin eosio::producer_plugin \
--plugin eosio::history_plugin --plugin eosio::chain_api_plugin \
--plugin eosio::history_api_plugin --plugin eosio::http_plugin \
-d /mnt/dev/data --config-dir /mnt/dev/config \
--http-server-address=0.0.0.0:8888 --access-control-allow-origin=* \
--contracts-console --http-validate-host=false"
I need the files to be saved locally when I start EOSIO. What am I doing wrong? Thank you. System: Windows 10.
It's just a missing -v before \host_mntC\eosio\contracts:/contracts.
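For clarity, the corrected mount lines would look like this (only the volume flags and image are shown; the rest of the command is unchanged from the question):
-v \host_mntC\eosio\data:/mnt/dev/data \
-v \host_mntC\eosio\config:/mnt/dev/config \
-v \host_mntC\eosio\contracts:/contracts \
eosio/test /bin/bash \
-c "nodeos ..."   # remaining nodeos arguments exactly as in the question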

How to run enlightenment wayland in docker container?

I am trying to run enlightenment (https://www.enlightenment.org/start) in a docker container. Previously enlightenment was based on X11, but the latest version supports wayland. From what I have found, we can use the -v parameter with the "docker run" command to start a docker image like:
$ docker run -it \
--net host \ # may as well YOLO
--cpuset-cpus 0 \ # control the cpu
--memory 512mb \ # max memory it can use
-v /tmp/.X11-unix:/tmp/.X11-unix \ # mount the X11 socket
-e DISPLAY=unix$DISPLAY \ # pass the display
-v $HOME/Downloads:/root/Downloads \ # optional, but nice
-v $HOME/.config/google-chrome/:/data \ # if you want to save state
--device /dev/snd \ # so we have sound
--name chrome \
jess/chrome
(Reference: https://blog.jessfraz.com/post/docker-containers-on-the-desktop/)
But this is based on X11. Currently I do not use X11, and instead use the wayland-based enlightenment. How can I show my enlightenment UI from a docker container?
According to https://unix.stackexchange.com/questions/330366/how-can-i-run-a-graphical-application-in-a-container-under-wayland, you mount some device such as /run/user/1000/wayland-0 in your docker run command.
Here is an extract from https://github.com/duzy/docker-wayland/blob/master/run.sh:
docker run \
--name $container \
-v "$(pwd):/home/user/work" \
--device=/dev/dri/card0:/dev/dri/card0 \
--device=/dev/dri/card1:/dev/dri/card1 \
--device=/dev/dri/controlD64:/dev/dri/controlD64 \
--device=/dev/dri/controlD65:/dev/dri/controlD65 \
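The extract above is truncated; a minimal sketch of the general idea (my own, with a placeholder image name) is to bind-mount the host compositor's socket into the container and tell the client where to find it via WAYLAND_DISPLAY and XDG_RUNTIME_DIR:
# share the host Wayland socket ($XDG_RUNTIME_DIR/$WAYLAND_DISPLAY, typically /run/user/1000/wayland-0)
docker run -it --rm \
  -e XDG_RUNTIME_DIR=/tmp \
  -e WAYLAND_DISPLAY="$WAYLAND_DISPLAY" \
  -v "$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY":/tmp/"$WAYLAND_DISPLAY" \
  --device /dev/dri \
  my-wayland-image
The container user usually also needs permission to read the socket, for example by running with the same UID as the host user.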

X11 forward to windows x server for docker client in AWS

I am using windows mobaxterm as my X server and ssh client. If I type xclock in my ssh session on the server (ubuntu 16.04) in AWS, the clock appears and there is no problem. Now I have installed nvidia-docker in AWS. Here is the run script for starting the docker container:
nvidia-docker run -it \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /tmp/.docker.xauth:/tmp/.docker.xauth \
-e XAUTHORITY=/tmp/.docker.xauth \
-net=host \
\
gcr.io/tensorflow/tensorflow:latest-gpu /bin/bash
The error I have is:
root@ip-172-31-35-73:/notebooks# xclock
MobaXterm X11 proxy: Unsupported authorisation protocol
Error: Can't open display: localhost:10.0
The following seems to work.
ssh from a local terminal in mobaxterm:
ssh -X -Y -i "C:\your_key_path\xxx.pem" root@xx.xx.xx.xx
In AWS, start your docker container as:
nvidia-docker run -it \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /root/.Xauthority:/root/.Xauthority \
-e XAUTHORITY=/root/.Xauthority \
--net=host \
\
gcr.io/tensorflow/tensorflow:latest-gpu /bin/bash
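If mounting /root/.Xauthority is not an option (for example, when the container runs as a non-root user), a common alternative, not part of the original answer, is to generate a dedicated cookie file like the /tmp/.docker.xauth the question refers to:
# on the host: create an X authority file the container can read
XAUTH=/tmp/.docker.xauth
touch $XAUTH
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -
chmod 644 $XAUTH
# then bind-mount $XAUTH into the container and set XAUTHORITY to it, as in the question's run script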
The result is that the X11 application now displays correctly.
