I have a problem like the one described below.
I am running my Jetson Nano with a webcam connected.
I connect to the Jetson via ssh, and I can access the camera from my host machine by using:
ssh -X nano@<my_ip> cheese
So that part is working.
Then I try to run a Docker container on the Jetson via ssh.
So, first of all:
ssh -X nano@<my_ip>
then:
docker run -it --runtime=nvidia -p 8888:8888 -v ${PWD}:/home/app -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v /dev/:/dev/ --privileged -w /home/my_app <image_name>
Inside the container I have a simple Python script that opens the video device and shows a preview.
This part is not working: the camera is visible (/dev/v4l/by-id), but OpenCV cannot get any frame.
Of course, when I work directly on the Jetson (without ssh), the Docker container with the Python script works correctly (I mean logging into the Nano, running the Docker image exactly as above, and then running the Python script inside).
So the question is, what am I doing wrong?
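One way to narrow this down (a sketch, assuming the image has python3 and OpenCV installed and the webcam is capture device index 0) is to try grabbing a single frame headlessly inside the container, which separates the capture problem from the display problem:
docker run -it --runtime=nvidia -v /dev/:/dev/ --privileged <image_name> \
  python3 -c "import cv2; cap = cv2.VideoCapture(0); ok, frame = cap.read(); print(ok, getattr(frame, 'shape', None))"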
Edit
While controlling the Jetson over ssh:
I set DISPLAY=:0, and then I can run the container and the Python script - the camera works, but the preview is shown on the "remote" display (the one connected directly to the Jetson Nano).
So, as I understand it, the problem is setting up the correct DISPLAY and redirecting the preview from the remote machine to my host over ssh.
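If that is the case, one common approach (a sketch, not tested here) is to keep the DISPLAY value that ssh -X sets (something like localhost:10.0 rather than :0), run the container with host networking so it can reach the forwarded X connection, and pass in the X authority file; the ~/.Xauthority path is an assumption:
ssh -X nano@<my_ip>
echo $DISPLAY        # e.g. localhost:10.0 inside the ssh session
docker run -it --runtime=nvidia --net=host \
  -e DISPLAY=$DISPLAY \
  -v ~/.Xauthority:/root/.Xauthority:ro \
  -v /dev/:/dev/ --privileged -w /home/my_app <image_name>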
Related
I am trying to compile a "hello world" Rust program inside a Docker container and then remotely debug it using GDBServer and CLion, but I don't think gdbserver is starting correctly. When I start it, I don't get the "process started" and "listening on port..." messages I expect; I get nothing.
I have successfully done this with a Raspberry Pi on my home network, but can't get it to work when using Docker.
My ultimate goal is to deploy this Docker container on a Digital Ocean droplet and debug remotely from my local machine. For now, I've got Docker running on the local machine.
I am working on a Mac (Mojave), running Docker (v18.09), and spinning up a Docker container that is an image built from Debian with Rust and gdbserver installed. GDBServer and Rust are installed by:
# install curl (needed to install rust)
apt-get update && apt-get install -y curl gdb g++-multilib lib32stdc++6 libssl-dev libncurses5-dev
# install rust + cargo nightly
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain nightly
I start the container with docker run --rm -it -v $(pwd):/source -p 7777:7777 schickling/rust which starts up bash by default.
Once in the container, I compile the Rust program using rustc -g -o ./program ./src/main.rs which outputs a single file: program. I can run the program fine (it only outputs Hello World).
When I run gdbserver localhost:7777 ./program inside Docker, the terminal just hangs. I've let it sit for 20 minutes. I can't connect to it from CLion, and even ping doesn't work from my Mac. I've tried adding the --debug flag which outputs my_waitpid (11, 0x0) and then hangs. I've also tried :7777, 127.0.0.1:7777, and host:7777. I've tried several different ports.
I'm not sure where my problem is. It may be that GDBServer is running and the issue is in my CLion setup, but I doubt it. I have path mappings set up, and the 'target remote' args value is tcp:127.0.0.1:7777. I just get Connection closed. Symbol file and Sysroot are empty, but that has worked in the past with my Raspberry Pi.
I figured out how to run my Docker container with --privileged, which allows gdbserver to run correctly. I also updated some of my CLion configs and got it working.
Useful links:
https://visualgdb.com/tutorials/linux/docker/
Run gdb inside docker container running systemd
gdb does not hit any breakpoints when I run it from inside Docker container
https://github.com/mdklatt/clion-remote
My updated docker command: docker run --rm -it -v $(pwd):/source -p 7777:7777 -e container=docker --privileged schickling/rust
And my run configuration:
GDB: Bundled
'target remote' args: tcp:localhost:7777
Symbol file: the local copy of my compiled binary (copied out of Docker thanks to the volume)
Sysroot: (blank)
Path mappings: the absolute path to my project directory in Docker, and the absolute path to the same project directory on my local machine (the same volume)
Works like a charm.
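For anyone following along, the end-to-end flow with the pieces from this question looks roughly like this (a sketch; the image, port, and file names are just the ones used above):
# on the host, start the container
docker run --rm -it -v $(pwd):/source -p 7777:7777 -e container=docker --privileged schickling/rust
# inside the container, build with debug info and start gdbserver
rustc -g -o ./program ./src/main.rs
gdbserver localhost:7777 ./program
# then connect from CLion (or plain gdb) on the host using 'target remote' args tcp:localhost:7777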
To play around with a docker image, I installed docker and ran a sample docker ubuntu image as follows. (I hope I am using terminology correctly, still a docker noob)
docker run -it ubuntu
Because gvim or any other GUI-based program was not installed by default, I ran the following inside the Ubuntu Docker container:
apt-get update
apt-get install x11-apps vim-gtk
However, on running xclock I get
root@59be2b1afca0:/# xclock
Error: Can't open display: :0
root@59be2b1afca0:/#
On running gvim I get
root@59be2b1afca0:/# gvim
E233: cannot open display
Press ENTER or type command to continue
So why won't GUI apps work?
Containers weren't originally designed for GUI apps, but rather for services, workers, background processes, and so on. On the other hand, containerisation is a kernel construct for isolating and dedicating resources in a more managed way, and a container can still expose ports and share volumes, devices, etc.
This means you can technically map your screen, audio, and webcam devices into a container by using --device /dev/xyz when you run your docker run command:
docker run [--rm [-it]|-d] \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY \
--device /dev/dri \
myimage [cmd]
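As a concrete usage example for the ubuntu container from the question, granting local clients access to the X server first (a sketch for a Linux host running X; xhost +local: relaxes access control, so you may want to revoke it afterwards with xhost -local:):
xhost +local:
docker run --rm -it \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY \
    ubuntu bash -c "apt-get update && apt-get install -y x11-apps && xclock"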
I actually found an article describing this here - including audio, camera and other device mapping.
http://somatorio.org/en/post/running-gui-apps-with-docker/
Hope this helps a bit!
I am trying to set up an image of osrm-backend in Docker. I am unable to run it using the commands below (as mentioned in the wiki):
docker run -t -v ${pwd}:/data osrm/osrm-backend:v5.18.0 osrm-extract -p /opt/car.lua /data/denmark-latest.osm.pbf
docker run -t -v ${pwd}:/data osrm/osrm-backend:v5.18.0 osrm-contract /data/denmark-latest.osrm
docker run -t -i -p 5000:5000 -v ${pwd}:/data osrm/osrm-backend:v5.18.0 osrm-routed /data/denmark-latest.osrm
I have already fetched the corresponding map using both wget and Invoke-WebRequest. Every time I run the first of the commands above, it gives the error:
[error] Input file /data/denmark-latest.osm.pbf not found!
I have also tried placing the downloaded maps in the corresponding location. Can anyone tell me what I am doing wrong here?
I am using PowerShell on Windows 10.
For me, the problem was that Docker was not able to access the C drive, even though sharing was turned on in the Docker settings. After wasting a lot of time, I turned off sharing for the C drive and then turned it back on. After that, when I mounted a folder into Docker, it was able to see the files.
Docker share drive
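Once sharing has been toggled back on, a quick way to confirm the bind mount actually exposes the files, before running osrm-extract, is to list the mounted directory from any small image (a sketch; alpine is just an example image):
docker run --rm -v "${PWD}:/data" alpine ls /data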
I've built a very basic Docker container as a proof of concept for running an xterm window from inside it.
In it, I have a basic install of RHEL 7.3 and xterm.
I build as normal, open up X access with xhost +, and then run the docker run command like so:
docker run -ti --rm -e DISPLAY=${DISPLAY} -v /tmp/.X11-unix:/tmp/.X11-unix xtermDemo /bin/bash
This runs perfectly when my base host is Linux. The problem is that most of the developers in my organization work on a Windows/Mac host and log into a VNC session. When they run the Docker image from the VNC session, xterm can't run.
Any ideas? My only hunch at the moment is that the VNC Xorg isn't being run natively and that this is somehow causing the issue.
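One quick sanity check from inside the VNC session, before starting the container, is to confirm which DISPLAY the VNC server is providing and that a matching X socket exists under /tmp/.X11-unix, since that is the directory being bind-mounted (a sketch):
echo $DISPLAY          # typically :1, :2, ... inside a VNC session, not :0
ls /tmp/.X11-unix/     # should contain a socket (e.g. X1) matching that display number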
I have a pintool that runs normally with this command:
../../../pin -injection child -t obj-intel64/mypintool.so -- obj-intel64/myexecutable
In place of myexecutable, I want to put a Docker-based program which runs with this command:
docker run --rm --net spark-net --volumes-from data \
cloudsuite/graph-analytics \
--driver-memory 1g --executor-memory 4g \
--master spark://spark-master:7077
When I tried to simply replace -- obj-intel64/myexecutable with the docker command, the pintool started normally, but it did not finish normally.
I believe that my pintool attaches to the docker client process and not to the containerized application, which is my actual target.
Do I have to follow a different approach in order to attach my pintool correctly to a program running in a Docker container?
I'm not a Docker expert, but running it this way will indeed make Pin instrument the docker client executable. You need to put Pin inside the Docker instance and run the executable inside the container under Pin. That is, the command line should look something like this:
docker run <docker arguments> pin <pin arguments> -- myexecutable <executable arguments>
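A rough sketch of what that might look like for the container above, assuming the Pin kit and pintool are bind-mounted into the container and that you know the command the image's normal entrypoint runs (all host paths here are hypothetical; Pin may also need ptrace-related permissions):
docker run --rm --net spark-net --volumes-from data \
    --cap-add SYS_PTRACE --security-opt seccomp=unconfined \
    -v /path/to/pin-kit:/pin \
    --entrypoint /pin/pin \
    cloudsuite/graph-analytics \
    -injection child -t /pin/obj-intel64/mypintool.so \
    -- <command the image normally runs> --driver-memory 1g --executor-memory 4g --master spark://spark-master:7077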