Docker for GUI-based environments?

Problem
I have a set of client machines that are part of an enterprise web application. Each machine runs identical software: a PyQt-based web client that connects to a server. This client software is updated regularly, and I would like a configuration/provisioning tool that lets me maintain the same environment on each machine and hence allows easy deployment and configuration of the software onto each client machine.
The problem is that I have tried Chef, but it takes a lot of effort to maintain Chef knowledge and skills (we do not have a dedicated ops person), and moreover a Chef recipe can fail if some third-party repository is no longer available (this is the main blocker).
I would like to try Docker to solve the problem, but I still do not know whether it is possible to set up images/containers that allow GUI-based software to operate.
Question
Is it possible to use Docker to have a development/production environment for a GUI-based application (PyQt/Qt)? If yes, what would be the first steps to approach that?

Currently this question is unanswered, but it is ranked very highly on Google. The other answers are mostly correct, but with some caveats that I learned the hard way, and I would like to save others the trouble.
The answer given by Nasser Alshammari is the simplest (and fastest) approach to running GTK applications inside a Docker container: simply mount the host's X server socket as a Docker volume and tell the container to use it.
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY TheImage
(I would also recommend passing the -u <username-within-container> flag, as running X11 applications as root does not always work and is generally not recommended, especially when sharing sessions.)
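For example, the same command with a hypothetical non-root container user (appuser is a placeholder, not from the original answer):
$ docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY -u appuser TheImage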
This will work for applications such as xterm, as well as GTK-based applications. For example, if you try this with Firefox (which is GTK-based), it will work (note that if you are already running Firefox on the host, it will open a new window in the host rather than open a new instance of Firefox from within the container).
However, your question asks about PyQt specifically. It turns out that Qt does not support sharing X sessions this way (or at least does not support it well).
If you try running a Qt-based application this way, you will probably get an error like the following:
X Error: BadAccess (attempt to access private resource denied) 10
  Extension:    140 (MIT-SHM)
  Minor opcode: 1 (X_ShmAttach)
  Resource id:  0x12d
X Error: BadShmSeg (invalid shared segment parameter) 148
  Extension:    140 (MIT-SHM)
  Minor opcode: 5 (X_ShmCreatePixmap)
  Resource id:  0xb1
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
  Major opcode: 62 (X_CopyArea)
  Resource id:  0x2c0000d
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
  Major opcode: 62 (X_CopyArea)
  Resource id:  0x2c0000d
I say "probably" because I have not tested this approach with enough Qt applications to be sure, nor dug into the Qt source code enough to figure out why it is not supported. YMMV, and you may get lucky, but if you want to run a Qt-based application from within a Docker container, you may have to take the "old-fashioned" approach and either
Run sshd within the container, turn on X11 forwarding, and then connect to the container using ssh -X (more secure) or ssh -Y (less secure, used only if you fully trust the containerized application).
Run VNC within the container, and connect to it from the host with a VNC client.
Between those two options I would recommend the first, but see which works best for your situation; a minimal sketch of the first option follows.
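A minimal sketch of the sshd approach, assuming an Ubuntu base image (the username, password, and port mapping are placeholders, not from the original question):
FROM ubuntu
# xauth is required for ssh X11 forwarding to work
RUN apt-get update && apt-get install -y openssh-server xauth x11-apps
RUN mkdir -p /var/run/sshd && useradd -m guiuser && echo 'guiuser:changeme' | chpasswd
# X11Forwarding is on by default in many distros; set it explicitly to be safe
RUN sed -i 's/^#\?X11Forwarding.*/X11Forwarding yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build it, run it, and connect with X11 forwarding:
$ docker build -t gui-ssh .
$ docker run -d -p 2222:22 gui-ssh
$ ssh -X -p 2222 guiuser@localhost xeyes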

There are many solutions for running GUI apps in a Docker container, such as SSH or VNC, but they add some overhead and delay. The best way I found is simply to pass the socket file used by the X server on the host machine as a volume to the container, like this:
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY TheImage
Then all your GUI apps will run from the container.
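Note that, depending on the host's X server access control, the container's clients may still be rejected with a "cannot open display" error. A common (if blunt) workaround, not part of the original answer, is to allow local connections before starting the container, and revoke the permission afterwards:
$ xhost +local:
$ docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY TheImage
$ xhost -local: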
Hope this helps!

SOLVED - PyQt5-GUI in Docker Container:
Enable Qt debugging ($ export QT_DEBUG_PLUGINS=1) ==> reproduce the error ==> re/install the "No such file or directory" library listed in the debug message ==> repeat!
I also could not run a PyQt5 GUI app in a Docker container without errors, and had first read all the posts saying it was not possible to run Qt in Docker containers. But I was able to solve it (at least for me)...
System
I am running my PyQt5 application in a Docker container with a shared /tmp/.X11-unix/ socket and display for GUI visualization:
$ nvidia-docker run --interactive --tty --env DISPLAY=$DISPLAY --volume /tmp/.X11-unix/:/tmp/.X11-unix/ <docker_image>
Error
Initializing PyQt5.QtWidgets.QApplication always led to the following error:
Type "help", "copyright", "credits" or "license" for more information.
>>> from PyQt5.QtWidgets import QApplication
>>> app = QApplication([])
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.
Aborted (core dumped)
In PyCharm debug mode, the error returned was:
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
Solution
General method:
1. Set the Qt debug environment variable in the Docker container terminal:
$ export QT_DEBUG_PLUGINS=1
2. Reproduce the error in the Docker terminal (or in the IDE), e.g.:
$ python
Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
KeyboardInterrupt
>>> from PyQt5.QtWidgets import QApplication, QLabel
>>> app = QApplication([])
3. Read the debug messages printed to the terminal, e.g.:
QFactoryLoader::QFactoryLoader() checking directory path "/conda/envs/rapids/lib/python3.6/site-packages/PyQt5/Qt/plugins/platforms" ...
QFactoryLoader::QFactoryLoader() looking at "/conda/envs/rapids/lib/python3.6/site-packages/PyQt5/Qt/plugins/platforms/libqeglfs.so"
Found metadata in lib /conda/envs/rapids/lib/python3.6/site-packages/PyQt5/Qt/plugins/platforms/libqeglfs.so, metadata=
{
  "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
  "MetaData": {
    "Keys": [
      "eglfs"
    ]
  },
...
...
...
Got keys from plugin meta data ("xcb")
QFactoryLoader::QFactoryLoader() checking directory path "/conda/envs/rapids/bin/platforms" ...
Cannot load library /conda/envs/rapids/lib/python3.6/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)
QLibraryPrivate::loadPlugin failed on "/conda/envs/rapids/lib/python3.6/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so" : "Cannot load library /conda/envs/rapids/lib/python3.6/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)"
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.
Aborted (core dumped)
4. Find the <No such file or directory>.so.* and <could not be loaded> packages, here e.g. libxkbcommon-x11.so.0 and libxcb. Then re/install the corresponding packages/libraries (finding the packages works with apt-file --package-only search <filename> or conda/pip search ...). In my case the following libs were required:
### lib no.1 ###
$ sudo conda install --name <env_name> --force-reinstall libxcb # or pip install ...
### lib no. 2 ###
$ apt-file --package-only search libxkbcommon-x11.so.0
libxkbcommon-x11-0
$ sudo apt install libxkbcommon-x11-0
After repeating this process for all sequentially reproduced debug messages and installing the two libs, I can now run PyQt5 apps from inside the Docker container on my local machine's desktop.
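Once the missing libraries are known, you can bake them into the image so the container works on the first run. A hypothetical Dockerfile sketch (base image, app path, and the second package name are illustrative assumptions; libxkbcommon-x11-0 comes from the debug session above, and libxcb-xinerama0 is another commonly reported missing dependency):
FROM python:3.6
# Preinstall the X/xcb libraries the xcb platform plugin needs at load time
RUN apt-get update && apt-get install -y --no-install-recommends \
        libxkbcommon-x11-0 \
        libxcb-xinerama0 \
    && rm -rf /var/lib/apt/lists/*
RUN pip install PyQt5
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]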

I managed to run xeyes in a container and see the "window" in an X server running outside of the container. Here's how:
I used Xephyr to run a nested X server. This is not necessary, but most Linux desktops do not allow running remote apps on them by default (here's how to "fix" this on Ubuntu).
Install Xephyr:
$ sudo apt-get install xserver-xephyr
Run Xephyr:
$ Xephyr -ac -br -noreset -screen 800x600 -host-cursor :1
This creates a new 800x600 window, which acts as an X server.
Find an "external" address of your machine. This is where the X server is running:
$ ifconfig
docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:133395 errors:0 dropped:0 overruns:0 frame:0
          TX packets:242570 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:9566682 (9.5 MB)  TX bytes:353001178 (353.0 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:650493 errors:0 dropped:0 overruns:0 frame:0
          TX packets:650493 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2506560450 (2.5 GB)  TX bytes:2506560450 (2.5 GB)

wlan0     Link encap:Ethernet  HWaddr c4:85:08:97:b6:de
          inet addr:192.168.129.159  Bcast:192.168.129.255  Mask:255.255.255.0
          inet6 addr: fe80::c685:8ff:fe97:b6de/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6587370 errors:0 dropped:1 overruns:0 frame:0
          TX packets:3716257 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7405648745 (7.4 GB)  TX bytes:693693327 (693.6 MB)
Don't use 127.0.0.1! Inside the container it refers to the container itself, not to the host. You can use any of the others; I'll use 172.17.42.1.
Create a Dockerfile with the following content:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y x11-apps
CMD ["/usr/bin/xeyes"]
Build it:
$ docker build -t xeyes .
And run it:
$ docker run -e DISPLAY=172.17.42.1:1.0 xeyes
Note that I'm setting the DISPLAY environment variable to point at the Xephyr server (display :1 on 172.17.42.1), which is where I want to see the window.
You can use the same technique to redirect the display to any X server.

Recently I tried to run a PyQt5 application in Docker. What I learned is that you cannot run the application as root (you have to create a normal user). When you want to play audio/video in the application, you have to run the Docker container with the "audio" group and mount the sound device. So to run my application I use this:
docker run -it \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $(pwd)/test:/app \
-e DISPLAY=$DISPLAY \
-u myusername \
--group-add audio \
--device /dev/snd \
fadawar/docker-pyqt5-qml-qtmultimedia python3 /app/hello.py
I spent some time figuring out which packages I needed to add to my container to run a PyQt application in it, so I created a few Dockerfiles (with a simple demo app) to make it easier for others:
Python 3 + PyQt5: https://github.com/jozo/docker-pyqt5
Python 3 + PyQt5 + QML + QtMultimedia: https://github.com/jozo/docker-pyqt5-qml-qtmultimedia

Here are the basic steps you need to follow to get things working:
To create and run the Docker container
sudo nvidia-docker run -it -d --privileged -e DISPLAY=$DISPLAY --name wakemeup -v /dev:/dev -v /tmp/.X11-unix:/tmp/.X11-unix:rw nvidia/cuda:9.1-cudnn7-devel-ubuntu16.04 bash
To start the docker container
sudo docker start wakemeup
To attach to the docker container
xhost +local:root 1>/dev/null 2>&1
docker exec -u $USER -it wakemeup /bin/bash
xhost -local:root 1>/dev/null 2>&1
MIT-SHM is an X server extension that allows faster transactions by using shared memory. Docker isolation probably blocks it. Qt applications can be forced not to use the extension. Inside the Docker container:
nano ~/.bashrc
export QT_X11_NO_MITSHM=1
Then source .bashrc:
source ~/.bashrc
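Alternatively (an equivalent variant, not from the original answer), you can set the variable per container at run time instead of editing .bashrc; <image_name> is a placeholder:
sudo docker run -it -e QT_X11_NO_MITSHM=1 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw <image_name> bash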
Hope this helps!

You can use subuser to package your GUI applications. It also has good support for updating applications. You can put your Dockerfiles in a git repo once, and then just run subuser update all on each client to rebuild the images when they need to be changed.

For macOS Catalina, I had to install XQuartz, then...
xhost 127.0.0.1
export DISPLAY=:0
ssh -Y
docker run -e DISPLAY=host.docker.internal:0 -it ros

Check this repo as well; it runs GUI applications inside Docker.

Related

Turn on SCTP support on Ubuntu 22.04

I am building an SCTP-supporting application with Erlang, and I stumbled upon some problems likely related to my machine (I tried the same code on another machine and it works just fine). I am using Ubuntu 22.04. When I try gen_sctp:open(...) it returns {error,eprotonosupport}, which after some research turns out to mean "The protocol type or the specified protocol is not supported within this domain.".
I tried:
sudo apt-get install libsctp-dev lksctp-tools
sctp_darn -H 0 -P 2500 -l
sctp_darn -H 0 -P 2600 -h 127.0.0.1 -p 2500 -s
And it seems to work just fine.
After:
lynis audit system | grep sctp
It returns:
* Determine if protocol 'sctp' is really needed on this system [NETW-3200]
So it seems to be enabled. What am I missing? (port is 3868)
Edit:
The port is open. I tried with ufw and iptables, both for all protocols and solely for SCTP. It didn't work.
Edit 2:
So after setting up two VMs, Ubuntu 20.04 and Ubuntu 22.04, everything seems to work as expected. I guess I had messed something up with my system.
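For future readers: a quick way to confirm whether the running kernel actually has SCTP support (generic diagnostics, not from the original thread):
$ grep SCTP /boot/config-$(uname -r)        # expect CONFIG_IP_SCTP=m or =y
$ sudo modprobe sctp && lsmod | grep sctp   # load the module and confirm it is listed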

Connecting with Portainer: "resource is online but isn't responding to connection attempts"

I installed Ubuntu on an older laptop. Now Docker with Portainer is running on it, and I want to access Portainer from my main PC on the same network. When I try to connect to Portainer from the laptop where it is running (not via the localhost address), it works fine. But when I try to connect from my PC, I get a timeout. Windows diagnostics says: "resource is online but isn't responding to connection attempts". How can I open Portainer to my local network? Or is this a problem with Ubuntu?
First, check that you have an OpenSSH server running for SSH access. Disable the firewall from the terminal: sudo ufw disable. Then check whether your network card is running under the name eth0 with ifconfig; if not, change it by following the steps below.
We'll use netplan, which is the default these days, and edit the /etc/netplan/00-installer-config.yaml file. But before that you need to get the serial/MAC address.
Find the target device's MAC/hardware address using the lshw command:
lshw -C network
You'll see some output which looks like:
root@ys:/etc# lshw -C network
  *-network
       description: Ethernet interface
       physical id: 2
       logical name: eth0
       serial: dc:a6:32:e8:23:19
       size: 1Gbit/s
       capacity: 1Gbit/s
       capabilities: ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=bcmgenet driverversion=5.8.0-1015-raspi duplex=full ip=192.168.0.112 link=yes multicast=yes port=MII speed=1Gbit/s
Then take the serial:
dc:a6:32:e8:23:19
Note the set-name option in the config below; this works for the wifi section as well. If you are using cable, you can delete everything and add only the example below, changing the serial ("MAC") to yours: sudo nano /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      match:
        macaddress: <YOUR MAC ID HERE>
      set-name: eth0
Then, to test this config, run:
netplan try
When you're happy with it:
netplan apply
Reboot your Ubuntu machine. After the restart, stop the Portainer container:
sudo docker stop portainer
Remove the Portainer container:
sudo docker rm portainer
Now run the latest version again:
docker run -d -p 8000:8000 -p 9000:9000 \
--name=portainer --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:2.13.1
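As a narrower alternative to disabling the firewall entirely (my suggestion, not part of the original answer), you can allow just the ports Portainer uses:
sudo ufw allow 9000/tcp   # Portainer web UI
sudo ufw allow 8000/tcp   # Portainer edge agent tunnel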

In Jupyter Docker, cannot connect to kernel

When installing a Jupyter Docker image, for example this one:
docker run -d \
--hostname jupyterhub-ds \
--log-opt max-size=50m \
-p 8000:8000 \
-p 5006:5006 \
-e DOCKER_USER=$(id -un) \
-e DOCKER_USER_ID=$(id -u) \
-e DOCKER_PASSWORD=$(id -un) \
-e DOCKER_GROUP_ID=$(id -g) \
-e DOCKER_ADMIN_USER=$(id -un) \
-v "$(pwd)":/workdir \
-v "$(dirname $HOME)":/home_host \
dclong/jupyterhub-ds /scripts/sys/init.sh
JupyterLab starts well and I can enter the lab through URL+port.
However, it is not possible to connect to the internal Python kernel (the connection hangs).
What kind of security am I facing?
Is this related to socket communication security?
After investigation, I have these messages:
[D 16:01:39.488 NotebookApp] Starting kernel: ['/usr/local/bin/python', '-m', 'ipykernel_launcher', '-f', '/root/.local/share/jupyter/runtime/kernel-f0420fbf-12e918f-20df7d3e804a.json']
[D 16:01:39.491 NotebookApp] Connecting to: tcp://127.0.0.1:51775
[D 16:01:39.491 NotebookApp] Connecting to: tcp://127.0.0.1:38609
[I 16:01:39.492 NotebookApp] Kernel started: f0420fbf-12ef-403e-918f-20df7d3e804a
[D 16:01:39.492 NotebookApp] Kernel args: {'kernel_name': 'python3', 'cwd': '/'}
[D 16:01:39.493 NotebookApp] Clearing buffer for 5e93046f-aa3e-4edd-a018-66b9d4c752e5
[I 16:01:39.493 NotebookApp] Kernel shutdown: 5e93046f-aa3e-4edd-a018-66b9d4c752e5
It seems linked to this one :
https://jupyter-notebook.readthedocs.io/en/stable/public_server.html
Firewall Setup
To function correctly, the firewall on the computer running the jupyter notebook server must be configured to allow connections from client machines
on the access port c.NotebookApp.port set in jupyter_notebook_config.py to allow connections to the web interface.
The firewall must also allow connections from 127.0.0.1 (localhost) on ports from 49152 to 65535. These ports are used by the server to communicate with the notebook kernels.
The kernel communication ports are chosen randomly by ZeroMQ,
and may require multiple connections per kernel, so a large range of ports must be accessible.
I'm not sure how you built the Docker command, or why you chose that particular Docker image (dclong/jupyterhub).
If it is designed to run JupyterHub (multi-user), then it doesn't sound like what you need if you're trying to run your own Jupyter server in Docker, just for you.
I would suggest using something like jupyter/scipy-notebook instead, which is designed to run just one Jupyter server.
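For example, a minimal single-user setup with that image (the port and image name follow the Jupyter docker-stacks documentation; adjust the tag as needed):
docker run -p 8888:8888 jupyter/scipy-notebook
Then open the URL, including the login token, that the container prints to its log.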
Otherwise, please describe what you actually want to get running, or why you believe you need to use that image etc.

I am running the DPDK pktgen application. The application does not find any ports by itself, and even if I try to add one it doesn't work.

http://pktgen.readthedocs.org/en/latest/running.html
This is the pktgen DPDK application. The screenshot at that link shows how ports are configured, but for me it doesn't configure at all. I am looking for help as a beginner.
First, as you may know, pktgen is an application that uses the DPDK framework; thus, you should have bound at least one NIC to DPDK. Check the DPDK documentation: DPDK building instructions. You should see your NIC correctly bound with this command:
# path/to/DPDK/tools/dpdk_nic_bind.py --status
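If the NIC is not bound yet, the same script can bind it (the PCI address and driver name below are placeholders; use the values that --status reports for your system):
# path/to/DPDK/tools/dpdk_nic_bind.py --bind=igb_uio 0000:01:00.0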
Then you can run pktgen. The ports you want to use are specified with the -p option (it is a pktgen-specific option, so it comes after the --). It's a port mask, so for instance, if you want only the first port (port 0), you can use -p 0x1.
Then the -m option lets you choose which core will handle which DPDK port. The syntax is not really obvious; I suggest you read the pktgen documentation on this option: pktgen command line options.
For example, in short, the option -m "[1:3].0" says you want CPU core 1 to handle "RX port 0" and CPU core 3 to handle "TX port 0".
A simple pktgen command line, using only one port running on two cores, could be:
./app/pktgen -c 0x7 -n 3 -- -p 0x1 -P -m "[1:2].0"
In that case, CPU cores 1 and 2 (possible because of the "-c 0x7" option) will be used to handle respectively the RX and TX of port 0 (configured with "-p 0x1"). Note that -P is for promiscuous mode.

iperf, SCTP option not recognized at the command prompt

I'm using iperf3, which is supposedly a rewritten version of iperf. The reason I'm using it is that I love iperf for TCP and UDP throughput, and I now want to test SCTP throughput between my endpoints.
However, when I try to use the --sctp option that I've seen people using, it says the option is not recognized. Is it that the implementation I'm using has not implemented this option?
https://github.com/esnet/iperf
This is the implementation I'm using; I can't find any obvious documentation of the SCTP options related to it. Most SCTP iperf implementations are added manually in the tests, and the source code is often not provided.
Any help would be appreciated!
Get a copy of iperf which supports the lksctp module of the Linux kernel. Install it using the standard process. (If it fails, please report the error message along with the operating system and kernel details.) To use SCTP in iperf, these are the proper syntaxes.
For creating an SCTP server,
iperf -z -s
(-z is for selecting the SCTP protocol and -s is for server.)
For creating an SCTP client,
iperf -z -c <host address> -t <time duration for the connection in second>s -i <interval of the time to print the bandwidth in terminal in second>s
(-z for SCTP, -c is for client. The host address should be the IP address of the server where iperf -z -s is already running. -t specifies the communication time duration. -i specifies the interval at which to report the bandwidth.)
Example:
iperf -z -c 0.0.0.0 -t 10s -i 2s
Here the communication time is 10 seconds and it will report the bandwidth for each 2-second interval.
P.S.
(1) To use iperf for SCTP, you must enable the SCTP module in the kernel and recompile it. The kernel version must be 2.6 or above; check it using uname -a or uname -r. If you have a lower one, download a new kernel from The Linux Kernel Archives and compile it with SCTP enabled.
First check whether it is already enabled by running these two commands in the terminal:
modprobe sctp
lsmod | grep sctp
If you get any output, then SCTP is already enabled.
(2) If iperf with -z still fails, try the following. Suppose the two machines are 'A' and 'B'.
First make 'A' the server and 'B' the client. It won't succeed, so exit using `ctrl + z` and kill iperf using `pkill -9 iperf`.
Then make 'B' the server and 'A' the client. It may succeed. If it fails again, kill iperf using the above command and repeat step 1; it might succeed.
(The second solution works for me with Fedora 20 and kernel 2.6 and above.)
I couldn't find any recent answers through Googling, so I thought I would leave an answer here for those looking to install iperf3 with SCTP support on RHEL/CentOS.
You'll need to install lksctp-tools-devel first and build from source to enable SCTP support. A yum install of iperf3 3.17 with lksctp-tools-devel present did not enable SCTP for me.
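For reference, a typical source build on RHEL/CentOS might look like the following (a sketch; package names and paths are illustrative):
$ sudo yum install -y gcc make lksctp-tools-devel
$ git clone https://github.com/esnet/iperf && cd iperf
$ ./configure && make && sudo make install
If the lksctp headers are found, configure enables SCTP support and iperf3 accepts the --sctp flag.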
