I'm using this guide to try to get Docker running under WSL2. Everything starts up, but there's an issue when I actually try to run the Docker daemon. Once I run the command
sudo dockerd -H `ifconfig eth0 | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d:`
I get:
WARN[2022-02-01T11:07:40.033323500-06:00] Binding to IP address without --tlsverify is insecure and gives root access on this machine to everyone who has access to your network. host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:40.033991800-06:00] Binding to an IP address, even on localhost, can also give access to scripts run in a browser. Be safe out there! host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.036303800-06:00] Binding to an IP address without --tlsverify is deprecated. Startup is intentionally being slowed down to show this message host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.043536700-06:00] Please consider generating tls certificates with client validation to prevent exposing unauthenticated root access to your network host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.044564400-06:00] You can override this by explicitly specifying '--tls=false' or '--tlsverify=false' host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.045654100-06:00] Support for listening on TCP without authentication or explicit intent to run without authentication will be removed in the next release host="tcp://169.254.77.26:2375"
failed to load listeners: listen tcp 169.254.77.26:2375: bind: cannot assign requested address
I'm not too familiar with Docker, so I'm not sure what I can adjust to make it launch properly. Any suggestions are appreciated, thanks!
I was doing exactly the same thing.
What worked for me was this comment: https://dev.to/nelsonpena/comment/1jmkb, but it was not very explicit.
I opened windows PowerShell and used the command
wsl --set-version Ubuntu 2
If you have another Linux distro it would be
wsl --set-version <distroname> 2
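You can verify the conversion took effect before reopening WSL:
wsl -l -v
The VERSION column should now show 2 for your distro.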
I closed WSL, opened it again, and executed the command
echo `ifconfig eth0 | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2;exit }' | cut -f2 -d:`
and got API listen on [the IP]
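For reference, once the command resolves to a proper eth0 address (rather than the link-local 169.254.x.x one from the question), the daemon can also be started against it explicitly; the IP below is just an example of what the subshell prints:
sudo dockerd -H tcp://172.26.64.1:2375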
I'm setting up a backup/sync within an Ubuntu network using rsync.
Assume a desktop (Ubuntu 18.04) with IP 10.0.0.13,
running Docker with two containers:
Client_A: 2001 -> 22/tcp , 8001 -> 80/tcp
Client_B: 2002 -> 22/tcp , 8002 -> 80/tcp
All three images are Ubuntu with apache2 installed and running.
dir:
DesktopOS (10.0.0.13:80):    /var/www/html/1.txt
Container1 (10.0.0.13:8001): /var/www/html/2.txt
Container2 (10.0.0.13:8002): /var/www/html/3.txt
All three .txt files can be accessed in a browser.
When I try to pull 3.txt into Container1 with:
rsync -av -e 'ssh -p 2002' --rsh=ssh user@10.0.0.13:/var/www/html/ ~/BACKUP/
1.txt is what gets received.
How can I access 3.txt from Container1?
Please use the IP address, since I am simulating a real network; in the real world it might be one Docker host per device.
Finally I found that I had installed ssh only, which doesn't install the ssh server. Additionally, the firewall was blocking access.
# check ports 22, 2001, 2002 etc.
# from the netstat result: is it listening?
netstat -tln | grep 2002
Install ssh server
sudo apt install tasksel
sudo tasksel install openssh-server
For the firewall:
sudo ufw allow 2001,2002/tcp
and that solved it. Thanks for your patience, everyone who tried to answer.
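For what it's worth, the rsync command in the question also passes both -e 'ssh -p 2002' and --rsh=ssh. These are the same rsync option (-e is short for --rsh), so the later --rsh=ssh overrides the custom port and the transfer goes to the host's port 22, which is why 1.txt arrived. A sketch of the corrected pull, assuming sshd is now running in Client_B:
rsync -av -e 'ssh -p 2002' user@10.0.0.13:/var/www/html/ ~/BACKUP/
This should fetch 3.txt, since port 2002 maps to Client_B's port 22.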
I searched online but don't see a solution. I have influx installed (InfluxDB shell version: v1.6.2), but it throws this error:
Failed to connect to http://localhost:8086: Get http://localhost:8086/ping: dial tcp [::1]:8086: connect: connection refused
Please check your connection settings and ensure 'influxd' is running.
Just a couple of things to check: make sure the service is running (use the service manager on your OS or the influxd command to check). Another test you can do is to use the actual machine IP address, http://<machine-ip>:8086, instead of localhost. It could also be that access is restricted (iptables).
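For example, on a systemd-based install:
# is the daemon up?
sudo systemctl status influxdb
# does the HTTP API answer? /ping returns 204 when healthy
curl -sv http://localhost:8086/ping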
If none of that works, I would check out the discussion on this GitHub issue.
In my case on a Mac, I had to run influxd -config /usr/local/etc/influxdb.conf first before running influx.
I was facing the same challenge when I upgraded influxdb to 1.8.9, so I had to downgrade back to 1.8.5.
https://vibhubithar.medium.com/workaround-latest-version-of-influxdb-not-starting-on-raspberry-pi-buster-a8b5afa84fce
sudo apt update
sudo apt upgrade -y
wget https://s3.amazonaws.com/dl.influxdata.com/influxdb/releases/influxdb_1.8.5_armhf.deb
sudo dpkg -i influxdb_1.8.5_armhf.deb
sudo systemctl unmask influxdb.service
sudo systemctl start influxdb
sudo systemctl enable influxdb.service
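After the service is up, a quick sanity check that you are back on the downgraded version:
influx -version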
First, check whether the influxdb instance is running. If it is already running, you might need to kill the process:
ps -ef |grep influxdb
influxdb 5781 1 99 18:15 pts/0 00:00:22 /usr/bin/influxd -pidfile /var/run/influxdb/influxd.pid -config /etc/influxdb/influxdb.conf
pkill -f influxdb
Once the process is killed, there is a chance the port is still in use, which can be verified with the command shown below.
sudo netstat -tulpn | grep LISTEN | grep influx
root@db1:/usr/bin# sudo netstat -tulpn | grep LISTEN | grep influx
tcp 0 0 127.0.0.1:8088 0.0.0.0:* LISTEN 28558/influxd
tcp6 0 0 :::8086 :::* LISTEN 28558/influxd
root@db1:/usr/bin#
In the above example, kill process 28558 by issuing kill -9 28558.
Once the port is released, cd to the /etc/init.d directory and start the service:
root@jvision-db1:/etc/init.d# influx
The DB instance should come back, which can be verified with the ps -ef | grep influxdb command.
Also, cd to the /usr/bin directory and issue the command below to verify InfluxDB is available:
root@db1:~# cd /usr/bin
root@db1:/usr/bin# ./influx
Connected to http://localhost:8086 version 1.7.9
InfluxDB shell version: 1.7.9
>
If your configuration has:
bind-address = "10.0.0.32:8086"
use:
$> influx -host 10.0.0.32
Connected to http://10.0.0.32:8086 version 1.8.10
InfluxDB shell version: 1.8.10
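If you are unsure what address the daemon is bound to, check the config file (default path on most Linux installs; adjust if yours differs):
grep bind-address /etc/influxdb/influxdb.conf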
Problem
I have a set of client machines that are part of an enterprise web application. Each machine runs identical software: a PyQt-based web client that connects to a server. This client software is updated regularly, and I would like some configuration/provisioning tool that lets me keep the same environment on each machine, and hence deploy and configure the software easily on each of the client machines.
The problem is that I have tried Chef, but it takes a lot of effort to maintain Chef knowledge and skills (we do not have a dedicated ops person), and moreover a Chef recipe can fail if some third-party repository is no longer available (this is the main blocker).
I would like to try Docker to solve the problem, but I still do not know whether it is possible to set up images/containers that allow GUI-based software to operate.
Question
Is it possible to use Docker to have a development/production environment for a GUI-based application (PyQt/Qt)? If yes, what would be the first steps to approach that?
Currently this question is not answered, but it is very highly ranked on Google. The other answers are mostly correct, but with some caveats that I have learned the hard way, and I would like to save others trouble.
The answer given by Nasser Alshammari is the simplest (and fastest) approach to running GTK applications inside a Docker container - simply mount the socket for the X server as a Docker volume and tell Docker to use that instead.
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY TheImage
(I would also recommend passing the -u <username-within-container> flag, as running X11 applications as root does not always work, and is generally not recommended, especially when sharing sessions).
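A sketch of that fuller invocation (the username is hypothetical and must already exist inside the image):
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY -u someuser TheImage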
This will work for applications such as xterm, as well as GTK-based applications. For example, if you try this with Firefox (which is GTK-based), it will work (note that if you are already running Firefox on the host, it will open a new window in the host rather than open a new instance of Firefox from within the container).
However, your question asks about PyQt specifically. It turns out that Qt does not support sharing of X sessions in this way (or at least does not support it well).
If you try running a QT-based application this way, you will probably get an error like the following:
X Error: BadAccess (attempt to access private resource denied) 10
Extension: 140 (MIT-SHM)
Minor opcode: 1 (X_ShmAttach)
Resource id: 0x12d
X Error: BadShmSeg (invalid shared segment parameter) 148
Extension: 140 (MIT-SHM)
Minor opcode: 5 (X_ShmCreatePixmap)
Resource id: 0xb1
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
Major opcode: 62 (X_CopyArea)
Resource id: 0x2c0000d
X Error: BadDrawable (invalid Pixmap or Window parameter) 9
Major opcode: 62 (X_CopyArea)
Resource id: 0x2c0000d
I say "probably" because I have not tested this approach with enough Qt applications to be sure, or dug into the Qt source code enough to figure out why this is not supported. YMMV, and you may get lucky, but if you are looking to run a Qt-based application from within a Docker container, you may have to go the "old-fashioned" approach and either
Run sshd within the container, turn on X11 forwarding, and then connect to the container using ssh -X (more secure) or ssh -Y (less secure, used only if you fully trust the containerized application).
Run VNC within the container, and connect to it from the host with a VNC client.
Between those two options, I would recommend the first, but see which works best for your situation.
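As a rough sketch of the first option (image name, port, and user are all hypothetical; the image must have openssh-server installed with X11Forwarding enabled in its sshd_config):
docker run -d -p 2222:22 my-qt-image
ssh -X -p 2222 appuser@localhost my-qt-app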
There are many solutions for running GUI apps in a Docker container. You can use SSH or VNC, for instance, but they add some overhead and delay. The best way I found is to pass the Unix socket used by the X server on the host machine as a volume to the container, like this:
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY TheImage
Then all your GUI apps will run from the container.
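Note that the host's X server must also accept connections from the container. One quick but permissive way to allow that, assuming the xhost utility is available on the host:
xhost +local:
You can revoke the permission afterwards with xhost -local:.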
Hope this helps!
SOLVED - PyQt5-GUI in Docker Container:
Enable Qt debugging with export QT_DEBUG_PLUGINS=1 ==> reproduce the error ==> re/install the "No such file or directory" library listed in the debug message ==> repeat!
I also could not run a PyQt5 GUI app in a Docker container without receiving errors, and had first read all the posts saying it was not possible to run Qt in Docker containers. But I was able to solve it (at least for me)...
System
I am running my PyQt5 application in a Docker container with the /tmp/.X11-unix/ socket and the display shared for GUI visualization:
$ nvidia-docker run --interactive --tty --env DISPLAY=$DISPLAY --volume /tmp/.X11-unix/:/tmp/.X11-unix/ <docker_image>
Error
Initializing PyQt5.QtWidgets.QApplication always led to the following error:
Type "help", "copyright", "credits" or "license" for more information.
>>> from PyQt5.QtWidgets import QApplication
>>> app = QApplication([])
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.
Aborted (core dumped)
In PyCharm Debug mode the error returned:
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
Solution
General method:
Set the Qt debug environment variable in the Docker container terminal:
$ export QT_DEBUG_PLUGINS=1
Reproduce the error in the Docker terminal (or in the IDE), e.g.:
$ python
Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
KeyboardInterrupt
>>> from PyQt5.QtWidgets import QApplication, QLabel
>>> app = QApplication([])
Read the debug messages printed to the terminal, e.g.:
QFactoryLoader::QFactoryLoader() checking directory path "/conda/envs/rapids/lib/python3.6/site-packages/PyQt5/Qt/plugins/platforms" ...
QFactoryLoader::QFactoryLoader() looking at "/conda/envs/rapids/lib/python3.6/site-packages/PyQt5/Qt/plugins/platforms/libqeglfs.so"
Found metadata in lib /conda/envs/rapids/lib/python3.6/site-packages/PyQt5/Qt/plugins/platforms/libqeglfs.so, metadata=
{
"IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
"MetaData": {
"Keys": [
"eglfs"
]
},
...
...
...
Got keys from plugin meta data ("xcb")
QFactoryLoader::QFactoryLoader() checking directory path "/conda/envs/rapids/bin/platforms" ...
Cannot load library /conda/envs/rapids/lib/python3.6/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)
QLibraryPrivate::loadPlugin failed on "/conda/envs/rapids/lib/python3.6/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so" : "Cannot load library /conda/envs/rapids/lib/python3.6/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)"
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.
Aborted (core dumped)
Find the <No such file or directory>.so.* and <could not be loaded> packages, here e.g. libxkbcommon-x11.so.0 and libxcb. Then re/install the corresponding packages/libraries (finding the packages works with apt-file --package-only search <filename> or conda/pip search ...). In my case the following libs were required:
### lib no.1 ###
$ sudo conda install --name <env_name> --force-reinstall libxcb # or pip install ...
### lib no. 2 ###
$ apt-file --package-only search libxkbcommon-x11.so.0
libxkbcommon-x11-0
$ sudo apt install libxkbcommon-x11-0
After repeating this process for all sequentially reproduced debug messages and installing the two libs, I can now run PyQt5 apps from inside the Docker container on my local machine's desktop.
I managed to run xeyes in a container and see the "window" in a X server running outside of the container. Here's how:
I used Xephyr to run a nested X server. This is not necessary, but most Linux desktops do not allow running remote apps on them by default (here's how to "fix" this on Ubuntu).
Install Xephyr:
$ sudo apt-get install xserver-xephyr
Run Xephyr:
$ Xephyr -ac -br -noreset -screen 800x600 -host-cursor :1
This creates a new 800x600 window, which acts as a X server.
Find an "external" address of your machine. This is where the X server is running:
$ ifconfig
docker0 Link encap:Ethernet HWaddr 56:84:7a:fe:97:99
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:133395 errors:0 dropped:0 overruns:0 frame:0
TX packets:242570 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:9566682 (9.5 MB) TX bytes:353001178 (353.0 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:650493 errors:0 dropped:0 overruns:0 frame:0
TX packets:650493 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2506560450 (2.5 GB) TX bytes:2506560450 (2.5 GB)
wlan0 Link encap:Ethernet HWaddr c4:85:08:97:b6:de
inet addr:192.168.129.159 Bcast:192.168.129.255 Mask:255.255.255.0
inet6 addr: fe80::c685:8ff:fe97:b6de/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6587370 errors:0 dropped:1 overruns:0 frame:0
TX packets:3716257 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:7405648745 (7.4 GB) TX bytes:693693327 (693.6 MB)
Don't use 127.0.0.1! You can use any of the others. I'll use 172.17.42.1.
Create a Dockerfile with the following content:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y x11-apps
CMD ["/usr/bin/xeyes"]
Build it:
$ docker build -t xeyes .
And run it:
$ docker run -e DISPLAY=172.17.42.1:1.0 xeyes
Note that I'm setting the DISPLAY environment variable to point to where I want the window to appear.
You can use the same technique to redirect the display to any X server.
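For example, pointing DISPLAY at the host's primary display instead of the Xephyr window would look like this, though it only works if that X server accepts network connections (most desktops disable this by default, as noted above):
docker run -e DISPLAY=172.17.42.1:0 xeyes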
Recently I tried to run a PyQt5 application in Docker. What I learned is that you cannot run the application as root (you have to create a normal user). When you want to play audio/video in the application, you have to run the Docker container with the "audio" group and mount the sound device. So to run my application I use this:
docker run -it \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $(pwd)/test:/app \
-e DISPLAY=$DISPLAY \
-u myusername \
--group-add audio \
--device /dev/snd \
fadawar/docker-pyqt5-qml-qtmultimedia python3 /app/hello.py
I spent some time figuring out which packages I needed to add to my container to run a PyQt application in it, so I created a few Dockerfiles (with a simple demo app) to make it easier for others:
Python 3 + PyQt5: https://github.com/jozo/docker-pyqt5
Python 3 + PyQt5 + QML + QtMultimedia: https://github.com/jozo/docker-pyqt5-qml-qtmultimedia
Here are the basic steps you need to follow to get things working:
To create and run the Docker container
sudo nvidia-docker run -it -d --privileged -e DISPLAY=$DISPLAY --name wakemeup -v /dev:/dev -v /tmp/.X11-unix:/tmp/.X11-unix:rw nvidia/cuda:9.1-cudnn7-devel-ubuntu16.04 bash
To start the docker container
sudo docker start wakemeup
To attach to the docker container
xhost +local:root 1>/dev/null 2>&1
docker exec -u $USER -it wakemeup /bin/bash
xhost -local:root 1>/dev/null 2>&1
MIT-SHM is an X server extension that allows faster transactions by using shared memory. Docker's isolation probably blocks it. Qt applications can be forced not to use the extension. Inside the Docker container:
nano ~/.bashrc
and add the line:
export QT_X11_NO_MITSHM=1
Then source .bashrc:
source ~/.bashrc
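Alternatively, you can skip editing .bashrc and pass the variable straight in when the container is created; a sketch with a placeholder image name:
docker run -e QT_X11_NO_MITSHM=1 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix <your-image>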
Hope this will help
You can use subuser to package your GUI applications. It also has good support for updating applications. You can put your Dockerfiles in a git repo once, and then just run subuser update all on each client to rebuild the images when they need to be changed.
For Mac Catalina, I had to install XQuartz, then...
xhost 127.0.0.1
export DISPLAY=:0
ssh -Y
docker run -e DISPLAY=host.docker.internal:0 -it ros
Check this repo as well; it runs GUI applications inside Docker.
I'm using iperf3, which is supposedly a rewritten version of iperf. The reason I'm using it is that I love iperf for TCP and UDP throughput, and I now want to test SCTP throughput between my endpoints.
However, when I try to use the --sctp flag that I've seen people using, it says the option is not recognized. Is it that the implementation I'm using has not implemented this option?
https://github.com/esnet/iperf
This is the implementation I'm using; I can't find any obvious documentation of the SCTP options related to it. Most SCTP iperf implementations are patched in manually for tests, and the source code is often not provided.
Any help would be appreciated!
Get a copy of iperf that supports the lksctp module of the Linux kernel, and install it using the standard process. (If it fails, please report the error message along with the operating system and kernel details.) To use SCTP in iperf, these are the proper syntaxes.
For creating an SCTP server,
iperf -z -s
(-z is for selecting the SCTP protocol and -s is for server.)
For creating an SCTP client,
iperf -z -c <host address> -t <time duration for the connection in second>s -i <interval of the time to print the bandwidth in terminal in second>s
(-z for SCTP, -c for client. The host address should be the IP address of the server where iperf -z -s is already running. -t specifies the communication duration. -i specifies the reporting interval for the bandwidth.)
Example:
iperf -z -c 0.0.0.0 -t 10s -i 2s
Here the communication time is 10 seconds, and it will report the bandwidth at 2-second intervals.
P.S.
(1) To use iperf for SCTP, you must enable the SCTP module in the kernel and recompile it. The kernel version must be 2.6 or above; check it with uname -a or uname -r. If you have a lower one, download a new kernel from The Linux Kernel Archives and compile it with SCTP enabled.
First check if it is already enabled or not by running these two commands in the terminal.
modprobe sctp
lsmod | grep sctp
If you get any output, then SCTP is already enabled.
(2) If iperf with -z still fails, try the following. Say the two machines are 'A' and 'B'.
First make 'A' the server and 'B' the client. It won't succeed, so exit with ctrl + z and kill iperf using pkill -9 iperf.
Then make 'B' the server and 'A' the client. It may succeed. If it fails again, kill iperf using the above command and repeat step 1; it might then succeed.
(The second solution worked for me with Fedora 20 and kernel 2.6 and above.)
I couldn't find any recent answers through Googling, so I thought I would leave an answer here for those looking to install iperf3 with SCTP support on RHEL/CentOS.
You'll need to install lksctp-tools-devel first and build iperf3 from source to enable SCTP support. Installing iperf3 3.17 via yum, even with lksctp-tools-devel present, did not enable SCTP for me.
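For reference, a sketch of the source build on RHEL/CentOS (standard autotools flow; package names assumed from the distro repos):
sudo yum install -y gcc make git lksctp-tools-devel
git clone https://github.com/esnet/iperf.git
cd iperf
./configure   # should report SCTP support if the lksctp headers were found
make && sudo make install
# then test it: server on one machine, client on the other
iperf3 --sctp -s
iperf3 --sctp -c <server-ip>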