Websockify issue with multiple users on one vncserver

I am experimenting with websockify and have run into a problem: I want multiple users to access my VNC server independently, but when I use the command
websockify -D \
--web /usr/share/novnc/ \
--cert /etc/ssl/novnc.pem \
6081 \
localhost:5901 \
2>/dev/null
and access noVNC from 2 different browsers to control VNC, both browsers share the same session, as if the same machine were controlling one screen.
When I access noVNC from different machines, shouldn't they work independently?
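For reference, a single vncserver session is one shared desktop, so every noVNC client pointed at the same websockify port sees the same screen. A sketch of what independent sessions would look like (display numbers, ports, and paths are illustrative assumptions, not taken from the question):

```shell
# Hypothetical sketch: one VNC server per user, one websockify per VNC port.
# vncserver :N listens on TCP port 5900+N.
vncserver :1   # user A's desktop, port 5901
vncserver :2   # user B's desktop, port 5902

# One websockify instance per session, each on its own public port.
websockify -D --web /usr/share/novnc/ --cert /etc/ssl/novnc.pem 6081 localhost:5901
websockify -D --web /usr/share/novnc/ --cert /etc/ssl/novnc.pem 6082 localhost:5902
```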

Related

Connect Jenkins swarm client over HTTP proxy

I'm attempting to run a swarm client on an RFC 1918 node. The idea is to use a Squid proxy so that it can communicate with Hudson, which is in AWS.
However, when I attempt to run
/usr/bin/java \
-Dhttp.proxyHost=my.proxy.com -Dhttps.proxyHost=my.proxy.com \
-Dhttp.proxyPort=3128 -Dhttps.proxyPort=3128 \
-Dhttp.nonProxyHosts=127.0.0.0/8,192.168.0.0/16,10.0.0.0/8,.proxy.com \
-jar /usr/share/jenkins/swarm-client-3.15.jar \
-mode normal -executors 1 \
-username user -passwordEnvVariable JSWARM_PASSWORD \
-name my-agent \
-master https://external.host.com \
-labels 'docker ldfc' \
-fsroot /j \
-disableClientsUniqueId \
-deleteExistingClients
I can see from packet traces that it makes no attempt to go through my.proxy.com and instead tries to communicate directly with external.host.com (which of course fails).
I believe I'm following the official docs at https://github.com/jenkinsci/swarm-plugin/blob/master/docs/proxy.adoc; what am I doing wrong?
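One detail worth double-checking (a general observation about java.net proxy properties, not a confirmed fix for this case): http.nonProxyHosts does not understand CIDR notation, and entries are separated by | with * wildcards, so the exclusion list would look like this:

```shell
# Illustrative format only (not a confirmed fix): java.net proxy settings
# use pipe-separated wildcard patterns; commas and CIDR blocks are not supported.
/usr/bin/java \
  -Dhttp.proxyHost=my.proxy.com -Dhttps.proxyHost=my.proxy.com \
  -Dhttp.proxyPort=3128 -Dhttps.proxyPort=3128 \
  -Dhttp.nonProxyHosts='127.*|192.168.*|10.*|*.proxy.com' \
  -jar /usr/share/jenkins/swarm-client-3.15.jar
# (remaining swarm-client options as in the question)
```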

How do I use a local Elrond Docker container node for local smart-contract development?

I am new to Elrond. I think I have a local node successfully running in Docker.
Both of these commands yield the same log output in Portainer:
sudo docker run -d \
--name my-elrond-testnet \
-v ${PATH_TO_BLS_KEY_FILE}:/data/ \
elrondnetwork/elrond-go-node:latest \
--nodes-setup-file="/data/nodesSetup.json" \
--p2p-config="/data/config/p2p.toml" \
--validator-key-pem-file="/data/keys/validatorKey.pem"
sudo docker run -d \
--name my-other-elrond-testnet \
--mount type=bind,source=${PATH_TO_BLS_KEY_FILE}/,destination=/data \
elrondnetwork/elrond-go-node:latest \
--validator-key-pem-file="/data/validatorKey.pem"
But now I don't know what to do. How do I connect to that local node?
I wanted to use it as a local development node - I want to deploy smart contracts on it.
I have some experience with Solana and with NEAR.
I don't see that the container exposes any ports.
Do I need a proxy?
This isn't Docker, but it successfully runs local node(s):
https://docs.elrond.com/developers/setup-local-testnet/
I followed most of the instructions without an issue, except that when I tried to run "erdpy" it failed.
But elsewhere I found "pip install erdpy".
Once you get erdpy working, the rest of the instructions go well, as far as getting local node(s) to run.
I don't know yet about actually deploying a contract on them.
There's also this: https://docs.elrond.com/developers/setup-local-testnet-advanced/
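For what it's worth, once the local testnet from those docs is running, deployment goes through its proxy. A hypothetical sketch (the proxy URL, chain ID, and file names are assumptions based on the linked setup guide, not verified here):

```shell
# Untested sketch: deploy a contract to the local testnet's proxy.
# "erdpy testnet start" normally exposes a proxy on localhost:7950 (assumed);
# the project directory, PEM file, and chain ID below are hypothetical.
erdpy --verbose contract deploy \
  --project=mycontract \
  --pem=./walletKey.pem \
  --proxy=http://localhost:7950 \
  --chain=local-testnet \
  --recall-nonce --gas-limit=50000000 \
  --send
```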

Cannot create a Google Compute Engine VM with a container image without an external IP address

I am attempting to build a VM using the marketplace postgresql11 image (though the problem appears to be general to all images I have tried) with the following gcloud command:
gcloud compute instances create-with-container postgres-test \
--container-image gcr.io/cloud-marketplace/google/postgresql11:latest \
--container-env-file=envdata.txt \
--container-mount-host-path mount-path=/var/lib/postgresql,host-path=/mnt/disks/postgres_data,mode=rw \
--machine-type=e2-small \
--scopes=cloud-platform \
--boot-disk-size=10GB \
--boot-disk-device-name=postgres-test \
--create-disk="mode=rw,size=10GB,type=pd-standard,name=postgres-test-data,device-name=postgres-test_data" \
--network-interface=subnet="default,no-address" \
--tags=database-postgres \
--metadata-from-file user-data=metadata.txt
The envdata.txt file contains the environment variable data for the image and the metadata.txt file contains bootcmd instructions to format and mount the external disk for the postgres data.
envdata.txt:
POSTGRES_USER=postgresuser
POSTGRES_PASSWORD=postgrespassword
metadata.txt:
#cloud-config
bootcmd:
- fsck.ext4 -tvy /dev/sdb
- mkdir -p /mnt/disks/postgres_data
- mount -t ext4 -O ... /dev/sdb /mnt/disks/postgres_data
The VM is created, but sudo journalctl shows that an attempt to connect to GCR starts and never succeeds. The Docker image for postgres is not downloaded and not started on the VM.
If I now remove the no-address option from the network-interface flag of the gcloud command (allowing Google to allocate an external IP address to the VM) by executing the following:
gcloud compute instances create-with-container postgres-test \
--container-image gcr.io/cloud-marketplace/google/postgresql11:latest \
--container-env-file=envdata.txt \
--container-mount-host-path mount-path=/var/lib/postgresql,host-path=/mnt/disks/postgres_data,mode=rw \
--machine-type=e2-small \
--scopes=cloud-platform \
--boot-disk-size=10GB \
--boot-disk-device-name=postgres-test \
--create-disk="mode=rw,size=10GB,type=pd-standard,name=postgres-test-data,device-name=postgres-test_data" \
--network-interface=subnet="default" \
--tags=database-postgres \
--metadata-from-file user-data=metadata.txt
then the VM is created, and the postgres image is downloaded and executed. sudo journalctl shows the connection to GCR starting and started.
Can anyone explain to me why the execution of an image in my case depends on having an external IP, and how I can create a VM using GCR without having to allocate an external IP address to the instance?
If you have a public IP, then requests from your instance to the Internet go through the Internet Gateway. If your instance does not have a public IP, then you need to set up Cloud NAT to provide a route to the Internet. This is the simplest solution. If you only need to access Google APIs and services and not the public Internet, see the next option.
Google Cloud NAT
Google also offers Private Google Access to reach only Google APIs and services.
Private Google Access
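A minimal Cloud NAT setup for the default network might look like this (region and resource names are placeholders):

```shell
# Hypothetical sketch: give instances without external IPs a route to
# the Internet (e.g. to pull from gcr.io) via Cloud NAT.
gcloud compute routers create nat-router \
  --network=default --region=us-central1

gcloud compute routers nats create nat-config \
  --router=nat-router --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges

# Alternatively, for Google APIs and services only (no public Internet),
# enable Private Google Access on the subnet:
gcloud compute networks subnets update default \
  --region=us-central1 --enable-private-ip-google-access
```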

MPI on docker main process

The recommended way of combining Horovod and Docker is: https://github.com/uber/horovod/blob/master/docs/docker.md. That is bad in a way, because it leaves bash as the primary Docker process and the python process as a secondary one: docker logs reports bash's output, the container's state depends on bash's state, the container exits if the bash process exits, and so on. Docker thinks its main process is bash, while it should be the python process we are starting. Is it possible to make the python process the main process in all worker containers, primary and secondary?
I tried starting the mpirun process outside of the container instead of inside it, using an interactive docker start command as the mpirun command (the containers were already prepared with nvidia-docker create):
mpirun -H localhost,localhost \
-np 1 \
-bind-to none \
-map-by slot \
-x NCCL_DEBUG=INFO \
-x LD_LIBRARY_PATH \
-x PATH \
-x NCCL_SOCKET_IFNAME=^docker0,lo \
-mca btl_tcp_if_exclude lo,docker0 \
-mca oob_tcp_if_exclude lo,docker0 \
-mca pml ob1 \
-mca btl ^openib \
docker start -a -i bajaga_aws-ls0-l : \
-np 1 \
-bind-to none \
-map-by slot \
-x NCCL_DEBUG=INFO \
-x LD_LIBRARY_PATH \
-x PATH \
-x NCCL_SOCKET_IFNAME=^docker0,lo \
-mca btl_tcp_if_exclude lo,docker0 \
-mca oob_tcp_if_exclude lo,docker0 \
-mca pml ob1 \
-mca btl ^openib \
docker start -a -i bajaga_aws-ls1-l
But that failed: the processes didn't communicate via Horovod and worked as independent processes.
Do you know how I could make the python process the container's main process?
I managed to get this working well enough via a few tricks:
* Starting the container with an entrypoint that runs forever until SIGTERM is received
* Starting the MPI work as another process
* Writing output to process 1's stdout/stderr, so that docker logs works
* At the end of my process, sending SIGTERM to process 1, so that the whole container closes
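A minimal sketch of the first trick (file names hypothetical): PID 1 is a tiny script that idles until SIGTERM, so the container's lifetime is controlled explicitly rather than by bash:

```shell
# Write a hypothetical entrypoint: PID 1 idles until it receives SIGTERM.
cat > entrypoint.sh <<'EOF'
#!/bin/sh
trap 'exit 0' TERM INT
# Sleep in the background and wait on it, so the trap fires as soon as
# the signal arrives instead of after a full sleep interval.
while :; do
  sleep 1 &
  wait $!
done
EOF
chmod +x entrypoint.sh
```

The MPI/python job is then started separately (e.g. via docker exec), redirects its output to /proc/1/fd/1 and /proc/1/fd/2 so that docker logs picks it up, and finishes with kill -TERM 1 to stop the whole container.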

Docker: sharing /dev/snd between multiple containers leads to "device or resource busy"

When adding a host device (--device /dev/snd) to a Docker container, I sometimes encounter Device or resource busy errors.
Example
I have reproduced the issue with a minimal example involving audio (ALSA). Here's my Dockerfile (producing an image docker-device-example):
FROM debian:buster
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
alsa-utils \
&& rm -rf /var/lib/apt/lists/*
I am running the following command (speaker-test is a tool that generates a tone which can be used to test the speakers), with /dev/snd shared:
docker run --rm \
-i -t \
--device /dev/snd \
docker-device-example \
speaker-test
Issue
When running the previous command, pink noise is played, but only under certain conditions:
if I am not playing any sound on my host: for example, if I'm playing a video, the command fails, even if the video is paused
if I am not running another container that accesses the /dev/snd device
It looks like /dev/snd is "locked" while in use, and when that is the case, I get the following output (the error is in the last 2 lines):
speaker-test 1.1.6
Playback device is default
Stream parameters are 48000Hz, S16_LE, 1 channels
Using 16 octaves of pink noise
ALSA lib pcm_dmix.c:1099:(snd_pcm_dmix_open) unable to open slave
Playback open error: -16,Device or resource busy
And, vice versa, if the pink noise is playing (in the container), then I cannot play any sound on my host (Ubuntu). But commands on my host do not fail with the same message. Instead, the command on the host (like aplay test.wav to play a simple sound) blocks indefinitely (even when the container is shut down afterwards).
I tried to debug by running strace aplay test.wav, and the command seems to be blocked on the poll system call:
poll([{fd=3, events=POLLIN|POLLERR|POLLNVAL}], 1, 4294967295
Question
How can I play sounds from 2 (or more) different containers, or from my host and a container, at the same time?
Additional info
I've reproduced the issue with /dev/snd, but I don't know if similar things happen when using other devices, or if it's just related to sound devices or to alsa.
Note also that when running multiple speaker-test or aplay commands simultaneously and all on my host (no containers involved), then all sounds are played.
I can't tell how to solve this with ALSA, but I can provide 2 possible ways with pulseaudio. If these setups fail, install pulseaudio in the image to make sure its dependencies are fulfilled.
ALSA directly accesses the sound hardware and blocks access to it for other clients. But it is possible to set up ALSA to serve multiple clients; that has to be answered by someone else. Probably some ALSA dmix plugin setup is the way to go.
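For completeness, a dmix-style ALSA setup usually looks something like this (an untested sketch; the card number and the right config location vary, see /etc/asound.conf or ~/.asoundrc):

```
# Sketch of /etc/asound.conf: route the default device through dmix,
# which mixes multiple clients in software before reaching hw:0,0.
pcm.!default {
    type plug
    slave.pcm "dmixed"
}
pcm.dmixed {
    type dmix
    ipc_key 1024          # any unique integer, shared by all clients
    slave {
        pcm "hw:0,0"      # first card, first device; adjust to your setup
    }
}
```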
Pulseaudio with shared socket:
Create pulseaudio socket:
pactl load-module module-native-protocol-unix socket=/tmp/pulseaudio.socket
Create /tmp/pulseaudio.client.conf for pulseaudio clients:
default-server = unix:/tmp/pulseaudio.socket
# Prevent a server running in the container
autospawn = no
daemon-binary = /bin/true
# Prevent the use of shared memory
enable-shm = false
Share the socket and the config file with docker and set the environment variables PULSE_SERVER and PULSE_COOKIE. The container user must be the same as on the host:
docker run --rm \
--env PULSE_SERVER=unix:/tmp/pulseaudio.socket \
--env PULSE_COOKIE=/tmp/pulseaudio.cookie \
--volume /tmp/pulseaudio.socket:/tmp/pulseaudio.socket \
--volume /tmp/pulseaudio.client.conf:/etc/pulse/client.conf \
--user $(id -u):$(id -g) \
imagename
The cookie will be created by pulseaudio itself.
Pulseaudio over TCP:
Get IP address from host:
# either an arbitrary IPv4 address
Hostip="$(ip -4 -o a | awk '{print $4}' | cut -d/ -f1 | grep -v 127.0.0.1 | head -n1)"
# or especially IP from docker daemon
Hostip="$(ip -4 -o a| grep docker0 | awk '{print $4}' | cut -d/ -f1)"
Run the docker image. You need a free TCP port; here 34567 is used.
(The TCP port number must be within the range given by cat /proc/sys/net/ipv4/ip_local_port_range and must not be in use. Check with ss -nlp | grep 34567.)
docker run --rm \
--name pulsecontainer \
--env PULSE_SERVER=tcp:$Hostip:34567 \
imagename
After docker run, get the IP of the container with:
Containerip="$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' pulsecontainer)"
Load the pulseaudio TCP module, authenticated with the container IP:
pactl load-module module-native-protocol-tcp port=34567 auth-ip-acl=$Containerip
Be aware that the TCP module is loaded after the container is up and running. It takes a moment until the pulseaudio server is available to container applications.
If the TCP connection fails, check iptables and ufw settings.
A How-To summarizing these setups: https://github.com/mviereck/x11docker/wiki/Container-sound:-ALSA-or-Pulseaudio
