TightVNC authentication failed, how to get encrypted password? - docker

I'm trying to set up RSelenium with docker following these instructions.
In "Remote control/debugging with Windows" I noticed something really strange. I installed TightVNC and set the passwords, but I got "Authentication Failed" when using these passwords. The guide said:
You will be asked for a password, which is secret. This can be seen by reading the image's Dockerfile:
and there is following code
RUN apt-get update -qqy \
  && apt-get -qqy install \
     x11vnc \
  && rm -rf /var/lib/apt/lists/* \
  && mkdir -p ~/.vnc \
  && x11vnc -storepasswd secret ~/.vnc/passwd
I may be wrong, but this looks like a Linux shell command. Despite that, I tried pasting it into the Docker container, but I got
bash: apt-get: command not found
Does this guide need to be fixed, or am I missing something? Right now I'm unable to connect and complete the VNC debugging.

You have a few things wrong conceptually; the guide is absolutely fine. VNC has two parts: a VNC server and a VNC viewer. When you installed TightVNC locally on your system, you may have installed the server version, which asks you for a password. That password is for your system's VNC server. Alongside it, a VNC client (named VNC Viewer or similar) would have been installed as well.
Now, the docker image that you run hosts a VNC server on port 5901, and the password for the connection is secret. So the only thing you need to do is open VNC Viewer, connect to :5901 (port 5901 on the host running the container), and enter secret when asked for the password.
The Dockerfile was shown to you to explain how the author got the password; those commands have nothing to do with your system.
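As a hedged illustration, assuming the standard Selenium debug image that the RSelenium guide uses (adjust the image name and port mapping to whatever you actually run):
# run a Selenium debug image; its VNC server listens on container port 5900
docker run -d -p 4445:4444 -p 5901:5900 selenium/standalone-chrome-debug
# in TightVNC Viewer, connect to 127.0.0.1:5901 and enter the password: secret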

Socket bind failed while configuring apache and renderd

I am trying to build a custom map tile server by following this tutorial on switch2osm.
Instead of using Ubuntu as described in the tutorial, I am using Docker for everything (postgis, apache, etc.).
I am trying to build an image where apache and renderd are configured (I followed the instructions found here).
Here is my Dockerfile :
FROM httpd:2.4
RUN apt-get update && \
    apt-get install -y libapache2-mod-tile renderd
RUN a2enmod tile
RUN a2enconf renderd
CMD ["renderd", "-f", "&&", "httpd-foreground"]
I keep having this error after building and creating the container :
renderd[1]: Initialising unix server socket on /run/renderd/renderd.sock
socket bind failed for: /run/renderd/renderd.sock
I know it's a permissions issue, but I don't see how to fix it.
Can anyone help me solve this?
I saw the same problem. I've partially resolved it by changing the owner of /run/renderd via sudo chown -R osm:osm /run/renderd
Then restarting the renderd process.
I've further tried (and failed) to make this permanent by modifying the file
/etc/systemd/system/multi-user.target.wants/renderd.service
and specifying the user there as well:
[Service]
ExecStart=/usr/bin/renderd -f
User=osm
I do believe the above 'fix' has worked in the past, but it doesn't seem to work now on Ubuntu 22.04.
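Building on that answer, here is a hedged sketch of how the fix could be baked into the Dockerfile from the question. It assumes, following the answer above, that renderd runs as user osm (check your package's actual service user), and it also works around the fact that the exec-form CMD in the question never reaches httpd-foreground, since "&&" is only interpreted by a shell:
FROM httpd:2.4
RUN apt-get update && \
    apt-get install -y libapache2-mod-tile renderd
RUN a2enmod tile && a2enconf renderd
# create the socket directory up front and hand it to the renderd user
RUN mkdir -p /run/renderd && chown -R osm:osm /run/renderd
# shell form so the chaining works: renderd (without -f) daemonizes itself,
# then apache stays in the foreground as PID 1
CMD renderd && exec httpd-foreground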

Starting ssh service through ENTRYPOINT not working

I'm having a lot of difficulty running a Linux container with an SSH service on it. To skip the details: SSH is not optional, I must have it.
I installed the openssh-server with:
RUN echo "**** Setting up openssh-server ****" && \
    apt-get install -y openssh-server && \
    sed -i "s|# PasswordAuthentication yes|PasswordAuthentication yes|g" /etc/ssh/sshd_config && \
    mkdir /var/run/sshd
And am trying to open the service with:
ENTRYPOINT service ssh restart && bash
However, it does not work. I tried multiple ways to get it started: using CMD, making a script that would start the service, and none of it works. What's worse is that this seems to have worked for others (pull access denied repository does not exist or may require docker login).
The image that I am using as a base is ubuntu:18.04. I also switched to jre/systemd-ubuntu:18.04, as I thought the lack of systemd could prevent the service from running, but that did not work either. Any suggestions as to what the issue could be?
I managed to get my service to run. As a first piece of advice, make sure the service runs by itself before putting it together with other services. In my case, the ssh service was not being started because a previously started, non-returning service kept the shell occupied and would not let the ENTRYPOINT continue its execution to start SSH.
One other thing I had done previously, which could have been part of the solution, is that I manually created the folder /var/run/sshd. Some ssh service versions seem to need that to exist, otherwise they won't run. At this point I can't verify whether that was the only issue, as I tried multiple solutions at once.
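A minimal sketch of the ordering fix described above, using a hypothetical entrypoint.sh wrapper (the names are illustrative):
#!/bin/sh
# entrypoint.sh: start sshd first, then hand control to the main command,
# so a non-returning service can no longer block the SSH startup
mkdir -p /var/run/sshd
service ssh start
exec "$@"
And in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]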

Running Docker on Google Cloud Instance with data in gcsfuse-mounted Bucket

I am trying to run a Docker container to analyze data in a Google Cloud Bucket.
I have been able to successfully mount the Bucket using gcsfuse, and I tested that I could do things like create and delete files within the Bucket.
In order to be able to install other programs (and mount the bucket), I installed Docker (and didn't use the Docker-optimized instance option). If I run Docker in interactive mode (without mounting a drive), it looks like it is working OK.
However, if I try to run Docker in interactive mode with the mounted drive (which is the gcsfuse-mounted Bucket), I get an error message:
user#instance:~/bucket-name/subfolder$ docker run -it -v /home/user/bucket-name:/mnt/bucket-name gcr.io/deepvariant-docker/deepvariant
docker: Error response from daemon: error while creating mount source path '/home/user/bucket-name': mkdir /home/user/bucket-name: file exists.
I hope that I am close to having this working: does anybody have any ideas about a relatively simple fix for this error message?
BTW, I realize that there are other ways to run DeepVariant on Google Cloud, but I am trying to make things as similar as possible to what I am doing on AWS (plus, I may need to do some extra troubleshooting for the analysis of one of my files).
Thank you very much for your help!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FYI, this is how I mounted the Bucket:
#mount directory: https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/installing.md
export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install gcsfuse
#restart and mount directory: https://cloud.google.com/storage/docs/gcs-fuse
#NOTE: please make sure you are in your home directory (I encounter issues if I try to mount from /mnt)
mkdir [bucket-name]
gcsfuse -o allow_other --file-mode 777 --dir-mode 777 [bucket-name] ./[bucket-name]
and this is how I installed Docker:
#install Docker for Debian: https://docs.docker.com/install/linux/docker-ce/debian/
sudo apt-get update
sudo apt-get -y install \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get -y --allow-unauthenticated install docker-ce docker-ce-cli containerd.io
#fix Docker sock issue: https://stackoverflow.com/questions/47854463/got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket-at-uni
sudo usermod -a -G docker [user]
#have to restart after this
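As a side note, a hedged alternative to a full restart after the usermod step is to re-evaluate group membership in the current shell:
# apply the new docker group membership without logging out
newgrp docker
# verify the daemon is reachable without sudo
docker ps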
For anyone experiencing a similar error / issue - here is what worked for me. Steps I took:
First unmount the disk if it's already mounted: sudo umount /mounted_folder
Remount the disk using the below command, listing the credentials file to be used explicitly
sudo GOOGLE_APPLICATION_CREDENTIALS=/home/user/credentials/example-asdf21b0af7.json gcsfuse -o allow_other bucket_name /mounted_folder
Should now be connected successfully without further errors :)
NOTE: This command needs to be run every time after restarting the computer / VM. Putting this into fstab could probably be done so one does not need to manually execute these steps upon each restart.
EXPLANATION: What I did here was explicitly specify the credentials via a credentials JSON for the user / service account with appropriate access (not explained here, but how to get this should be Google-able), referring to that JSON in the GOOGLE_APPLICATION_CREDENTIALS environment variable, as suggested by this answer: https://stackoverflow.com/a/39047673/10002593. The need for this environment variable is likely due to gcsfuse not registering the same level of access as the activated account in gcloud config for some reason.
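A hedged sketch of the fstab idea from the note above: gcsfuse supports a key_file mount option for pointing at a service-account JSON, so the equivalent /etc/fstab entry could look like this (the paths reuse the example above and are placeholders):
# /etc/fstab entry (sketch): mount the bucket at boot with explicit credentials
bucket_name /mounted_folder gcsfuse rw,allow_other,key_file=/home/user/credentials/example-asdf21b0af7.json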
I think I figured out at least a partial solution to my problem:
As mentioned in this tutorial, you also need to run gcloud auth configure-docker.
I found I also needed to exit and restart my instance, but that strictly solved the original error message for this post.
I then got a strange message, but perhaps that is more about the specific container. So, I ran another test:
docker run -it -v /home/user/bucket-name:/mnt/bucket-name cwarden45/dnaseq-dependencies
This time, I got an error message about storage space on the instance (to be able to download and run the Docker container). So, I went back and created a new instance with a larger local hard drive:
1) From the Google Cloud Console, I selected "Compute Instance" and "VM instances"
2) I clicked "create instance" (similar to before)
3) I selected "change" under "boot disk"
4) I set the size to 300 GB instead of 10 GB (currently towards the bottom-right, under "Size (GB)")
Similar to before, I chose 8 vCPUs for the "Machine type", I selected "Allow full access to all Cloud APIs" under "Identity and API access", and I checked the boxes for both "Allow HTTP traffic" and "Allow HTTPS traffic" (under "Firewall").
I did not select "Deploy a container image to this VM instance"; I believe that is what let me install Docker with "sudo" and also install gcsfuse.
I also have to call this a "partial" solution because, while this allows me to run the Docker container successfully in interactive mode, the mounted bucket appears empty within Docker.
For another project, I noticed that executables could work if I installed them on the local hard drive under /opt, but not if I tried to install them on my bucket (in order to save the installation time for those programs each time). On AWS, I believe I needed to use EFS storage instead of S3 storage to do something similar, but I will keep learning more about using the Google Cloud Bucket for mounted storage / analysis.
Also, it is a different issue, but I noticed that I could fix an issue with running executable files from the bucket by changing the command from gcsfuse [bucket-name] ./[bucket-name] to gcsfuse --file-mode 777 --dir-mode 777 [bucket-name] ./[bucket-name] (and I changed the example code accordingly).
I noticed more recently that the set of commands above is no longer sufficient to be able to have a functional directory (I can't add or edit files, for example).
Based upon this discussion, I thought that I needed to add the -o allow_other parameter.
However, if that is all I do, I get the following error message
fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf
I can resolve that error message if I uncomment the corresponding line in that file. However, that still doesn't resolve having the right file permissions in the mounted directory.
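For reference, fixing that error means the following line in /etc/fuse.conf must be present and uncommented:
# /etc/fuse.conf
user_allow_other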
So, I then tried editing my /etc/fstab file, by adding the following entry
[bucket-name] /home/[username]/[bucket-name] gcsfuse rw,allow_other,file_mode=777,dir_mode=777
I am also editing the content at the top of this post accordingly (with whatever seems like it might help).
Also, please note that this was not a Docker-specific issue. This was necessary to essentially do anything within the bucket. Plus, I haven't actually solved this new problem.
For example, I still can't create files as root, after changing to the superuser via sudo su - (as described here)

docker login fails on a server with no X11 installed

I am trying to deploy a docker configuration with images on a private docker registry.
Now, every time I execute docker login registry.example.com, I get the following error message:
error getting credentials - err: exit status 1, out: Cannot autolaunch D-Bus without X11 $DISPLAY
The only solution I found for non-MacOS users was to run export $(dbus-launch) first, but that did not change anything.
I am running Ubuntu Server and tried with both the Ubuntu Docker package and the Docker-CE package.
How can I log in without an X11 session?
It looks like this happens because docker defaults to using the secretservice executable, which seems to have some sort of X11 dependency for some reason. If you install and configure pass, docker will use that instead, which seems to solve the problem.
In a nutshell (from https://github.com/docker/compose/issues/6023)
sudo apt install gnupg2 pass
gpg2 --full-generate-key
This generates a gpg2 key for you. After that's done, you can list it with
gpg2 -k
Copy the key id (from the line labelled [uid]) and do
pass init "whatever key id you have"
Now docker login should work.
There are a couple of bugs logged on launchpad regarding this:
https://bugs.launchpad.net/ubuntu/+source/golang-github-docker-docker-credential-helpers/+bug/1794307
https://bugs.launchpad.net/ubuntu/+source/docker-compose/+bug/1796119
This works: sudo apt remove golang-docker-credential-helpers
You can remove the offending package golang-docker-credential-helpers without removing all of docker-compose.
The following worked for me on a server without X11 installed:
dpkg -r --ignore-depends=golang-docker-credential-helpers golang-docker-credential-helpers
and then
echo 'foo' | docker login mydockerrepo.com -u dockeruser --password-stdin
Source:
bug reported in debian:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=910823#39
bug reported on ubuntu:
https://bugs.launchpad.net/ubuntu/+source/docker-compose/+bug/1796119
secretservice requires a GUI. You can use pass without a GUI.
Unfortunately, Docker's documentation on how to configure Docker credential helpers is quite lacking. Here's a comprehensive guide on how to configure pass with Docker (tested with Ubuntu 18.04):
1. Install the Docker Credential Helper for pass
Find the url for the latest version of docker-credential-pass from https://github.com/docker/docker-credential-helpers/releases . For example:
# substitute with the latest version
url=https://github.com/docker/docker-credential-helpers/releases/download/v0.6.2/docker-credential-pass-v0.6.2-amd64.tar.gz
# download and untar the binary
wget $url
tar -xzvf $(basename $url)
# move the binary to a dir in your $PATH
sudo mv docker-credential-pass /usr/local/bin
# verify it works
docker-credential-pass list
2. Install and configure pass
apt install pass
# create a gpg2 key
gpg2 --gen-key
# if you have issues with lack of entropy, "apt install haveged" and try again
# create the password store using the gpg user id above
pass init $gpg_id
3. docker login
docker login
# You should not see any credentials stored in "auths" section.
# "credsStore": "pass" should have been automatically added.
# If the value is "secretservice", replace it with "pass".
cat ~/.docker/config.json
# verify credentials stored in `pass` store now
pass
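For reference, after a successful login the relevant part of ~/.docker/config.json should look roughly like this (a sketch; the registry name is a placeholder, and with a credential helper in place the "auths" entries stay empty because the secrets live in pass):
{
  "auths": {
    "registry.example.com": {}
  },
  "credsStore": "pass"
}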
There is a much easier answer than the ones already posted, which I found in a comment on https://github.com/docker/docker-credential-helpers/issues/105.
The solution is to rename docker-credential-secretservice out of the way
e.g.: mv /usr/bin/docker-credential-secretservice /usr/bin/docker-credential-secretservice.broken
Once you do this, docker login works regardless of whether or not docker-compose is installed. No other package additions or removals are necessary.
I've resolved this issue by uninstalling the docker-compose that was installed from the Ubuntu repo and installing docker-compose per the official instructions at https://docs.docker.com/compose/install/#install-compose
What helped me on Ubuntu 18.04 was:
Following the steps in @oberstet's post and uninstalling the golang helper
Performing a login after the helper uninstall
Reinstalling docker via sudo apt-get install docker
Logging back in via sudo docker login

ngrok failing to launch

Trying to run ngrok, I get the following warning:
WARN[04-19|17:54:51] failed to get home directory, using $HOME instead err="user: Current not implemented on linux/amd64" $HOME=/root
It occurs whether I try to start a tunnel or merely run ngrok help.
If I do try to start a tunnel (e.g.: ngrok http -host-header=rewrite bilingueanglais.local:80), I get an empty screen, instead of the usual tunnel information.
It used to work fine; I'm not sure what changed. If I remember right, I got the exact same error in the past, but things went back to normal on their own. I'd then assumed the service was down.
However, this time, ngrok is clearly up but the error remains.
Environment:
Running ngrok on ubuntu:16.04 inside of Docker.
ngrok is version 2.2.8 (the latest available version at the time of posting.)
$HOME is /root
I installed ngrok this way inside of my Dockerfile:
RUN apt-get install -y unzip
ADD https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip /ngrok.zip
RUN set -x \
&& unzip -o /ngrok.zip -d /bin \
&& rm -f /ngrok.zip
I'm able to run ngrok on the same computer on OS X instead of Docker, but would like to get things working again for Docker.
I'm confused by the error message and also, to some extent, by the docs where it mentions $HOME. Is the issue with my path? What does ngrok expect?
Any help welcome.