Trying to run ngrok, I get the following warning:
WARN[04-19|17:54:51] failed to get home directory, using $HOME instead err="user: Current not implemented on linux/amd64" $HOME=/root
It occurs whether I try to start a tunnel or merely run ngrok help.
If I do try to start a tunnel (e.g. ngrok http -host-header=rewrite bilingueanglais.local:80), I get an empty screen instead of the usual tunnel information.
It used to work fine and I'm not sure what changed. If I remember right, I got the exact same error in the past, but things went back to normal on their own, so at the time I assumed the service was down.
This time, however, ngrok is clearly up, yet the error remains.
Environment:
Running ngrok on ubuntu:16.04 inside of Docker.
ngrok is version 2.2.8 (the latest available version at the time of posting).
$HOME is /root
I installed ngrok this way inside my Dockerfile:
RUN apt-get install -y unzip
ADD https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip /ngrok.zip
RUN set -x \
&& unzip -o /ngrok.zip -d /bin \
&& rm -f /ngrok.zip
I'm able to run ngrok on the same computer directly on OS X rather than in Docker, but I would like to get things working again in Docker.
I'm confused by the error message and also, to some extent, by the part of the docs that mentions $HOME. Is the issue with my path? What does ngrok expect?
Any help welcome.
Related
I am trying to build a custom map tile server by following this tutorial on switch2osm.
Instead of using Ubuntu as described in the tutorial, I am using Docker for everything (PostGIS, Apache, etc.).
I am trying to build an image where Apache and renderd are configured (I followed the instructions found here).
Here is my Dockerfile:
FROM httpd:2.4
RUN apt-get update && \
apt-get install -y libapache2-mod-tile renderd
RUN a2enmod tile
RUN a2enconf renderd
CMD ["renderd", "-f", "&&", "httpd-foreground"]
I keep getting this error after building and creating the container:
renderd[1]: Initialising unix server socket on /run/renderd/renderd.sock
socket bind failed for: /run/renderd/renderd.sock
I know it's a user rights issue, but I don't see how to fix it.
Can anyone help me solve this issue?
I saw the same problem. I've partially resolved it by changing the owner of /run/renderd via sudo chown -R osm:osm /run/renderd
Then restarting the renderd process.
I've further tried (and failed) to make this permanent by modifying the file:
/etc/systemd/system/multi-user.target.wants/renderd.service
and specifying the user there as well:

[Service]
ExecStart=/usr/bin/renderd -f
User=osm
I do believe the above 'fix' has worked in the past, but doesn't seem to work now on Ubuntu 22.04
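That would be consistent with /run being a tmpfs on modern Ubuntu: it is recreated empty on every boot, so a one-off chown cannot stick. One approach that may make the ownership persistent is a systemd-tmpfiles rule; a minimal sketch, assuming renderd runs as the osm user:

# /etc/tmpfiles.d/renderd.conf
# Recreate /run/renderd owned by osm on every boot
d /run/renderd 0755 osm osm -

Running sudo systemd-tmpfiles --create applies the rule immediately without a reboot, after which renderd can be restarted.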
I'm having a lot of difficulty running a Linux container with an SSH service on it. To skip the details: SSH is not optional, I must have it.
I installed the openssh-server with:
RUN echo "**** Setting up openssh-server ****" && \
    apt-get install -y openssh-server && \
    sed -i "s|# PasswordAuthentication yes|PasswordAuthentication yes|g" /etc/ssh/sshd_config && \
    mkdir /var/run/sshd
And am trying to open the service with:
ENTRYPOINT service ssh restart && bash
However, it does not work. I tried multiple ways to get it started: using CMD, making a script that would start the service; none of it worked. What's worse is that this seems to have worked for others (pull access denied repository does not exist or may require docker login).
The image that I am using as a base is ubuntu:18.04. I then switched to jre/systemd-ubuntu:18.04, thinking the lack of systemd could be preventing the service from running; however, that did not work either. Any suggestions as to what the issue could be?
I managed to get my service to run. As a first piece of advice, I recommend making sure the service runs by itself before putting it together with other services. In my case, the SSH service was not being started because a previously started, non-returning service kept the shell occupied and never let the ENTRYPOINT continue on to start SSH.
One other thing I had done previously, which could have been part of the solution, is that I manually created the folder /var/run/sshd. It seems some versions of the SSH service need it to exist, otherwise they won't run. At this point I can't verify whether that was the only issue, as I tried multiple solutions at once.
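A minimal sketch of the pattern that ended up working for me, assuming an image like the one above with openssh-server installed (the entrypoint.sh name and path are my own choices):

#!/bin/sh
# entrypoint.sh: start the SSH daemon in the background first...
service ssh start
# ...then hand control to the main process; exec replaces the shell,
# so the main process receives signals directly
exec "$@"

with the Dockerfile wiring:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]

This way SSH is already up before any long-running foreground command starts, instead of waiting behind it.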
I am trying to set up a Docker image (my Dockerfile is available here, sorry for the French README: https://framagit.org/Gwendal/firefox-icedtea-docker) with an old version of Firefox and an old version of Java, in order to run an old Java applet that starts a VPN. My image does work and successfully allows me to start the Java applet in Firefox.
Unfortunately, the applet then tries to run the following command in the container (I've simply removed the --config part from the command, as it does not matter here):
INFO: launching '/usr/bin/pkexec sh -c /usr/sbin/openvpn --config ...'
Then the applet exits silently with an error. While investigating, I've tried running a command with pkexec with the same Docker image, and it gives me this result:
$ sudo docker-compose run firefox pkexec /firefox/firefox-sdk/bin/firefox-bin -new-instance
**
ERROR:pkexec.c:719:main: assertion failed: (polkit_unix_process_get_start_time (POLKIT_UNIX_PROCESS (subject)) > 0)
But I don't know polkit at all and cannot understand this error.
EDIT: A more minimal way to reproduce the problem is with this Dockerfile:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get install -y policykit-1
And then run:
$ sudo docker build -t pkexec-test .
$ sudo docker run pkexec-test pkexec echo Hello
Which leads here again to:
ERROR:pkexec.c:719:main: assertion failed: (polkit_unix_process_get_start_time (POLKIT_UNIX_PROCESS (subject)) > 0)
Should I conclude that pkexec cannot work in a Docker container? Or is there a way to make this command work?
Sidenote: I have no control whatsoever over the Java applet I am trying to run; it is a horrible and very dated proprietary black box that I am supposed to use at work, for which I have no access to the source code, and which I must use as is.
I have solved my own problem by replacing pkexec with sudo in the Docker image, and by allowing passwordless sudo.
Given an Ubuntu Docker image where a user called developer was created and configured with a USER statement, add these lines:
# Install sudo and make 'developer' a passwordless sudoer
RUN apt-get install -y sudo
ADD ./developersudo /etc/sudoers.d/developersudo
# Replacing pkexec by sudo
RUN rm /usr/bin/pkexec
RUN ln -s /usr/bin/sudo /usr/bin/pkexec
with the file developersudo containing:
developer ALL=(ALL) NOPASSWD:ALL
This replaces any call to pkexec made by a process running in the container with a call to sudo that has no password prompt, which works nicely.
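With those lines baked into the image, the minimal test from the question should now pass. Assuming the image is rebuilt under the same pkexec-test tag with these modifications applied:

$ sudo docker build -t pkexec-test .
$ sudo docker run pkexec-test pkexec echo Hello
Hello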
I am trying to run a Docker container to analyze data in a Google Cloud Bucket.
I have been able to successfully mount the Bucket using gcsfuse, and I tested that I could do things like create and delete files within the Bucket.
In order to be able to install other programs (and mount the bucket), I installed Docker myself (rather than using the Docker-optimized instance option). If I run Docker in interactive mode (without mounting a drive), it looks like it is working OK.
However, if I try to run Docker in interactive mode with the mounted drive (which is the gcsfuse-mounted Bucket), I get an error message:
user#instance:~/bucket-name/subfolder$ docker run -it -v /home/user/bucket-name:/mnt/bucket-name gcr.io/deepvariant-docker/deepvariant
docker: Error response from daemon: error while creating mount source path '/home/user/bucket-name': mkdir /home/user/bucket-name: file exists.
I hope that I am close to having this working: does anybody have any ideas about a relatively simple fix for this error message?
BTW, I realize that there are other ways to run DeepVariant on Google Cloud, but I am trying to make things as similar as possible to what I am doing on AWS (plus, I may need to do some extra troubleshooting for the analysis of one of my files).
Thank you very much for your help!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FYI, this is how I mounted the Bucket:
#mount directory: https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/installing.md
export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install gcsfuse
#restart and mount directory: https://cloud.google.com/storage/docs/gcs-fuse
#NOTE: please make sure you are in your home directory (I encounter issues if I try to mount from /mnt)
mkdir [bucket-name]
gcsfuse -o allow_other --file-mode 777 --dir-mode 777 [bucket-name] ./[bucket-name]
and this is how I installed Docker:
#install Docker for Debian: https://docs.docker.com/install/linux/docker-ce/debian/
sudo apt-get update
sudo apt-get -y install \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get -y --allow-unauthenticated install docker-ce docker-ce-cli containerd.io
#fix Docker sock issue: https://stackoverflow.com/questions/47854463/got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket-at-uni
sudo usermod -a -G docker [user]
#have to restart after this
For anyone experiencing a similar error/issue, here is what worked for me. Steps I took:
First, unmount the disk if it's already mounted: sudo umount /mounted_folder
Remount the disk using the command below, explicitly listing the credentials file to be used:
sudo GOOGLE_APPLICATION_CREDENTIALS=/home/user/credentials/example-asdf21b0af7.json gcsfuse -o allow_other bucket_name /mounted_folder
You should now be connected successfully, without further errors :)
NOTE: This command needs to be run every time after restarting the computer/VM. Putting this into fstab (see the sketch below) should avoid having to execute these steps manually after each restart.
EXPLANATION: What I did here was explicitly specify the credentials via a credentials JSON for the user/service account with appropriate access (not explained here, but how to obtain one should be Google-able) and refer to that JSON in the GOOGLE_APPLICATION_CREDENTIALS environment variable, as suggested by this answer: https://stackoverflow.com/a/39047673/10002593. The need for this option is likely due to gcsfuse not picking up the same level of access as the activated account in gcloud config for some reason.
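For the fstab route mentioned in the note above, gcsfuse ships a mount helper, so an entry along these lines may work. A hedged sketch: bucket_name and /mounted_folder are the placeholders from above, and key_file points at the same service-account JSON:

# /etc/fstab
bucket_name /mounted_folder gcsfuse rw,allow_other,key_file=/home/user/credentials/example-asdf21b0af7.json 0 0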
I think I figured out at least a partial solution to my problem:
As mentioned in this tutorial, you also need to run gcloud auth configure-docker.
I found you also needed to exit and restart your instance, but that strictly solved only the original error message for this post.
I then got a strange message, but perhaps that is more about the specific container. So, I ran another test:
docker run -it -v /home/user/bucket-name:/mnt/bucket-name cwarden45/dnaseq-dependencies
This time, I got an error message about storage space on the instance (to be able to download and run the Docker container). So, I went back and created a new instance with a larger local hard drive:
1) From the Google Cloud Console, I selected "Compute Instance" and "VM instances"
2) I clicked "create instance" (similar to before)
3) I selected "Change" under "Boot disk"
4) I set the size to 300 GB instead of 10 GB (currently towards the bottom-right, under "Size (GB)")
Similar to before, I chose 8 vCPUs for the "Machine type", selected "Allow full access to all Cloud APIs" under "Identity and API access", and checked the boxes for both "Allow HTTP traffic" and "Allow HTTPS traffic" (under "Firewall").
I am not selecting "Deploy a container image to this VM instance"; I believe that is how I got Docker installed with "sudo" access, which let me install gcsfuse.
I also have to call this a "partial" solution because, while it allows me to run the Docker container successfully in interactive mode, the mounted bucket appears empty within Docker.
For another project, I noticed that executables could work if I installed them on the local hard drive under /opt, but not if I tried to install them on my bucket (in order to save the installation time for those programs each time). On AWS, I believe I needed to use EFS storage instead of S3 storage to do something similar, but I will keep learning more about using the Google Cloud Bucket for mounted storage / analysis.
Also, it is a different issue, but I noticed that I could fix a problem with running executable files from the bucket by changing the command from gcsfuse [bucket-name] ./[bucket-name] to gcsfuse --file-mode 777 --dir-mode 777 [bucket-name] ./[bucket-name] (and I changed the example code above accordingly).
I noticed more recently that the set of commands above is no longer sufficient to get a functional directory (I can't add or edit files, for example).
Based upon this discussion, I thought I needed to add the -o allow_other parameter.
However, if that is all I do, I get the following error message:
fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf
I can resolve that error message by uncommenting the corresponding line in that file (shown below). However, that still doesn't give me the right file permissions in the mounted directory.
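For reference, this is the relevant part of /etc/fuse.conf after the change (the directive itself is the only uncommented line):

# /etc/fuse.conf
# Allow non-root users to specify the allow_other or allow_root mount options.
user_allow_other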
So, I then tried editing my /etc/fstab file by adding the following entry:
[bucket-name] /home/[username]/[bucket-name] gcsfuse rw,allow_other,file_mode=777,dir_mode=777
I am also updating the content at the top accordingly (wherever it seems like it might help).
Also, please note that this was not a Docker-specific issue; this was necessary to do essentially anything within the bucket. Plus, I haven't actually solved this new problem.
For example, I still can't create files as root after changing to the superuser via sudo su - (as described here).
I have frequently built Docker containers using CentOS 7 as the base image, but now I am getting an error when I run:
RUN yum update add \
bash \
&& rm -rfv /var/cache/apk/*
ERROR:
Loaded plugins: fastestmirror, ovl

One of the configured repositories failed (Unknown), and yum doesn't have
enough cached data to continue. At this point the only safe thing yum can
do is fail. There are a few ways to work "fix" this:

1. Contact the upstream for the repository and get them to fix the problem.

2. Reconfigure the baseurl/etc. for the repository, to point to a working
   upstream. This is most often useful if you are using a newer
   distribution release than is supported by the repository (and the
   packages for the previous distribution release still work).

3. Run the command with the repository temporarily disabled:
       yum --disablerepo=<repoid> ...

4. Disable the repository permanently, so yum won't use it by default. Yum
   will then just ignore the repository until you permanently enable it
   again or use --enablerepo for temporary usage:
       yum-config-manager --disable <repoid>
   or
       subscription-manager repos --disable=<repoid>

5. Configure the failing repository to be skipped, if it is unavailable.
   Note that yum will try to contact the repo when it runs most commands,
   so it will have to try and fail each time (and thus yum will be much
   slower). If it is a very temporary problem though, this is often a nice
   compromise:
       yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot find a valid baseurl for repo: base/7/x86_64
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container
error was 14: curl#6 - "Could not resolve host: mirrorlist.centos.org; Name or service not known"

The command '/bin/sh -c yum update add bash && rm -rfv /var/cache/apk/*' returned a non-zero code: 1
I also saw a few resolutions suggesting "dhclient", but this error happens when I do docker-compose build.
I ran into this problem attempting to run the same Dockerfile, which fetched several software packages using yum, on two different platforms; one macOS, the other an Ubuntu 16.04-based Linux OS (elementaryOS Loki), both using the official packages from docker.com.
My theory is that the Linux package is just more restrictive out of the box, security-wise, than the macOS one. Maybe this is configurable with some kind of /etc/something config file, but I don't have the expertise with Docker to say for sure. EDIT: See my comment below.
What I can say is there was no additional configuration required for me on macOS (10.11 El Capitan); just docker build . worked fine, and yum processes from the Dockerfile were able to reach all the remote repositories.
In the Ubuntu-derived Linux distro, however, it was necessary to use
docker build --network host .
followed by
docker run -it --network host <image> <command>
when I wanted to run a process inside that image which required internet access.
This may be the case for other Debian-derived systems as well.
There are, of course, security considerations which need to be taken into account when allowing a long-running Docker container to communicate through the host network adapter, unrestricted, and one would do well to review the appropriate documentation in that regard.
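Since the failure is ultimately DNS resolution ("Could not resolve host"), another route I have seen work, rather than host networking, is giving the Docker daemon explicit DNS servers. A hedged sketch; the resolver addresses are examples, substitute whatever your network trusts:

# /etc/docker/daemon.json
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}

Then restart the daemon (e.g. sudo systemctl restart docker) so builds and containers pick up the setting.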
My assumption is that, for some reason, network behavior in Docker varies based on the distribution.
Try to use:
docker run -d --net mybridge centos
or
docker network create -d bridge mybridge
docker run -d --net mybridge centos
It should start working. Or just edit /etc/hosts and add the mirror address:
Name: mirrorlist.centos.org
Address: 67.219.148.138
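As an /etc/hosts entry, and assuming that address is still current, that would be:

67.219.148.138 mirrorlist.centos.org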
The root cause of the issue was that the container proxy settings were wrong. I corrected the proxy settings at the location below and it worked:
/root/.docker/config.json
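For reference, client proxy settings in that file live under a proxies key; a minimal hedged sketch with placeholder proxy URLs:

{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}

Docker injects these as environment variables into containers and builds started by that client.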