I am trying to run a Docker container to analyze data in a Google Cloud Bucket.
I have been able to successfully mount the Bucket using gcsfuse, and I tested that I could do things like create and delete files within the Bucket.
To be able to install other programs (and mount the bucket), I installed Docker myself rather than using the Docker-optimized instance option. If I run Docker in interactive mode (without mounting a drive), it looks like it is working OK.
However, if I try to run Docker in interactive mode with the mounted drive (which is the gcsfuse-mounted Bucket), I get an error message:
user#instance:~/bucket-name/subfolder$ docker run -it -v /home/user/bucket-name:/mnt/bucket-name gcr.io/deepvariant-docker/deepvariant
docker: Error response from daemon: error while creating mount source path '/home/user/bucket-name': mkdir /home/user/bucket-name: file exists.
I hope that I am close to having this working: does anybody have any ideas about a relatively simple fix for this error message?
BTW, I realize that there are other ways to run DeepVariant on Google Cloud, but I am trying to make things as similar as possible to what I am doing on AWS (plus, I may need to do some extra troubleshooting for the analysis of one of my files).
Thank you very much for your help!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FYI, this is how I mounted the Bucket:
#mount directory: https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/installing.md
export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install gcsfuse
#restart and mount directory: https://cloud.google.com/storage/docs/gcs-fuse
#NOTE: please make sure you are in your home directory (I encounter issues if I try to mount from /mnt)
mkdir [bucket-name]
gcsfuse -o allow_other --file-mode 777 --dir-mode 777 [bucket-name] ./[bucket-name]
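#optional sanity check (not part of the original steps, just illustrative): confirm the mount is listable and writable
ls ./[bucket-name]
touch ./[bucket-name]/write-test && rm ./[bucket-name]/write-test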
and this is how I installed Docker:
#install Docker for Debian: https://docs.docker.com/install/linux/docker-ce/debian/
sudo apt-get update
sudo apt-get -y install \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get -y --allow-unauthenticated install docker-ce docker-ce-cli containerd.io
#fix Docker sock issue: https://stackoverflow.com/questions/47854463/got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket-at-uni
sudo usermod -a -G docker [user]
#have to restart after this
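#optional sanity check (not part of the original steps): after re-logging in, confirm Docker runs without sudo
docker run --rm hello-world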
For anyone experiencing a similar error / issue - here is what worked for me. Steps I took:
First unmount the disk if it's already mounted: sudo umount /mounted_folder
Remount the disk using the command below, explicitly specifying the credentials file to use:
sudo GOOGLE_APPLICATION_CREDENTIALS=/home/user/credentials/example-asdf21b0af7.json gcsfuse -o allow_other bucket_name /mounted_folder
Should now be connected successfully without further errors :)
NOTE: This command needs to be run every time the computer / VM is restarted. Putting this into fstab should make it unnecessary to execute these steps manually after each restart (a sketch follows the explanation below).
EXPLANATION: What I did here was explicitly specify the credentials via a credentials JSON for the user / service account with the appropriate access (how to obtain one is not explained here, but it should be Google-able) and refer to that JSON via the GOOGLE_APPLICATION_CREDENTIALS environment variable, as suggested by this answer: https://stackoverflow.com/a/39047673/10002593. The need for this environment variable is likely because gcsfuse does not pick up the same level of access as the activated account in gcloud config for some reason.
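For the fstab idea mentioned in the NOTE above, a rough sketch of an entry (assuming gcsfuse's key_file mount option, which should map to the same credentials JSON; paths reuse the example above and may differ for you):
bucket_name /mounted_folder gcsfuse rw,allow_other,key_file=/home/user/credentials/example-asdf21b0af7.json
After adding it, sudo mount /mounted_folder should behave like the manual command.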
I think I figured out at least a partial solution to my problem:
As mentioned in this tutorial, you also need to run gcloud auth configure-docker.
I found you also needed to exit and restart your instance; strictly speaking, that solved the original error message for this post.
I think I then got a strange message, but perhaps that is more about the specific container. So, I ran another test:
docker run -it -v /home/user/bucket-name:/mnt/bucket-name cwarden45/dnaseq-dependencies
This time, I got an error message about storage space on the instance (there wasn't enough to download and run the Docker container). So, I went back and created a new instance with a larger local hard drive:
1) From the Google Cloud Console, I selected "Compute Engine" and then "VM instances"
2) I clicked "Create instance" (similar to before)
3) I selected "Change" under "Boot disk"
4) I set the size to 300 GB instead of 10 GB (currently towards the bottom-right, under "Size (GB)")
Similar to before, I chose 8 vCPUs for the "Machine type", selected "Allow full access to all Cloud APIs" under "Identity and API access", and checked the boxes for both "Allow HTTP traffic" and "Allow HTTPS traffic" (under "Firewall").
I did not select "Deploy a container image to this VM instance"; I believe that is what let me install Docker myself with "sudo" and also install gcsfuse.
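For reference, a rough gcloud CLI equivalent of the console steps above (the instance name and zone are placeholders; n1-standard-8 gives 8 vCPUs, --scopes=cloud-platform corresponds to "Allow full access to all Cloud APIs", and the tags match the default HTTP/HTTPS firewall rules):
gcloud compute instances create deepvariant-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --boot-disk-size=300GB \
    --scopes=cloud-platform \
    --tags=http-server,https-server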
I also have to call this a "partial" solution because it allows me to run the Docker container successfully in interactive mode, but the mounted bucket appears empty within Docker.
For another project, I noticed that executables could work if I installed them on the local hard drive under /opt, but not if I tried to install them on my bucket (in order to save the installation time for those programs each time). On AWS, I believe I needed to use EFS storage instead of S3 storage to do something similar, but I will keep learning more about using the Google Cloud Bucket for mounted storage / analysis.
Also, it is a different issue, but I noticed that I could fix a problem with running executable files from the bucket by changing the command from gcsfuse [bucket-name] ./[bucket-name] to gcsfuse --file-mode 777 --dir-mode 777 [bucket-name] ./[bucket-name] (and I changed the example code above accordingly).
I noticed more recently that the set of commands above is no longer sufficient to be able to have a functional directory (I can't add or edit files, for example).
Based upon this discussion, I thought that I needed to add the -o allow_other parameter.
However, if that is all I do, I get the following error message
fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf
I can resolve that error message if I uncomment the corresponding line in that file. However, that still doesn't resolve having the right file permissions in the mounted directory.
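For reference, one way to uncomment that line non-interactively (assuming the stock Debian/Ubuntu /etc/fuse.conf, where it appears as #user_allow_other):
sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf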
So, I then tried editing my /etc/fstab file, by adding the following entry
[bucket-name] /home/[username]/[bucket-name] gcsfuse rw,allow_other,file_mode=777,dir_mode=777
I am also updating the commands at the top of this post accordingly (with whatever seems like it might help).
Also, please note that this was not a Docker-specific issue. This was necessary to essentially do anything within the bucket. Plus, I haven't actually solved this new problem.
For example, I still can't create files as root, after changing to the superuser via sudo su - (as described here)
I want to run VSCode in Docker for an internal test. I've created the following Dockerfile:
FROM debian:stable
RUN apt-get update && apt-get install -y apt-transport-https curl gpg
RUN curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg \
&& install -o root -g root -m 644 microsoft.gpg /etc/apt/trusted.gpg.d/ \
&& echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" > /etc/apt/sources.list.d/vscode.list
RUN apt-get update && apt-get install -y code libx11-xcb-dev libasound2
RUN code --user-data-dir="~/.vscode-root"
I build it with:
docker build -t vscode .
and I run it with:
docker run vscode code -v
When I run it like this, I get this error:
You are trying to start vscode as a super user which is not recommended. If you really want to, you must specify an alternate user data directory using the --user-data-dir argument.
I just want to verify the installation by running something like RUN code -v. How can I do that?
Should I change the user? I just want to run VSCode in Docker and use some VSCode APIs.
Have you tried using VSCode's built-in functionality for developing in a container?
Check out this page, which describes how to do this:
Developing inside a Container
You can try out some of the sample container configurations provided by VSCode and use any of those devcontainer.json files as an example to configure a custom development container to your liking. According to the page above:
Workspace files are mounted from the local file system or copied or cloned into the container. Extensions are installed and run inside the container, where they have full access to the tools, platform, and file system. This means that you can seamlessly switch your entire development environment just by connecting to a different container.
This is a very handy way to have different environments for development that are isolated within the container.
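If you want to try it quickly, a rough starting point (vscode-remote-try-node is one of Microsoft's published sample repos) is to clone a sample and reopen it in a container:
git clone https://github.com/microsoft/vscode-remote-try-node
code vscode-remote-try-node
# then run "Remote-Containers: Reopen in Container" from the Command Palette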
I'm trying to install Jira Server via a Docker image on OpenShift.
I pulled the image using Docker Desktop for Windows and added a simple Dockerfile that includes USER root, etc.
When trying to deploy the pod, I get an error and the pod enters a crash loop.
The error is a permission error in different locations.
I have tried many times to relocate the Jira home directory, but without success.
(I am trying to install on a closed network.)
Thanks for helping!
Short Answer
The official Atlassian images are incompatible with Kubernetes derivatives such as OpenShift, as they violate some key concepts.
In OpenShift, for example, containers run with arbitrary user IDs, which means a nameless user executes the processes in the container.
This is a safety mechanism: it prevents containers from running as root and limits the risk of escaping the container and gaining privileges on the cluster host.
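To see this in practice, a rough check (assuming you have the oc client and a running pod; the pod name is a placeholder):
oc rsh <pod-name> id
# typically prints something like: uid=1000620000 gid=0(root) groups=0(root)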
Solution
You do need to rebuild the image from scratch.
Furthermore, the startup Python script's behaviour of trying to modify file system permissions needs to be removed.
Clone the official repo:
https://bitbucket.org/atlassian-docker/docker-atlassian-jira/src/master/
Modify the Dockerfile and add the following to the user/group creation step:
RUN groupadd --gid ${RUN_GID} ${RUN_GROUP} \
...
&& chown -R ${RUN_USER}:${RUN_GROUP} ${JIRA_HOME} \
# make the image compatible to run as an arbitrary uid
&& chgrp -R 0 /etc/container_id \
&& chmod -R g=u /etc/container_id \
&& chmod -R 460 /etc/container_id \
&& chgrp -R 0 ${JIRA_INSTALL_DIR} \
&& chmod -R g=u ${JIRA_INSTALL_DIR} \
&& chgrp -R 0 ${JIRA_HOME} \
&& chmod -R g=u ${JIRA_HOME}
Modify the gen_cfg function in entrypoint_helpers.py and remove the else clause at the end.
The necessary permissions have already been set in step 2.
def gen_cfg(tmpl, target, user='root', group='root', mode=0o644, overwrite=True):
    if not overwrite and os.path.exists(target):
        logging.info(f"{target} exists; skipping.")
        return
    logging.info(f"Generating {target} from template {tmpl}")
    cfg = jenv.get_template(tmpl).render(env)
    try:
        with open(target, 'w') as fd:
            fd.write(cfg)
    except (OSError, PermissionError):
        logging.warning(f"Container not started as root. Bootstrapping skipped for '{target}'")
    # else:
    #     set_perms(target, user, group, mode)
Rebuild the image, passing --build-arg JIRA_VERSION= and --build-arg ARTEFACT_NAME= (as sketched below).
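A rough example of that rebuild (the version, artefact name, and tag are placeholders; check the ARG names in the cloned Dockerfile):
docker build \
    --build-arg JIRA_VERSION=8.20.1 \
    --build-arg ARTEFACT_NAME=atlassian-jira-software \
    -t my-registry/jira-openshift:8.20.1 .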
Run and Enjoy
Detailed inspection
When the Atlassian images start up, the root user is the first to enter; it makes modifications (chown...) and later drops down to the "jira" user.
None of these operations are possible in OpenShift.
In most cases, the solution is to build a new Dockerfile starting from the official image and modify the permissions of the needed files and folders before deploying to a cluster,
but to make things worse, Atlassian chose to "mount" the necessary directories as a VOLUME.
They have even referenced the issue in their comments.
VOLUME ["${JIRA_HOME}"] # Must be declared after setting perms
Once a path is declared as a VOLUME, its permissions can no longer be changed persistently at build time, only at runtime.
This forces them to set the permissions after container startup, using a Python function in entrypoint_helpers.py.
This is also the place where the container fails with several "permission denied" errors.
I would be glad to open a pull request for this, but unfortunately the images are hosted on Bitbucket.
I am trying to set up a Docker image (my Dockerfile is available here, sorry for the french README: https://framagit.org/Gwendal/firefox-icedtea-docker) with an old version of Firefox and an old version of Java to run an old Java applet to start a VPN. My image does work and successfully allows me to start the Java applet in Firefox.
Unfortunately, the said applet then tries to run the following command in the container (I've simply removed the --config part from the command as it does not matter here):
INFO: launching '/usr/bin/pkexec sh -c /usr/sbin/openvpn --config ...'
Then the applet exits silently with an error. While investigating, I've tried running a command with pkexec with the same Docker image, and it gives me this result:
$ sudo docker-compose run firefox pkexec /firefox/firefox-sdk/bin/firefox-bin -new-instance
**
ERROR:pkexec.c:719:main: assertion failed: (polkit_unix_process_get_start_time (POLKIT_UNIX_PROCESS (subject)) > 0)
But I don't know polkit at all and cannot understand this error.
EDIT: A more minimal way to reproduce the problem is with this Dockerfile:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get install -y policykit-1
And then run:
$ sudo docker build -t pkexec-test .
$ sudo docker run pkexec-test pkexec echo Hello
Which leads here again to:
ERROR:pkexec.c:719:main: assertion failed: (polkit_unix_process_get_start_time (POLKIT_UNIX_PROCESS (subject)) > 0)
Should I conclude that pkexec cannot work in a docker container? Or is there any way to make this command work?
Sidenote: I have no control whatsoever on the Java applet that I try to run, it is a horrible and very dated proprietary black box that I am supposed to use at work, for which I have no access to the source code, and that I must use as is.
I have solved my own problem by replacing pkexec with sudo in the Docker image, and by allowing passwordless sudo.
Given an Ubuntu Docker image where a user called developer was created and configured with a USER statement, add these lines:
# Install sudo and make 'developer' a passwordless sudoer
RUN apt-get install -y sudo
ADD ./developersudo /etc/sudoers.d/developersudo
# Replacing pkexec by sudo
RUN rm /usr/bin/pkexec
RUN ln -s /usr/bin/sudo /usr/bin/pkexec
with the file developersudo containing:
developer ALL=(ALL) NOPASSWD:ALL
This replaces any call to pkexec made in a process running in the container, by a call to sudo without any password prompt, which works nicely.
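To check that the substitution behaves as expected, a quick test along the lines of the minimal reproduction above (the image tag is arbitrary, and this assumes the image with the developer user builds cleanly):
sudo docker build -t pkexec-sudo-test .
sudo docker run pkexec-sudo-test pkexec echo Hello
# should now print "Hello" instead of the polkit assertion failure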
I am trying to deploy a docker configuration with images on a private docker registry.
Now, every time I execute docker login registry.example.com, I get the following error message:
error getting credentials - err: exit status 1, out: Cannot autolaunch D-Bus without X11 $DISPLAY
The only solution I found for non-MacOS users was to run export $(dbus-launch) first, but that did not change anything.
I am running Ubuntu Server and tried with both the Ubuntu Docker package and the Docker-CE package.
How can I log in without an X11 session?
Looks like this is because it defaults to using the secretservice executable, which seems to have some sort of X11 dependency for some reason. If you install and configure pass, Docker will use that instead, which seems to solve the problem.
In a nutshell (from https://github.com/docker/compose/issues/6023)
sudo apt install gnupg2 pass
gpg2 --full-generate-key
This generates a gpg2 key for you. After that's done, you can list it with:
gpg2 -k
Copy the key id (from the line labelled [uid]) and do
pass init "whatever key id you have"
Now docker login should work.
There are a couple of bugs logged on launchpad regarding this:
https://bugs.launchpad.net/ubuntu/+source/golang-github-docker-docker-credential-helpers/+bug/1794307
https://bugs.launchpad.net/ubuntu/+source/docker-compose/+bug/1796119
This works: sudo apt remove golang-docker-credential-helpers
You can remove the offending package golang-docker-credential-helpers without removing all of docker-compose.
The following worked for me on a server without X11 installed:
dpkg -r --ignore-depends=golang-docker-credential-helpers golang-docker-credential-helpers
and then
echo 'foo' | docker login mydockerrepo.com -u dockeruser --password-stdin
Source:
bug reported in debian:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=910823#39
bug reported on ubuntu:
https://bugs.launchpad.net/ubuntu/+source/docker-compose/+bug/1796119
secretservice requires a GUI. You can use pass without a GUI.
Unfortunately, Docker's documentation on how to configure Docker Credential Helpers is quite lacking. Here's a comprehensive guide on how to configure pass with Docker (tested on Ubuntu 18.04):
1. Install the Docker Credential Helper for pass
Find the URL for the latest version of docker-credential-pass at https://github.com/docker/docker-credential-helpers/releases. For example:
# substitute with the latest version
url=https://github.com/docker/docker-credential-helpers/releases/download/v0.6.2/docker-credential-pass-v0.6.2-amd64.tar.gz
# download and untar the binary
wget $url
tar -xzvf $(basename $url)
# move the binary to a dir in your $PATH
sudo mv docker-credential-pass /usr/local/bin
# verify it works
docker-credential-pass list
2. Install and configure pass
apt install pass
# create a gpg2 key
gpg2 --gen-key
# if you have issues with lack of entropy, "apt install haveged" and try again
# create the password store using the gpg user id above
pass init $gpg_id
3. docker login
docker login
# You should not see any credentials stored in "auths" section.
# "credsStore": "pass" should have been automatically added.
# If the value is "secretservice", replace it with "pass".
cat ~/.docker/config.json
# verify credentials stored in `pass` store now
pass
There is a much easier answer than the ones already posted, which I found in a comment on https://github.com/docker/docker-credential-helpers/issues/105.
The solution is to rename docker-credential-secretservice out of the way
e.g: mv /usr/bin/docker-credential-secretservice /usr/bin/docker-credential-secretservice.broken
Once you do this, docker login works regardless of whether or not docker-compose is installed. No other package additions or removals are necessary.
I resolved this issue by uninstalling the docker-compose that was installed from the Ubuntu repo and installing docker-compose following the official instructions at https://docs.docker.com/compose/install/#install-compose
What helped me on Ubuntu 18.04 was:
Following the steps in oberstet's post and uninstalling the golang helper
Performing a login after the helper uninstall
Reinstalling docker via sudo apt-get install docker
Logging back in via sudo docker login
I'm currently running Openshift, but I am running into a problem when I try to build/deploy my custom Docker container. The container works properly on my local machine, but once it gets built in openshift and I try to deploy it, I get the error message. I believe the problem is because I am trying to run commands inside of the container as root.
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
My Dockerfile that I am deploying looks like this:
FROM centos:7
MAINTAINER me<me#me>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN git clone https://github.com/dockerFileBootstrap.git
RUN ansible-playbook "-e edit_url=andrewgarfield edit_alias=emmastone site_url=testing.com" dockerAnsible/dockerFileBootstrap.yml
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
COPY supervisord.conf /usr/etc/supervisord.conf
RUN rm -rf supervisord.conf
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80 443
#CMD ["/usr/bin/supervisord"]
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
I've run into a similar problem multiple times, where it says things like "Permission denied" on a file such as /supervisord.log.
How can I set it up so that my container doesn't run all of the commands as root? That seems to be causing all of the problems I am having.
OpenShift has a strict security policy regarding custom Docker builds.
Have a look at this: OpenShift Application Platform
In particular, see point 4 in the FAQ section, quoted here:
4. Why doesn't my Docker image run on OpenShift?
Security! Origin runs with the following security policy by default:
Containers run as a non-root unique user that is separate from other system users
They cannot access host resources, run privileged, or become root
They are given CPU and memory limits defined by the system administrator
Any persistent storage they access will be under a unique SELinux label, which prevents others from seeing their content
These settings are per project, so containers in different projects cannot see each other by default
Regular users can run Docker, source, and custom builds
By default, Docker builds can (and often do) run as root. You can control who can create Docker builds through the builds/docker and builds/custom policy resource.
Regular users and project admins cannot change their security quotas.
Many Docker containers expect to run as root (and therefore edit all the contents of the filesystem). The Image Author's guide gives recommendations on making your image more secure by default:
Don't run as root
Make directories you want to write to group-writable and owned by group id 0
Set the net-bind capability on your executables if they need to bind to ports <1024
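As a rough illustration of those recommendations (the path, UID, and binary below are placeholders, not anything OpenShift requires), the build steps of an image might include:
useradd -u 1001 -g 0 appuser                         # dedicated non-root user in the root group
mkdir -p /app/data                                   # a directory the application needs to write to
chgrp -R 0 /app/data && chmod -R g=u /app/data       # group 0 owns it; group perms mirror the owner's
setcap 'cap_net_bind_service=+ep' /usr/sbin/httpd    # only if the binary must bind to a port below 1024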
Otherwise, you can see the security documentation for descriptions on how to relax these restrictions.
I hope it helps.
Although you don't have access to root, your OpenShift container, by default, is a member of the root group. You can change some dir/file permissions to avoid the Permission Denied errors.
If you're using a Dockerfile to deploy an image to OpenShift, you can add the following RUN command to your Dockerfile:
RUN chgrp -R 0 /run && chmod -R g=u /run
This will change the group for everything in the /run directory to the root group and then set the group permission on all files to be equivalent to the owner (group equals user) of the file. Essentially, any user in the root group has the same permissions as the owner for every file.
You can run Docker containers as any user, including root (rather than OpenShift's default built-in account UID, e.g. 1000030000), by issuing these two commands in sequence with the oc CLI tools:
oc login -u system:admin -n default
oc adm policy add-scc-to-user anyuid -z default -n projectname
where projectname is the name of the project in which your Docker image is deployed.