Docker build failing when using gcsfuse to mount Google Cloud Storage

I have been trying to mount Cloud SQL and a storage bucket to my Docker WordPress container. It appears to succeed in mounting SQL, but fails to mount the bucket. The instance is based on this post.
I have attached the Dockerfile and error below, as well as my build command.
Build command:
docker build -t ic/spm .
Dockerfile:
FROM wordpress
MAINTAINER Gareth Williams <gareth@itinerateconsulting.com>
# Move login creds locally
ADD ./creds.json /creds.json
# install sudo, wget and gcsfuse
ENV GCSFUSE_REPO=gcsfuse-jessie
RUN apt-get update && \
apt-get -y install sudo && \
apt-get install -y curl ca-certificates && \
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" > /etc/apt/sources.list.d/gcsfuse.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update && \
apt-get install -y gcsfuse wget && \
apt-get remove -y curl --purge && \
apt-get autoremove -y && \
rm -rf /var/lib/apt/lists/*
# Config fuse
RUN chmod a+r /etc/fuse.conf
RUN perl -i -pe 's/#user_allow_other/user_allow_other/g' /etc/fuse.conf
# Setup sql proxy
RUN sudo mkdir /cloudsql
RUN sudo chmod 777 /cloudsql
ADD https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 cloud_sql_proxy.linux.amd64
RUN mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy && chmod +x ./cloud_sql_proxy
RUN ./cloud_sql_proxy -dir=/cloudsql -fuse -credential_file=/creds.json &
# mysql -u icroot -S /cloudsql/[INSTANCE_CONNECTION_NAME]
# Perform Cloud Storage FUSE mounting for uploads folder
RUN mkdir /mnt/uploads
RUN chmod a+w /mnt/uploads
#RUN chown www-data:www-data -R /mnt && groupadd fuse && gpasswd -a www-data fuse && chmod g+rw /dev/fuse
USER www-data
RUN gcsfuse --key-file /creds.json \
--debug_gcs --debug_http --debug_fuse --debug_invariants \
--dir-mode "777" -o allow_other spm-bucket /mnt/uploads
Error:
Step 17 : RUN gcsfuse --key-file /creds.json --foreground --debug_gcs --debug_http --debug_fuse --debug_invariants --dir-mode "777" -o allow_other spm-bucket /mnt/uploads
---> Running in 7e3f31221bee
Using mount point: /mnt/uploads
Opening GCS connection...
Opening bucket...
gcs: Req 0x0: <- ListObjects()
http: ========== REQUEST:
GET http://www.googleapis.com/storage/v1/b/spm-bucket/o?maxResults=1&projection=full HTTP/1.1
Host: www.googleapis.com
User-Agent: gcsfuse/0.0
Authorization: Bearer ya29.ElrQAw8oxClKt8YGvtmxhc7z2Y2LufvL0fBueq1UESjYYjRrdxukNTQqO1qfM8e8h-rqfbOWNSjVK2rCRXVrEDla-CiUVhHwT6X71Y1Djb0jDJg7z3KblgNQPrc
Accept-Encoding: gzip
http: ========== RESPONSE:
HTTP/2.0 200 OK
Content-Length: 31
Alt-Svc: quic=":443"; ma=2592000; v="35,34"
Cache-Control: private, max-age=0, must-revalidate, no-transform
Content-Type: application/json; charset=UTF-8
Date: Wed, 11 Jan 2017 09:19:05 GMT
Expires: Wed, 11 Jan 2017 09:19:05 GMT
Server: UploadServer
Vary: Origin
Vary: X-Origin
X-Guploader-Uploadid: AEnB2UpTqXhtHW906FFDTRsz4FjHjFu_E84wYhvt0zhaVFuMpqSY1fsd1XcrEcpsYBBwX1mqf0ZXRVWJH05ThtDQIfFKHd4PFw
{
"kind": "storage#objects"
}
http: ====================
gcs: Req 0x0: -> ListObjects() (1.793169206s): OK
Mounting file system...
mountWithArgs: mountWithConn: Mount: mount: running fusermount: exit status 1
stderr:
fusermount: failed to open /dev/fuse: Operation not permitted

If you're running your container on GKE and you want to use gcsfuse, permissions are automatically inherited from the node's service account. There is a caveat: the cluster you're running needs storage access, so make sure it was created with the storage permissions set to full access. That way gcsfuse can mount your GCS buckets inside the container without you having to pass credential files around, which keeps the implementation pretty straightforward.
In your Dockerfile, make sure you run the apt commands to fetch and install the gcsfuse application.
I personally made a shell script that I call once the instance is up, that mounts my directories that I needed.
Something like this...
Docker Entry
ENTRYPOINT ["/opt/entry.sh"]
entry.sh script example
gcsfuse [gcs bucket name] [local folder to mount as]
When generating your GKE cluster, make sure to add the storage scope
gcloud container clusters create [your cluster name] --scopes storage-full
Hope this helps you.

By default, Docker containers are not allowed to mount external storage such as a GCS bucket. What you can do is run the container with the --privileged option and perform the mount when the container starts, rather than at build time. Take the gcsfuse command out of the RUN instruction and put it in a script file (gcp.sh), then build the image.
gcp.sh:
gcsfuse --key-file /creds.json --debug_gcs --debug_http --debug_fuse --debug_invariants --dir-mode "777" -o allow_other spm-bucket /mnt/uploads
and the Dockerfile:
FROM wordpress
MAINTAINER Gareth Williams <gareth@itinerateconsulting.com>
# Move login creds locally
ADD ./creds.json /creds.json
# install sudo, wget and gcsfuse
ENV GCSFUSE_REPO=gcsfuse-jessie
RUN apt-get update && \
apt-get -y install sudo && \
apt-get install -y curl ca-certificates && \
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" > /etc/apt/sources.list.d/gcsfuse.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update && \
apt-get install -y gcsfuse wget && \
apt-get remove -y curl --purge && \
apt-get autoremove -y && \
rm -rf /var/lib/apt/lists/*
# Config fuse
RUN chmod a+r /etc/fuse.conf
RUN perl -i -pe 's/#user_allow_other/user_allow_other/g' /etc/fuse.conf
# Setup sql proxy
RUN sudo mkdir /cloudsql
RUN sudo chmod 777 /cloudsql
ADD https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 cloud_sql_proxy.linux.amd64
RUN mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy && chmod +x ./cloud_sql_proxy
RUN ./cloud_sql_proxy -dir=/cloudsql -fuse -credential_file=/creds.json &
# mysql -u icroot -S /cloudsql/[INSTANCE_CONNECTION_NAME]
# Perform Cloud Storage FUSE mounting for uploads folder
RUN mkdir /mnt/uploads
RUN chmod a+w /mnt/uploads
#RUN chown www-data:www-data -R /mnt && groupadd fuse && gpasswd -a www-data fuse && chmod g+rw /dev/fuse
USER www-data
COPY gcp.sh /home
RUN chmod +x /home/gcp.sh
CMD cd /home && ./gcp.sh
and finally, after building the image, run the container with the --privileged option:
docker run --privileged ic/spm

Your www-data user has a permission problem in the Dockerfile. Uncomment this line:
RUN chown www-data:www-data -R /mnt && groupadd fuse && gpasswd -a www-data fuse && chmod g+rw /dev/fuse

"Failed to read marionette port" when Running Selenium + geckodriver + firefox as a non-root user in a Docker container

I'm running selenium tests inside a docker container, with Firefox and Geckodriver.
When running that container as root, everything works fine.
When running the container as a non-root user (USER 1000), the driver fails to initialize:
[ERROR] test01_WO_default_dashboard Time elapsed: 132.6 s <<< ERROR!
org.openqa.selenium.TimeoutException:
Failed to read marionette port
Build info: version: '3.14.0', revision: 'aacccce0', time: '2018-08-02T20:19:58.91Z'
System info: host: 'testrunner-cockpit-3--1-mdbwj', ip: '10.130.2.18', os.name: 'Linux', os.arch: 'amd64', os.version: '4.18.0-305.28.1.el8_4.x86_64', java.version: '11.0.15'
Driver info: driver.version: FirefoxDriver
remote stacktrace:
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.openqa.selenium.remote.W3CHandshakeResponse.lambda$new$0(W3CHandshakeResponse.java:57)
at org.openqa.selenium.remote.W3CHandshakeResponse.lambda$getResponseFunction$2(W3CHandshakeResponse.java:104)
at org.openqa.selenium.remote.ProtocolHandshake.lambda$createSession$0(ProtocolHandshake.java:122)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.Spliterators$ArraySpliterator.tryAdvance(Spliterators.java:958)
at java.base/java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:127)
at java.base/java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:502)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:488)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:150)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.findFirst(ReferencePipeline.java:543)
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:125)
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:73)
at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:136)
at org.openqa.selenium.remote.service.DriverCommandExecutor.execute(DriverCommandExecutor.java:83)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:548)
at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:212)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:130)
at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:125)
Relevant parts of my Dockerfile:
FROM ubuntu:20.04
# install firefox
ARG FIREFOX_VERSION=latest
RUN FIREFOX_DOWNLOAD_URL=$(if [ $FIREFOX_VERSION = "latest" ] || [ $FIREFOX_VERSION = "nightly-latest" ] || [ $FIREFOX_VERSION = "devedition-latest" ] || [ $FIREFOX_VERSION = "esr-latest" ]; then echo "https://download.mozilla.org/?product=firefox-$FIREFOX_VERSION-ssl&os=linux64&lang=en-US"; else echo "https://download-installer.cdn.mozilla.net/pub/firefox/releases/$FIREFOX_VERSION/linux-x86_64/en-US/firefox-$FIREFOX_VERSION.tar.bz2"; fi) \
&& apt-get update -qqy \
&& apt-get -qqy --no-install-recommends install firefox libavcodec-extra \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/* \
&& wget --no-verbose -O /tmp/firefox.tar.bz2 $FIREFOX_DOWNLOAD_URL \
&& apt-get -y purge firefox \
&& rm -rf /opt/firefox \
&& tar -C /opt -xjf /tmp/firefox.tar.bz2 \
&& rm /tmp/firefox.tar.bz2 \
&& mv /opt/firefox /opt/firefox-$FIREFOX_VERSION \
&& ln -fs /opt/firefox-$FIREFOX_VERSION/firefox /usr/bin/firefox
# install geckodriver
ARG GECKODRIVER_VERSION=latest
RUN GK_VERSION=$(if [ ${GECKODRIVER_VERSION:-latest} = "latest" ]; then echo "0.31.0"; else echo $GECKODRIVER_VERSION; fi) \
&& echo "Using GeckoDriver version: "$GK_VERSION \
&& wget --no-verbose -O /tmp/geckodriver.tar.gz https://github.com/mozilla/geckodriver/releases/download/v$GK_VERSION/geckodriver-v$GK_VERSION-linux64.tar.gz \
&& rm -rf /opt/geckodriver \
&& tar -C /opt -zxf /tmp/geckodriver.tar.gz \
&& rm /tmp/geckodriver.tar.gz \
&& mv /opt/geckodriver /opt/geckodriver-$GK_VERSION \
&& chmod 755 /opt/geckodriver-$GK_VERSION \
&& ln -fs /opt/geckodriver-$GK_VERSION /usr/bin/geckodriver \
&& geckodriver --version
I even added some chmod / chown commands to try and fix permission issues with firefox or geckodriver:
RUN chown 1000 -R /usr/bin/geckodriver \
&& chmod 775 -R /usr/bin/geckodriver \
&& chown 1000 -R /usr/bin/firefox \
&& chmod 775 -R /usr/bin/firefox
And finally the USER instruction to run the container as non-root:
USER 1000
I do not install Selenium manually; it's a Maven dependency of the project that contains my test sources.
I met the same issue recently, and solved it with the following:
In the Dockerfile, create a directory /.cache
mkdir -p /.cache;\
and give write permission to the /.cache directory
chmod 777 /.cache;\
If you run Docker with a specific user id (i.e. the Jenkins id), then you also need to add a username with that user id and group id to /etc/passwd in your container:
echo "somename:x:1234:1235:somename:/tmp:/bin/bash" >> /etc/passwd
Below is my Dockerfile to install geckodriver, copied here for your reference; hopefully it helps!
#######################################
RUN (install_dir="/opt/automation/data/webdriver";
url=$(curl -s https://api.github.com/repos/mozilla/geckodriver/releases/latest | python -c "import sys, json; print(next(item['browser_download_url'] for item in json.load(sys.stdin)['assets'] if 'linux64' in item.get('browser_download_url', '')))");
echo "url $url";
wget -O geckodriver.tar.gz "$url";
tar -xvzf geckodriver.tar.gz -C $install_dir;
chmod 777 $install_dir/geckodriver;
ls -la $install_dir;
ln -s $install_dir/geckodriver /usr/local/bin/geckodriver;
chmod 777 /usr/local/bin/geckodriver;
chown root:root $install_dir/geckodriver;
chown root:root /usr/local/bin/geckodriver;
mkdir -p /.cache;
chmod 777 /.cache;
ls -la /.cache;
export PATH=$PATH:$install_dir;
mkdir $HOME/tmp;
export TMPDIR=$HOME/tmp;
echo "seleniumuser:x:idxxx:group_idxxx:seleniumuser:/tmp:/bin/bash" >> /etc/passwd;
echo "installed geckodriver binary in $install_dir";)
ENV PATH="/opt/automation/data/webdriver:${PATH}"
###################################################
For me, it was a Firefox version issue. I was running on Ubuntu 22.04 and the problem was the snap package of Firefox; even apt-get was installing it via snap. The solution was to uninstall Firefox completely and install the necessary version from the tar file. More detailed info here.
According to this MR https://github.com/SeleniumHQ/docker-selenium/pull/485/files#diff-04c6e90faac2675aa89e2176d2eec7d8R43, it may be fixed by adding --shm-size 2g when you start the container.
As the Selenium community provides a Docker image for Firefox, I think it would be a better choice to use their Docker image directly, or create your own based on their Dockerfile, to reduce the chance of having problems like this.
For me the issue was caused by two identical firefox processes starting when selenium creates the web driver object. It looks like this caused a race condition, preventing the other process from initialising properly.
To solve this I ran the following commands in the terminal:
List the firefox processes currently running:
pgrep -laf firefox
Kill the identical process (killing the second duplicate process in the list worked for me)
kill -9 <PID>
I was running into the same issue trying to launch a headless Firefox inside a Docker container on Kubernetes. What solved it for me was adding the code below to my Dockerfile
RUN groupadd ffgroup --gid 2000 \
&& useradd ffuser \
--create-home \
--home-dir /tmp/ffuser \
--gid 2000 \
--shell /bin/bash \
--uid 1000
and then adding the following to the container's spec:
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  runAsGroup: 2000
  fsGroup: 2000
The main problem is the .cache and .mozilla folders, which need the right permissions. The data is generated in the profile of the user who executes the script.
This solution worked for me:
1. Identify the user who needs to save the cache. In my case it was www-data (the user of Symfony). I checked it in the folder /tmp/: you can see the owner of folders like /tmp/Temp-276cddf8-d7c5-4a09... These folders are generated by Firefox.
2. The user needs a usable shell, so you have to adjust the file /etc/passwd from
www-data:x:33:33:www-data:/usr/sbin:/usr/sbin/nologin
to
www-data:x:33:33:www-data:/var/www:/bin/bash
3. Create the folders in the home path specified in /etc/passwd (in my case /var/www/):
mkdir /var/www/.cache
mkdir /var/www/.mozilla
chown www-data:www-data /var/www/.cache
chown www-data:www-data /var/www/.mozilla

Error while setting up a remote ssh host on a docker container

I'm trying to set up a remote SSH host on a docker container to perform Jenkins jobs from another container. My Dockerfile looks like this:
FROM centos
RUN yum -y install openssh-server && \
yum install -y passwd
RUN useradd remote_user && \
echo remote_user:1234 | chpasswd && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized-keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh && \
chmod 600 /home/remote_user/.ssh/authorized-keys
RUN /usr/sbin/sshd-keygen -A
EXPOSE 22
RUN rm -rf /run/nologin
CMD /usr/sbin/sshd -D
I run into the following error:
/bin/sh: /usr/sbin/sshd-keygen: No such file or directory
The command '/bin/sh -c /usr/sbin/sshd-keygen -A' returned a non-zero code: 127
Can you please help me correct this issue?

Openshift : failed to start docker image

I have a docker image based on Ubuntu with Tomcat installed on top of it. After the docker build, I am able to run the image locally without any issue, but when it is deployed on OpenShift, it fails to start.
Dockerfile
FROM ubuntu:latest
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install openjdk-8-jdk wget
RUN wget http://apache.stu.edu.tw/tomcat/tomcat-8/v8.5.58/bin/apache-tomcat-8.5.58.tar.gz -O /tmp/tomcat.tar.gz && \
cd /tmp && tar xvfz tomcat.tar.gz && \
cp -Rv /tmp/apache-tomcat-8.5.58/* /usr/local/tomcat/
EXPOSE 8080
CMD /usr/local/tomcat/bin/catalina.sh run
By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.
Here is the modified Dockerfile
FROM ubuntu:latest
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install openjdk-8-jdk wget
RUN wget http://apache.stu.edu.tw/tomcat/tomcat-8/v8.5.58/bin/apache-tomcat-8.5.58.tar.gz -O /tmp/tomcat.tar.gz && \
cd /tmp && tar xvfz tomcat.tar.gz && \
cp -Rv /tmp/apache-tomcat-8.5.58/* /usr/local/tomcat/
#Add a user ubuntu with UID 1001
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo -u 1001 ubuntu && \
chown -R ubuntu:root /usr/local/tomcat && \
chgrp -R 0 /usr/local/tomcat && \
chmod -R g=u /usr/local/tomcat
#Specify the user with UID
USER 1001
EXPOSE 8080
CMD /usr/local/tomcat/bin/catalina.sh run
Refer to the section "Support Arbitrary User IDs" in the OpenShift image creation guidelines.
To relax the security in your cluster so that images are not forced to run as a pre-allocated UID, without granting everyone access to the privileged SCC:
Grant all authenticated users access to the anyuid SCC:
$ oc adm policy add-scc-to-group anyuid system:authenticated
This allows images to run as the root UID if no USER is specified in the Dockerfile.

Docker permission denied via build image for container

I tried to build an image from a Dockerfile.
For this purpose I used this dockerhub image: https://hub.docker.com/r/openshift/origin-haproxy-router
My Dockerfile:
FROM openshift/origin-haproxy-router
RUN INSTALL_PKGS="haproxy18 rsyslog" && \
yum install -y $INSTALL_PKGS && \
yum clean all && \
rpm -V $INSTALL_PKGS && \
mkdir -p /var/lib/haproxy/router/{certs,cacerts,whitelists} && \
mkdir -p /var/lib/haproxy/{conf/.tmp,run,bin,log} && \
touch /var/lib/haproxy/conf/{{os_http_be,os_edge_reencrypt_be,os_tcp_be,os_sni_passthrough,os_route_http_redirect,cert_config,os_wildcard_domain}.map,haproxy.config} && \
setcap 'cap_net_bind_service=ep' /usr/sbin/haproxy && \
chown -R :0 /var/lib/haproxy && \
chmod -R g+w /var/lib/haproxy
COPY images/router/haproxy/* /var/lib/haproxy/
LABEL io.k8s.display-name="OpenShift HAProxy Router" \
io.k8s.description="This component offers ingress to an OpenShift cluster via Ingress and Route rules." \
io.openshift.tags="openshift,router,haproxy"
USER root
EXPOSE 80 443
WORKDIR /var/lib/haproxy/conf
ENV TEMPLATE_FILE=/var/lib/haproxy/conf/haproxy-config.template \
RELOAD_SCRIPT=/var/lib/haproxy/reload-haproxy
ENTRYPOINT ["/usr/bin/openshift-router"]
Then I tried to run this command inside the folder with the Dockerfile:
sudo docker build -t os-router .
I got the following result:
ovl: Error while doing RPMdb copy-up:
[Errno 13] Permission denied: '/var/lib/rpm/Conflictname'
You need to be root to perform this command.
How can I solve this error?
Put USER root in your Dockerfile.
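The question's Dockerfile does contain USER root, but only near the end, after the RUN that calls yum; each RUN executes as whatever USER was last set, so the directive has to come before the package installation. A minimal sketch of the fix:

```dockerfile
FROM openshift/origin-haproxy-router
# The base image switches to a non-root user, so yum/rpm cannot write
# /var/lib/rpm during the build unless we switch back first.
USER root
RUN INSTALL_PKGS="haproxy18 rsyslog" && \
    yum install -y $INSTALL_PKGS && \
    yum clean all
```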

Is s3fs not able to mount inside docker container?

I want to mount s3fs inside a docker container.
I made a docker image with s3fs, and did this:
host$ docker run -it --rm docker/s3fs bash
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -ouse_cache=/tmp
fuse: failed to open /dev/fuse: Operation not permitted
Showing "Operation not permitted" error.
So I googled, and did like this (adding --privileged=true) again:
host$ docker run -it --rm --privileged=true docker/s3fs bash
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -ouse_cache=/tmp
[ root@container:~ ]$ ls /mnt/s3bucket
ls: cannot access /mnt/s3bucket: Transport endpoint is not connected
[ root@container:~ ]$ fusermount -u /mnt/s3bucket
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -ouse_cache=/tmp
[ root@container:~ ]$ ls /mnt/s3bucket
ls: cannot access /mnt/s3bucket: Transport endpoint is not connected
Mounting no longer shows an error, but when I run the ls command, a "Transport endpoint is not connected" error occurs.
How can I mount s3fs inside a docker container?
Is it impossible?
[UPDATED]
Add Dockerfile configuration.
Dockerfile:
FROM dockerfile/ubuntu
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y libfuse-dev
RUN apt-get install -y fuse
RUN apt-get install -y libcurl4-openssl-dev
RUN apt-get install -y libxml2-dev
RUN apt-get install -y mime-support
RUN \
cd /usr/src && \
wget http://s3fs.googlecode.com/files/s3fs-1.74.tar.gz && \
tar xvzf s3fs-1.74.tar.gz && \
cd s3fs-1.74/ && \
./configure --prefix=/usr && \
make && make install
ADD passwd/passwd-s3fs /etc/passwd-s3fs
ADD rules.d/99-fuse.rules /etc/udev/rules.d/99-fuse.rules
RUN chmod 640 /etc/passwd-s3fs
RUN mkdir /mnt/s3bucket
rules.d/99-fuse.rules:
KERNEL=="fuse", MODE="0777"
I'm not sure what you did that did not work, but I was able to get this to work like this:
Dockerfile:
FROM ubuntu:12.04
RUN apt-get update -qq
RUN apt-get install -y build-essential libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support automake libtool wget tar
RUN wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.77.tar.gz -O /usr/src/v1.77.tar.gz
RUN tar xvz -C /usr/src -f /usr/src/v1.77.tar.gz
RUN cd /usr/src/s3fs-fuse-1.77 && ./autogen.sh && ./configure --prefix=/usr && make && make install
RUN mkdir /s3bucket
After building with:
docker build --rm -t ubuntu/s3fs:latest .
I ran the container with:
docker run -it -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured --privileged ubuntu/s3fs:latest bash
and then inside the container:
root@efa2689dca96:/# s3fs s3bucket /s3bucket
root@efa2689dca96:/# ls /s3bucket
testing.this.out work.please working
root@efa2689dca96:/#
which successfully listed the files in my s3bucket.
You do need to make sure the kernel on your host machine supports fuse, but it would seem you have already done so?
Note: Your S3 mountpoint will not show/work from inside other containers when using Docker's --volume or --volumes-from directives. For example:
docker run -t --detach --name testmount -v /s3bucket -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured --privileged --entrypoint /usr/bin/s3fs ubuntu/s3fs:latest -f s3bucket /s3bucket
docker run -it --volumes-from testmount --entrypoint /bin/ls ubuntu:12.04 -ahl /s3bucket
total 8.0K
drwxr-xr-x 2 root root 4.0K Aug 21 21:32 .
drwxr-xr-x 51 root root 4.0K Aug 21 21:33 ..
returns no files even though there are files in the bucket.
Adding another solution.
Dockerfile:
FROM ubuntu:16.04
# Update and install packages
RUN DEBIAN_FRONTEND=noninteractive apt-get -y update --fix-missing && \
apt-get install -y automake autotools-dev g++ git libcurl4-gnutls-dev wget libfuse-dev libssl-dev libxml2-dev make pkg-config
# Clone and run s3fs-fuse
RUN git clone https://github.com/s3fs-fuse/s3fs-fuse.git /tmp/s3fs-fuse && \
cd /tmp/s3fs-fuse && ./autogen.sh && ./configure && make && make install && ldconfig && /usr/local/bin/s3fs --version
# Remove packages
RUN DEBIAN_FRONTEND=noninteractive apt-get purge -y wget automake autotools-dev g++ git make && \
apt-get -y autoremove --purge && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Set user and group
ENV USER='appuser'
ENV GROUP='appuser'
ENV UID='1000'
ENV GID='1000'
RUN groupadd -g $GID $GROUP && \
useradd -u $UID -g $GROUP -s /bin/sh -m $USER
# Install fuse
RUN apt-get update && \
apt-get install -y fuse && \
chown ${USER}:${GROUP} /usr/local/bin/s3fs
# Config fuse
RUN chmod a+r /etc/fuse.conf && \
perl -i -pe 's/#user_allow_other/user_allow_other/g' /etc/fuse.conf
# Copy credentials
ENV SECRET_FILE_PATH=/home/${USER}/passwd-s3fs
COPY ./passwd-s3fs $SECRET_FILE_PATH
RUN chmod 600 $SECRET_FILE_PATH && \
chown ${USER}:${GROUP} $SECRET_FILE_PATH
# Switch to user
USER ${UID}:${GID}
# Create mnt point
ENV MNT_POINT_PATH=/home/${USER}/data
RUN mkdir -p $MNT_POINT_PATH && \
chmod g+w $MNT_POINT_PATH
# Execute
ENV S3_BUCKET=''
WORKDIR /home/${USER}
CMD /usr/local/bin/s3fs $S3_BUCKET $MNT_POINT_PATH -o passwd_file=passwd-s3fs -o allow_other && exec sleep 100000
docker-compose-yaml:
version: '3.8'
services:
  s3fs:
    privileged: true
    image: <image-name:tag>
    ##Debug
    #stdin_open: true # docker run -i
    #tty: true        # docker run -t
    environment:
      - S3_BUCKET=my-bucket-name
    devices:
      - "/dev/fuse"
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH
    cap_drop:
      - NET_ADMIN
Build image with docker build -t <image-name:tag> .
Run with: docker-compose up -d
If you would prefer to use docker-compose for testing on your localhost, use the following. Note you don't need the --privileged flag, as the SYS_ADMIN capability and the /dev/fuse device are passed in docker-compose.yml.
create file .env
AWS_ACCESS_KEY_ID=xxxxxx
AWS_SECRET_ACCESS_KEY=xxxxxx
AWS_BUCKET_NAME=xxxxxx
create file docker-compose.yml
version: "3"
services:
  s3-fuse:
    image: debian-aws-s3-mount
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - AWSACCESSKEYID=${AWS_ACCESS_KEY_ID}
      - AWSSECRETACCESSKEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_BUCKET_NAME=${AWS_BUCKET_NAME}
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse
create file Dockerfile. You can use any docker image you prefer, but first check if your distro is supported here:
FROM node:16-bullseye
RUN apt-get update -qq
RUN apt-get install -y s3fs
RUN mkdir /s3_mnt
To run container execute:
$ docker-compose run --rm -t s3-fuse /bin/bash
Once inside the container, you can mount your S3 bucket by running the command:
# s3fs ${AWS_BUCKET_NAME} s3_mnt/
Note: For this setup to work, .env, Dockerfile and docker-compose.yml must be created in the same directory. Don't forget to update your .env file with the correct credentials for the S3 bucket.
