I want to mount s3fs inside a Docker container.
I built a Docker image with s3fs and tried this:
host$ docker run -it --rm docker/s3fs bash
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -o use_cache=/tmp
fuse: failed to open /dev/fuse: Operation not permitted
The mount fails with an "Operation not permitted" error.
So after some searching, I tried again, this time adding --privileged=true:
host$ docker run -it --rm --privileged=true docker/s3fs bash
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -o use_cache=/tmp
[ root@container:~ ]$ ls /mnt/s3bucket
ls: cannot access /mnt/s3bucket: Transport endpoint is not connected
[ root@container:~ ]$ fusermount -u /mnt/s3bucket
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -o use_cache=/tmp
[ root@container:~ ]$ ls /mnt/s3bucket
ls: cannot access /mnt/s3bucket: Transport endpoint is not connected
This time the mount command itself reports no error, but running ls on the mount point fails with "Transport endpoint is not connected".
How can I mount s3fs inside a Docker container?
Is it even possible?
[UPDATED]
Added my Dockerfile configuration.
Dockerfile:
FROM dockerfile/ubuntu
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y libfuse-dev
RUN apt-get install -y fuse
RUN apt-get install -y libcurl4-openssl-dev
RUN apt-get install -y libxml2-dev
RUN apt-get install -y mime-support
RUN \
cd /usr/src && \
wget http://s3fs.googlecode.com/files/s3fs-1.74.tar.gz && \
tar xvzf s3fs-1.74.tar.gz && \
cd s3fs-1.74/ && \
./configure --prefix=/usr && \
make && make install
ADD passwd/passwd-s3fs /etc/passwd-s3fs
ADD rules.d/99-fuse.rules /etc/udev/rules.d/99-fuse.rules
RUN chmod 640 /etc/passwd-s3fs
RUN mkdir /mnt/s3bucket
rules.d/99-fuse.rules:
KERNEL=="fuse", MODE="0777"
I'm not sure what you did that did not work, but I was able to get this to work like this:
Dockerfile:
FROM ubuntu:12.04
RUN apt-get update -qq
RUN apt-get install -y build-essential libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support automake libtool wget tar
RUN wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.77.tar.gz -O /usr/src/v1.77.tar.gz
RUN tar xvz -C /usr/src -f /usr/src/v1.77.tar.gz
RUN cd /usr/src/s3fs-fuse-1.77 && ./autogen.sh && ./configure --prefix=/usr && make && make install
RUN mkdir /s3bucket
After building with:
docker build --rm -t ubuntu/s3fs:latest .
I ran the container with:
docker run -it -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured --privileged ubuntu/s3fs:latest bash
and then inside the container:
root@efa2689dca96:/# s3fs s3bucket /s3bucket
root@efa2689dca96:/# ls /s3bucket
testing.this.out work.please working
root@efa2689dca96:/#
which successfully listed the files in my s3bucket.
You do need to make sure the kernel on your host machine supports fuse, but it would seem you have already done so?
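A quick way to check on the host, assuming fuse is built as a loadable module rather than into the kernel:
lsmod | grep fuse        # is the module loaded?
ls -l /dev/fuse          # does the device node exist?
sudo modprobe fuse       # load it if it is missing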
Note: Your S3 mountpoint will not show/work from inside other containers when using Docker's --volume or --volumes-from directives. For example:
docker run -t --detach --name testmount -v /s3bucket -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured --privileged --entrypoint /usr/bin/s3fs ubuntu/s3fs:latest -f s3bucket /s3bucket
docker run -it --volumes-from testmount --entrypoint /bin/ls ubuntu:12.04 -ahl /s3bucket
total 8.0K
drwxr-xr-x 2 root root 4.0K Aug 21 21:32 .
drwxr-xr-x 51 root root 4.0K Aug 21 21:33 ..
returns no files, even though there are files in the bucket. The FUSE mount only exists inside the first container's mount namespace, so it does not propagate to containers that merely share the volume.
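If you do need other containers to see the bucket, one possible workaround (a sketch only, assuming a Docker version with mount-propagation support; the host path is an example) is to let the privileged container mount s3fs over a host directory that has shared mount propagation, and have consumers bind the same host path:
# on the host: give the directory shared mount propagation
sudo mkdir -p /mnt/s3bucket
sudo mount --bind /mnt/s3bucket /mnt/s3bucket
sudo mount --make-shared /mnt/s3bucket
# the s3fs container mounts onto the shared path, so the FUSE mount propagates back to the host
docker run -d --privileged -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured -v /mnt/s3bucket:/s3bucket:rshared --entrypoint /usr/bin/s3fs ubuntu/s3fs:latest -f s3bucket /s3bucket
# consumers pick up the propagated mount via rslave
docker run -it -v /mnt/s3bucket:/s3bucket:rslave ubuntu:12.04 ls -ahl /s3bucket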
Adding another solution.
Dockerfile:
FROM ubuntu:16.04
# Update and install packages
RUN DEBIAN_FRONTEND=noninteractive apt-get -y update --fix-missing && \
apt-get install -y automake autotools-dev g++ git libcurl4-gnutls-dev wget libfuse-dev libssl-dev libxml2-dev make pkg-config
# Clone and build s3fs-fuse
RUN git clone https://github.com/s3fs-fuse/s3fs-fuse.git /tmp/s3fs-fuse && \
cd /tmp/s3fs-fuse && ./autogen.sh && ./configure && make && make install && ldconfig && /usr/local/bin/s3fs --version
# Remove packages
RUN DEBIAN_FRONTEND=noninteractive apt-get purge -y wget automake autotools-dev g++ git make && \
apt-get -y autoremove --purge && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Set user and group
ENV USER='appuser'
ENV GROUP='appuser'
ENV UID='1000'
ENV GID='1000'
RUN groupadd -g $GID $GROUP && \
useradd -u $UID -g $GROUP -s /bin/sh -m $USER
# Install fuse
RUN apt-get update && \
apt-get install -y fuse && \
chown ${USER}:${GROUP} /usr/local/bin/s3fs
# Config fuse
RUN chmod a+r /etc/fuse.conf && \
perl -i -pe 's/#user_allow_other/user_allow_other/g' /etc/fuse.conf
# Copy credentials
ENV SECRET_FILE_PATH=/home/${USER}/passwd-s3fs
COPY ./passwd-s3fs $SECRET_FILE_PATH
RUN chmod 600 $SECRET_FILE_PATH && \
chown ${USER}:${GROUP} $SECRET_FILE_PATH
# Switch to user
USER ${UID}:${GID}
# Create mnt point
ENV MNT_POINT_PATH=/home/${USER}/data
RUN mkdir -p $MNT_POINT_PATH && \
chmod g+w $MNT_POINT_PATH
# Execute
ENV S3_BUCKET=''
WORKDIR /home/${USER}
CMD /usr/local/bin/s3fs $S3_BUCKET $MNT_POINT_PATH -o passwd_file=$SECRET_FILE_PATH -o allow_other && exec sleep infinity
docker-compose.yml:
version: '3.8'
services:
  s3fs:
    privileged: true
    image: <image-name:tag>
    ## Debug
    #stdin_open: true # docker run -i
    #tty: true        # docker run -t
    environment:
      - S3_BUCKET=my-bucket-name
    devices:
      - "/dev/fuse"
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH
    cap_drop:
      - NET_ADMIN
Build the image with: docker build -t <image-name:tag> .
Run with: docker-compose up -d
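To sanity-check the mount afterwards, something like this should list the bucket contents (service name from the compose file, mount path from the Dockerfile above):
docker-compose exec s3fs ls -la /home/appuser/data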
If you would prefer to use docker-compose for testing on your localhost, use the following. Note that you don't need the --privileged flag, since we pass the --cap-add SYS_ADMIN and --device /dev/fuse flags in the docker-compose.yml (equivalent docker run flags shown below).
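For reference, the plain docker run equivalent of those two compose settings is roughly:
docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse debian-aws-s3-mount /bin/bash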
Create a file named .env:
AWS_ACCESS_KEY_ID=xxxxxx
AWS_SECRET_ACCESS_KEY=xxxxxx
AWS_BUCKET_NAME=xxxxxx
Create a file named docker-compose.yml:
version: "3"
services:
  s3-fuse:
    image: debian-aws-s3-mount
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - AWSACCESSKEYID=${AWS_ACCESS_KEY_ID}
      - AWSSECRETACCESSKEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_BUCKET_NAME=${AWS_BUCKET_NAME}
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse
Create the Dockerfile. You can use any base image you prefer, but first check whether your distro is supported here:
FROM node:16-bullseye
RUN apt-get update -qq
RUN apt-get install -y s3fs
RUN mkdir /s3_mnt
To run the container, execute:
$ docker-compose run --rm -t s3-fuse /bin/bash
Once inside the container, you can mount your S3 bucket by running:
# s3fs ${AWS_BUCKET_NAME} /s3_mnt
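To confirm the mount took effect, check the mount table from inside the container, e.g.:
# mount | grep s3fs
# df -h /s3_mnt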
Note: For this setup to work, .env, Dockerfile and docker-compose.yml must be created in the same directory. Don't forget to update your .env file with the correct credentials for the S3 bucket.
Related
I have a yii1 application, a Dockerfile, and a docker-compose file.
At the moment I only have the one application container, because the database is remote and therefore not in a container.
So I have this dockerfile:
FROM php:7.3-apache
#COPY BaltimoreCyberTrustRoot.crt.pem /usr/local/share/ca-certificates/AzureDB.crt
# Copy virtual host into container
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
# Enable rewrite mode
RUN a2enmod rewrite
# Install necessary packages
RUN apt-get update && \
apt-get install \
libzip-dev \
wget \
git \
unzip \
-y --no-install-recommends
# Install PHP Extensions
RUN docker-php-ext-install zip pdo_mysql
# RUN pecl install -o -f xdebug-3.1.3 \
# && rm -rf /tmp/pear
# Copy composer installable
COPY ./install-composer.sh ./
# Copy php.ini
COPY ./php.ini /usr/local/etc/php/
#COPY BaltimoreCyberTrustRoot.crt.pem /var/www/html/
EXPOSE 80
# Cleanup packages and install composer
RUN apt-get purge -y g++ \
&& apt-get autoremove -y \
&& rm -r /var/lib/apt/lists/* \
&& rm -rf /tmp/* \
&& sh ./install-composer.sh \
&& rm ./install-composer.sh
# Change the current working directory
WORKDIR /var/www/html
# Change the owner of the container document root
RUN chown -R www-data:www-data /var/www
# Start Apache in foreground
CMD ["apache2-foreground"]
And I had this docker-compose file:
version: '3'
services:
  web:
    build: ./docker
    container_name: dockeryiidisc
    ports:
      - 80:80
      - 443:443
    volumes:
      - C:\xampp\htdocs\webScraper/docker:/etc/apache2/sites-enabled/
      - C:\xampp\htdocs\webScraper:/var/www/html/
and that worked.
But now I want to use only the Dockerfile.
So I tried this:
docker build -t docker_webcrawler .
and this command:
docker run -d -p 80:80 --name cntr-apache docker_webcrawler
But when I then go to http://localhost:80, I only see an empty directory listing:
Index of /
[ICO] Name Last modified Size Description
What do I have to change so that this works with only the Dockerfile?
Thank you
It looks like you're missing the volume mappings that you have in your docker-compose file. Try this
docker run -d -p 80:80 --name cntr-apache -v C:\xampp\htdocs\webScraper/docker:/etc/apache2/sites-enabled/ -v C:\xampp\htdocs\webScraper:/var/www/html/ docker_webcrawler
I installed FreeSWITCH and FusionPBX using Docker. Here is the Dockerfile:
FROM ubuntu:20.04
#ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Asia/Tehran
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Install Required Dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y wget
RUN wget -O - https://raw.githubusercontent.com/fusionpbx/fusionpbx-install.sh/master/ubuntu/pre-install.sh | sh \
&& cd /usr/src/fusionpbx-install.sh/ubuntu && ./install.sh
#RUN wget -O - https://raw.githubusercontent.com/fusionpbx/fusionpbx-install.sh/master/debian/pre-install.sh | sh \
#&& cd /usr/src/fusionpbx-install.sh/ && ./install.sh
USER root
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY start-freeswitch.sh /usr/bin/start-freeswitch.sh
VOLUME ["/var/lib/postgresql", "/etc/freeswitch", "/var/lib/freeswitch", "/usr/share/freeswitch", "/var/www/fusionpbx"]
CMD /usr/bin/supervisord -n
The image builds fine and the container runs, started with the command below:
docker run -dit --name fus5 -p 80:80 -p 443:443 -p 5060:5060 -p 5080:5080 fus_ew:0.5
Problem:
When I try to connect to an external SIP gateway, I get the error below after hitting the start button:
Freeswitch Invalid profile [External]: 2002
Appreciate it if anyone can help.
Regards
I am a Docker newbie and I can't really figure out how changes made to my working directory can be continuously copied into the Docker container. Is there a command that keeps my changes synced to the container at all times?
Edit: I added my Dockerfile and docker-compose file.
My Dockerfile:
FROM scratch
ADD centos-7-x86_64-docker.tar.xz /
LABEL \
org.label-schema.schema-version="1.0" \
org.label-schema.name="CentOS Base Image" \
org.label-schema.vendor="CentOS" \
org.label-schema.license="GPLv2" \
org.label-schema.build-date="20201113" \
org.opencontainers.image.title="CentOS Base Image" \
org.opencontainers.image.vendor="CentOS" \
org.opencontainers.image.licenses="GPL-2.0-only" \
org.opencontainers.image.created="2020-11-13 00:00:00+00:00"
RUN yum clean all && yum update -y && yum -y upgrade
RUN yum groupinstall "Development Tools" -y
RUN yum install -y wget gettext-devel curl-devel openssl-devel perl-devel perl-CPAN zlib-devel && wget https://github.com/git/git/archive/v2.26.2.tar.gz \
&& tar -xvzf v2.26.2.tar.gz && cd git-2.26.2 && make configure && ./configure --prefix=/usr/local && make install
# RUN mkdir -p /root/.ssh && \
# chmod 0700 /root/.ssh && \
# ssh-keyscan github.com > /root/.ssh/known_hosts
# RUN ssh-keygen -q -t rsa -N '' -f /id_rsa
# RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
# echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
# chmod 600 /root/.ssh/id_rsa && \
# chmod 600 /root/.ssh/id_rsa.pub
RUN ls
RUN cd / && git clone https://github.com/odoo/odoo.git \
&& cd odoo \
&& git fetch \
&& git checkout 9.0
RUN yum install python-devel libxml2-devel libxslt-dev openldap-devel libtiff-devel libjpeg-devel libzip-devel freetype-devel lcms2-devel \
libwebp-devel tcl-devel tk-devel python-pip nodejs
RUN pip install setuptools==1.4.1 beautifulsoup4==4.9.3 pillow openpyxl==2.6.4 luhn gmp-devel paramiko==1.7.7.2 python2-secrets cffi pysftp==0.2.8
RUN cd /odoo && pip install -r requirements.txt
RUN npm install -g less
CMD ["/bin/bash","git"]
My docker-compose:
version: '3.3'
services:
  app: &app
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    container_name: app
    tty: true
  db:
    image: postgres:9.2.18
    environment:
      - POSTGRES_DB=test
    ports:
      - 5432:5432
    volumes:
      - ./docker/db/pg-data:/var/lib/postgresql/data
  odoo:
    <<: *app
    command: python odoo.py -w odoo -r odoo
    ports:
      - '8069:8069'
    depends_on:
      - db
If I understand correctly, you want to mount a path from the host into a container, which can be done using volumes. Something like this keeps the folders in sync, which is useful for development:
docker run -v /path/to/local/folder:/path/in/container busybox
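Since the question already uses docker-compose, the same bind mount can be declared there instead; roughly (the container path is a placeholder, point it at wherever the app expects its sources):
services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    volumes:
      - .:/path/in/container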
I am very new to Docker.
I am getting an error saying "cannot find module 'express'" while trying to containerize a simple Node-RED application. The details are as follows:
Base machine
OS -Debian 9 (stretch) 64-bit
RAM -8 gb
GNOME - 3.22.2
Env - Oracle Virtual Box
Node-Red source
https://github.com/node-red/node-red.git
Docker Version
17.12.0-ce, build c97c6d6
docker-compose -v
1.20.1, build 5d8c71b
Docker File
FROM debian:stretch-slim
RUN useradd -c 'Node-Red user' -m -d /home/nodered -s /bin/bash nodered
RUN chown -R nodered:nodered /home/nodered
RUN echo "Acquire::http::Proxy \"http://xxxx:yyyy\";" > /etc/apt/apt.conf.d/01turnkey \
&& echo "Acquire::https::Proxy \"http://xxxx:yyyy\";" >> /etc/apt/apt.conf.d/01turnkey
ENV http_proxy="http://xxxx:yyyy" \
https_proxy="http://xxxx:yyyy"
USER root
RUN apt-get update && apt-get -y install --no-install-recommends \
ca-certificates \
apt-utils \
curl \
sudo \
git \
python \
make \
g++ \
gnupg2
RUN mkdir -p /home/nodered/shaan-node-red && chown -R nodered.nodered /home/nodered/shaan-node-red
ENV HOME /home/nodered/shaan-node-red
WORKDIR /home/nodered/shaan-node-red
RUN ls -la
RUN env
USER root
RUN echo "nodered ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/nodered && \
chmod 0440 /etc/sudoers.d/nodered
RUN curl -sL https://deb.nodesource.com/setup_9.x | bash -
RUN apt-get -y install nodejs
RUN rm -rf node-v9.x
RUN node -v && npm -v # v9.9.0 and 5.6.0 at the time of writing
RUN npm config set proxy "http://xxxx:yyyy" && \
npm config set https-proxy "http://xxxx:yyyy"
COPY . /home/nodered/shaan-node-red
RUN cd /home/nodered/shaan-node-red && ls -la && npm install
RUN npm run build && ls -la
RUN cd /home/nodered/shaan-node-red/node_modules/ && git clone https://github.com/netsmarttech/node-red-contrib-s7.git && ls -la | grep s7 && cd ./node-red-contrib-s7 && npm install
RUN ls -la /home/nodered/shaan-node-red/node_modules
ENTRYPOINT ["sh","entrypoint.sh"]
entrypoint.sh
exec node /home/nodered/shaan-node-red/red.js
Docker-compose.yml
version: '2.0'
services:
  web:
    image: shaan-node-red
    build: .
    volumes:
      - .:/home/nodered/shaan-node-red
    ports:
      - "1880:1880"
      - "5858:5858"
    network_mode: host
Building with the command:
docker-compose up
Error description: the "cannot find module 'express'" error above appears when the container starts.
Note: I do not get any error when building the same Node-RED app directly on the base machine.
I am running Docker (via docker-compose) and can't run varnishadm from within the container. The error produced is:
Cannot open /var/lib/varnish/4f0dab1efca3/_.vsm: No such file or directory
Could not open shared memory
I have searched on the "shared memory" issue and on _.vsm with no luck. It seems that the _.vsm file is not created at all, and /var/lib/varnish/ inside the container is empty.
I have tried a variety of -T settings without any luck.
Why run varnishadm?
The root reason I need varnishadm is to reload the Varnish configuration while preserving the cache. My absolute last-resort option is to set up Varnish as a service. We are on an old version of Varnish for the time being.
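For context, the reload I am after would be roughly this sketch; it assumes varnishd is started with a management listener via -T (plus -S and a secret file where required), which my current CMD does not set up:
varnishadm -T localhost:6082 vcl.load reload01 /etc/varnish/varnish.vcl
varnishadm -T localhost:6082 vcl.use reload01
# the cache survives because varnishd itself keeps running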
How am I starting docker?
CMD varnishd -F -f /etc/varnish/varnish.vcl \
-s malloc,1G \
-a :80
Full Dockerfile
FROM ubuntu:12.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install wget dtrx varnish -y \
&& apt-get install pkg-config autoconf autoconf-archive automake libtool python-docutils libpcre3 libpcre3-dev xsltproc make -y \
&& rm -rf /var/lib/apt/lists/*
RUN export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/
RUN wget https://github.com/varnishcache/varnish-cache/archive/varnish-3.0.2.tar.gz --no-check-certificate \
&& dtrx -n varnish-3.0.2.tar.gz
WORKDIR /varnish-3.0.2/varnish-cache-varnish-3.0.2/
RUN cd /varnish-3.0.2/varnish-cache-varnish-3.0.2/ && ./autogen.sh && \
cd /varnish-3.0.2/varnish-cache-varnish-3.0.2/ && ./configure && make install
RUN cd / && wget --no-check-certificate https://github.com/Dridi/libvmod-querystring/archive/v0.3.tar.gz && dtrx -n ./v0.3.tar.gz
WORKDIR /v0.3/libvmod-querystring-0.3
RUN ./autogen.sh && ./configure VARNISHSRC=/varnish-3.0.2/varnish-cache-varnish-3.0.2/ && make install
RUN cp /usr/local/lib/varnish/vmods/* /usr/lib/varnish/vmods/
WORKDIR /etc/varnish/
CMD varnishd -F -f /etc/varnish/varnish.vcl \
-s malloc,1G \
-a :80
EXPOSE 80
Full docker-compose
version: "3"
services:
  varnish:
    build: ./
    ports:
      - "8000:80"
    volumes:
      - ./default.vcl:/etc/varnish/varnish.vcl
      - ./devicedetect.vcl:/etc/varnish/devicedetect.vcl
    restart: unless-stopped