Dockerfile wireguard kernel issue - docker

I'm trying to run a decentralized VPN inside a Docker image. The issue I'm running into, I think, has to do with the running kernel and the installed kernel modules being different. I have to run modprobe wireguard in my Dockerfile and it returns modprobe: FATAL: Module wireguard not found in directory /lib/modules/5.10.25-linuxkit. I know the issue is related to the running kernel, but I'm not sure what the fix would be. Here's my current Dockerfile:
FROM ubuntu:20.04
USER root
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
git \
gcc \
make \
musl-dev \
unbound \
libtool \
g++ \
file \
automake \
autoconf \
libssl-dev \
libexpat-dev \
bison \
systemd \
iproute2 \
sudo \
wireguard-tools
RUN systemctl enable systemd-resolved
COPY ./sentinelnode /bin/
COPY ./.sentinelnode /root/
COPY ./hnsd /bin/
RUN chmod +x /bin/sentinelnode
RUN chmod +x /bin/hnsd
RUN sudo modprobe wireguard
RUN cd $HOME
CMD sentinelnode start
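Note that modprobe inside a RUN step (or inside the running container) talks to the host's kernel, which here is Docker Desktop's 5.10.25-linuxkit kernel, so the module cannot be installed from the Dockerfile; it has to be available on the host. A rough sketch of how this is usually handled on a native Linux host, assuming the image is tagged sentinel-node (a hypothetical tag) and that granting the needed capabilities is acceptable:
# on the host, not in the Dockerfile
sudo modprobe wireguard
# run the container with the capabilities WireGuard needs; sentinel-node is a hypothetical tag
docker run --cap-add=NET_ADMIN --cap-add=SYS_MODULE \
    -v /lib/modules:/lib/modules:ro \
    sentinel-node
On Docker Desktop (linuxkit) there is no such host kernel to prepare, so a userspace implementation such as wireguard-go is the usual fallback.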

Related

Error "failed to solve with frontend dockerfile.v0: failed to build LLB: executor failed running"

I am trying to deploy a Shiny app through Docker using a Windows 10 machine (8 GB RAM).
Here is the Dockerfile:
# Base image https://hub.docker.com/u/rocker
FROM rocker/shiny:3.6.3
# system libraries of general use
RUN apt-get update -qq && apt-get -y --no-install-recommends install \
libxml2-dev \
libcairo2-dev \
libsqlite3-dev \
libmariadbd-dev \
libmariadbclient-dev \
libpq-dev \
libssl-dev \
libcurl4-openssl-dev \
libssh2-1-dev \
unixodbc-dev \
&& install2.r --error \
--deps TRUE \
tidyverse \
dplyr \
devtools \
formatR \
remotes \
selectr \
caTools \
BiocManager \
&& rm -rf /tmp/downloaded_packages
# copy necessary files
## app folder
COPY /shinyapp/shinyapp.Rproj /srv/shiny-server/
COPY /shinyapp/ui.R /srv/shiny-server/
COPY /shinyapp/server.R /srv/shiny-server/
COPY /shinyapp/global.R /srv/shiny-server/
# install renv & restore packages
RUN Rscript -e 'install.packages("shiny", repos="http://cran.rstudio.com/")'
RUN Rscript -e 'install.packages("shinydashboard", repos="http://cran.rstudio.com/")'
RUN Rscript -e 'install.packages("glue", repos="http://cran.rstudio.com/")'
# expose port
EXPOSE 3838
RUN sudo chown -R shiny:shiny /srv/shiny-server
# run app on container start
CMD ["/usr/bin/shiny-server.sh"]
The folder structure is as follows:
Dockerfile
shinyapp/
    global.R
    ui.R
    server.R
I tried troubleshooting and checked the file names; everything is correct, but I still get the error.
Any help will be great!
Follow these steps:
cd /Users/myUserName/.docker
ls -a
rm -rf .token_seed
If it doesn't resolve the issue, remove the .token_seed.lock file as well.
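For reference, those files live in the Docker CLI's config directory, so the same cleanup as a single command (assuming the default location of ~/.docker on macOS/Linux; on Windows it is typically %USERPROFILE%\.docker):
# remove both token files from the Docker CLI config directory
rm -f ~/.docker/.token_seed ~/.docker/.token_seed.lock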

Cannot mount local directory into container from docker-compose file

I want to mount the local directory of a project into a Docker container. Previously I used the COPY command, but when I make changes I have to rebuild the parts that involve installation from bash scripts.
This is my docker-compose file
version: "3.7"
services:
  tesseract:
    container_name: tesseract
    build:
      context: ./app/services/tesseract/
      dockerfile: Dockerfile
    volumes:
      - ./app/services/tesseract:/tesseract/
I don't get any errors when building, but my WORKDIR /tesseract is empty when I run the container.
This is my Dockerfile
FROM ubuntu:19.10
ENV DEBIAN_FRONTEND=noninteractive
ENV TESSERACT=/usr/share/tesseract
WORKDIR /tesseract
RUN apt-get update && apt-get install -y \
build-essential \
software-properties-common \
python3.7 \
python3-pip \
cmake \
autoconf \
automake \
libtool \
pkg-config \
libpng-dev \
tesseract-ocr \
libtesseract-dev \
libpango1.0-dev \
libicu-dev \
libcairo2-dev \
libjpeg8-dev \
zlib1g-dev \
libtiff5-dev \
wget \
git \
g++ \
vim
RUN git clone https://github.com/tesseract-ocr/tesseract $TESSERACT
COPY . /tesseract/
RUN chmod +x scripts/*
RUN scripts/compile_tesseract.sh
RUN scripts/langdata_lstm.sh scripts/start.sh
RUN pip3 install -r requirements.txt
ENV TESSDATA_PREFIX=/usr/share/tesseract/tessdata
The main objective of a Docker volume is:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
which means volumes are used to persist data beyond the lifecycle of a container. If you want to copy a file or a directory into a container, use the COPY instruction.
If you’re copying in local files to your Docker image, always use COPY because it’s more explicit.
With docker-compose, you can use a bind mount:
https://docs.docker.com/compose/gettingstarted/#step-5-edit-the-compose-file-to-add-a-bind-mount
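As a sketch in the context of the compose file above, the short volume syntax already used in the question is a bind mount; the equivalent long syntax makes that explicit:
services:
  tesseract:
    build:
      context: ./app/services/tesseract/
      dockerfile: Dockerfile
    volumes:
      - type: bind                         # bind mount: map a host directory into the container
        source: ./app/services/tesseract
        target: /tesseract
Keep in mind that a bind mount shadows whatever the image placed at the target path, so at runtime the container sees the host directory's contents at /tesseract rather than what COPY put there during the build.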

How to merge Docker image layers and slim down the image file

docker image inspect <name>
gives me 16GB
and about 20 layers
When I am logged in as root, this
du -hs /
shows me just 2GB.
FYI, there are already long multi-line RUN commands in the Dockerfile.
Can I squash all layers into one without touching the Dockerfile, rebuilding, etc.?
Or possibly by adding an extra step to the Dockerfile that clears/improves the caching?
The Dockerfile is:
FROM heroku/heroku:18
ENV PYENV_ROOT="/pyenv"
ENV PATH="/pyenv/shims:/pyenv/bin:$PATH"
ENV PYTHON_VERSION 3.5.6
ENV GPG_KEY <value>
ENV PYTHONUNBUFFERED 1
ENV TERM xterm
ENV EDITOR vim
RUN apt-get update && apt-get install -y \
build-essential \
gdal-bin \
binutils \
iputils-ping \
libjpeg8 \
libproj-dev \
libjpeg8-dev \
libtiff-dev \
zlib1g-dev \
libfreetype6-dev \
liblcms2-dev \
libxml2-dev \
libxslt1-dev \
libssl-dev \
libncurses5-dev \
virtualenv \
python-pip \
python3-pip \
python-dev \
libmysqlclient-dev \
mysql-client-5.7 \
libpq-dev \
libcurl4-gnutls-dev \
libgnutls28-dev \
libbz2-dev \
tig \
git \
vim \
nano \
tmux \
tmuxinator \
fish \
sudo \
libnet-ifconfig-wrapper-perl \
ruby \
libssl-dev \
nodejs \
strace \
tcpdump \
# npm & grunt
&& curl -L https://npmjs.com/install.sh | sh \
&& npm install -g grunt-cli grunt \
# ruby & foreman
&& gem install foreman \
# installing pyenv
&& curl https://raw.githubusercontent.com/yyuu/pyenv-installer/master/bin/pyenv-installer | bash
COPY . /app
COPY ./requirements /requirements
COPY ./requirements.txt /requirements.txt
COPY ./docker/docker_compose/django/foreman.sh /foreman.sh
COPY ./docker/docker_compose/django/Procfile /Procfile
COPY ./docker/docker_compose/django/entrypoint.sh /entrypoint.sh
# ADD sudoer user django with password django
RUN groupadd -r django -g 1000 && \
useradd -ms /usr/bin/fish -p $(openssl passwd -1 django) --uid 1000 --gid 1000 -r -g django django && \
usermod -a -G sudo django && \
chown -R django:django /app
COPY --chown=django:django ./docker/docker_compose/django/fish /home/django/.config/fish
COPY --chown=django:django ./docker/docker_compose/django/tmuxinator /home/django/.tmuxinator
COPY ./docker/docker_compose/django/fish /root/.config/fish
WORKDIR /app
RUN sed -i 's/\r//' /entrypoint.sh \
&& sed -i 's/\r//' /foreman.sh \
&& chmod +x /entrypoint.sh \
&& chown django /entrypoint.sh \
&& chmod +x /foreman.sh \
&& chown django /foreman.sh \
&& chown -R django:django /home/django/ \
&& pyenv install ${PYTHON_VERSION%%} \
&& mkdir -p /app/log \
&& pyenv global ${PYTHON_VERSION%%} \
&& pyenv rehash \
&& ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -U pip \
&& ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -r /requirements.txt \
&& chown -R django:django /pyenv/ \
&& ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -r /requirements/dev_requirements.txt
# this user receives ENVs from the top
USER django
ENTRYPOINT ["/entrypoint.sh"]
What I've tried so far:
The --squash option from the experimental mode of docker build is rather not for me. That Dockerfile is one of several Dockerfiles used via docker-compose.
I've also checked this:
https://github.com/jwilder/docker-squash
but it seems docker load cannot load a squashed image.
Also, that squash gives me 8GB (still far from the expected ~2GB):
docker save <image_id> | docker-squash -t latest_tiny | docker load
Update after answers:
When I added this (the autoremove and clean lines are marked "? to consider"):
&& apt-get autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
to apt-get, and --no-cache-dir to each pip install, the result was 72GB (yes, even much more: docker images shows 36GB before the pip command and 72GB as the final size).
My working directory is clean (regarding COPY). du -hs / (as root) still shows 2GB, and all images were removed before rebuilding.
Following @Mihai's approach, I was able to slim the image down from 16GB to 9GB.
There is a simple trick to get rid of the intermediate layers. It will bring down the size as well, but by how much depends on how the image was built.
Create a Dockerfile like this:
FROM your_image as initial
FROM your_image_base
COPY --from=initial / /
your_image_base should be something like 'alpine', i.e. the smallest image from which your image and its parents descend.
Now build the image and check the history and size:
docker build -t your-image:2.0 .
docker image history your-image:2.0
docker image ls
This way you do create a new Dockerfile (if that is acceptable for your process) without touching the initial Dockerfile.
Let me know if this solves your issue.
UPDATE AFTER SEEING THE Dockerfile:
Maybe I missed it, but I don't see you cleaning up the apt-get cache after you perform the installations. Your big RUN command should end with && rm -rf /var/lib/apt/lists/* in the same RUN statement so that the whole cache isn't stored in the layer.
Definitely add && rm -rf /var/lib/apt/lists/* at the end of your main RUN command, like Mihai said. Another thing that may help (depending on how big your dependencies are) is installing with pip using the --no-cache-dir option. Also, make sure you understand the build context, and consider using either a .dockerignore or sending the context to another directory (it totally depends on how your directory is set up).
I've also had luck exploring an image using dive. Honestly, this looks like a pretty big image, so I'm not sure how much you'll be able to get it down.
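A short sketch of the pattern both suggestions describe (package names here are only illustrative): the cleanup has to happen in the same RUN statement as the install, and each pip call gets --no-cache-dir:
# install and clean the apt cache in one layer
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*
# avoid keeping pip's download cache in the layer
RUN pip install --no-cache-dir -r /requirements.txt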
To squash a (Docker) container image without rebuilding it or manipulating the original Dockerfile, you can extend from your image and squash it:
docker build --squash -t your_image_squashed - <<< "FROM your_image"
It's very easy, just use
docker commit YOUR_CONTAINER_ID NEW_IMAGE_ID
Docker will throw away the intermediate layers; you lose the history, but the size is small.

Docker LAMP stack - lstat apache_default: no such file or directory?

I tried to install a LAMP stack with a Dockerfile in a directory on my desktop, /home/username/Desktop/docker/lamp/:
FROM ubuntu:16.04
VOLUME ["/var/www"]
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y \
apache2 \
php7.0 \
php7.0-cli \
libapache2-mod-php7.0 \
php7.0-gd \
php7.0-json \
php7.0-ldap \
php7.0-mbstring \
php7.0-mysql \
php7.0-pgsql \
php7.0-sqlite3 \
php7.0-xml \
php7.0-xsl \
php7.0-zip \
php7.0-soap
COPY apache_default /etc/apache2/sites-available/000-default.conf
COPY run /usr/local/bin/run
RUN chmod +x /usr/local/bin/run
RUN a2enmod rewrite
EXPOSE 80
CMD ["/usr/local/bin/run"]
Then on my terminal, I run it:
$ docker build -t docker-lamp .
Step 4 : COPY apache_default /etc/apache2/sites-available/000-default.conf
lstat apache_default: no such file or directory
What have I done wrong? How can I fix this?
First of all, are you 100% sure the files apache_default and run are in the same directory as the Dockerfile?
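A quick way to verify is to list the build context right before building, since COPY sources are resolved relative to it (a sketch, assuming the build is run from the lamp directory):
cd /home/username/Desktop/docker/lamp
ls -la                            # apache_default and run must show up here
cat .dockerignore 2>/dev/null     # and must not be excluded by a .dockerignore
docker build -t docker-lamp .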

Multiple dependent ? Dockerfiles building a LAMP container

I'm trying to build a LAMP container, and I have already built several containers (httpd 2.4.23, redis 3.0.7, mysql 5.6.30) by compiling them myself from downloaded source archives. I have based all of them on the debian image.
Now that I'm doing the php 5.6.20 container, it complains that it does not know about apache and mysql.
Here is the Dockerfile for the php container:
FROM debian
RUN apt-get update
RUN apt-get install -y build-essential;
RUN apt-get install -y cmake;
RUN apt-get install -y libfreetype6-dev libjpeg-dev libpng12-dev libcurl4-openssl-dev libbz2-dev libxml2-dev libxslt-dev libgd2-xpm-dev php5-imap libz-dev
WORKDIR /usr/bin/
COPY php-5.6.20.tar.gz /usr/bin/
RUN gzip -d php-5.6.20.tar.gz
RUN tar -xvf php-5.6.20.tar
RUN ln -s php-5.6.20 php
WORKDIR /usr/bin/php/
RUN ./configure \
--prefix=/usr/bin/ \
--with-apxs2=/usr/bin/apache/bin/apxs \
--with-config-file-path=/usr/bin/php-5.6.20/ \
--enable-libgcc \
--with-mysqli=/usr/bin/mysql/mysql_config \
--with-zlib-dir=/usr \
--with-jpeg-dir=/usr \
--with-png-dir=/usr \
--with-gd \
--enable-gd-native-ttf \
--with-freetype-dir=/usr \
--enable-ftp \
--enable-xml \
--enable-zip \
--with-bz2 \
--enable-wddx \
--without-pear \
--enable-mbstring \
--with-curl
RUN make
RUN make install
I wonder if I should instead base it on FROM httpd:2.4.23. But then I would need to base httpd on the mysql one and/or on the redis one... I don't really like that setup.
I have also installed Docker Compose but I wonder if it could be helpful in my situation.
UPDATE: Here is the fully working Dockerfile
FROM debian
RUN apt-get update
RUN apt-get install -y build-essential;
RUN apt-get install -y cmake;
RUN apt-get install -y openssl libssl-dev;
RUN apt-get install -y libpcre3 libpcre3-dev
WORKDIR /usr/bin/
COPY httpd-2.4.23.tar.gz /usr/bin/
RUN gzip -d httpd-2.4.23.tar.gz
RUN tar -xvf httpd-2.4.23.tar
RUN ln -s httpd-2.4.23 httpd
COPY apr-1.5.2.tar.gz /usr/bin/httpd/srclib/
COPY apr-util-1.5.4.tar.gz /usr/bin/httpd/srclib/
WORKDIR /usr/bin/httpd/srclib/
RUN gzip -d apr-1.5.2.tar.gz
RUN gzip -d apr-util-1.5.4.tar.gz
RUN tar -xvf apr-1.5.2.tar
RUN tar -xvf apr-util-1.5.4.tar
RUN ln -s apr-1.5.2 apr;
RUN ln -s apr-util-1.5.4 apr-util
WORKDIR /usr/bin/httpd/
RUN ./configure \
--prefix=/usr/bin/apache \
--enable-rewrite \
--enable-deflate \
--enable-ssl
RUN make
RUN make install
RUN apt-get update
RUN apt-get install -y libncurses-dev
COPY mysql-5.6.30.tar.gz /usr/bin/
WORKDIR /usr/bin/
RUN gzip -d mysql-5.6.30.tar.gz
RUN tar -xvf mysql-5.6.30.tar
RUN ln -s mysql-5.6.30 mysql
WORKDIR /usr/bin/mysql/
RUN mkdir install; mkdir install/data; mkdir install/var; mkdir install/etc; mkdir install/tmp
RUN cd /usr/bin/mysql/; cmake \
-DCMAKE_INSTALL_PREFIX=/usr/bin/mysql/install \
-DWITH_INNOBASE_STORAGE_ENGINE=1 \
-DMYSQL_DATADIR=/usr/bin/mysql/install/data \
-DDOWNLOAD_BOOST=1 \
-DWITH_BOOST=/usr/bin/mysql/install/boost \
-DMYSQL_UNIX_ADDR=/usr/bin/mysql/install/tmp/mysql.sock
RUN make
RUN make install
RUN apt-get update
RUN apt-get install -y libfreetype6-dev libjpeg-dev libpng12-dev libcurl4-openssl-dev libbz2-dev libxml2-dev libxslt-dev libgd2-xpm-dev php5-imap libz-dev
WORKDIR /usr/bin/
COPY php-5.6.20.tar.gz /usr/bin/
RUN gzip -d php-5.6.20.tar.gz
RUN tar -xvf php-5.6.20.tar
RUN ln -s php-5.6.20 php
WORKDIR /usr/bin/php/
RUN ./configure \
--prefix=/usr/bin/php \
--with-apxs2=/usr/bin/apache/bin/apxs \
--with-config-file-path=/usr/bin/php-5.6.20/ \
--enable-libgcc \
--with-mysqli=/usr/bin/mysql/install/bin/mysql_config \
--with-zlib-dir=/usr \
--with-jpeg-dir=/usr \
--with-png-dir=/usr \
--with-gd \
--enable-gd-native-ttf \
--with-freetype-dir=/usr \
--enable-ftp \
--enable-xml \
--enable-zip \
--with-bz2 \
--enable-wddx \
--without-pear \
--enable-mbstring \
--with-openssl \
--with-curl
RUN make
RUN make install
ENTRYPOINT ["/usr/bin/apache/bin/apachectl", "start", "-D FOREGROUND"]
EXPOSE 80
# Build the container: docker build -t stephaneeybert/httpd:2.4.23 .
# Run the container: docker run -d -p 127.0.0.1:80:80 --name httpd stephaneeybert/httpd:2.4.23
# Check that the port is open: nmap -p 8081 localhost
If you need apache running in your container, you can install apache in your image with the above Dockerfile, just the same as you install the build-essential packages, which means:
RUN apt-get install -y apache2
or a similar command. If you also need configuration for this apache application, you can use the ADD or COPY instruction to add your configuration file from outside into your container. More details can be found here.
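A minimal sketch of that first option, with a hypothetical my-site.conf standing in for your own configuration file:
RUN apt-get update && apt-get install -y apache2
# my-site.conf is a hypothetical file in your build context
COPY my-site.conf /etc/apache2/sites-available/000-default.conf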
If you need apache as an independent container, you can use docker-compose to achieve it. Start apache in another container, then use depends_on to configure the dependency between your containers. You can use ports to publish the port of each container so they can communicate with each other.
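A sketch of the second option, with hypothetical per-service build directories; depends_on controls start order, ports publishes a port to the host, and the services reach each other by service name on the default compose network:
version: "3.7"
services:
  mysql:
    build: ./mysql          # hypothetical directory containing your MySQL Dockerfile
  httpd:
    build: ./httpd          # hypothetical directory containing your Apache Dockerfile
    ports:
      - "80:80"             # publish Apache on the host
  php:
    build: ./php
    depends_on:             # start mysql and httpd before php
      - mysql
      - httpd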
