Creating my first docker container: can't find host system file on build

I'm trying to bundle my Jekyll blog as a docker container.
I found this Dockerfile which seems to suit my use case but wanted to be more hands on so I copied it directly into my repo:
FROM ruby:latest
MAINTAINER Peter Etelej <peter#etelej.com>
RUN apt-get -qq update && \
    apt-get -qq install nodejs -y && \
    gem install -q bundler
RUN mkdir -p /etc/jekyll && \
    printf 'source "https://rubygems.org"\ngem "github-pages"\ngem "execjs"\ngem "rouge"' > /etc/jekyll/Gemfile && \
    printf "\nBuilding required Ruby gems. Please wait..." && \
    bundle install --gemfile /etc/jekyll/Gemfile --clean --quiet
RUN apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ENV BUNDLE_GEMFILE /etc/jekyll/Gemfile
EXPOSE 4000
ENTRYPOINT ["bundle", "exec"]
CMD ["jekyll", "serve","--host=0.0.0.0"]
When I run it I get an error
jekyll 3.4.3 | Error: No such file or directory # rb_sysopen - /etc/modules-load.d/modules.conf
The host system has this file, but my assumption was that the container didn't have access to it, so I tried to add it in the Dockerfile:
ADD /etc/modules-load.d/modules.conf /etc/modules-load.d/modules.conf
I then run docker build and get the error
lstat etc/modules-load.d/: no such file or directory
I don't understand why the container is looking for this file in the first place, but I'm even more confused by the fact that I can't add a file that is clearly there.

Docker builds run on the docker host, not necessarily the client where you run the command, so all the files needed for the build are sent to the host in the build context. That context is most often the current directory, ., that you pass at the end of the docker build -t $image_name . command.
Everything that you try to include in the image with a COPY or ADD is resolved relative to that build context, not the root filesystem of your client or the docker host. So if you need a modules.conf, you'll first need to copy it into the directory with the Dockerfile, and then COPY the file from there.
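For example, a minimal sketch of that staging step, run on the machine where you invoke docker build:
# copy the file into the build context (the directory containing the Dockerfile)
cp /etc/modules-load.d/modules.conf .
Then reference it in the Dockerfile relative to the context:
# the source path is resolved against the build context, not the host filesystem
COPY modules.conf /etc/modules-load.d/modules.conf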
As for why jekyll is looking for the file, I'm not familiar with jekyll, but it doesn't look promising for something running inside a container. Kernel modules are specific to the host's kernel, and containers are designed to be moved between hosts with potentially different kernels.

Related

Run protoc command in a docker container

I'm trying to run the protoc command in a docker container.
I've tried using the gRPC image, but the protoc command is not found:
/bin/sh: 1: protoc: not found
So I assume I have to install it manually using RUN instructions, but is there a better solution? An official precompiled image with protoc installed?
Also, I've tried to install it via a Dockerfile, but I'm again getting protoc: not found.
This is my Dockerfile:
#I'm not using "FROM grpc/node" because that image can't unzip
FROM node:12
...
# Download proto zip
ENV PROTOC_ZIP=protoc-3.14.0-linux-x86_32.zip
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/${PROTOC_ZIP}
RUN unzip -o ${PROTOC_ZIP} -d ./proto
RUN chmod 755 -R ./proto/bin
ENV BASE=/usr/local
# Copy into path
RUN cp ./proto/bin/protoc ${BASE}/bin
RUN cp -R ./proto/include/* ${BASE}/include
RUN protoc -I=...
I've done RUN echo $PATH to ensure the folder is in path and is ok:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Also RUN ls -la /usr/local/bin to check protoc file is into the folder and it shows:
-rwxr-xr-x 1 root root 4849692 Jan 2 11:16 protoc
So the file is in /usr/local/bin and that folder is in the PATH.
Have I missed something?
Also, is there a simple way to get an image with protoc installed, or is the best option to generate my own image and pull it from my repository?
Thanks in advance.
Edit: Solved by downloading the linux-x86_64 zip file instead of x86_32. I had downloaded the lower-architecture build thinking an x86_64 machine could run an x86_32 file, but not the other way around. I don't know if I'm missing something about architecture requirements (probably), or if it's a bug.
Anyway, in case it helps someone, I found the solution and I've added an answer with the necessary Dockerfile to run protoc and protoc-gen-grpc-web.
The easiest way to get non-default tools like this is to install them through the underlying Linux distribution's package manager.
First, look at the Docker Hub page for the node image. (For "library" images like node, construct the URL https://hub.docker.com/_/node.) You'll notice that there are several variations named "alpine", "buster", or "stretch"; plain node:12 is the same as node:12-stretch and node:12.20.0-stretch. The "alpine" images are based on Alpine Linux; the "buster" and "stretch" ones are different versions of Debian GNU/Linux.
For Debian-based images, you can then look up the package on https://packages.debian.org/ (type protoc into the "Search the contents of packages" form at the bottom of the page). That leads you to the protobuf-compiler package. Knowing that it contains the protoc binary, you can install it in your Dockerfile with:
# Debian-based
FROM node:12
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      protobuf-compiler
# The rest of your Dockerfile as above
COPY ...
RUN protoc ...
You generally must run apt-get update and apt-get install in the same RUN command, lest a subsequent rebuild get an old version of the package cache from the Docker build cache. I generally have only a single apt-get install command if I can manage it, with the package list alphabetized, one package per line, for maintainability.
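As an aside, the distribution may not ship the version you need (the asker's own answer below ran into exactly that). A quick sketch for checking this before committing to the package-manager route:
# show which protobuf-compiler version the Debian repositories provide
RUN apt-cache policy protobuf-compiler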
If the image is Alpine-based, you can do a similar search on https://pkgs.alpinelinux.org/contents to find protoc, and similarly install it:
FROM node:12-alpine
RUN apk add --no-cache protoc
# The rest of your Dockerfile as above
Finally I solved my own issue.
The problem was the arch version: I was using linux-x86_32.zip, but it works using linux-x86_64.zip. (A 32-bit binary needs the 32-bit dynamic loader and libraries, which the 64-bit node image doesn't include, so the shell reports the misleading protoc: not found.)
Even though #David Maze's answer is excellent and very complete, it didn't solve my problem, because apt-get install gives version 3.0.0 and I wanted 3.14.0.
So, the Dockerfile I have used to run protoc in a docker container is like this:
FROM node:12
...
# Download proto zip
ENV PROTOC_ZIP=protoc-3.14.0-linux-x86_64.zip
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/${PROTOC_ZIP}
RUN unzip -o ${PROTOC_ZIP} -d ./proto
RUN chmod 755 -R ./proto/bin
ENV BASE=/usr
# Copy into path
RUN cp ./proto/bin/protoc ${BASE}/bin/
RUN cp -R ./proto/include/* ${BASE}/include/
# Download protoc-gen-grpc-web
ENV GRPC_WEB=protoc-gen-grpc-web-1.2.1-linux-x86_64
ENV GRPC_WEB_PATH=/usr/bin/protoc-gen-grpc-web
RUN curl -OL https://github.com/grpc/grpc-web/releases/download/1.2.1/${GRPC_WEB}
# Copy into path
RUN mv ${GRPC_WEB} ${GRPC_WEB_PATH}
RUN chmod +x ${GRPC_WEB_PATH}
RUN protoc -I=...
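As a quick sanity check (a sketch using only tools already present in the image), you can confirm the image architecture matches the release zip you download:
# should print x86_64; if so, use the linux-x86_64 release assets
RUN uname -m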
Because this is currently the highest-ranked result on Google and the instructions above won't work there: if you want to use docker/dind for e.g. GitLab, this is how you can get the glibc dependency working for protoc:
#!/bin/bash
# install gcompat, because protoc needs a real glibc or compatible layer
apk add gcompat
# install a recent protoc (use a version that fits your needs)
export PB_REL="https://github.com/protocolbuffers/protobuf/releases"
curl -LO $PB_REL/download/v3.20.0/protoc-3.20.0-linux-x86_64.zip
unzip protoc-3.20.0-linux-x86_64.zip -d $HOME/.local
export PATH="$PATH:$HOME/.local/bin"

Why does my Dockerfile build but not work correctly, even though the same commands work manually?

I've been trying to get this running for many, MANY hours. I've been scouring Docker docs, GitHub repos, and other resources, but I can't get it working for some reason.
My dockerfile:
FROM mattrayner/lamp:latest-1804
WORKDIR /app
RUN wget -O /tmp/lwt.zip http://downloads.sourceforge.net/project/lwt/lwt_v_1_6_3.zip && \
    yes A | unzip /tmp/lwt.zip && \
    rm /tmp/lwt.zip && \
    mv connect_xampp.inc.php connect.inc.php
EXPOSE 80
CMD ["/run.sh"]
It builds normally without any errors, but when I run the image nothing appears in the /app directory and I just get a basic Welcome to LAMP view in my browser.
However, if I do docker run -p "80:80" -it -v ${PWD}/app:/app mattrayner/lamp:latest-1804 /bin/bash, cd /app, and copy-paste
wget -O /tmp/lwt.zip http://downloads.sourceforge.net/project/lwt/lwt_v_1_6_3.zip && \
    yes A | unzip /tmp/lwt.zip && \
    rm /tmp/lwt.zip && \
    mv connect_xampp.inc.php connect.inc.php
it still doesn't work, BUT if I exit and run the same docker run command again, it works.
The Docker LAMP instructions also tell you to do exactly what I have done:
FROM mattrayner/lamp:latest-1804
# Your custom commands
CMD ["/run.sh"]
As I followed these instructions, I thought everything would work nicely.
What's the catch here? It probably has something to do with the intermediate containers, but I can't comprehend it (I'm not a devops engineer or developer by trade, just a hobbyist).
That happens because you're doing this:
Downloading a file (wget ...) into the /app dir in your docker image.
Then overwriting that /app dir with the content of your $PWD/app when you mount the volume.
If you install something into a directory during docker build, don't mount a volume onto the same path.
If you need something in the same path, you can mount some concrete files, but not the whole dir, or it will override what you had in your docker image when the container is created.
You can run the wget somewhere else, or download everything into your ${PWD}/app on the host and then mount it, as in the sketch below.
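For example, a minimal sketch of that last option, assuming the same paths as in the question:
# on the host: download and unpack LWT into the directory you'll mount
wget -O /tmp/lwt.zip http://downloads.sourceforge.net/project/lwt/lwt_v_1_6_3.zip
yes A | unzip /tmp/lwt.zip -d ${PWD}/app
mv ${PWD}/app/connect_xampp.inc.php ${PWD}/app/connect.inc.php
# then mount it; the container now sees the populated /app
docker run -p "80:80" -v ${PWD}/app:/app mattrayner/lamp:latest-1804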

My custom beat can't find custombeat.yml when I try to run it from a container

So, I have built a beat with mage GenerateCustomBeat and it runs okay, except that now I'm trying to containerize it. When I run the image I built, it complains that no customBeat.yml was found.
I have verified that the file exists in the folder by adding a RUN ls . line at the end of my Dockerfile.
The beat name is coletorbeat, so this name appears multiple times inside the Dockerfile.
Upon executing sudo docker run coletorbeat I have the following error message:
Exiting: error loading config file: stat coletorbeat.yml: no such file or directory
If there were a way to specify the location of the coletorbeat.yml file when I execute the beat in CMD, I think that would solve it, but I have not found out how to do so yet.
I'll post the Dockerfile below. I know the code inside the beater folder works fine, so I'm guessing I'm making some mistake in the containerization.
Dockerfile:
FROM ubuntu
MAINTAINER myNameHere
ARG ${ip:-"333.333.333.333"}
ARG ${porta:-"4343"}
ARG ${dataInicio:-"2020-01-07"}
ARG ${dataFim:-"2020-01-07"}
ARG ${tipoEquipamento:-"type"}
ARG ${versao:-"2"}
ARG ${nivel:-"0"}
ARG ${instituicao:-"RJ"}
ADD . .
RUN mkdir /etc/coletorbeat
COPY /coletorbeat/coletorbeat.yml /etc/coletorbeat/coletorbeat.yml
RUN apt-get update && \
apt-get install -y wget git
RUN wget https://storage.googleapis.com/golang/go1.14.4.linux-amd64.tar.gz
RUN tar -zxvf go1.14.*.linux-amd64.tar.gz -C /usr/local
RUN mkdir /go
ENV GOROOT /usr/local/go
ENV GOPATH $HOME/go
ENV PATH $PATH:$GOROOT/bin:$GOPATH/bin
RUN echo $PATH
RUN go get -u -d github.com/magefile/mage
RUN cd $GOPATH/src/github.com/magefile/mage && \
go run bootstrap.go
RUN apt-get install -y python3-venv
RUN apt-get install -y build-essential
RUN cd /coletorbeat && chmod go-w coletorbeat.yml && ./coletorbeat setup
RUN cd /coletorbeat && ./coletorbeat test config -c /coletorbeat/coletorbeat.yml && ls .
CMD ./coletorbeat/coletorbeat -E 'coletorbeat.ip=${ip}'
You are adding the yml file into the /etc dir:
COPY /coletorbeat/coletorbeat.yml /etc/coletorbeat/coletorbeat.yml
But you are then running the commands in /coletorbeat, not in /etc/coletorbeat.
On the CMD line in the Dockerfile, I added the command cd /mybeatfolder and it worked. Libbeat searches the current folder for the config file by default, so moving to the right directory before executing my beat solved it.
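A slightly cleaner equivalent (a sketch, reusing the -c flag already shown in the question's test config line) is to set the working directory in the Dockerfile instead of cd-ing inside CMD, or to point the beat at the config explicitly:
# either run the beat from its own folder so libbeat finds the yml...
WORKDIR /coletorbeat
CMD ./coletorbeat -E 'coletorbeat.ip=${ip}'
# ...or keep the working directory and pass the config path with -c
CMD /coletorbeat/coletorbeat -c /coletorbeat/coletorbeat.yml -E 'coletorbeat.ip=${ip}'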

Mounting docker volume on specific path

I am trying to deploy photo-stream (https://github.com/maxvoltar/photo-stream) using a docker container. Photo-stream is a picture publishing site meant for self-hosting. It expects its pictures in a path called 'photos/original/', relative to where it's installed. It will create other directories under 'photos/' to cache thumbnails and such.
When I populate that directory with some pictures and start the application natively (without docker) from its build directory using:
$ bundle exec jekyll serve --host 0.0.0.0
it shows me the pictures I put in that directory. When running the application inside a docker container, I need it to mount a volume that contains a path 'photos/original' so that I can keep my pictures there. I have created this path on a disk mounted at /mnt/data/.
In order to do that, I have added a volume line to the existing Dockerfile:
FROM ruby:latest
ENV VIPSVER 8.9.1
RUN apt update && apt -y upgrade && apt install -y build-essential
RUN wget -O ./vips-$VIPSVER.tar.gz https://github.com/libvips/libvips/releases/download/v$VIPSVER/vips-$VIPSVER.tar.gz
RUN tar -xvzf ./vips-$VIPSVER.tar.gz && cd vips-$VIPSVER && ./configure && make && make install
COPY ./ /photo-stream
WORKDIR /photo-stream
RUN ruby -v && gem install bundler jekyll && bundle install
VOLUME /photo-stream/photos
EXPOSE 4000
ENTRYPOINT bundle exec jekyll serve --host 0.0.0.0
I build the container this way:
$ docker build --tag photo-stream:1.0 .
I run the container this way:
$ docker run -d -p 4000:4000 -v /mnt/data/photos/:/photos/ --name ps photo-stream:1.0
I was expecting the content of the directory /mnt/data/photos to be shown. Instead, nothing is shown. However, a volume '/var/lib/docker/volumes/e5ff426ced2a5e786ced6b47b67d7dee59160c60f59f481516b638805b731902/_data' is created, and when that is populated with pictures, those are shown.
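A likely explanation, judging from the Dockerfile above: VOLUME /photo-stream/photos declares the mount point under /photo-stream, while the run command mounts the host directory at /photos/, so Docker creates an anonymous volume for the declared path instead. A sketch of a run command whose target matches the VOLUME declaration:
docker run -d -p 4000:4000 -v /mnt/data/photos/:/photo-stream/photos/ --name ps photo-stream:1.0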

PHP and redis in same docker image

I'm trying to add redis to a php:7.0-apache image, using this Dockerfile:
FROM php:7.0-apache
RUN apt-get update && apt-get -y install build-essential tcl
RUN cd /tmp \
&& curl -O http://download.redis.io/redis-stable.tar.gz \
&& tar xzvf redis-stable.tar.gz \
&& cd redis-stable \
&& make \
&& make install
COPY php.ini /usr/local/etc/php/
COPY public /var/www/html/
RUN chown -R root:www-data /var/www/html
RUN chmod -R 1755 /var/www/html
RUN find /var/www/html -type d -exec chmod 1775 {} +
RUN mkdir -p /var/redis/6379
COPY 6379.conf /etc/redis/6379.conf
COPY redis_6379 /etc/init.d/redis_6379
RUN chmod 777 /etc/init.d/redis_6379
RUN update-rc.d redis_6379 defaults
RUN service apache2 restart
RUN service redis_6379 start
It builds and runs fine, but redis is never started. When I run /bin/bash inside my container and manually input "service redis_6379 start" it works, so I'm assuming my .conf and init.d files are okay.
While I'm aware it'd be much easier using docker-compose, I'm specifically trying to avoid it for specific reasons.
There are multiple things wrong here:
Starting processes in a Dockerfile has no effect. A Dockerfile builds an image; the processes need to be started at container start time. This can be done with an entrypoint, defined in the Dockerfile using ENTRYPOINT. That entrypoint is typically a script that is executed when an actual container is started (see the sketch below).
There is no init process in docker by default, so service calls will fail without further work. If you need to start multiple processes, you can look at the docs of the supervisord program.
Running both redis and a webserver in one container is not best practice. For a php application using redis you'd typically have two containers: one running redis and one running apache, interacting via the network.
I suggest you read the docker documentation before continuing. All of this is described in depth there.
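For illustration, a minimal entrypoint sketch (hypothetical file name docker-entrypoint.sh; it assumes the redis config from the question and the apache2-foreground script that php:7.0-apache images run as their default command):
#!/bin/sh
# start redis in the background using the config copied in the Dockerfile
redis-server /etc/redis/6379.conf --daemonize yes
# hand off to apache in the foreground so the container keeps running
exec apache2-foreground
In the Dockerfile, the service lines would then be replaced with:
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]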
I agree with #Richard. Use two or more containers according to your needs, then --link them to get things working!
