docker gulp build failing with nginx

I am using this sample repository to create a Docker image of Nginx with Let's Encrypt -
https://github.com/umputun/nginx-le
Now, I am getting the following error when I test my nginx configuration -
PEM_read_bio_X509_AUX("/etc/nginx/ssl/") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: TRUSTED CERTIFICATE)
nginx: [emerg] PEM_read_bio_X509_AUX("/etc/nginx/ssl/") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: TRUSTED CERTIFICATE)
Now, as described in the GitHub repo above, I have this in my Dockerfile -
FROM nginx:stable-alpine
ADD conf/nginx.conf /etc/nginx/nginx.conf
ADD conf/service.conf /etc/nginx/conf.d/service.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /usr/build/app/dist /usr/share/nginx/html
ADD script/entrypoint.sh /entrypoint.sh
ADD script/le.sh /le.sh
RUN \
rm /etc/nginx/conf.d/default.conf && \
chmod +x /entrypoint.sh && \
chmod +x /le.sh && \
apk add --update certbot tzdata openssl && \
rm -rf /var/cache/apk/*
CMD ["/entrypoint.sh"]
-----Updated Dockerfile---------
FROM nginx:latest
RUN curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
RUN apt-get install -y nodejs
RUN apt-get install -y build-essential
RUN curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -y && sudo apt-get install -y yarn
RUN mkdir -p /usr/build
WORKDIR /usr/build
COPY package.json .
#COPY package-lock.json .
COPY bower.json .
COPY .bowerrc .
RUN npm install --quiet
RUN npm install -g gulp bower --quiet
RUN bower install --allow-root
RUN mkdir /usr/build/app
RUN cp -R /usr/build/node_modules /usr/build/app
RUN cp -R /usr/build/bower_components /usr/build/app
RUN cp -R /usr/build/*.json /usr/build/app/
RUN cp /usr/build/.bowerrc /usr/build/app/
COPY src /usr/build/app
RUN mkdir /usr/build/app/gulp
ADD gulp/* /usr/build/app/gulp/
ADD gulpfile.js /usr/build/app
WORKDIR /usr/build/app
RUN ls -al .
RUN rm -rf /usr/build/app/dist
RUN mkdir /usr/build/app/dist
RUN gulp build
RUN ls -al /usr/build/app
#RUN yum -y install nodejs
#RUN yum install gcc-c++ make
ADD conf/nginx.conf /etc/nginx/nginx.conf
#ADD conf/service.conf /etc/nginx/conf.d/service.conf
RUN rm -rf /usr/share/nginx/html/*
RUN ls -al /usr/share/nginx/ && ls -al /usr/share/nginx/html/ && ls -al /usr/build/app/dist/
RUN mv /usr/build/app/dist/* /usr/share/nginx/html/
#ADD script/entrypoint.sh /entrypoint.sh
#ADD script/le.sh /le.sh
RUN rm /etc/nginx/conf.d/default.conf && \
chmod +x /entrypoint.sh
CMD ["/entrypoint.sh"]
Now it reaches entrypoint.sh successfully, and I have verified that the files in my nginx webroot are copied along with the conf.
-------updated issue---------
So, I figured it was not creating the SSL keys because it was not able to generate any HTML files as part of the "gulp build:dev" command, and was therefore
throwing an error. So I updated my entrypoint to drop Let's Encrypt for now and only load the nginx conf, like this -
#!/bin/sh
echo "start nginx"
export TZ="America/Chicago"
cp /usr/share/zoneinfo/${TZ} /etc/localtime && echo ${TZ} > /etc/timezone
echo "ssl_key=${SSL_KEY:=le-key.pem}, ssl_cert=${SSL_CERT:=le-crt.pem}, ssl_chain_cert=${SSL_CHAIN_CERT:=le-chain-crt.pem}"
SSL_KEY=/etc/nginx/ssl/${SSL_KEY}
SSL_CERT=/etc/nginx/ssl/${SSL_CERT}
SSL_CHAIN_CERT=/etc/nginx/ssl/${SSL_CHAIN_CERT}
mkdir -p /etc/nginx/conf.d
mkdir -p /etc/nginx/ssl
#copy /etc/nginx/nginx*.conf into conf.d if any nginx*.conf is mounted
if [ -f /etc/nginx/nginx*.conf ]; then
cp -fv /etc/nginx/nginx*.conf /etc/nginx/conf.d/
fi
#replace SSL_KEY, SSL_CERT and SSL_CHAIN_CERT by actual keys
ls -al /etc/nginx/conf.d
#sed -i "s|SSL_KEY|${SSL_KEY}|g" /etc/nginx/conf.d/*.conf
#sed -i "s|SSL_CERT|${SSL_CERT}|g" /etc/nginx/conf.d/*.conf
#sed -i "s|SSL_CHAIN_CERT|${SSL_CHAIN_CERT}|g" /etc/nginx/conf.d/*.conf
#generate dhparams.pem
if [ ! -f /etc/nginx/ssl/dhparams.pem ]; then
echo "make dhparams"
cd /etc/nginx/ssl
openssl dhparam -out dhparams.pem 2048
chmod 600 dhparams.pem
fi
#disable ssl configuration and let it run without SSL
mv -v /etc/nginx/conf.d /etc/nginx/conf.d.disabled
(
sleep 5 #give nginx time to start
echo "start letsencrypt updater"
while :
do
echo "trying to update letsencrypt ..."
# /le.sh
rm -f /etc/nginx/conf.d/default.conf 2>/dev/null #remove default config, conflicting on 80
mv -v /etc/nginx/conf.d.disabled /etc/nginx/conf.d #enable
echo "reload nginx with ssl"
ls -al /etc/nginx/ssl
echo "key contents are - "
cat /etc/nginx/ssl/dhparams.pem
nginx -t
nginx -s reload
sleep 60d
done
) &
nginx -g "daemon off;"
So here, at the end of the script, when I test the nginx configuration it gives the error shown at the top.
-----Update-----
So, now I am getting this error -
Step 38/40 : RUN mv /usr/build/app/dist/* /usr/share/nginx/html/
---> Running in 9f59c1d5cb90
mv: cannot stat '/usr/build/app/dist/*': No such file or directory
The command '/bin/sh -c mv /usr/build/app/dist/* /usr/share/nginx/html/' returned a non-zero code: 1
Logs for the gulp build:dev command are -
---> Running in 1acca8373940
[14:40:09] Using gulpfile /usr/build/app/gulpfile.js
[14:40:09] Starting 'scripts'...
[14:40:09] Starting 'styles'...
[14:40:09] Starting 'fonts-dev'...
[14:40:10] Starting 'other-dev'...
[14:40:10] Finished 'scripts' after 1.14 s
[14:40:10] Finished 'styles' after 1.13 s
[14:40:10] Starting 'inject'...
[14:40:10] Finished 'other-dev' after 39 ms
[14:40:10] Finished 'inject' after 29 ms
[14:40:10] Starting 'html-dev'...
[14:40:10] Finished 'html-dev' after 288 ms
[14:40:11] Finished 'fonts-dev' after 2.44 s
[14:40:11] Starting 'build:dev'...
[14:40:11] Finished 'build:dev' after 123 μs
Removing intermediate container 1acca8373940
which suggests that the gulp build was successful, but still, in this step -
Step 34/40 : RUN ls -al /usr/build/app/dist
---> Running in c141120c29dc
total 8
drwxr-xr-x 2 root root 4096 Apr 27 14:35 .
drwxr-xr-x 1 root root 4096 Apr 27 14:35 ..
Removing intermediate container c141120c29dc
I am not getting anything in the dist directory. Any suggestions for debugging or solving this?
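One debugging step I can still add myself (just a sketch; it assumes the output path for the tasks is configured somewhere in gulpfile.js or the gulp/ directory copied above) is to grep for where the dist path actually points, and to list the output in the same RUN layer as the build that should fill it -
# hypothetical debug steps: confirm where the gulp tasks are configured to write,
# and list dist in the same layer as the build
RUN grep -rn "dist" gulpfile.js gulp/ || true
RUN gulp build && ls -al /usr/build/app/dist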

Related

composer install in Dockerfile not saving dependencies

I have some trouble dockerizing a Symfony project. On the first start after cloning from the git repo, the dependencies have to be installed through Composer.
I have read many questions with the same background, but I can't get it working.
First, here is my Dockerfile:
ARG PHP_VERSION=8.1
ARG APP_ENV=dev
# Prod image
FROM php:${PHP_VERSION}-fpm-alpine AS app_php
# Update
RUN apk --no-cache update
RUN apk --no-cache add bash git
# Install Node
RUN apk --no-cache add --update nodejs npm
RUN apk --no-cache add --update python3
RUN apk --no-cache add --update make
RUN apk --no-cache add --update g++
# Install pdo
RUN docker-php-ext-install pdo_mysql
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Symfony CLI
RUN curl -sS https://get.symfony.com/cli/installer | bash && mv /root/.symfony/bin/symfony /usr/local/bin/symfony
# WORK DIR
WORKDIR /var/www/html
# https://getcomposer.org/doc/03-cli.md#composer-allow-superuser
ENV COMPOSER_ALLOW_SUPERUSER=1
ENV PATH="${PATH}:/root/.composer/vendor/bin"
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
# prevent the reinstallation of vendors at every changes in the source code
COPY composer.* symfony.* ./
RUN set -eux; \
if [ -f composer.json ]; then \
composer install --prefer-dist --no-dev --no-autoloader --no-scripts --no-progress; \
composer clear-cache; \
fi
RUN set -eux; \
mkdir -p var/cache var/log; \
if [ -f composer.json ]; then \
composer dump-autoload --classmap-authoritative --no-dev; \
composer dump-env prod; \
composer run-script --no-dev post-install-cmd; \
chmod +x bin/console; sync; \
fi
# copy sources
COPY . /var/www/html
RUN rm -Rf docker/
# Start Symfony server on Port 8000
EXPOSE 8000
#RUN symfony console doctrine:migrations:migrate
I can see that the packages are installed during the build process, but after docker-compose up the vendor folder isn't there.
Do you have an idea how to solve this?
Running it for you, indeed there are no vendor files where you would expect them.
If you run a shell in your image, you can see what's really happening:
Get your built image ID or tag with docker image ls
And run it:
docker run --rm -it <IMAGE_ID> /bin/bash
bash-5.1# ls /var/www/html/
Dockerfile composer.1 composer.2 symfony.1 var
bash-5.1# ls -al /root/.composer/
total 8
drwxr-xr-x 1 root root 50 Aug 14 15:47 .
drwx------ 1 root root 16 Aug 14 15:48 ..
-rw-r--r-- 1 root root 799 Aug 14 15:47 keys.dev.pub
-rw-r--r-- 1 root root 799 Aug 14 15:47 keys.tags.pub
bash-5.1# ls /usr/bin/composer
/usr/bin/composer
bash-5.1# ls /usr/local/bin/composer
/usr/local/bin/composer
bash-5.1# which composer
/usr/local/bin/composer
bash-5.1# which symfony
/usr/local/bin/symfony
bash-5.1#
The which command should make you realize that:
you don't need to copy composer when you have already installed it,
/usr/local/bin is already part of the PATH, and
the current ENV command is unnecessary and points to a non-existent folder.
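As a sketch of that simplification (only the relevant lines of the Dockerfile above are shown; composer --version is just a hypothetical sanity check):
# Composer is already installed into /usr/local/bin, which is on the PATH,
# so the COPY --from=composer:2 line and the ENV PATH line can simply be dropped.
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN composer --version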
FYI to keep it slim I have created fake symfony.* and composer.* files and have no composer.json (not shared here).
I hope this helps you solve it.

sh: ./filebeat: not found in Docker container

I'm trying to run Filebeat in a Docker container with the s6 overlay.
When s6 executes the filebeat binary, or when I execute it manually, I get sh: ./filebeat: not found
This is my Dockerfile:
FROM alpine:3.15
ENV AM_I_IN_A_DOCKER_CONTAINER Yes
COPY root/ /
ADD https://github.com/just-containers/s6-overlay/releases/download/v1.21.8.0/s6-overlay-amd64.tar.gz /tmp/
ADD https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.0.0-linux-x86_64.tar.gz /tmp/
ADD requirements.txt /etc/services.d/01_instabot/requirements.txt
ADD src/ /etc/services.d/01_instabot/
RUN chmod +x /usr/local/bin/install.sh
RUN /usr/local/bin/install.sh
#ENTRYPOINT ["/init"]
This is my install.sh:
#!/bin/sh
echo "Unpacking s6 overlay"
gunzip -c /tmp/s6-overlay-amd64.tar.gz | tar -xf - -C /
echo "Creating user"
adduser -D -u 2000 -s /sbin/nologin -D -H botuser
adduser -D -u 2001 -s /sbin/nologin -D -H filebeatuser
echo "Set time"
ln -snf /usr/share/zoneinfo/"$TZ" /etc/localtime && echo "$TZ" > /etc/timezone
apk add --no-cache tzdata
echo "Install filebeat"
gunzip -c /tmp/filebeat-8.0.0-linux-x86_64.tar.gz | \
tar -xf - -C /etc/services.d/00_filebeat/ --strip-components=1
mv /etc/services.d/00_filebeat/my_filebeat.yml /etc/services.d/00_filebeat/filebeat.yml
echo "Install app dependencies"
apk add --no-cache python3 py3-pip
pip3 install --no-cache-dir -r /etc/services.d/01_instabot/requirements.txt
mv /etc/services.d/01_instabot/settings_docker.py /etc/services.d/01_instabot/settings.py
echo "Cleanup"
rm -rf /tmp/*
If I take a look inside the Docker container with the docker run command, I see the binary present:
/etc/services.d/00_filebeat # ls
LICENSE.txt README.md filebeat filebeat.yml module run
NOTICE.txt fields.yml filebeat.reference.yml kibana modules.d
But when I execute it using ./filebeat I get the not found error:
/etc/services.d/00_filebeat # ./filebeat
sh: ./filebeat: not found
Why is this? And how do I fix it? Is it because of BusyBox or something?
libc6-compat was missing from my Alpine image.
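For reference, a minimal sketch of that fix inside install.sh above (it assumes the stock Alpine package name, libc6-compat):
# the filebeat binary is linked against glibc; Alpine ships musl, so sh reports
# "not found" because the expected dynamic loader is missing - add the compat shim
apk add --no-cache libc6-compat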

Docker build Gentoo operation not permitted

I have a docker-compose file with this service to build Gentoo:
default:
build: docker/gentoo
hostname: default.jpo.net
My Dockerfile to set up Gentoo in a multi-stage build is:
FROM gentoo/portage as portage
FROM gentoo/stage3-amd64
COPY --from=portage /usr/portage /usr/portage
RUN emerge --jobs $(nproc) -qv www-servers/apache net-misc/curl net-misc/openssh
RUN /usr/bin/ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key -N ''
RUN /usr/bin/ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ''
RUN sed -i 's/#PubkeyAuthentication/PubkeyAuthentication/' /etc/ssh/sshd_config
RUN mkdir -p /root/.ssh && chmod 700 /root/.ssh && touch /root/.ssh/authorized_keys
RUN wget -O telegraf.tar.gz http://get.influxdb.org/telegraf/telegraf-0.11.1-1_linux_amd64.tar.gz \
&& tar xvfz telegraf.tar.gz \
&& rm telegraf.tar.gz \
&& mv /usr/lib/telegraf /usr/lib64/telegraf \
&& rm -rf /usr/lib && ln -s /usr/lib64 /usr/lib
ADD telegraf.conf /etc/telegraf/telegraf.conf
COPY entrypoint.sh /
COPY infinite_curl.sh /
RUN chmod u+x /entrypoint.sh /infinite_curl.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["telegraf", "-config", "/etc/telegraf/telegraf.conf"]
The problem is that the build fails during the emerge command while it installs packages.
Then I get this error:
PermissionError: [Errno 1] Operation not permitted
* ERROR: dev-libs/apr-1.5.2::gentoo failed (install phase):
* dodoc failed
I tried adding privileged=true in my docker-compose file and adding USER root in my Dockerfile, without success.
I also tried using the latest version of openssh, again without success.
I searched the Internet but haven't found anything that works.
Docker version
Docker version 17.12.0-ce, build c97c6d6
Docker-compose version
docker-compose version 1.18.0, build 8dd22a9
I'm on Ubuntu 16.04, and this build works fine on Ubuntu 17.10 with the same docker/docker-compose versions.
Do you have any clues?
Looking at src_install() for that ebuild, this appears to be an upstream bug.
# Prallel install breaks since apr-1.5.1
#make -j1 DESTDIR="${D}" install || die
There are several bugs related to building apr in parallel.
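If the parallel install really is the culprit, one hedged workaround sketch is to build the affected package with a single make job first, using the standard Portage MAKEOPTS variable, before the main emerge line in the Dockerfile above:
# build dev-libs/apr serially to sidestep the parallel-install breakage
RUN MAKEOPTS="-j1" emerge -qv dev-libs/apr
RUN emerge --jobs $(nproc) -qv www-servers/apache net-misc/curl net-misc/openssh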

Error on building Dockerfile to Image

I have the following Dockerfile. I'm trying to build it into an image, but at Step 6/9 I receive the following error:
ADD service /container/service
ADD failed: stat /mnt/sda1/var/lib/docker/tmp/docker-builder005872257/service: no such file or directory
I don't know why... Can anyone help me?
FROM osixia/light-baseimage:1.1.1
ARG LDAP_OPENLDAP_GID
ARG LDAP_OPENLDAP_UID
RUN if [ -z "${LDAP_OPENLDAP_GID}" ]; then groupadd -r openldap; else groupadd -r -g ${LDAP_OPENLDAP_GID} openldap; fi && if [ -z "${LDAP_OPENLDAP_UID}" ]; then useradd -r -g openldap openldap; else useradd -r -g openldap -u ${LDAP_OPENLDAP_UID} openldap; fi
RUN echo "path-include /usr/share/doc/krb5*" >> /etc/dpkg/dpkg.cfg.d/docker && apt-get -y update && /container/tool/add-service-available :ssl-tools \
&& LC_ALL=C DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
ldap-utils \
libsasl2-modules \
libsasl2-modules-db \
libsasl2-modules-gssapi-mit \
libsasl2-modules-ldap \
libsasl2-modules-otp \
libsasl2-modules-sql \
openssl \
slapd \
krb5-kdc-ldap \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ADD service /container/service
RUN /container/tool/install-service
ADD environment /container/environment/99-default
EXPOSE 389 636
EDIT
After adding some ls commands in the Dockerfile I've seen the following line in logs:
Step 6/11 : RUN ls /container/
---> Running in 623dca399324
environment
service
service-available
tool
Removing intermediate container 623dca399324
---> 5f7fcb8a1857
Step 7/11 : RUN ls
---> Running in 7f3bd8662113
bin
boot
container
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
Removing intermediate container 7f3bd8662113
---> 99c17cefc572
Step 8/11 : ADD service /container/service
ADD failed: stat /mnt/sda1/var/lib/docker/tmp/docker-builder200387466/service: no such file or directory
Any idea how I can resolve this?
The error means it can't find the directory, which means it probably doesn't exist or you are adding it the wrong way.
One thing you can do is create the directory first and then ADD service to it. Below is a snippet that may help put you on track:
RUN mkdir /container/
Then ADD service into the directory you created:
ADD service /container/service
This is only meant to put you on the right track. However, I would recommend #mohan08p's answer above, because that works for me.
It successfully builds on my local machine. Can you delete the respective files or directories and try once more? Also, check the permissions, and check whether a .dockerignore is excluding those files from the ADD. Otherwise, try running the build with the -f or --file option, like:
$ docker build . -f Dockerfile
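A quick way to check both points before rebuilding (a sketch; run these from the directory you use as the build context):
# 'service' (and 'environment') must exist inside the build context ...
ls -d service environment
# ... and must not be excluded by a .dockerignore entry
cat .dockerignore 2>/dev/null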
Hope this helps.

Unable to start container from jenkins

In Jenkins I installed the Docker build step plugin.
In Jenkins, I created a job and, in it, executed a Docker command with "build image" selected. The image is created using the Dockerfile. The Dockerfile is:
FROM ubuntu:latest
#OS Update
RUN apt-get update
RUN apt-get -y install git git-core unzip python-pip make wget build-essential python-dev libpcre3 libpcre3-dev libssl-dev vim nano net-tools iputils-ping supervisor curl supervisor
WORKDIR /home/wipro
#Mongo Setup
RUN curl -O http://downloads.mongodb.org/linux/mongodb-linux-x86_64-3.0.2.tgz && tar -xzvf mongodb-linux-x86_64-3.0.2.tgz && cd mongodb-linux-x86_64-3.0.2/bin && cp * /usr/bin/
#RUN mongod --dbpath /home/azureuser/CI_service/data/ --logpath /home/azureuser/CI_service/log.txt --logappend --noprealloc --smallfiles --port 27017 --fork
#Node Setup
#RUN curl -O https://nodejs.org/dist/v0.12.7/node-v0.12.7.tar.gz && tar -xzvf node-v0.12.7.tar.gz && cd node-v0.12.7
#RUN cd /opt/node-v0.12.7 && ./configure && make && make install
#RUN cp /usr/local/bin/node /usr/bin/ && cp /usr/local/bin/npm /usr/bin/
RUN wget https://nodejs.org/dist/v0.12.7/node-v0.12.7-linux-x64.tar.gz
RUN cd /usr/local && sudo tar --strip-components 1 -xzf /home/wipro/node-v0.12.7-linux-x64.tar.gz
RUN npm install forever -g
#CI SERVICE
ADD prod /home//
ADD servicestart.sh /home/
RUN chmod +x /home/servicestart.sh
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["sh", "/home/servicestart.sh"]
EXPOSE 80
EXPOSE 27017
Then I tried to create the container, and the container was created.
When I tried to start the container, it did not run.
When I checked with the command docker ps -a, it showed the status as Created only.
It is not in a Running or Exited state.
The output of docker ps -a is:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8ac762c4dc84 d85c2d90be53 "sh /home/servi" 15 hours ago Created hungry_liskov
7d8864940515 d85c2d90be53 "sh /home/servi" 16 hours ago Created ciservice
How to start the container using jenkins?
It depends on your container's main command (ENTRYPOINT + CMD).
A Created state (for a non-data-volume container) means the main command failed to execute.
Try docker logs <container_id> to see if there is any error message recorded.
CMD ["sh", "/home/servicestart.sh"] should be:
CMD ["/home/servicestart.sh"]
(The default ENTRYPOINT for Ubuntu should be ["sh", "-c"], so no need to repeat an "sh")
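For reference, a sketch of the suggested checks and change (the container ID is taken from the docker ps -a output above):
# start the created container attached, so any error from the main command prints
docker start -a 8ac762c4dc84
docker logs 8ac762c4dc84
# Dockerfile: run the (already chmod +x'ed) script directly with an exec-form CMD
CMD ["/home/servicestart.sh"]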
