How to set up an older Meteor version in a Dockerfile and Docker container - docker

I have a project running with Meteor and Node.js locally. The Meteor version is 2.4 and the Node.js version is 8.9.4; I have set the .meteor/release file to Meteor 2.2 so that Meteor and Node can work together.
(base) xxx$ meteor --version
Meteor 2.4
(base) xxx$ node -v
v8.9.4
It seems fine, so I deployed this project in a Docker container to the server. The first line I wrote in the Dockerfile:
# node version dependent on meteor version
FROM node:8.9.4
After it deployed successfully, the docker logs showed this error:
Waiting for mongodb server to start - sleeping
warn: --minUptime not set. Defaulting to: 1000ms
warn: --spinSleepTime not set. Your script will exit if it does not stay up for at least 1000ms
info: Forever processing file: /app/bundle/main.js
error: undefined
data: /app/bundle/main.js:34 - Meteor requires Node v12.0.0 or later.
data: /app/bundle/main.js:34 - error: Forever detected script exited with code: 1
I checked inside the container; the Node version is 8.9.4:
(base) [xxx]$ docker exec -it -u root tblbuilder_meteor_1 /bin/bash -c 'node --version'
v8.9.4
So I assume it is the Meteor version. But first, I don't know how to check the Meteor version inside the container. And second, why does this happen? I am sure the release file was up to date when I pushed the project folder.
With some help from a great man, I kind of understand it now. Locally I use Meteor 2.2, so in the Dockerfile I need the Node.js version that works with Meteor 2.2, not 8.9.4. So what is left is to modify the Dockerfile and change Node from 8.9.4 to 12. Below is my Dockerfile; I tried to change it to node 12.22.2, but it keeps giving me errors, and I spent a day trying to solve them. Currently, I am stuck at the install r-base part.
Is there a guide for changing from Node 8 to Node 12?
# node version dependent on meteor version
FROM node:8.9.4
# I am going to use 12.22.2
#FROM node:12.22.2
# (even if copied as root you still need to change ownership)
# https://github.com/moby/moby/issues/6119
COPY ./compose/meteor/entrypoint.sh /entrypoint.sh
COPY ./compose/meteor/run_app.sh /run_app.sh
COPY ./compose/meteor/r-cran.pgp /r-cran.pgp
COPY ./settings/settings.json /app/settings.json
COPY ./requirements.txt /requirements.txt
COPY ./r_requirements.sh /r_requirements.sh
# set locale to utf8: https://github.com/docker-library/docs/pull/703/files
# added [check-valid-until=no] & Acquire::Check-Valid-Until "false"; https://unix.stackexchange.com/questions/508724/failed-to-fetch-jessie-backports-repository
# Needs work to bring it up-to-date
RUN \
echo "deb [check-valid-until=no] http://archive.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/jessie-backports.list && \
sed -i '/deb http:\/\/deb.debian.org\/debian jessie-updates main/d' /etc/apt/sources.list && \
apt-get -o Acquire::Check-Valid-Until=false update && \
\
sh -c 'echo "deb [check-valid-until=no] http://cran.rstudio.com/bin/linux/debian jessie-cran35/" >> /etc/apt/sources.list' && \
apt-key add /r-cran.pgp && \
\
apt-get -o Acquire::Check-Valid-Until=false update && \
apt-get -o Acquire::Check-Valid-Until=false install -y locales && \
\
localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 && \
export LC_ALL=en_US.UTF-8 && \
export LANG=en_US.UTF-8 && \
export LANGUAGE=en_US.UTF-8
ENV LANG en_US.utf8
ENV LC_ALL en_US.UTF-8
# add rstudio debian install for R (requires version >3.3)
# https://cran.r-project.org/bin/linux/debian/
# install R from apt-get
# install python 3.6 from source :/
RUN apt install -y --force-yes r-base-core r-recommended r-base-html
RUN apt-get install -y --force-yes wget bsdtar r-base r-base-dev && \
apt-get clean && \
\
wget https://www.python.org/ftp/python/3.6.5/Python-3.6.5.tgz && \
tar zxf Python-3.6.5.tgz && \
cd ./Python-3.6.5 && \
./configure && \
make && \
make altinstall && \
cd .. && \
rm Python-3.6.5.tgz && \
rm -rf ./Python-3.6.5
# create paths and users
# change executable permissions
RUN npm install forever -g && \
\
mkdir -p /app/production && \
mkdir -p /app/logs && \
mkdir -p /app/crons && \
\
groupadd -r app && \
useradd -m -d /home/app -g app app && \
\
chmod +x /entrypoint.sh && \
chmod +x /run_app.sh && \
chmod +x /r_requirements.sh && \
chmod +x /requirements.txt && \
\
chown -R app:app /app && \
chown app:app /entrypoint.sh && \
chown app:app /run_app.sh && \
chown app:app /r_requirements.sh &&\
chown app:app /requirements.txt
USER app
# 1) install R packages
# 2) install python packages
RUN export "R_LIBS=/home/app/R_libs" && \
mkdir /home/app/R_libs && \
bash /r_requirements.sh && \
\
/usr/local/bin/pip3.6 install --user -r /requirements.txt
USER root
COPY ./compose/meteor/src/src.tar.gz /app/src.tar.gz
COPY ./src/private /app/src/private
RUN chown -R app:app /app
USER app
RUN cd /app && \
bsdtar -xzvf src.tar.gz && \
npm install --prefix /app/bundle/programs/server --production
ENTRYPOINT ["/entrypoint.sh"]

There are several misunderstandings in your tests:
Your Meteor version is 2.2, because that is the version inside your project;
To see the Node version this Meteor project needs, see the answers that several people already gave you in Meteor Docker Node.js version is not match.
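For example, a built Meteor bundle records the Meteor release and the Node version it needs in star.json at the bundle root, so you can check both inside the container even though Meteor itself isn't installed there (a sketch, assuming the bundle lives at /app/bundle as your logs suggest):
docker exec -it tblbuilder_meteor_1 /bin/bash -c 'grep -E "meteorRelease|nodeVersion" /app/bundle/star.json'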
Usually, we build the Meteor app, which means transforming it into a Node.js bundle; then, inside Docker, you don't need Meteor at all.
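The usual flow is something like this (a sketch; the output directory is a placeholder):
# on your machine, from the project root
meteor build ../output --server-only --architecture os.linux.x86_64
# produces ../output/<app>.tar.gz, which unpacks to bundle/ with main.js inside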
We would need to see your Dockerfile to understand the process you use to build the Docker image.
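For comparison, a minimal sketch of a Dockerfile that runs a pre-built Meteor 2.2 bundle on Node 12 (Meteor 2.2 targets Node 12.22.x; the bundle path here is an assumption, not your actual layout):
FROM node:12.22.2
COPY ./bundle /app/bundle
WORKDIR /app/bundle/programs/server
RUN npm install --production
# ROOT_URL, MONGO_URL and PORT still have to be supplied at runtime
CMD ["node", "/app/bundle/main.js"]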

Related

Run 'opentsdb' image as non-root

I'm trying to build a custom image of opentsdb to run as a non-root user. Our k8s clusters have security policies that don't allow containers to run as root. I'm using an existing Dockerfile from https://hub.docker.com/r/petergrace/opentsdb-docker/dockerfile
Below is my Dockerfile, where I have added an extra step to create a new user 'opentsdb' and, at the end, run as USER 'opentsdb':
FROM alpine:latest
ENV TINI_VERSION v0.18.0
ENV TSDB_VERSION 2.4.0
ENV HBASE_VERSION 1.4.4
ENV GNUPLOT_VERSION 5.2.4
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
ENV PATH $PATH:/usr/lib/jvm/java-1.8-openjdk/bin/
ENV ALPINE_PACKAGES "rsyslog bash openjdk8 make wget libgd libpng libjpeg libwebp libjpeg-turbo cairo pango lua"
ENV BUILD_PACKAGES "build-base autoconf automake git python3-dev cairo-dev pango-dev gd-dev lua-dev readline-dev libpng-dev libjpeg-turbo-dev libwebp-dev sed"
ENV HBASE_OPTS "-XX:+UseConcMarkSweepGC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
ENV JVMARGS "-XX:+UseConcMarkSweepGC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -enableassertions -enablesystemassertions"
RUN addgroup opentsdb && adduser -D -u 100 -G opentsdb opentsdb
# Tini is a tiny init that helps when a container is being culled to stop things nicely
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-amd64 /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
# Add the base packages we'll need
RUN apk --update add apk-tools \
&& apk add ${ALPINE_PACKAGES} \
# repo required for gnuplot \
--repository http://dl-cdn.alpinelinux.org/alpine/v3.0/testing/ \
&& mkdir -p /opt/opentsdb
WORKDIR /opt/opentsdb/
# Add build deps, build opentsdb, and clean up afterwards.
RUN set -ex && apk add --virtual builddeps ${BUILD_PACKAGES}
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN wget --no-check-certificate \
-O v${TSDB_VERSION}.zip \
https://github.com/OpenTSDB/opentsdb/archive/v${TSDB_VERSION}.zip \
&& unzip v${TSDB_VERSION}.zip \
&& rm v${TSDB_VERSION}.zip \
&& cd /opt/opentsdb/opentsdb-${TSDB_VERSION} \
&& echo "tsd.http.request.enable_chunked = true" >> src/opentsdb.conf \
&& echo "tsd.http.request.max_chunk = 1000000" >> src/opentsdb.conf
RUN cd /opt/opentsdb/opentsdb-${TSDB_VERSION} \
&& find . | xargs grep -s central.maven.org | cut -f1 -d : | xargs sed -i "s/http:\/\/central/https:\/\/repo1/g" \
&& find . | xargs grep -s repo1.maven.org | cut -f1 -d : | xargs sed -i "s/http:\/\/repo1/https:\/\/repo1/g" \
&& ./build.sh \
&& cp build-aux/install-sh build/build-aux \
&& cd build \
&& make install \
&& cd / \
&& rm -rf /opt/opentsdb/opentsdb-${TSDB_VERSION}
RUN cd /tmp && \
wget --no-check-certificate https://sourceforge.net/projects/gnuplot/files/gnuplot/${GNUPLOT_VERSION}/gnuplot-${GNUPLOT_VERSION}.tar.gz && \
tar xzf gnuplot-${GNUPLOT_VERSION}.tar.gz && \
cd gnuplot-${GNUPLOT_VERSION} && \
./configure && \
make install && \
cd /tmp && rm -rf /tmp/gnuplot-${GNUPLOT_VERSION} && rm /tmp/gnuplot-${GNUPLOT_VERSION}.tar.gz
RUN apk del builddeps && rm -rf /var/cache/apk/*
#Install HBase and scripts
RUN mkdir -p /data/hbase /root/.profile.d /opt/downloads
WORKDIR /opt/downloads
RUN wget -O hbase-${HBASE_VERSION}.bin.tar.gz http://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz \
&& tar xzvf hbase-${HBASE_VERSION}.bin.tar.gz \
&& mv hbase-${HBASE_VERSION} /opt/hbase \
&& rm -r /opt/hbase/docs \
&& rm hbase-${HBASE_VERSION}.bin.tar.gz
# Add misc startup files
RUN ln -s /usr/local/share/opentsdb/etc/opentsdb /etc/opentsdb \
&& rm /etc/opentsdb/opentsdb.conf \
&& mkdir /opentsdb-plugins
ADD files/opentsdb.conf /etc/opentsdb/opentsdb.conf.sample
ADD files/hbase-site.xml /opt/hbase/conf/hbase-site.xml.sample
ADD files/start_opentsdb.sh /opt/bin/
ADD files/create_tsdb_tables.sh /opt/bin/
ADD files/start_hbase.sh /opt/bin/
ADD files/entrypoint.sh /entrypoint.sh
# Fix ENV variables in installed scripts
RUN for i in /opt/bin/start_hbase.sh /opt/bin/start_opentsdb.sh /opt/bin/create_tsdb_tables.sh; \
do \
sed -i "s#::JAVA_HOME::#$JAVA_HOME#g; s#::PATH::#$PATH#g; s#::TSDB_VERSION::#$TSDB_VERSION#g;" $i; \
done
RUN echo "export HBASE_OPTS=\"${HBASE_OPTS}\"" >> /opt/hbase/conf/hbase-env.sh
#4242 is tsdb, rest are hbase ports
EXPOSE 60000 60010 60030 4242 16010 16070
USER opentsdb
#HBase is configured to store data in /data/hbase, vol-mount it to persist your data.
VOLUME ["/data/hbase", "/tmp", "/opentsdb-plugins"]
CMD ["/entrypoint.sh"]
However, the newly built image throws an error saying permission denied for the /opt/bin/ files, and OpenTSDB does not get deployed correctly.
Locally, using Docker Desktop, everything works fine as root when I run the command below:
docker run -dp 4242:4242 petergrace/opentsdb-docker
Do I need to use any chown commands too?
Could you help me get OpenTSDB deployed correctly with UID 100? Thanks in advance!
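One hedged guess at a starting point: everything under /opt/bin (and /entrypoint.sh) was ADDed as root, so UID 100 may simply lack read/execute permission on the scripts and write permission on the data directories. A sketch of the kind of fix, placed just before USER opentsdb in the Dockerfile above:
# while still root: make the scripts executable/readable and hand the
# writable directories to the opentsdb user
RUN chmod -R a+rx /opt/bin /entrypoint.sh \
 && chown -R opentsdb:opentsdb /data/hbase /opt/hbase /opentsdb-plugins
USER opentsdb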

How to access the remote debugging page for a dockerized Chromium launched by Puppeteer?

When Chromium launches successfully, its debugging WebSocket URL should look like ws://127.0.0.1:9222/devtools/browser/ec261e61-0e52-4016-a5d7-d541e82ecb0a.
I should be able to browse 127.0.0.1:9222 with Chrome to inspect the headless Chromium. However, I cannot access the remote debugger URL from Chrome after I dockerize my application.
The launchOptions for launching Chromium with Puppeteer:
{
"args": [
"--remote-debugging-port=9222",
"--window-size=1920,1080",
"--mute-audio",
"--disable-notifications",
"--force-device-scale-factor=0.8",
"--no-sandbox",
"--disable-setuid-sandbox"
],
"defaultViewport": {
"height": 1080,
"width": 1920
},
"headless": true
}
Dockerfile:
FROM node:10.16.3-slim
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/* \
&& wget --quiet https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh -O /usr/sbin/wait-for-it.sh \
&& chmod +x /usr/sbin/wait-for-it.sh
WORKDIR /usr/app
COPY ./ ./
VOLUME ["......." ]
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser \
&& chown -R pptruser:pptruser /usr/app \
&& npm install
USER pptruser
CMD npm run start
EXPOSE 3000 9222
Run the new container with:
docker run \
-p 3000:3000 \
-p 9222:9222 \
pptr
Port 9222 should be accessible on my host machine, but Chrome shows ERR_EMPTY_RESPONSE when I browse 127.0.0.1:9222, and DOCKER-INTERNAL-IP:9222 times out.
I managed to make this work with puppeteer using the following Dockerfile, docker run and puppeteer config:
FROM ubuntu:18.04
RUN apt update \
&& apt install -y \
curl \
wget \
gnupg \
gcc \
g++ \
make \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash - \
&& apt install -y nodejs \
&& rm -rf /var/lib/apt/lists/*
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser
ADD . /app
WORKDIR /app
RUN chown -R pptruser:pptruser /app
RUN rm -rf node_modules
RUN rm -rf build/*
USER pptruser
RUN npm install --dev
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT /app/entrypoint.sh
Docker run:
docker run -p 9223:9222 -it myimage
Puppeteer launch:
this.browser = await puppeteer.launch(
{
headless: true,
args: [
'--remote-debugging-port=9222',
'--remote-debugging-address=0.0.0.0',
'--no-sandbox'
]
}
);
The entrypoint just launches the platform like: node build/main.js
After that, I just had to connect to localhost:9223 in Chrome to see the browser. Hope it helps!
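One quick way to verify the debugger is reachable from the host before pointing Chrome at it: the /json/version endpoint is standard Chromium DevTools behavior, and it also returns the current webSocketDebuggerUrl:
curl http://localhost:9223/json/version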
I know there is already an accepted answer, but let me add to it in hopes of greatly reducing your image size. One shouldn't add too many extras to the Dockerfile if one can help it. Ultimately, adding --remote-debugging-port=9222 and --remote-debugging-address=0.0.0.0 is what allows you to access it.
Dockerfile
FROM ubuntu:latest
LABEL maintainer="Full Name <email@email.com>" url="https://yourwebsite.com"
WORKDIR /home/
COPY wrapper-script.sh wrapper-script.sh
# install chromium-browser and cleanup.
RUN apt update && apt install chromium-browser --no-install-recommends -y && apt autoremove && apt clean && apt autoclean && rm -rf /var/lib/apt/lists/*
# Run your commands and add environment variables from your compose file.
CMD ["sh", "wrapper-script.sh"]
I use a wrapper script so that I can include environment variables here. You can see URL and USERNAME being used so that I can configure them from the compose file. Of course, I'm sure there is a better way to do this, but I do it this way so that I can scale my containers horizontally with ease.
wrapper-script.sh
#!/bin/bash
# Start the process
chromium-browser --headless --disable-gpu --no-sandbox --remote-debugging-port=9222 --remote-debugging-address=0.0.0.0 ${URL}${USERNAME}
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start chromium-browser: $status"
exit $status
fi
# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds
while sleep 60; do
ps aux |grep chromium-browser | grep -q -v grep
PROCESS_1_STATUS=$?
# If the greps above find anything, they exit with 0 status
# If they are not both 0, then something is wrong
if [ $PROCESS_1_STATUS -ne 0 ]; then
echo "One of the processes has already exited."
exit 1
fi
done
Lastly, I have the docker-compose file. This is where I define all my settings so that I can configure my wrapper-script.sh with what I need and scale horizontally. Notice the environment section of the docker-compose file. USERNAME and URL are environment variables, and they can be called from the wrapper script.
docker-compose.yml
version: '3.7'
services:
chrome:
command: [ 'sh', 'wrapper-script.sh' ]
image: headless-chrome
build:
context: .
dockerfile: Dockerfile
environment:
- USERNAME=eaglejs
- URL=https://teamtreehouse.com/
ports:
- 9222:9222
If you are wondering what my folder structure looks like: all three files are at the root of the folder. For example:
My_Docker_Repo:
Dockerfile
docker-compose.yml
wrapper-script.sh
After all that is said and done, I simply run docker-compose up and I have one container running. Right now, using the ports section, you'll have to do something extra to scale as well: if you were to run docker-compose up --scale chrome=5, your ports would clash. Let me know if you want to try that and I'll see what I can do for scaling; other than that, if it is for testing, this should work well the way it is. :) Happy coding!
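If you do want to try --scale, one pattern that avoids the clash is to publish only the container port and let Docker assign a host port per replica (a sketch against the compose file above):
ports:
  - "9222"
# after `docker-compose up --scale chrome=5`, look up each replica's mapping:
# docker-compose port --index=3 chrome 9222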
eaglejs

Docker doesn't find a file

I'm working on a project that uses a Docker image for one specific feature; other than that I don't need Docker at all, so I don't understand much about it. The issue is that Docker doesn't find a file that is actually in the folder, and the build process breaks.
When trying to create the image using docker build -t project/render-worker . the error is this:
Step 18/23 : RUN bin/composer-install && php composer-setup.php --install-dir=/bin && php -r 'unlink("composer-setup.php");' && php /bin/composer.phar global require hirak/prestissimo
---> Running in 695db3bf2f02
/bin/sh: 1: bin/composer-install: not found
The command '/bin/sh -c bin/composer-install && php composer-setup.php --install-dir=/bin && php -r 'unlink("composer-setup.php");' && php /bin/composer.phar global require hirak/prestissimo' returned a non-zero code: 127
As mentioned, the file composer-install does exist, and this is what's in it:
#!/bin/sh
EXPECTED_SIGNATURE="$(wget -q -O - https://composer.github.io/installer.sig)"
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
ACTUAL_SIGNATURE="$(php -r "echo hash_file('SHA384', 'composer-setup.php');")"
if [ "$EXPECTED_SIGNATURE" != "$ACTUAL_SIGNATURE" ]
then
echo 'ERROR: Invalid installer signature'
rm composer-setup.php
fi
Basically, this is to get Composer, as you can see.
This is the Docker file:
FROM php:7.2-apache
RUN echo 'deb http://ftp.debian.org/debian stretch-backports main' > /etc/apt/sources.list.d/backports.list
RUN apt-get update
RUN apt-get install -y --no-install-recommends \
libpq-dev \
libxml2-dev \
ffmpeg \
imagemagick \
wget \
git \
zlib1g-dev \
libpng-dev \
unzip \
mencoder \
parallel \
ruby-dev
RUN apt-get -t stretch-backports install -y --no-install-recommends \
libav-tools \
&& rm -rf /var/lib/apt/lists/*
RUN docker-php-ext-install \
pcntl \
pdo_pgsql \
pgsql \
soap \
gd \
zip
RUN gem install compass
RUN a2enmod rewrite
ENV APACHE_RUN_USER root
ENV APACHE_RUN_GROUP root
EXPOSE 80
WORKDIR /app
COPY . /app
# Configuring apache to run the symfony app
COPY config/docker/apache.conf /etc/apache2/sites-enabled/000-default.conf
RUN echo "export DATABASE_URL" >> /etc/apache2/envvars \
&& echo ". /etc/environment" >> /etc/apache2/envvars
RUN wget -cqO- https://nodejs.org/dist/v10.15.3/node-v10.15.3-linux-x64.tar.xz | tar -xJ
RUN cp -a node-v10.15.3-linux-x64/bin /usr \
&& cp -a node-v10.15.3-linux-x64/include /usr \
&& cp -a node-v10.15.3-linux-x64/lib /usr \
&& cp -a node-v10.15.3-linux-x64/share /usr/ \
&& rm -rf node-v10.15.3-linux-x64 node-v10.15.3-linux-x64.tar.xz
RUN bin/composer-install \
&& php composer-setup.php --install-dir=/bin \
&& php -r "unlink('composer-setup.php');" \
# Install prestissimo for dramatically faster `composer install`
&& php /bin/composer.phar global require hirak/prestissimo
RUN APP_ENV=prod APP_SECRET= DATABASE_URL= AWS_KEY= AWS_SECRET= AWS_REGION= MEDIA_S3_BUCKET= \
GIPHY_API_KEY= FACEBOOK_APP_ID= FACEBOOK_APP_SECRET= \
GOOGLE_API_KEY= GOOGLE_CLIENT_ID= GOOGLE_CLIENT_SECRET= STRIPE_SECRET_KEY= STRIPE_ENDPOINT_SECRET= \
THEYSAIDSO_API_KEY= REV_CLIENT_API_KEY= REV_USER_API_KEY= REV_API_ENDPOINT= RENDER_QUEUE_URL= \
CLOUDWATCH_LOG_GROUP_NAME= \
php /bin/composer.phar install --no-interaction --no-dev --prefer-dist --optimize-autoloader --no-scripts \
&& php /bin/composer.phar clear-cache
RUN npm install \
&& node_modules/bower/bin/bower install --allow-root \
&& node_modules/grunt/bin/grunt
# Don't allow it to keep logs around; they're emitted on STDOUT and sent to AWS
# CloudWatch from there, so we don't need them on disk filling up the space
RUN mkdir -p var/cache/prod && chmod -R 777 var/cache/prod
RUN mkdir -p var/log && ln -s /dev/null var/log/prod.log \
&& ln -s /dev/null var/log/prod.deprecations.log && chmod -R 777 var/log
CMD ["/usr/bin/env", "bash", "./bin/start_render_worker"]
Like I said, unfortunately I don't have the slightest idea of how Docker works and what's going on, just that I need it. I'm running Docker on Win10 Pro and, to make matters even worse, it actually works for another dev running Win10. We tried a few things, but we can't make it work. I tried cloning the repo to other locations with no success at all. Everything before this particular step runs correctly.
[EDIT]
As suggested by other users, I ran RUN ls bin/ before the composer-install line, and this is the result:
Step 18/24 : RUN ls bin/
---> Running in 6cb72090a069
append_captions
capture
composer-install
concat_project_video
console
encode_frames
encode_frames_to_gif
format_video_for_concatenation
generate_meme_bar
image_to_video
install.sh
phpcs
phpunit
process_render_queue
publish_docker_image
run_animation_worker
run_render_worker
run_render_worker_osx
start_render_worker
update
Removing intermediate container 6cb72090a069
As you can see, composer-install is there, so this is quite baffling.
Also, I checked and set the line-ending sequence to LF, and the result is the same error.
[SECOND EDIT]
I added COPY bin/composer-install /bin
Then RUN ls bin/
And the results are the same: the ls command finds the file, but the error persists. Also, adding a slash before bin doesn't change anything :(
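Since ls proves the file exists, "not found" usually points at the interpreter line rather than the file itself: if bin/composer-install was saved with CRLF endings, the kernel looks for an interpreter literally named /bin/sh\r and reports not found. A couple of hedged diagnostics and workarounds to drop into the Dockerfile before the failing step:
# show invisible \r characters in the shebang and check the exec bit
RUN head -n1 bin/composer-install | od -c && ls -l bin/composer-install
# workaround: normalize the line endings, then invoke via sh explicitly
RUN sed -i 's/\r$//' bin/composer-install && sh bin/composer-install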

How to merge Docker image layers and slim down the image file

docker image inspect <name>
gives me 16GB
and about 20 layers
When I am logged in as root, this:
du -hs /
shows me just 2GB.
FYI, there are already long multi-line RUN commands in the Dockerfile.
Can I squash all layers into one without touching the Dockerfile, rebuilding, etc.?
Or possibly by adding an extra action to the Dockerfile which clears/improves caching?
The Dockerfile is:
FROM heroku/heroku:18
ENV PYENV_ROOT="/pyenv"
ENV PATH="/pyenv/shims:/pyenv/bin:$PATH"
ENV PYTHON_VERSION 3.5.6
ENV GPG_KEY <value>
ENV PYTHONUNBUFFERED 1
ENV TERM xterm
ENV EDITOR vim
RUN apt-get update && apt-get install -y \
build-essential \
gdal-bin \
binutils \
iputils-ping \
libjpeg8 \
libproj-dev \
libjpeg8-dev \
libtiff-dev \
zlib1g-dev \
libfreetype6-dev \
liblcms2-dev \
libxml2-dev \
libxslt1-dev \
libssl-dev \
libncurses5-dev \
virtualenv \
python-pip \
python3-pip \
python-dev \
libmysqlclient-dev \
mysql-client-5.7 \
libpq-dev \
libcurl4-gnutls-dev \
libgnutls28-dev \
libbz2-dev \
tig \
git \
vim \
nano \
tmux \
tmuxinator \
fish \
sudo \
libnet-ifconfig-wrapper-perl \
ruby \
libssl-dev \
nodejs \
strace \
tcpdump \
# npm & grunt
&& curl -L https://npmjs.com/install.sh | sh \
&& npm install -g grunt-cli grunt \
# ruby & foreman
&& gem install foreman \
# installing pyenv
&& curl https://raw.githubusercontent.com/yyuu/pyenv-installer/master/bin/pyenv-installer | bash
COPY . /app
COPY ./requirements /requirements
COPY ./requirements.txt /requirements.txt
COPY ./docker/docker_compose/django/foreman.sh /foreman.sh
COPY ./docker/docker_compose/django/Procfile /Procfile
COPY ./docker/docker_compose/django/entrypoint.sh /entrypoint.sh
# ADD sudoer user django with password django
RUN groupadd -r django -g 1000 && \
useradd -ms /usr/bin/fish -p $(openssl passwd -1 django) --uid 1000 --gid 1000 -r -g django django && \
usermod -a -G sudo django && \
chown -R django:django /app
COPY --chown=django:django ./docker/docker_compose/django/fish /home/django/.config/fish
COPY --chown=django:django ./docker/docker_compose/django/tmuxinator /home/django/.tmuxinator
COPY ./docker/docker_compose/django/fish /root/.config/fish
WORKDIR /app
RUN sed -i 's/\r//' /entrypoint.sh \
&& sed -i 's/\r//' /foreman.sh \
&& chmod +x /entrypoint.sh \
&& chown django /entrypoint.sh \
&& chmod +x /foreman.sh \
&& chown django /foreman.sh \
&& chown -R django:django /home/django/ \
&& pyenv install ${PYTHON_VERSION%%} \
&& mkdir -p /app/log \
&& pyenv global ${PYTHON_VERSION%%} \
&& pyenv rehash \
&& ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -U pip \
&& ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -r /requirements.txt \
&& chown -R django:django /pyenv/ \
&& ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -r /requirements/dev_requirements.txt
# this user receives ENVs from the top
USER django
ENTRYPOINT ["/entrypoint.sh"]
What I've tried so far:
The --squash option from the experimental mode of docker build is not really an option for me; that Dockerfile is one of several Dockerfiles used from docker-compose.
I've also checked this:
https://github.com/jwilder/docker-squash
but it seems docker load cannot load a squashed image.
Also, that squash gives me 8GB (still far away from the expected ~2GB):
docker save <image_id> | docker-squash -t latest_tiny | docker load
Update after the answers:
When I added this:
&& apt-get autoremove \ # ? to consider
&& apt-get clean \ # ? to consider
&& rm -rf /var/lib/apt/lists/*
to apt-get, and --no-cache-dir to each pip, the result was 72GB (yes, even much more: docker images shows 36GB before the pip command, and 72GB as the final size).
My working directory is clean (regarding COPY), du -hs / (as root) still shows 2GB, and all images were removed before rebuilding.
Following @Mihai's approach, I was able to slim down the image from 16GB to 9GB.
There is a simple trick to get rid of the intermediate layers. It will bring the size down as well, but by how much depends on how the image was built.
Create a Dockerfile like this:
FROM your_image as initial
FROM your_image_base
COPY --from=initial / /
your_image_base should be something like 'alpine': the smallest image from which your image and its parents descend.
Now build the image and check the history and size:
docker build -t your-image:2.0 .
docker image history your-image:2.0
docker image ls
This way you do create a new Dockerfile (if that is acceptable for your process) without touching the initial Dockerfile.
Let me know if this solves your issue.
UPDATE AFTER SEEING THE Dockerfile:
Maybe I missed it, but I don't see you cleaning up the apt-get cache after you perform the installations. Your big RUN command should end with "&& rm -rf /var/lib/apt/lists/*" on the same line, so that the whole cache is never stored in the layer.
Definitely add && rm -rf /var/lib/apt/lists/* at the end of your main RUN command, like Mihai said. Another thing that may help (depending on how big your dependencies are) is installing with pip using the --no-cache-dir option. Also, make sure you understand the build context, and consider using either a .dockerignore or sending the context from another directory (it totally depends on how your directory is set up).
I've also had luck exploring an image using dive. Honestly, this looks like a pretty big image, so I'm not sure how much you're going to be able to get it down.
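Concretely, the pattern these two suggestions describe looks like this (a sketch against the Dockerfile above; the package list is elided):
RUN apt-get update && apt-get install -y \
    build-essential \
    # ...the rest of the package list... \
 && apt-get autoremove -y && apt-get clean \
 && rm -rf /var/lib/apt/lists/*
RUN ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install --no-cache-dir -r /requirements.txt
The key point is that the cleanup must run in the same RUN instruction as the install; a later RUN rm -rf ... adds a layer but cannot shrink an earlier one.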
To squash a (Docker) container image, without re-building the image or manipulating the original Dockerfile,
You can extend from your image and squash it:
docker build --squash -t your_image_squashed - <<< "FROM your_image"
It's very easy, just use
docker commit YOUR_CONTAINER_ID NEW_IMAGE_ID
Docker will throw away the intermediate layers; you lose the history, but the size is smaller.
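If you want a genuinely single-layer image, the other classic trick is export/import on a container, which flattens the filesystem but drops the image metadata (ENV, ENTRYPOINT, CMD), so you have to re-apply it with --change:
docker export YOUR_CONTAINER_ID | docker import --change 'ENTRYPOINT ["/entrypoint.sh"]' - your_image:flat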

In what scope are docker-compose commands run?

I am hitting issues running a start script (e.g. npm run gulp-dist) for my container as specified in my docker-compose file. I traced the issue down to a Node version compatibility problem, which has led me to some confusion.
If I enter the container with docker-compose run workspace bash and then run node -v I get back v10.5.0 as expected (and what my script requires).
Yet if in docker-compose I set command: node -v it prints v4.2.6 when bringing up the container with docker-compose up workspace.
So I'm wondering: where are the commands that I specify in docker-compose run? (I thought they were run in the container once it had started.) And how do I run a command in the container? I want to specify it in docker-compose, as I run a different command in two different docker-compose files (one for the dev env, one for production).
Note: my dev machine has Node version 11, so I have no idea where version 4 comes from.
Also, if I run docker-compose run workspace bash and then run the original script, it works fine; it is just failing when run as a docker-compose command.
Here's my Dockerfile (sorry, it's big):
# FROM laradock/workspace:1.8-71
# copied the contents of the above laradock workspace
# dockerfile and replaced put here directly.
FROM phusion/baseimage:latest
MAINTAINER Mahmoud Zalt <mahmoud@zalt.me>
RUN DEBIAN_FRONTEND=noninteractive
RUN locale-gen en_US.UTF-8
ENV LANGUAGE=en_US.UTF-8
ENV LC_ALL=en_US.UTF-8
ENV LC_CTYPE=en_US.UTF-8
ENV LANG=en_US.UTF-8
ENV TERM xterm
# Add the "PHP 7" ppa
RUN apt-get install -y software-properties-common && \
add-apt-repository -y ppa:ondrej/php
#
#--------------------------------------------------------------------------
# Software's Installation
#--------------------------------------------------------------------------
#
# Install "PHP Extentions", "libraries", "Software's"
RUN apt-get update && \
apt-get install -y --allow-downgrades --allow-remove-essential \
--allow-change-held-packages \
php7.1-cli \
php7.1-common \
php7.1-curl \
php7.1-intl \
php7.1-json \
php7.1-xml \
php7.1-mbstring \
php7.1-mcrypt \
php7.1-mysql \
php7.1-pgsql \
php7.1-sqlite \
php7.1-sqlite3 \
php7.1-zip \
php7.1-bcmath \
php7.1-memcached \
php7.1-gd \
php7.1-dev \
pkg-config \
libcurl4-openssl-dev \
libedit-dev \
libssl-dev \
libxml2-dev \
xz-utils \
libsqlite3-dev \
sqlite3 \
git \
curl \
vim \
nano \
postgresql-client \
&& apt-get clean
#####################################
# Composer:
#####################################
# Install composer and add its bin to the PATH.
RUN curl -s http://getcomposer.org/installer | php && \
echo "export PATH=${PATH}:/var/www/vendor/bin" >> ~/.bashrc && \
mv composer.phar /usr/local/bin/composer
# Source the bash
RUN . ~/.bashrc
#
# other - workspace specific config
#
RUN apt-get -y update && \
apt-get install pkg-config libmagickwand-dev -y && \
pecl install imagick
#####################################
# Non-Root User:
#####################################
# Add a non-root user to prevent files being created with root permissions on host machine.
ENV PUID 1000
ENV PGID 1000
RUN groupadd -g ${PGID} laradock && \
useradd -u ${PUID} -g laradock -m laradock && \
apt-get update -yqq
#####################################
# Set Timezone
#####################################
ARG TZ=UTC
ENV TZ ${TZ}
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
#####################################
# Composer:
#####################################
# Add the composer.json
COPY ./composer.json /home/laradock/.composer/composer.json
# Make sure that ~/.composer belongs to laradock
RUN chown -R laradock:laradock /home/laradock/.composer
USER laradock
# Check if global install need to be ran
ARG COMPOSER_GLOBAL_INSTALL=false
ENV COMPOSER_GLOBAL_INSTALL ${COMPOSER_GLOBAL_INSTALL}
RUN if [ ${COMPOSER_GLOBAL_INSTALL} = true ]; then \
# run the install
composer global install \
;fi
USER root
#####################################
# Node / NVM:
#####################################
# Check if NVM needs to be installed
ARG NODE_VERSION=10.5.0
ENV NODE_VERSION 10.5.0
ENV NVM_DIR /home/laradock/.nvm
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.1/install.sh | bash && \
. $NVM_DIR/nvm.sh && \
nvm install ${NODE_VERSION} && \
nvm use ${NODE_VERSION} && \
npm install -g gulp bower vue-cli
# link node and nodejs
RUN ln -s /usr/bin/nodejs /usr/bin/node
# Wouldn't execute when added to the RUN statement in the above block
# Source NVM when loading bash since ~/.profile isn't loaded on non-login shell
RUN echo "" >> ~/.bashrc && \
echo 'export NVM_DIR="$HOME/.nvm"' >> ~/.bashrc && \
echo '[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm' >> ~/.bashrc \
;fi
# install required things
RUN apt-get update && apt-get install apt-transport-https && \
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && \
apt-get update && apt-get install -y --allow-unauthenticated yarn mysql-client
# Add NVM binaries to root's .bashrc
USER root
RUN apt-get install npm -y
# set npm registry address
RUN npm config set registry http://registry.npmjs.org/
#
#--------------------------------------------------------------------------
# Final Touch
#--------------------------------------------------------------------------
#
# Clean up
USER root
RUN apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Set default work directory
WORKDIR /var/www
# # copy in our code, so as not to rely on a volume in prod
COPY . /var/www
# ensure directories we need are writable
RUN chmod -R o+w /var/www/user-api-laravel/storage
RUN chmod -R o+w /var/www/user-api-laravel/bootstrap/cache
RUN chmod -R o+w /var/www/auto/storage
RUN chmod -R o+w /var/www/auto/bootstrap/cache
# install php project dependencies
RUN cd /var/www/user-api-laravel && composer install
RUN cd /var/www/auto && composer install
WORKDIR /var/www
USER root
# install auto-scalar deps
RUN cd /var/www/auto-scaler && npm i
# php.ini for cli
ADD ./php-cli.ini /etc/php/7.1/cli/php.ini
And relevant part of docker-compose:
workspace:
build:
context: ./www-workspace
args:
- TZ=${WORKSPACE_TIMEZONE}
- NODE_VERSION=${WORKSPACE_NODE_VERSION}
command: [bash, -c, "cd /var/www/spa && npm run dist-prod"]
Still don't know what context the commands run in, but I made mine work. It was due to Node being installed via NVM. Or at least, once I installed it as @Noogen suggested, via curl -sL https://deb.nodesource.com/setup_10.x | sudo bash -, I could run commands against my container and they would have access to the correct Node version. I had to settle for a lower Node version (not the 10.5.0 I could specify with NVM), but in the end it worked, so no worries.
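A plausible explanation for these symptoms: nvm installs Node by editing ~/.bashrc, which only interactive shells read, while the later RUN apt-get install npm pulls in Ubuntu 16.04's ancient system node (v4.2.6); docker-compose's command runs a non-interactive shell, so it finds the system node first. If you keep nvm, sourcing it explicitly in the command should also work (note the $$, which stops docker-compose from interpolating the variable on the host):
command: [bash, -c, ". $$NVM_DIR/nvm.sh && cd /var/www/spa && npm run dist-prod"]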
