docker-compose.yml not working for rails app - ruby-on-rails

I have developed a simple Rails app that uses Redis, Sidekiq, and mysql2, and I'm trying to run it using docker-compose. I wrote a docker-compose.yml that was working fine. I've since made a few changes to the Dockerfile, and the new version isn't working: when I check the logs, the webapp container exits with exit code 1.
These are my files
Dockerfile
FROM ubuntu:16.04
ENV RUBY_MAJOR="2.6" \
RUBY_VERSION="2.6.3" \
RUBYGEMS_VERSION="3.0.8" \
BUNDLER_VERSION="1.17.3" \
RAILS_VERSION="5.2.1" \
RAILS_ENV="production" \
GEM_HOME="/usr/local/bundle"
ENV BUNDLE_PATH="$GEM_HOME" \
BUNDLE_BIN="$GEM_HOME/bin" \
BUNDLE_SILENCE_ROOT_WARNING=1 \
BUNDLE_APP_CONFIG="$GEM_HOME"
ENV PATH="$BUNDLE_BIN:$GEM_HOME/bin:$GEM_HOME/gems/bin:$PATH"
USER root
RUN apt-get update && \
apt-get -y install sudo
RUN echo "%sudo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
addgroup --gid 1024 stars && \
useradd -G stars,sudo -d /home/user --shell /bin/bash -m user
RUN mkdir -p /usr/local/etc \
&& echo 'install: --no-document' >> /usr/local/etc/gemrc \
&& echo 'update: --no-document' >> /usr/local/etc/gemrc
USER user
RUN sudo apt-get -y install --no-install-recommends vim make gcc zlib1g-dev autoconf build-essential libssl-dev libsqlite3-dev \
curl htop unzip mc openssh-server openssl bison libgdbm-dev ruby git libmysqlclient-dev tzdata mysql-client
RUN sudo rm -rf /var/lib/apt/lists/* \
&& sudo curl -fSL -o ruby.tar.gz "http://cache.ruby-lang.org/pub/ruby/$RUBY_MAJOR/ruby-$RUBY_VERSION.tar.gz" \
&& sudo mkdir -p /usr/src/ruby \
&& sudo tar -xzf ruby.tar.gz -C /usr/src/ruby --strip-components=1 \
&& sudo rm ruby.tar.gz
USER root
RUN cd /usr/src/ruby \
&& { sudo echo '#define ENABLE_PATH_CHECK 0'; echo; cat file.c; } > file.c.new && mv file.c.new file.c \
&& autoconf \
&& ./configure --disable-install-doc
USER user
RUN cd /usr/src/ruby \
&& sudo make -j"$(nproc)" \
&& sudo make install \
&& sudo gem update --system $RUBYGEMS_VERSION \
&& sudo rm -r /usr/src/ruby
RUN sudo gem install bundler --version "$BUNDLER_VERSION"
RUN sudo mkdir -p "$GEM_HOME" "$BUNDLE_BIN" \
&& sudo chmod 777 "$GEM_HOME" "$BUNDLE_BIN" \
&& sudo gem install rails --version "$RAILS_VERSION"
RUN mkdir -p ~/.ssh && \
chmod 0700 ~/.ssh && \
ssh-keyscan github.com > ~/.ssh/known_hosts
ARG ssh_pub_key
ARG ssh_prv_key
RUN echo "$ssh_pub_key" > ~/.ssh/id_rsa.pub && \
echo "$ssh_prv_key" > ~/.ssh/id_rsa && \
chmod 600 ~/.ssh/id_rsa.pub && \
chmod 600 ~/.ssh/id_rsa
USER root
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get install -y nodejs
USER user
WORKDIR /data
RUN sudo mkdir /data/checklist
WORKDIR /data/checklist
ADD Gemfile Gemfile.lock ./
RUN sudo chown -R user /data/checklist
RUN bundle install
ADD . .
RUN sudo chown -R user /data/checklist
EXPOSE 3001
ENV RAILS_SERVE_STATIC_FILES true
ENV RAILS_LOG_TO_STDOUT true
RUN chmod +x ./config/docker/compile.sh && ./config/docker/compile.sh
CMD ["bundle", "exec", "rails", "s", "-p", "3001"]
compile.sh
bundle exec rake assets:precompile
bundle exec rake db:migrate 2>/dev/null || bundle exec rake db:create db:migrate
echo "Assets Pre-compiled!"
This is my docker-compose.yml
version: '3'
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: "list"
      MYSQL_ROOT_PASSWORD: "Mission2019"
      MYSQL_USERNAME: "root"
  webapp:
    build: .
    ports:
      - '3001:3001'
    volumes:
      - '.:/data/checklist'
    depends_on:
      - db
      - redis
    command: rails db:migrate
    environment:
      DB_USERNAME: "root"
      DB_PASSWORD: "Mission2019"
      DB_DATABASE: "list"
      DB_PORT: 3306
      DB_HOST: db
      RAILS_ENV: production
      RAILS_MAX_THREADS: 5
  redis:
    image: redis:4.0-alpine
    command: redis-server
    ports:
      - '6379:6379'
  sidekiq:
    build: .
    command: bundle exec sidekiq -C config/sidekiq.yml
    depends_on:
      - "db"
      - "redis"
Working Dockerfile
FROM ubuntu:16.04
ENV RUBY_MAJOR="2.6" \
RUBY_VERSION="2.6.3" \
RUBYGEMS_VERSION="3.0.8" \
BUNDLER_VERSION="1.17.3" \
RAILS_VERSION="5.2.1" \
RAILS_ENV="production" \
GEM_HOME="/usr/local/bundle"
ENV BUNDLE_PATH="$GEM_HOME" \
BUNDLE_BIN="$GEM_HOME/bin" \
BUNDLE_SILENCE_ROOT_WARNING=1 \
BUNDLE_APP_CONFIG="$GEM_HOME"
ENV PATH="$BUNDLE_BIN:$GEM_HOME/bin:$GEM_HOME/gems/bin:$PATH"
USER root
RUN apt-get update && \
apt-get -y install sudo
RUN echo "%sudo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
addgroup --gid 1024 stars && \
useradd -G stars,sudo -d /home/user --shell /bin/bash -m user
RUN mkdir -p /usr/local/etc \
&& echo 'install: --no-document' >> /usr/local/etc/gemrc \
&& echo 'update: --no-document' >> /usr/local/etc/gemrc
USER user
RUN sudo apt-get -y install --no-install-recommends vim make gcc zlib1g-dev autoconf build-essential libssl-dev libsqlite3-dev \
curl htop unzip mc openssh-server openssl bison libgdbm-dev ruby git libmysqlclient-dev tzdata mysql-client
RUN sudo rm -rf /var/lib/apt/lists/* \
&& sudo curl -fSL -o ruby.tar.gz "http://cache.ruby-lang.org/pub/ruby/$RUBY_MAJOR/ruby-$RUBY_VERSION.tar.gz" \
&& sudo mkdir -p /usr/src/ruby \
&& sudo tar -xzf ruby.tar.gz -C /usr/src/ruby --strip-components=1 \
&& sudo rm ruby.tar.gz
USER root
RUN cd /usr/src/ruby \
&& { sudo echo '#define ENABLE_PATH_CHECK 0'; echo; cat file.c; } > file.c.new && mv file.c.new file.c \
&& autoconf \
&& ./configure --disable-install-doc
USER user
RUN cd /usr/src/ruby \
&& sudo make -j"$(nproc)" \
&& sudo make install \
&& sudo gem update --system $RUBYGEMS_VERSION \
&& sudo rm -r /usr/src/ruby
RUN sudo gem install bundler --version "$BUNDLER_VERSION"
RUN sudo mkdir -p "$GEM_HOME" "$BUNDLE_BIN" \
&& sudo chmod 777 "$GEM_HOME" "$BUNDLE_BIN" \
&& sudo gem install rails --version "$RAILS_VERSION"
RUN mkdir -p ~/.ssh && \
chmod 0700 ~/.ssh && \
ssh-keyscan github.com > ~/.ssh/known_hosts
ARG ssh_pub_key
ARG ssh_prv_key
RUN echo "$ssh_pub_key" > ~/.ssh/id_rsa.pub && \
echo "$ssh_prv_key" > ~/.ssh/id_rsa && \
chmod 600 ~/.ssh/id_rsa.pub && \
chmod 600 ~/.ssh/id_rsa
USER root
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get install -y nodejs
USER user
WORKDIR /data
RUN sudo mkdir /data/checklist
WORKDIR /data/checklist
ADD Gemfile Gemfile.lock ./
RUN sudo chown -R user /data/checklist
RUN bundle install
ADD . .
RUN sudo chown -R user /data/checklist
EXPOSE 3001
ENV RAILS_SERVE_STATIC_FILES true
ENV RAILS_LOG_TO_STDOUT true
ENTRYPOINT ["sh", "./config/docker/startup.sh"]
startup.sh for the working Dockerfile
kill -9 `cat /data/checklist/tmp/pids/server.pid`
bundle exec rake assets:precompile
bundle exec rake db:migrate 2>/dev/null || bundle exec rake db:create db:migrate
rails s -p 3001 -b 0.0.0.0 -e PRODUCTION
echo "Assets Pre-compiled!"
The reason for changing the Dockerfile from the one that works is that I'm using Sidekiq and I want Sidekiq to run in a separate container altogether. If I give a command option in the docker-compose.yml it isn't picked up, and bundle exec sidekiq -C config/sidekiq.yml does not run in the sidekiq container.
I have Kubernetes YAML files for the same app and face the same problem there: I'm not able to override the ENTRYPOINT instruction from the YAML for the sidekiq pod.
Please let me know if any other info is needed.

You should use CMD in the second Dockerfile too; then it can be overridden by the Docker Compose command: option.
# CMD, not ENTRYPOINT
CMD ["sh", "./config/docker/startup.sh"]
One common use for ENTRYPOINT is as a wrapper program that does some environment or other first-time setup and then executes the CMD that's passed in as the remainder of the command-line arguments. That lets you replace the command part separately while keeping the setup part. In a Ruby environment bundle exec ... has the right semantics, so you can also consider:
# Note: MUST be JSON-array syntax
ENTRYPOINT ["bundle", "exec"]
# Can be either string or JSON-array form
CMD ["./config/docker/startup.sh"]
version: '3.8'
services:
  webapp:
    build: .
    # Use default CMD/ENTRYPOINT from image
  sidekiq:
    build: .
    # Overrides CMD, leaves ENTRYPOINT in place
    command: sidekiq -C config/sidekiq.yml
There is a separate Compose entrypoint: override, but you should rarely need it.
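If you do need it, for example to bypass the ENTRYPOINT baked into your working Dockerfile just for the sidekiq service, a minimal sketch could look like this:
sidekiq:
  build: .
  # Replace the image ENTRYPOINT entirely for this one service
  entrypoint: ["bundle", "exec", "sidekiq", "-C", "config/sidekiq.yml"]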
(Your Dockerfile can be much, much simpler. In general you do not need to configure sudo or user passwords, and Dockerfiles run as root by default unless you explicitly use USER to switch user IDs. Adding SSH credentials to a Dockerfile, where they can be trivially docker cp'd out, is also not a best practice. Also consider using the Docker Hub ruby image rather than building Ruby from source.)
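As a very rough sketch of that simpler direction (Ruby and Bundler versions taken from your Dockerfile; node and asset precompilation are omitted, so treat it as a starting point rather than a drop-in replacement):
FROM ruby:2.6
WORKDIR /data/checklist
# Install gems first so this layer is cached between code changes
COPY Gemfile Gemfile.lock ./
RUN gem install bundler -v 1.17.3 && bundle install
COPY . .
ENV RAILS_ENV=production RAILS_SERVE_STATIC_FILES=true RAILS_LOG_TO_STDOUT=true
EXPOSE 3001
ENTRYPOINT ["bundle", "exec"]
CMD ["rails", "server", "-b", "0.0.0.0", "-p", "3001"]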

Related

How to access remote debugging page for dockerized Chromium launch by Puppeteer?

When Chromium launches successfully, its debugging WebSocket URL looks like ws://127.0.0.1:9222/devtools/browser/ec261e61-0e52-4016-a5d7-d541e82ecb0a.
127.0.0.1:9222 should be browsable in Chrome to inspect the headless Chromium. However, I cannot access the remote debugger URL from Chrome after I dockerize my application.
launchOption for launching Chromium with Puppeteer:
{
"args": [
"--remote-debugging-port=9222",
"--window-size=1920,1080",
"--mute-audio",
"--disable-notifications",
"--force-device-scale-factor=0.8",
"--no-sandbox",
"--disable-setuid-sandbox"
],
"defaultViewport": {
"height": 1080,
"width": 1920
},
"headless": true
}
Dockerfile:
FROM node:10.16.3-slim
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/* \
&& wget --quiet https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh -O /usr/sbin/wait-for-it.sh \
&& chmod +x /usr/sbin/wait-for-it.sh
WORKDIR /usr/app
COPY ./ ./
VOLUME ["......." ]
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser \
&& chown -R pptruser:pptruser /usr/app \
&& npm install
USER pptruser
CMD npm run start
EXPOSE 3000 9222
Run the new container with:
docker run \
-p 3000:3000 \
-p 9222:9222 \
pptr
Port 9222 should be accessible on my host machine, but Chrome shows the error ERR_EMPTY_RESPONSE when I browse 127.0.0.1:9222, and DOCKER-INTERNAL-IP:9222 times out.
I managed to make this work with puppeteer using the following Dockerfile, docker run and puppeteer config:
FROM ubuntu:18.04
RUN apt update \
&& apt install -y \
curl \
wget \
gnupg \
gcc \
g++ \
make \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash - \
&& apt install -y nodejs \
&& rm -rf /var/lib/apt/lists/*
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser
ADD . /app
WORKDIR /app
RUN chown -R pptruser:pptruser /app
RUN rm -rf node_modules
RUN rm -rf build/*
USER pptruser
RUN npm install --dev
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT /app/entrypoint.sh
Docker run:
docker run -p 9223:9222 -it myimage
Puppeteer launch:
this.browser = await puppeteer.launch(
{
headless: true,
args: [
'--remote-debugging-port=9222',
'--remote-debugging-address=0.0.0.0',
'--no-sandbox'
]
}
);
The entrypoint just launches the platform, like: node build/main.js
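For reference, a minimal sketch of such an entrypoint script (the script name and build path are assumptions based on the description above):
#!/bin/sh
# Hypothetical entrypoint.sh: start the app that launches Puppeteer/Chromium
exec node build/main.js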
After that I just had to connect to localhost:9223 in Chrome to see the browser. Hope it helps!
I know there is already an accepted answer, but let me add to this in the hope of greatly reducing your image size. One shouldn't add too many extras to the Dockerfile if one can help it. Ultimately, adding --remote-debugging-port=9222 and --remote-debugging-address=0.0.0.0 will allow you to access it.
Dockerfile
FROM ubuntu:latest
LABEL Full Name <email@email.com> https://yourwebsite.com
WORKDIR /home/
COPY wrapper-script.sh wrapper-script.sh
# install chromium-browser and cleanup.
RUN apt update && apt install chromium-browser --no-install-recommends -y && apt autoremove && apt clean && apt autoclean && rm -rf /var/lib/apt/lists/*
# Run your commands and add environment variables from your compose file.
CMD ["sh", "wrapper-script.sh"]
I use a wrapper script so that I can include environment variables here. You can see URL and USERNAME set so that I can configure them from the compose file. Of course, I'm sure there is a better way to do this, but I do it so that I can scale my containers horizontally with ease.
wrapper-script.sh
#!/bin/bash
# Start the process
chromium-browser --headless --disable-gpu --no-sandbox --remote-debugging-port=9222 --remote-debugging-address=0.0.0.0 ${URL}${USERNAME}
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start chromium-browser: $status"
exit $status
fi
# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds
while sleep 60; do
ps aux |grep chromium-browser | grep -q -v grep
PROCESS_1_STATUS=$?
# If the greps above find anything, they exit with 0 status
# If they are not both 0, then something is wrong
if [ $PROCESS_1_STATUS -ne 0 ]; then
echo "One of the processes has already exited."
exit 1
fi
done
Lastly, I have the docker-compose file. This is where I define all my settings so that I can configure my wrapper-script.sh with what I need and scale horizontally. Notice the environment section of the docker-compose file. USERNAME and URL are environment variables, and they can be called from the wrapper script.
docker-compose.yml
version: '3.7'
services:
  chrome:
    command: [ 'sh', 'wrapper-script.sh' ]
    image: headless-chrome
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - USERNAME=eaglejs
      - URL=https://teamtreehouse.com/
    ports:
      - 9222:9222
If you are wondering what my folder structure looks like, all three files are at the root of the folder. For example:
My_Docker_Repo:
Dockerfile
docker-compose.yml
wrapper-script.sh
After that is all said and done, I simply run docker-compose up and I have one container running. Right now, with the ports section as written, you'll have to do something extra to scale this as well: if you were to run docker-compose up --scale chrome=5 your ports would clash. Let me know if you want to try that and I'll see what I can do for scaling, but other than that, if it is for testing, this should work well the way it is. :) Happy coding!
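One possible workaround for the port clash when scaling (an assumption on my part, not something shown above) is to publish the container port without pinning a host port, so Docker assigns a free host port to each replica:
ports:
  - "9222"   # container port 9222 gets a random host port per replica
You can then look up each assigned port with docker port <container> 9222.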
eaglejs

env var in a docker container cannot be echoed

I've built a docker image containing a number of environment variables, including one called SPARK_HOME. Here is the line from the Dockerfile that declares that env var:
ENV SPARK_HOME="/opt/spark"
When I issue docker run I can see that the env var exists but any reference to it doesn't return anything, as demonstrated in a simple echo:
$ docker run --rm myimage /bin/bash -c "env | grep SPARK_HOME ; echo SPARK_HOME=$SPARK_HOME"
SPARK_HOME=/opt/spark
SPARK_HOME=
$
Am I missing something obvious here? Why can I not refer to the value of an existing env var?
EDIT 1: As requested in the comments, the Dockerfile content is included below the break.
EDIT 2: I discovered that the var can be referenced if I run the container interactively:
$ docker run --rm -it myimage /bin/bash
root@419dd5f13a6f:/tmp# echo $SPARK_HOME
/opt/spark
FROM our.internal.artifact.store/python:3.7-stretch
WORKDIR /tmp
ENV SPARK_VERSION=2.2.1
ENV HADOOP_VERSION=2.8.4
ARG ARTIFACTORY_USER
ARG ARTIFACTORY_ENCRYPTED_PASSWORD
ARG ARTIFACTORY_PATH=our.internal.artifact.store/artifactory/generic-dev/ceng/external-dependencies
ARG SPARK_BINARY_PATH=https://${ARTIFACTORY_PATH}/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz
ARG HADOOP_BINARY_PATH=https://${ARTIFACTORY_PATH}/hadoop-${HADOOP_VERSION}.tar.gz
ADD files/apt-transport-https_1.4.8_amd64.deb /tmp
RUN echo "deb https://username:password#our.internal.artifact.store/artifactory/debian-main-remote stretch main" >/etc/apt/sources.list.d/main.list &&\
echo "deb https://username:password#our.internal.artifact.store/artifactory/maria-db-debian stretch main" >>/etc/apt/sources.list.d/main.list &&\
echo 'Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/02update &&\
echo 'Acquire::http::Timeout "10";' > /etc/apt/apt.conf.d/99timeout &&\
echo 'Acquire::ftp::Timeout "10";' >> /etc/apt/apt.conf.d/99timeout &&\
dpkg -i /tmp/apt-transport-https_1.4.8_amd64.deb &&\
apt-get install --allow-unauthenticated -y /tmp/apt-transport-https_1.4.8_amd64.deb &&\
apt-get update --allow-unauthenticated -y -o Dir::Etc::sourcelist="sources.list.d/main.list" -o Dir::Etc::sourceparts="-" -o APT::Get::List-Cleanup="0"
RUN apt-get update && \
apt-get -y install default-jdk
# Detect JAVA_HOME and export in bashrc.
# This will result in something like this being added to /etc/bash.bashrc
# export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
RUN echo export JAVA_HOME="$(readlink -f /usr/bin/java | sed "s:/jre/bin/java::")" >> /etc/bash.bashrc
# Configure Spark-${SPARK_VERSION}
RUN curl --fail -u "${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPTED_PASSWORD}" -X GET "${SPARK_BINARY_PATH}" -o /opt/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
&& cd /opt \
&& tar -xvzf /opt/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
&& rm spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
&& ln -s spark-${SPARK_VERSION}-bin-hadoop2.7 spark \
&& sed -i '/log4j.rootCategory=INFO, console/c\log4j.rootCategory=CRITICAL, console' /opt/spark/conf/log4j.properties.template \
&& mv /opt/spark/conf/log4j.properties.template /opt/spark/conf/log4j.properties \
&& mkdir /opt/spark-optional-jars/ \
&& mv /opt/spark/conf/spark-defaults.conf.template /opt/spark/conf/spark-defaults.conf \
&& printf "spark.driver.extraClassPath /opt/spark-optional-jars/*\nspark.executor.extraClassPath /opt/spark-optional-jars/*\n">>/opt/spark/conf/spark-defaults.conf \
&& printf "spark.driver.extraJavaOptions -Dderby.system.home=/tmp/derby" >> /opt/spark/conf/spark-defaults.conf
# Configure Hadoop-${HADOOP_VERSION}
RUN curl --fail -u "${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPTED_PASSWORD}" -X GET "${HADOOP_BINARY_PATH}" -o /opt/hadoop-${HADOOP_VERSION}.tar.gz \
&& tar -xvzf /opt/hadoop-${HADOOP_VERSION}.tar.gz \
&& rm /opt/hadoop-${HADOOP_VERSION}.tar.gz \
&& ln -s hadoop-${HADOOP_VERSION} hadoop
# Set Environment Variables.
ENV SPARK_HOME="/opt/spark" \
HADOOP_HOME="/opt/hadoop" \
PYSPARK_SUBMIT_ARGS="--master=local[*] pyspark-shell --executor-memory 1g --driver-memory 1g --conf spark.ui.enabled=false spark.executor.extrajavaoptions=-Xmx=1024m" \
PYTHONPATH="/opt/spark/python:/opt/spark/python/lib/py4j-0.10.7-src.zip:$PYTHONPATH" \
PATH="$PATH:/opt/spark/bin:/opt/hadoop/bin" \
PYSPARK_DRIVER_PYTHON="/usr/local/bin/python" \
PYSPARK_PYTHON="/usr/local/bin/python"
# Upgrade pip and setuptools
RUN pip install --index-url https://username:password@our.internal.artifact.store/artifactory/api/pypi/pypi-virtual-all/simple --upgrade pip setuptools
# Install core python packages
RUN pip install --index-url https://username:password@our.internal.artifact.store/artifactory/api/pypi/pypi-virtual-all/simple pipenv
ADD Pipfile /tmp
ADD pysparkdf_helloworld.py /tmp
Ok, contrary to my comment, that's not weird at all.
The issue is just that your local shell interpolates $SPARK_HOME before the command is sent to the container, so you're effectively calling echo SPARK_HOME=
To fix it, just escape the env var in the command: $SPARK_HOME -> \$SPARK_HOME
Demo:
$ export SPARK_HOME=foo
$ docker run ... /bin/bash -c "env | grep SPARK_HOME ; echo SPARK_HOME=$SPARK_HOME"
> SPARK_HOME=/opt/spark
> SPARK_HOME=foo
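With the variable escaped, the expansion happens inside the container instead, so the value from the image's ENV shows up in both places; a sketch of the expected result:
$ docker run --rm myimage /bin/bash -c "env | grep SPARK_HOME ; echo SPARK_HOME=\$SPARK_HOME"
> SPARK_HOME=/opt/spark
> SPARK_HOME=/opt/spark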

Reset a docker image to initial state

I'm new to Docker, and recently I tried to set up openstreetmap-tileserver. I did a manual installation by cloning the project, running docker build -t SampleMap and docker run -v openstreetmap-data:/var/lib/postgresql/10/main SampleMap import, and then the proper command to run the container. I got three images from docker image ls:
ubuntu
none
SampleMap
Everything worked fine. Next, I tried to erase the DB and repeat the whole process for a new map (a different .osm.pbf file). I removed the SampleMap image (with docker image rm) and tried to do the whole process again, but the problem is that all the DB tables still exist. It seems that all the changes were written into the Ubuntu image rather than into SampleMap. Generally speaking, is there any way I can reset the whole Ubuntu image to its initial state? It seems that all the changes are permanent in the Ubuntu image.
Here is the Dockerfile:
FROM ubuntu:18.04
# Based on
# https://switch2osm.org/manually-building-a-tile-server-18-04-lts/
# Set up environment
ENV TZ=UTC
ENV AUTOVACUUM=on
ENV UPDATES=disabled
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Install dependencies
RUN echo "deb [ allow-insecure=yes ] http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main" >> /etc/apt/sources.list.d/pgdg.list \
&& apt-get update \
&& apt-get install -y apt-transport-https ca-certificates \
&& apt-get install -y --no-install-recommends --allow-unauthenticated \
apache2 \
apache2-dev \
autoconf \
build-essential \
bzip2 \
cmake \
fonts-noto-cjk \
fonts-noto-hinted \
fonts-noto-unhinted \
clang \
gdal-bin \
git-core \
libagg-dev \
libboost-all-dev \
libbz2-dev \
libcairo-dev \
libcairomm-1.0-dev \
libexpat1-dev \
libfreetype6-dev \
libgdal-dev \
libgeos++-dev \
libgeos-dev \
libgeotiff-epsg \
libicu-dev \
liblua5.3-dev \
libmapnik-dev \
libpq-dev \
libproj-dev \
libprotobuf-c0-dev \
libtiff5-dev \
libtool \
libxml2-dev \
lua5.3 \
make \
mapnik-utils \
nodejs \
npm \
postgis \
postgresql-10 \
postgresql-10-postgis-2.5 \
postgresql-10-postgis-2.5-scripts \
postgresql-contrib-10 \
protobuf-c-compiler \
python-mapnik \
sudo \
tar \
ttf-unifont \
unzip \
wget \
zlib1g-dev \
osmosis \
osmium-tool \
cron \
python3-psycopg2 python3-shapely python3-lxml \
&& apt-get clean autoclean \
&& apt-get autoremove --yes \
&& rm -rf /var/lib/{apt,dpkg,cache,log}/
# Set up renderer user
RUN adduser --disabled-password --gecos "" renderer
USER renderer
# Install latest osm2pgsql
RUN mkdir /home/renderer/src
WORKDIR /home/renderer/src
RUN git clone https://github.com/openstreetmap/osm2pgsql.git
WORKDIR /home/renderer/src/osm2pgsql
RUN mkdir build
WORKDIR /home/renderer/src/osm2pgsql/build
RUN cmake .. \
&& make -j $(nproc)
USER root
RUN make install
USER renderer
# Install and test Mapnik
RUN python -c 'import mapnik'
# Install mod_tile and renderd
WORKDIR /home/renderer/src
RUN git clone -b switch2osm https://github.com/SomeoneElseOSM/mod_tile.git
WORKDIR /home/renderer/src/mod_tile
RUN ./autogen.sh \
&& ./configure \
&& make -j $(nproc)
USER root
RUN make -j $(nproc) install \
&& make -j $(nproc) install-mod_tile \
&& ldconfig
USER renderer
# Configure stylesheet
WORKDIR /home/renderer/src
RUN git clone https://github.com/gravitystorm/openstreetmap-carto.git
WORKDIR /home/renderer/src/openstreetmap-carto
USER root
RUN npm install -g carto
USER renderer
RUN carto project.mml > mapnik.xml
# Load shapefiles
WORKDIR /home/renderer/src/openstreetmap-carto
RUN scripts/get-shapefiles.py
# Configure renderd
USER root
RUN sed -i 's/renderaccount/renderer/g' /usr/local/etc/renderd.conf \
&& sed -i 's/hot/tile/g' /usr/local/etc/renderd.conf
USER renderer
# Configure Apache
USER root
RUN mkdir /var/lib/mod_tile \
&& chown renderer /var/lib/mod_tile \
&& mkdir /var/run/renderd \
&& chown renderer /var/run/renderd
RUN echo "LoadModule tile_module /usr/lib/apache2/modules/mod_tile.so" >> /etc/apache2/conf-available/mod_tile.conf \
&& a2enconf mod_tile
COPY apache.conf /etc/apache2/sites-available/000-default.conf
COPY leaflet-demo.html /var/www/html/index.html
RUN ln -sf /proc/1/fd/1 /var/log/apache2/access.log \
&& ln -sf /proc/1/fd/2 /var/log/apache2/error.log
# Configure PosgtreSQL
COPY postgresql.custom.conf.tmpl /etc/postgresql/10/main/
RUN chown -R postgres:postgres /var/lib/postgresql \
&& chown postgres:postgres /etc/postgresql/10/main/postgresql.custom.conf.tmpl \
&& echo "\ninclude 'postgresql.custom.conf'" >> /etc/postgresql/10/main/postgresql.conf
# copy update scripts
COPY openstreetmap-tiles-update-expire /usr/bin/
RUN chmod +x /usr/bin/openstreetmap-tiles-update-expire \
&& mkdir /var/log/tiles \
&& chmod a+rw /var/log/tiles \
&& ln -s /home/renderer/src/mod_tile/osmosis-db_replag /usr/bin/osmosis-db_replag \
&& echo "* * * * * renderer openstreetmap-tiles-update-expire\n" >> /etc/crontab
# install trim_osc.py helper script
USER renderer
RUN cd ~/src \
&& git clone https://github.com/zverik/regional \
&& cd regional \
&& git checkout 612fe3e040d8bb70d2ab3b133f3b2cfc6c940520 \
&& chmod u+x ~/src/regional/trim_osc.py
# Start running
USER root
COPY run.sh /
COPY indexes.sql /
ENTRYPOINT ["/run.sh"]
CMD []
EXPOSE 80 5432
And here is my run.sh file:
#!/bin/bash
set -x
function CreatePostgressqlConfig()
{
cp /etc/postgresql/10/main/postgresql.custom.conf.tmpl /etc/postgresql/10/main/postgresql.custom.conf
sudo -u postgres echo "autovacuum = $AUTOVACUUM" >> /etc/postgresql/10/main/postgresql.custom.conf
cat /etc/postgresql/10/main/postgresql.custom.conf
}
if [ "$#" -ne 1 ]; then
ls /home/renderer
echo "usage: <import|run>"
echo "commands:"
echo " import: Set up the database and import /data.osm.pbf"
echo " run: Runs Apache and renderd to serve tiles at /tile/{z}/{x}/{y}.png"
echo "environment variables:"
echo " THREADS: defines number of threads used for importing / tile rendering"
echo " UPDATES: consecutive updates (enabled/disabled)"
exit 1
fi
if [ "$1" = "import" ]; then
# Initialize PostgreSQL
CreatePostgressqlConfig
service postgresql start
sudo -u postgres createuser renderer
sudo -u postgres createdb -E UTF8 -O renderer gis
sudo -u postgres psql -d gis -c "CREATE EXTENSION postgis;"
sudo -u postgres psql -d gis -c "CREATE EXTENSION hstore;"
sudo -u postgres psql -d gis -c "ALTER TABLE geometry_columns OWNER TO renderer;"
sudo -u postgres psql -d gis -c "ALTER TABLE spatial_ref_sys OWNER TO renderer;"
# Download Luxembourg as sample if no data is provided
if [ ! -f /data.osm.pbf ]; then
echo "WARNING: No import file at /data.osm.pbf, so importing iran-latest as example..."
wget -nv http://download.geofabrik.de/north-america/canada-latest.osm.pbf -O /data.osm.pbf
# wget -nv http://download.geofabrik.de/europe/luxembourg.poly -O /data.poly
fi
# determine and set osmosis_replication_timestamp (for consecutive updates)
osmium fileinfo /data.osm.pbf > /var/lib/mod_tile/data.osm.pbf.info
osmium fileinfo /data.osm.pbf | grep 'osmosis_replication_timestamp=' | cut -b35-44 > /var/lib/mod_tile/replication_timestamp.txt
REPLICATION_TIMESTAMP=$(cat /var/lib/mod_tile/replication_timestamp.txt)
# initial setup of osmosis workspace (for consecutive updates)
sudo -u renderer openstreetmap-tiles-update-expire $REPLICATION_TIMESTAMP
# copy polygon file if available
if [ -f /data.poly ]; then
sudo -u renderer cp /data.poly /var/lib/mod_tile/data.poly
fi
# Import data
sudo -u renderer osm2pgsql -d gis --create --slim -G --hstore --tag-transform-script /home/renderer/src/openstreetmap-carto/openstreetmap-carto.lua -C 2048 --number-processes ${THREADS:-4} -S /home/renderer/src/openstreetmap-carto/openstreetmap-carto.style /data.osm.pbf
# Create indexes
sudo -u postgres psql -d gis -f indexes.sql
service postgresql stop
exit 0
fi
if [ "$1" = "run" ]; then
# Clean /tmp
rm -rf /tmp/*
# Fix postgres data privileges
chown postgres:postgres /var/lib/postgresql -R
# Initialize PostgreSQL and Apache
CreatePostgressqlConfig
service postgresql start
service apache2 restart
# Configure renderd threads
sed -i -E "s/num_threads=[0-9]+/num_threads=${THREADS:-4}/g" /usr/local/etc/renderd.conf
# start cron job to trigger consecutive updates
if [ "$UPDATES" = "enabled" ]; then
/etc/init.d/cron start
fi
# Run
sudo -u renderer renderd -f -c /usr/local/etc/renderd.conf
service postgresql stop
exit 0
fi
echo "invalid command"
exit 1
When you create a container from your image, you mount a volume using the -v option:
docker run -v openstreetmap-data:/var/lib/postgresql/10/main SampleMap import
Your persistent data is stored in the named volume openstreetmap-data. That data is not in your container (which is created fresh every time) or in any image; it is mounted from your host's filesystem. That's why it persists.
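So there is no image to reset; to start from a clean state, remove the named volume and re-run the import (a sketch, assuming no other container still uses that volume):
# remove any container that uses the volume, then:
docker volume rm openstreetmap-data
# re-running the import creates a fresh, empty volume
docker run -v openstreetmap-data:/var/lib/postgresql/10/main SampleMap import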

Dockerfile entrypoint unable to switch user

I am unable to switch to a non-root user from the entrypoint script. The USER directive to change the user in the Dockerfile works, but then I am not able to change permissions using chmod. To overcome this issue I created an entrypoint.sh script to change the folder permissions, but when I try to switch user with the su command it apparently doesn't work: the container is still running as root.
The Dockerfile
FROM php:7.2-fpm
# Installing dependencies
RUN apt-get update && apt-get install -y \
build-essential \
mysql-client \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl
# Installing composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
ENV USER_ID=1000
ENV GROUP_ID=1000
ENV USER_NAME=www
ENV GROUP_NAME=www
RUN groupadd -g $GROUP_ID $GROUP_NAME
RUN useradd -u $USER_ID -ms /bin/bash -g $GROUP_NAME $USER_NAME
RUN mkdir /app
WORKDIR /app
EXPOSE 9000
COPY ./entrypoint.sh /
RUN ["chmod", "+x", "/entrypoint.sh"]
ENTRYPOINT ["/entrypoint.sh"]
Entrypoint.sh file
#!/bin/bash
if [ -n "$USER_ID" -a -n "$GROUP_ID" ]; then
chown -R $USER_NAME:$GROUP_NAME .
su $USER_NAME
fi
php-fpm
exec "$#"
Whatever I do, I am not able to switch user from the entrypoint.sh script.
My use case is to run the container as a non-root user.
I think that your su command should be something like
su $USERNAME --command "/doit.sh"
because your entrypoint script is switching user, doing nothing, and then switching back to root.
To solve this you need to change your dockerfile and add:
RUN echo "root ALL = NOPASSWD: /bin/su ALL" >> /etc/sudoers
Or use gosu, which is better:
# install gosu
# seealso:
# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
# https://github.com/tianon/gosu/blob/master/INSTALL.md
# https://github.com/tianon/gosu
RUN set -eux; \
apt-get update; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
# verify that the binary works
gosu nobody true
Then inside entrypoint.sh:
gosu root yourservice &
#ie: gosu root /usr/sbin/sshd -D &
exec gosu no-root-user yourservice2
# ie: exec gosu no-root-user tail -f /dev/null
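Adapted to the entrypoint from the question, a minimal sketch (assuming gosu is installed as above and USER_NAME/GROUP_NAME keep the values from the Dockerfile) could be:
#!/bin/bash
set -e
if [ -n "$USER_ID" ] && [ -n "$GROUP_ID" ]; then
    chown -R "$USER_NAME:$GROUP_NAME" .
fi
# drop privileges and run php-fpm as the non-root user (PID 1)
exec gosu "$USER_NAME" php-fpm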

How to run 2 services during 'docker run'?

I have a Dockerfile that creates an image with Apache/PHP and Redis inside.
I am aware that it should be split into 2 containers, but I want to know whether it is possible to start Apache and Redis during the run process.
For now I can run it in two different ways:
docker run --rm -p 80:80 -p 6379:6379 -v $MY_FULLPATH:/var/www/html -e REMOTE_HOST=$REMOTE_HOST my_img redis-server
docker run --rm -p 80:80 -p 6379:6379 -v $MV_FULLPATH:/var/www/html -e REMOTE_HOST=$REMOTE_HOST my_img apache2-foreground
If I run using the first method I must open a terminal and start Apache manually.
If I run using the second one I must start Redis manually.
According to the documentation, "If you list more than one CMD then only the last CMD will take effect", so I know that only "redis-server" would be running at the start.
So is there a way to start both automatically?
This is my Dockerfile:
FROM php:5-apache
## Update apt-get
RUN apt-get update
RUN apt-get install -y figlet
RUN figlet MV_Docker_Build
## UTILITIES
RUN figlet vim
RUN apt-get install -y vim
RUN figlet wget
RUN apt-get install -y wget
RUN figlet CURL
RUN apt-get install -y curl
## APACHE2 basic installation
RUN figlet APACHE2
RUN apachectl -M
RUN a2enmod rewrite
RUN a2enmod expires
RUN service apache2 restart
RUN apachectl -M
## ====================================================================== > PHP modules
## Note: when installing from php5 for some modules we need to copy from php5/mods-available to local/etc/php/conf.d and create a simbolic link
RUN figlet PHP_MODULES
RUN php -m
RUN apt-get install -y php5-common
RUN apt-get install -y php-calendar
#RUN cp /etc/php5/mods-available/calendar.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/calendar.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/calendar.so
#RUN docker-php-ext-install calendar
RUN docker-php-ext-install bcmath
RUN apt-get install -y php5-mhash
#RUN cp /etc/php5/mods-available/mhash.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/mhash.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/mhash.so
RUN apt-get install -y php5-intl
RUN cp /etc/php5/mods-available/intl.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/intl.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/intl.so
RUN apt-get install -y php5-mcrypt
RUN cp /etc/php5/mods-available/mcrypt.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/mcrypt.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/mcrypt.so
RUN apt-get install -y php5-redis
RUN cp /etc/php5/mods-available/redis.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/redis.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/redis.so
RUN apt-get install -y php5-mysql
RUN cp /etc/php5/mods-available/mysql.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/mysql.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/mysql.so
RUN cp /etc/php5/mods-available/opcache.ini /usr/local/etc/php/conf.d
RUN apt-get install -y php5-gd
RUN cp /etc/php5/mods-available/gd.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/gd.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/gd.so
RUN apt-get install -y php5-gdcm
RUN cp /etc/php5/mods-available/gdcm.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/gdcm.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/gdcm.so
RUN apt-get install -y php5-vtkgdcm
RUN cp /etc/php5/mods-available/vtkgdcm.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/vtkgdcm.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/vtkgdcm.so
RUN apt-get install -y php5-ldap
RUN cp /etc/php5/mods-available/ldap.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/ldap.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/ldap.so
RUN apt-get install -y php5-xsl
RUN cp /etc/php5/mods-available/xsl.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/xsl.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/xsl.so
RUN apt-get install -y php5-tidy
RUN cp /etc/php5/mods-available/tidy.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/tidy.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/tidy.so
RUN apt-get install -y php5-xmlrpc
RUN cp /etc/php5/mods-available/xmlrpc.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/xmlrpc.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/xmlrpc.so
RUN apt-get install -y php5-pgsql
RUN cp /etc/php5/mods-available/pgsql.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/pgsql.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/pgsql.so
RUN cp /etc/php5/mods-available/mysqli.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/mysqli.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/mysqli.so
RUN cp /etc/php5/mods-available/pdo.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/pdo.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/pdo.so
RUN cp /etc/php5/mods-available/pdo_mysql.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/pdo_mysql.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/pdo_mysql.so
RUN cp /etc/php5/mods-available/pdo_pgsql.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/pdo_pgsql.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/pdo_pgsql.so
RUN cp /etc/php5/mods-available/readline.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/readline.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/readline.so
#RUN apt-get install -y php5-snmp
#RUN cp /etc/php5/mods-available/snmp.ini /usr/local/etc/php/conf.d && ln -s /usr/lib/php5/20131226/snmp.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/snmp.so
RUN figlet PHP_MODULES
RUN php -m
## ====================================================================== > End of PHP modules
## ====================================================================== > REDIS
RUN figlet REDIS
RUN apt-get install -y telnet redis-server
RUN apt-get install -y redis-server
## ====================================================================== > NPM
RUN figlet NPM
RUN apt-get install -y npm
## ====================================================================== > COPYING php.ini
RUN figlet COPYING__php.ini
RUN cp /etc/php5/cli/php.ini /usr/local/etc/php/
RUN ls -l /usr/local/etc/
## ====================================================================== > XDEBUG
# XDEBUG EXTENSION FOR PHP | DOCUMENTATION => https://xdebug.org/docs/remote
#
# install xdebug and enable it. This block of code goes through the installion from source and compiling steps found
# on the xdebug website
# https://xdebug.org/docs/install
RUN figlet INSTALLING__XDEBUG
RUN cd /tmp \
&& wget http://xdebug.org/files/xdebug-2.5.4.tgz \
&& tar -xvzf xdebug-2.5.4.tgz \
&& cd xdebug-2.5.4 \
&& phpize \
&& ./configure \
&& make \
&& cp modules/xdebug.so /usr/local/lib/php/extensions/no-debug-non-zts-20131226/
RUN figlet INSIDE_no-debug-non-zts-20131226/
RUN ls -l /usr/local/lib/php/extensions/no-debug-non-zts-20131226/
#https://stackoverflow.com/questions/47596381/how-to-setup-an-variable-env-inside-dockerfile-to-be-overriden-in-a-docker-conta?noredirect=1#comment82150863_47596381
# ADD xdebug configurations
RUN figlet SETTING__XDEBUG__php.ini
RUN { \
echo '[xdebug]'; \
echo 'zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20131226/xdebug.so'; \
echo 'xdebug.remote_enable=1'; \
echo 'xdebug.remote_port=9000'; \
echo 'xdebug.remote_autostart=1'; \
echo 'xdebug.remote_handler=dbgp'; \
echo 'xdebug.idekey=dockerdebug'; \
echo 'xdebug.profiler_output_dir="/var/www/html"'; \
echo 'xdebug.remote_connect_back=0'; \
echo 'xdebug.remote_host=$REMOTE_HOST'; \
} >> /usr/local/etc/php/php.ini
RUN figlet XDEGUB__IN__php.ini
RUN cat /usr/local/etc/php/php.ini
## ====================================================================== > COMPOSER
RUN figlet Escape_SUDO
RUN exit
RUN figlet Install__COMPOSER
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
&& php -r "if (hash_file('SHA384', 'composer-setup.php') === '544e09ee996cdf60ece3804abc52599c22b1f40f4323403c44d44fdfdd586475ca9813a858088ffbc1f233e9b180f061') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
&& php composer-setup.php \
&& php -r "unlink('composer-setup.php');" \
&& mv composer.phar /usr/bin/composer
RUN composer
## ====================================================================== > PhpUnit
RUN figlet PhpUnit
RUN curl https://phar.phpunit.de/phpunit-5.6.0.phar -L -o phpunit.phar
RUN chmod +x phpunit.phar
RUN mv phpunit.phar /usr/local/bin/phpunit
RUN figlet COPYING_entrypoint.sh
COPY entrypoint.sh /usr/local/bin/
RUN figlet Permission_entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT [ "entrypoint.sh" ]
# EXPOSE - PORTS
RUN figlet EXPOSE_PORTS
EXPOSE 80
#EXPOSE 6379
EXPOSE 9000
#CMD ["apache2-foreground","redis-server"]
#ADD run.sh /run.sh
COPY run.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/run.sh
CMD ["/bin/sh", "-c", "/run.sh"]
This is the run.sh
#!/usr/bin/env bash
exec apache2-foreground &
exec redis-server &
This is entrypoint.sh
#!/bin/bash
set -e
# Check if our environment variable has been passed.
if [ -z "${REMOTE_HOST}" ]
then
echo "REMOTE_HOST has not been set."
exit 1
else
sed -i.bak "s/\$REMOTE_HOST/${REMOTE_HOST}/g" /usr/local/etc/php/php.ini
fi
exec "$#"
You can start more than one process a couple of ways:
Start them as a service
Start them through a cron job (@reboot)
Start processes in backgound
UPDATE after your Dockerfile post
Before I try to answer, a couple of pointers:
Every RUN command in the Dockerfile creates a new layer, which makes the image bigger and the build slower.
This container clearly tries to do too much. A container should do one thing and do it well.
Having said that, I think I have a solution :-)
remove the run.sh
change your entrypoint to this:
#!/bin/bash
set -e
# Check if our environment variable has been passed.
if [ -z "${REMOTE_HOST}" ]
then
echo "REMOTE_HOST has not been set."
exit 1
else
sed -i.bak "s/\$REMOTE_HOST/${REMOTE_HOST}/g" /usr/local/etc/php/php.ini
fi
echo "Starting redis"
exec redis-server &
exec "$#"
and The end of your Dockerfile to this:
RUN figlet EXPOSE_PORTS
EXPOSE 80
#EXPOSE 6379
EXPOSE 9000
CMD ["apache2-foreground"]
rebuild and have fun :-)
Screenshot of my running console
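If you later do split this into the two containers you mentioned, a minimal docker-compose sketch (service and image names are assumptions) could look like this; the PHP code would then reach Redis at the hostname redis instead of localhost:
version: '3'
services:
  web:
    build: .
    ports:
      - '80:80'
    environment:
      REMOTE_HOST: ${REMOTE_HOST}
    volumes:
      - ./:/var/www/html
  redis:
    image: redis:alpine
    ports:
      - '6379:6379'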
