I am new to the Docker community and want a web panel image for my PHP website. When I run the docker image build command, the execution fails with an error. Please help me; four days have passed and I am still stuck on this.
Here is a screenshot of the error (link).
# Dockerfile
FROM lolhens/baseimage:latest
MAINTAINER LolHens <pierrekisters@gmail.com>
RUN apt-get update
RUN apt-get install -y \
curl
RUN cd /tmp
RUN curl http://vestacp.com/pub/vst-install.sh | bash -s -- \
-y no -f \
--password admin \
--nginx yes \
--apache yes \
--phpfpm no \
--vsftpd no \
--proftpd no \
--exim yes \
--dovecot yes \
--spamassassin yes \
--clamav yes \
--named yes \
--iptables no \
--fail2ban no \
--mysql no \
--postgresql yes \
--remi yes \
--quota yes \
cleanimage
ADD dovecot /etc/init.d/dovecot
RUN chmod +x /etc/init.d/dovecot
RUN cd /usr/local/vesta/data/ips && mv * 127.0.0.1 \
&& cd /etc/apache2/conf.d && sed -i -- 's/172.*.*.*:80/127.0.0.1:80/g' * && sed -i -- 's/172.*.*.*:8443/127.0.0.1:8443/g' * \
&& cd /etc/nginx/conf.d && sed -i -- 's/172.*.*.*:80;/80;/g' * && sed -i -- 's/172.*.*.*:8080/127.0.0.1:8080/g' * \
&& cd /home/admin/conf/web && sed -i -- 's/172.*.*.*:80;/80;/g' * && sed -i -- 's/172.*.*.*:8080/127.0.0.1:8080/g' *
ADD startup.sh /etc/my_init.d/startup.sh
RUN chmod +x /etc/my_init.d/startup.sh
CMD bash
EXPOSE 80 8083 8080 3306 443 25 993 110 53 54
You need to replace the first part of your Dockerfile with the snippet below (the rest stays the same I guess - I can't test that part). I added inline comments explaining the new lines.
# Dockerfile
FROM lolhens/baseimage:latest
MAINTAINER LolHens <pierrekisters@gmail.com>
RUN apt-get update
RUN apt-get install -y \
curl
RUN cd /tmp
# First save the script locally
RUN curl -O http://vestacp.com/pub/vst-install.sh
# Make the script runnable
RUN chmod a+x ./vst-install.sh
# Disable the apt-get check for unauthenticated packages
RUN echo "APT::Get::AllowUnauthenticated \"true\";" > /etc/apt/apt.conf.d/99vesta
# Pass 'yes' automatically when asked
RUN yes | ./vst-install.sh -y no -p admin \
--nginx yes \
--apache yes \
--phpfpm no \
--vsftpd no \
--proftpd no \
--exim yes \
--dovecot yes \
--spamassassin yes \
--clamav yes \
--named yes \
--iptables no \
--fail2ban no \
--mysql no \
--postgresql yes \
--remi yes \
--quota yes \
cleanimage
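For completeness, once the install succeeds, building and running the image would look roughly like this (a hedged example; the image name is a placeholder and the published ports are taken from your EXPOSE line):
docker build -t vesta-panel .
docker run -d -p 80:80 -p 8083:8083 vesta-panel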
Related
I'm trying to build a custom image of opentsdb that runs as a non-root user. Our k8s clusters have security policies that don't allow containers to run as root. I'm using an existing Dockerfile from https://hub.docker.com/r/petergrace/opentsdb-docker/dockerfile
Below is my Dockerfile, where I have added an extra step to create a new user 'opentsdb' and, at the end, run as USER 'opentsdb'.
FROM alpine:latest
ENV TINI_VERSION v0.18.0
ENV TSDB_VERSION 2.4.0
ENV HBASE_VERSION 1.4.4
ENV GNUPLOT_VERSION 5.2.4
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
ENV PATH $PATH:/usr/lib/jvm/java-1.8-openjdk/bin/
ENV ALPINE_PACKAGES "rsyslog bash openjdk8 make wget libgd libpng libjpeg libwebp libjpeg-turbo cairo pango lua"
ENV BUILD_PACKAGES "build-base autoconf automake git python3-dev cairo-dev pango-dev gd-dev lua-dev readline-dev libpng-dev libjpeg-turbo-dev libwebp-dev sed"
ENV HBASE_OPTS "-XX:+UseConcMarkSweepGC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
ENV JVMARGS "-XX:+UseConcMarkSweepGC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -enableassertions -enablesystemassertions"
RUN addgroup opentsdb && adduser -D -u 100 -G opentsdb opentsdb
# Tini is a tiny init that helps when a container is being culled to stop things nicely
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-amd64 /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
# Add the base packages we'll need
RUN apk --update add apk-tools \
&& apk add ${ALPINE_PACKAGES} \
# repo required for gnuplot \
--repository http://dl-cdn.alpinelinux.org/alpine/v3.0/testing/ \
&& mkdir -p /opt/opentsdb
WORKDIR /opt/opentsdb/
# Add build deps, build opentsdb, and clean up afterwards.
RUN set -ex && apk add --virtual builddeps ${BUILD_PACKAGES}
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN wget --no-check-certificate \
-O v${TSDB_VERSION}.zip \
https://github.com/OpenTSDB/opentsdb/archive/v${TSDB_VERSION}.zip \
&& unzip v${TSDB_VERSION}.zip \
&& rm v${TSDB_VERSION}.zip \
&& cd /opt/opentsdb/opentsdb-${TSDB_VERSION} \
&& echo "tsd.http.request.enable_chunked = true" >> src/opentsdb.conf \
&& echo "tsd.http.request.max_chunk = 1000000" >> src/opentsdb.conf
RUN cd /opt/opentsdb/opentsdb-${TSDB_VERSION} \
&& find . | xargs grep -s central.maven.org | cut -f1 -d : | xargs sed -i "s/http:\/\/central/https:\/\/repo1/g" \
&& find . | xargs grep -s repo1.maven.org | cut -f1 -d : | xargs sed -i "s/http:\/\/repo1/https:\/\/repo1/g" \
&& ./build.sh \
&& cp build-aux/install-sh build/build-aux \
&& cd build \
&& make install \
&& cd / \
&& rm -rf /opt/opentsdb/opentsdb-${TSDB_VERSION}
RUN cd /tmp && \
wget --no-check-certificate https://sourceforge.net/projects/gnuplot/files/gnuplot/${GNUPLOT_VERSION}/gnuplot-${GNUPLOT_VERSION}.tar.gz && \
tar xzf gnuplot-${GNUPLOT_VERSION}.tar.gz && \
cd gnuplot-${GNUPLOT_VERSION} && \
./configure && \
make install && \
cd /tmp && rm -rf /tmp/gnuplot-${GNUPLOT_VERSION} && rm /tmp/gnuplot-${GNUPLOT_VERSION}.tar.gz
RUN apk del builddeps && rm -rf /var/cache/apk/*
#Install HBase and scripts
RUN mkdir -p /data/hbase /root/.profile.d /opt/downloads
WORKDIR /opt/downloads
RUN wget -O hbase-${HBASE_VERSION}.bin.tar.gz http://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz \
&& tar xzvf hbase-${HBASE_VERSION}.bin.tar.gz \
&& mv hbase-${HBASE_VERSION} /opt/hbase \
&& rm -r /opt/hbase/docs \
&& rm hbase-${HBASE_VERSION}.bin.tar.gz
# Add misc startup files
RUN ln -s /usr/local/share/opentsdb/etc/opentsdb /etc/opentsdb \
&& rm /etc/opentsdb/opentsdb.conf \
&& mkdir /opentsdb-plugins
ADD files/opentsdb.conf /etc/opentsdb/opentsdb.conf.sample
ADD files/hbase-site.xml /opt/hbase/conf/hbase-site.xml.sample
ADD files/start_opentsdb.sh /opt/bin/
ADD files/create_tsdb_tables.sh /opt/bin/
ADD files/start_hbase.sh /opt/bin/
ADD files/entrypoint.sh /entrypoint.sh
# Fix ENV variables in installed scripts
RUN for i in /opt/bin/start_hbase.sh /opt/bin/start_opentsdb.sh /opt/bin/create_tsdb_tables.sh; \
do \
sed -i "s#::JAVA_HOME::#$JAVA_HOME#g; s#::PATH::#$PATH#g; s#::TSDB_VERSION::#$TSDB_VERSION#g;" $i; \
done
RUN echo "export HBASE_OPTS=\"${HBASE_OPTS}\"" >> /opt/hbase/conf/hbase-env.sh
#4242 is tsdb, rest are hbase ports
EXPOSE 60000 60010 60030 4242 16010 16070
USER opentsdb
#HBase is configured to store data in /data/hbase, vol-mount it to persist your data.
VOLUME ["/data/hbase", "/tmp", "/opentsdb-plugins"]
CMD ["/entrypoint.sh"]
However, the newly built image throws an error saying permission denied for the /opt/bin/ files, and opentsdb is not getting deployed correctly.
Locally, using Docker Desktop, everything works fine as root when I run the command below:
docker run -dp 4242:4242 petergrace/opentsdb-docker
Do I need to use any chown commands too?
Could you help me get opentsdb deployed correctly as uid 100? Thanks in advance!
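Edit: would something like the following, placed just before the USER opentsdb line, be the right direction? This is only an untested sketch; the paths are the ones used in my Dockerfile above.
# give the non-root user ownership of the ADDed scripts and data/config dirs (untested)
RUN chown -R opentsdb:opentsdb /opt/bin /opt/hbase /data/hbase /etc/opentsdb /opentsdb-plugins \
    && chmod +x /opt/bin/*.sh /entrypoint.sh
USER opentsdb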
I am struggling to find the cause of the following error after building an image and trying to run it. The error is below:
standard_init_linux.go:228: exec user process caused: no such file or directory
the Dockerfile
FROM rocker/r-ver:3.6.3
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev \
libxt-dev \
xtail \
wget \
dos2unix
RUN wget --no-verbose https://download3.rstudio.org/ubuntu-14.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://download3.rstudio.org/ubuntu-14.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb && \
. /etc/environment && \
R -e "install.packages(c('shiny', 'rmarkdown'), repos='$MRAN')" && \
cp -R /usr/local/lib/R/site-library/shiny/examples/* /srv/shiny-server/ && \
chown shiny:shiny /var/lib/shiny-server
EXPOSE 3838
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
the file shiny-server.sh
#!/bin/sh
# Make sure the directory for individual app logs exists
mkdir -p /var/log/shiny-server
chown shiny.shiny /var/log/shiny-server
if [ "$APPLICATION_LOGS_TO_STDOUT" != "false" ];
then
# push the "real" application logs to stdout with xtail in detached mode
exec xtail /var/log/shiny-server/ &
fi
# start shiny server
exec shiny-server 2>&1
Appreciate any help
Correct the Windows line endings in shiny-server.sh, e.g. with Notepad++; this "no such file or directory" error usually means the shebang line ends in a carriage return, so the kernel cannot find the interpreter. That should work.
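If you'd rather fix it inside the image, a sketch of the same idea using dos2unix, which your Dockerfile already installs (assuming the script is still copied to /usr/bin):
COPY shiny-server.sh /usr/bin/shiny-server.sh
# strip the CRLF line endings and make sure the script is executable
RUN dos2unix /usr/bin/shiny-server.sh \
    && chmod +x /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]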
When I use the official elasticsearch Docker image, the ELASTIC_PASSWORD env variable works fine:
docker run -dti -e ELASTIC_PASSWORD=my_own_password -e discovery.type=single-node elasticsearch:7.8.0
But when I build my own customized Docker image, ELASTIC_PASSWORD does not work. Can you please help me with this?
Here is my Dockerfile:
FROM ubuntu:18.04
ENV \
REFRESHED_AT=2020-06-20
###############################################################################
# INSTALLATION
###############################################################################
### install prerequisites (cURL, gosu, tzdata, JDK for Logstash)
RUN set -x \
&& apt update -qq \
&& apt install -qqy --no-install-recommends ca-certificates curl gosu tzdata openjdk-11-jdk-headless \
&& apt clean \
&& rm -rf /var/lib/apt/lists/* \
&& gosu nobody true \
&& set +x
### set current package version
ARG ELK_VERSION=7.8.0
### install Elasticsearch
# predefine env vars, as you can't define an env var that references another one in the same block
ENV \
ES_VERSION=${ELK_VERSION} \
ES_HOME=/opt/elasticsearch
ENV \
ES_PACKAGE=elasticsearch-${ES_VERSION}-linux-x86_64.tar.gz \
ES_GID=991 \
ES_UID=991 \
ES_PATH_CONF=/etc/elasticsearch \
ES_PATH_BACKUP=/var/backups \
KIBANA_VERSION=${ELK_VERSION}
RUN DEBIAN_FRONTEND=noninteractive \
&& mkdir ${ES_HOME} \
&& curl -O https://artifacts.elastic.co/downloads/elasticsearch/${ES_PACKAGE} \
&& tar xzf ${ES_PACKAGE} -C ${ES_HOME} --strip-components=1 \
&& rm -f ${ES_PACKAGE} \
&& groupadd -r elasticsearch -g ${ES_GID} \
&& useradd -r -s /usr/sbin/nologin -M -c "Elasticsearch service user" -u ${ES_UID} -g elasticsearch elasticsearch \
&& mkdir -p /var/log/elasticsearch ${ES_PATH_CONF} ${ES_PATH_CONF}/scripts /var/lib/elasticsearch ${ES_PATH_BACKUP}
I think that to achieve this functionality (setting ELASTIC_PASSWORD on the command line and having it take effect) in your own container, you need to re-create what the Elasticsearch startup script does. It's not a trivial task.
For example, here is the docker-entrypoint.sh from the official Docker image:
https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/bin/docker-entrypoint.sh
You can see that the script does all the 'hidden' work that lets us run the image with a single command.
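For illustration, the relevant part of that logic is roughly the following (a simplified sketch, not the verbatim script; I'm reusing the ES_HOME variable from your Dockerfile): if ELASTIC_PASSWORD is set, it has to end up in the Elasticsearch keystore as bootstrap.password before the server starts.
# sketch of what a custom entrypoint would need to do
if [ -n "$ELASTIC_PASSWORD" ]; then
  # create the keystore if it does not exist yet
  [ -f "$ES_HOME/config/elasticsearch.keystore" ] || "$ES_HOME/bin/elasticsearch-keystore" create
  # read the password from stdin (-x) and store it as the bootstrap password
  echo "$ELASTIC_PASSWORD" | "$ES_HOME/bin/elasticsearch-keystore" add -x bootstrap.password
fi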
I ran a docker container with a host directory bound to a container directory, but the permissions of the container directory and its files come out differently depending on the host.
docker run -w /vlc-android -v $(pwd)/vlc-android:/vlc-android --rm vlc-android:latest bash -c "ls -ld /vlc-android"
result on Mac OS 10.14.6 (Docker desktop version 2.1.0.3)
drwxr-xr-x 2 videolan videolan 64 Sep 27 04:34 /vlc-android
result on Ubuntu server 18.04.3
drwxr-xr-x 2 root root 4096 Sep 27 06:11 /vlc-android
I'm trying to build the VLC player Android app from the source code via a docker image of the vlc-android build environment, shown below...
FROM debian:stretch-20190506
MAINTAINER VideoLAN roots <roots@videolan.org>
ENV IMAGE_DATE=201907171600
ENV ANDROID_NDK="/sdk/android-ndk" \
ANDROID_SDK="/sdk/android-sdk-linux"
# If someone wants to use VideoLAN docker images on a local machine and does
# not want to be disturbed by the videolan user, we should not take an uid/gid
# in the user range of main distributions, which means:
# - Debian based: <1000
# - RPM based: <500 (CentOS, RedHat, etc.)
ARG VIDEOLAN_CI_UID=499
RUN addgroup --quiet --gid ${VIDEOLAN_CI_UID} videolan && \
adduser --quiet --uid ${VIDEOLAN_CI_UID} --ingroup videolan videolan && \
echo "videolan:videolan" | chpasswd && \
apt-get update && \
apt-get install --no-install-suggests --no-install-recommends -y \
openjdk-8-jdk-headless ca-certificates autoconf m4 automake ant autopoint bison \
flex build-essential libtool libtool-bin patch pkg-config ragel subversion \
git rpm2cpio libwebkitgtk-1.0-0 yasm ragel g++ protobuf-compiler gettext \
libgsm1-dev wget expect unzip python python3 locales libltdl-dev curl && \
echo "deb http://ftp.debian.org/debian stretch-backports main" > /etc/apt/sources.list.d/stretch-backports.list && \
apt-get update && apt-get -y -t stretch-backports install cmake && \
rm -f /etc/apt/sources.list.d/stretch-backports.list && \
echo "deb http://deb.debian.org/debian testing main" > /etc/apt/sources.list.d/testing.list && \
apt-get update && apt-get -y -t testing --no-install-suggests --no-install-recommends install automake && \
rm -f /etc/apt/sources.list.d/testing.list && \
apt-get clean -y && rm -rf /var/lib/apt/lists/* && \
localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 && \
echo "export ANDROID_NDK=${ANDROID_NDK}" >> /etc/profile.d/vlc_env.sh && \
echo "export ANDROID_SDK=${ANDROID_SDK}" >> /etc/profile.d/vlc_env.sh && \
mkdir sdk && cd sdk && \
wget -q https://dl.google.com/android/repository/android-ndk-r18b-linux-x86_64.zip && \
ANDROID_NDK_SHA256=4f61cbe4bbf6406aa5ef2ae871def78010eed6271af72de83f8bd0b07a9fd3fd && \
echo $ANDROID_NDK_SHA256 android-ndk-r18b-linux-x86_64.zip | sha256sum -c && \
unzip android-ndk-r18b-linux-x86_64.zip && \
rm -f android-ndk-r18b-linux-x86_64.zip && \
ln -s android-ndk-r18b android-ndk && \
mkdir android-sdk-linux && \
cd android-sdk-linux && \
mkdir "licenses" && \
echo "24333f8a63b6825ea9c5514f83c2829b004d1fee" > "licenses/android-sdk-license" && \
echo "d56f5187479451eabf01fb78af6dfcb131a6481e" >> "licenses/android-sdk-license" && \
wget -q https://dl.google.com/android/repository/sdk-tools-linux-3859397.zip && \
SDK_TOOLS_SHA256=444e22ce8ca0f67353bda4b85175ed3731cae3ffa695ca18119cbacef1c1bea0 && \
echo $SDK_TOOLS_SHA256 sdk-tools-linux-3859397.zip | sha256sum -c && \
unzip sdk-tools-linux-3859397.zip && \
rm -f sdk-tools-linux-3859397.zip && \
tools/bin/sdkmanager "build-tools;26.0.1" "platform-tools" "platforms;android-26" && \
chown -R videolan /sdk
ENV LANG en_US.UTF-8
USER videolan
RUN git config --global user.name "VLC Android" && \
git config --global user.email buildbot@videolan.org
and built it like below
docker build -t vlc-android .
I want the user "videolan" to be the owner of the container directory "/vlc-android" and of all files under it when the container runs on Ubuntu server 18.04.3, just like in the "result on Mac OS 10.14.6 (Docker desktop version 2.1.0.3)" above.
How can I do that?
When you mount a volume on Linux, the resulting folder in the docker container gets the same ownership and permissions as the folder on the host. If the folder on the host is owned by root, it will also be owned by root inside the docker container.
To fix your problem, you have to change the owner of $(pwd)/vlc-android to match the user id used in the container (according to the Dockerfile you attached to your question, the UID is 499).
Try to execute this:
sudo chown 499 -R $(pwd)/vlc-android
then restart the container.
EDIT:
Another solution, if you're able to rebuild the docker image on the Ubuntu server, is to regenerate the image so that it uses the folder owner's id instead of 499.
You simply have to fetch the folder owner's ID (try to avoid the root user):
id $username
and regenerate the docker image using the following command:
USER_ID=1000
docker build \
-t my_new_vlc_androing_thingy \
--build-arg VIDEOLAN_CI_UID=${USER_ID} \
.
and run it with:
docker run --rm \
-w /vlc-android \
-v $(pwd)/vlc-android:/vlc-android \
my_new_vlc_androing_thingy \
bash -c "ls -ld /vlc-android"
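To double-check which UID actually ended up in the rebuilt image, a quick sanity check (untested sketch):
docker run --rm my_new_vlc_androing_thingy id videolan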
I'm new to docker and recently tried to set up openstreetmap-tileserver. I did a manual installation by cloning the project, running docker build -t SampleMap and docker run -v openstreetmap-data:/var/lib/postgresql/10/main SampleMap import, and then running the proper command to start the container. I got three images with docker image ls:
ubuntu
none
SampleMap
Everything worked fine. Next, I tried to erase the DB and repeat the whole process for a new map (a different .osm.pbf file). I removed the SampleMap image (with docker image rm) and tried to do the whole process again, but the problem is that all the DB tables still exist. It seems that all the changes are written into the Ubuntu image rather than into SampleMap. In general, is there any way I can reset the whole Ubuntu image to its initial state? It seems that all the changes are permanent in the Ubuntu image.
Here is the Dockerfile:
FROM ubuntu:18.04
# Based on
# https://switch2osm.org/manually-building-a-tile-server-18-04-lts/
# Set up environment
ENV TZ=UTC
ENV AUTOVACUUM=on
ENV UPDATES=disabled
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Install dependencies
RUN echo "deb [ allow-insecure=yes ] http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main" >> /etc/apt/sources.list.d/pgdg.list \
&& apt-get update \
&& apt-get install -y apt-transport-https ca-certificates \
&& apt-get install -y --no-install-recommends --allow-unauthenticated \
apache2 \
apache2-dev \
autoconf \
build-essential \
bzip2 \
cmake \
fonts-noto-cjk \
fonts-noto-hinted \
fonts-noto-unhinted \
clang \
gdal-bin \
git-core \
libagg-dev \
libboost-all-dev \
libbz2-dev \
libcairo-dev \
libcairomm-1.0-dev \
libexpat1-dev \
libfreetype6-dev \
libgdal-dev \
libgeos++-dev \
libgeos-dev \
libgeotiff-epsg \
libicu-dev \
liblua5.3-dev \
libmapnik-dev \
libpq-dev \
libproj-dev \
libprotobuf-c0-dev \
libtiff5-dev \
libtool \
libxml2-dev \
lua5.3 \
make \
mapnik-utils \
nodejs \
npm \
postgis \
postgresql-10 \
postgresql-10-postgis-2.5 \
postgresql-10-postgis-2.5-scripts \
postgresql-contrib-10 \
protobuf-c-compiler \
python-mapnik \
sudo \
tar \
ttf-unifont \
unzip \
wget \
zlib1g-dev \
osmosis \
osmium-tool \
cron \
python3-psycopg2 python3-shapely python3-lxml \
&& apt-get clean autoclean \
&& apt-get autoremove --yes \
&& rm -rf /var/lib/{apt,dpkg,cache,log}/
# Set up renderer user
RUN adduser --disabled-password --gecos "" renderer
USER renderer
# Install latest osm2pgsql
RUN mkdir /home/renderer/src
WORKDIR /home/renderer/src
RUN git clone https://github.com/openstreetmap/osm2pgsql.git
WORKDIR /home/renderer/src/osm2pgsql
RUN mkdir build
WORKDIR /home/renderer/src/osm2pgsql/build
RUN cmake .. \
&& make -j $(nproc)
USER root
RUN make install
USER renderer
# Install and test Mapnik
RUN python -c 'import mapnik'
# Install mod_tile and renderd
WORKDIR /home/renderer/src
RUN git clone -b switch2osm https://github.com/SomeoneElseOSM/mod_tile.git
WORKDIR /home/renderer/src/mod_tile
RUN ./autogen.sh \
&& ./configure \
&& make -j $(nproc)
USER root
RUN make -j $(nproc) install \
&& make -j $(nproc) install-mod_tile \
&& ldconfig
USER renderer
# Configure stylesheet
WORKDIR /home/renderer/src
RUN git clone https://github.com/gravitystorm/openstreetmap-carto.git
WORKDIR /home/renderer/src/openstreetmap-carto
USER root
RUN npm install -g carto
USER renderer
RUN carto project.mml > mapnik.xml
# Load shapefiles
WORKDIR /home/renderer/src/openstreetmap-carto
RUN scripts/get-shapefiles.py
# Configure renderd
USER root
RUN sed -i 's/renderaccount/renderer/g' /usr/local/etc/renderd.conf \
&& sed -i 's/hot/tile/g' /usr/local/etc/renderd.conf
USER renderer
# Configure Apache
USER root
RUN mkdir /var/lib/mod_tile \
&& chown renderer /var/lib/mod_tile \
&& mkdir /var/run/renderd \
&& chown renderer /var/run/renderd
RUN echo "LoadModule tile_module /usr/lib/apache2/modules/mod_tile.so" >> /etc/apache2/conf-available/mod_tile.conf \
&& a2enconf mod_tile
COPY apache.conf /etc/apache2/sites-available/000-default.conf
COPY leaflet-demo.html /var/www/html/index.html
RUN ln -sf /proc/1/fd/1 /var/log/apache2/access.log \
&& ln -sf /proc/1/fd/2 /var/log/apache2/error.log
# Configure PostgreSQL
COPY postgresql.custom.conf.tmpl /etc/postgresql/10/main/
RUN chown -R postgres:postgres /var/lib/postgresql \
&& chown postgres:postgres /etc/postgresql/10/main/postgresql.custom.conf.tmpl \
&& echo "\ninclude 'postgresql.custom.conf'" >> /etc/postgresql/10/main/postgresql.conf
# copy update scripts
COPY openstreetmap-tiles-update-expire /usr/bin/
RUN chmod +x /usr/bin/openstreetmap-tiles-update-expire \
&& mkdir /var/log/tiles \
&& chmod a+rw /var/log/tiles \
&& ln -s /home/renderer/src/mod_tile/osmosis-db_replag /usr/bin/osmosis-db_replag \
&& echo "* * * * * renderer openstreetmap-tiles-update-expire\n" >> /etc/crontab
# install trim_osc.py helper script
USER renderer
RUN cd ~/src \
&& git clone https://github.com/zverik/regional \
&& cd regional \
&& git checkout 612fe3e040d8bb70d2ab3b133f3b2cfc6c940520 \
&& chmod u+x ~/src/regional/trim_osc.py
# Start running
USER root
COPY run.sh /
COPY indexes.sql /
ENTRYPOINT ["/run.sh"]
CMD []
EXPOSE 80 5432
And here is my run.sh file:
#!/bin/bash
set -x
function CreatePostgressqlConfig()
{
cp /etc/postgresql/10/main/postgresql.custom.conf.tmpl /etc/postgresql/10/main/postgresql.custom.conf
sudo -u postgres echo "autovacuum = $AUTOVACUUM" >> /etc/postgresql/10/main/postgresql.custom.conf
cat /etc/postgresql/10/main/postgresql.custom.conf
}
if [ "$#" -ne 1 ]; then
ls /home/renderer
echo "usage: <import|run>"
echo "commands:"
echo " import: Set up the database and import /data.osm.pbf"
echo " run: Runs Apache and renderd to serve tiles at /tile/{z}/{x}/{y}.png"
echo "environment variables:"
echo " THREADS: defines number of threads used for importing / tile rendering"
echo " UPDATES: consecutive updates (enabled/disabled)"
exit 1
fi
if [ "$1" = "import" ]; then
# Initialize PostgreSQL
CreatePostgressqlConfig
service postgresql start
sudo -u postgres createuser renderer
sudo -u postgres createdb -E UTF8 -O renderer gis
sudo -u postgres psql -d gis -c "CREATE EXTENSION postgis;"
sudo -u postgres psql -d gis -c "CREATE EXTENSION hstore;"
sudo -u postgres psql -d gis -c "ALTER TABLE geometry_columns OWNER TO renderer;"
sudo -u postgres psql -d gis -c "ALTER TABLE spatial_ref_sys OWNER TO renderer;"
# Download Luxembourg as sample if no data is provided
if [ ! -f /data.osm.pbf ]; then
echo "WARNING: No import file at /data.osm.pbf, so importing iran-latest as example..."
wget -nv http://download.geofabrik.de/north-america/canada-latest.osm.pbf -O /data.osm.pbf
# wget -nv http://download.geofabrik.de/europe/luxembourg.poly -O /data.poly
fi
# determine and set osmosis_replication_timestamp (for consecutive updates)
osmium fileinfo /data.osm.pbf > /var/lib/mod_tile/data.osm.pbf.info
osmium fileinfo /data.osm.pbf | grep 'osmosis_replication_timestamp=' | cut -b35-44 > /var/lib/mod_tile/replication_timestamp.txt
REPLICATION_TIMESTAMP=$(cat /var/lib/mod_tile/replication_timestamp.txt)
# initial setup of osmosis workspace (for consecutive updates)
sudo -u renderer openstreetmap-tiles-update-expire $REPLICATION_TIMESTAMP
# copy polygon file if available
if [ -f /data.poly ]; then
sudo -u renderer cp /data.poly /var/lib/mod_tile/data.poly
fi
# Import data
sudo -u renderer osm2pgsql -d gis --create --slim -G --hstore --tag-transform-script /home/renderer/src/openstreetmap-carto/openstreetmap-carto.lua -C 2048 --number-processes ${THREADS:-4} -S /home/renderer/src/openstreetmap-carto/openstreetmap-carto.style /data.osm.pbf
# Create indexes
sudo -u postgres psql -d gis -f indexes.sql
service postgresql stop
exit 0
fi
if [ "$1" = "run" ]; then
# Clean /tmp
rm -rf /tmp/*
# Fix postgres data privileges
chown postgres:postgres /var/lib/postgresql -R
# Initialize PostgreSQL and Apache
CreatePostgressqlConfig
service postgresql start
service apache2 restart
# Configure renderd threads
sed -i -E "s/num_threads=[0-9]+/num_threads=${THREADS:-4}/g" /usr/local/etc/renderd.conf
# start cron job to trigger consecutive updates
if [ "$UPDATES" = "enabled" ]; then
/etc/init.d/cron start
fi
# Run
sudo -u renderer renderd -f -c /usr/local/etc/renderd.conf
service postgresql stop
exit 0
fi
echo "invalid command"
exit 1
When you create a container from your image, you mount a volume using the -v option:
docker run -v openstreetmap-data:/var/lib/postgresql/10/main SampleMap import
Your persistent data is stored in openstreetmap-data. That file/folder is not part of your container (which is created fresh every time); it is mounted from your host's filesystem. That's why the data persists.
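So to start over with a clean database, remove the named volume itself rather than the image. A minimal sketch of the usual commands (the container name is a placeholder for whatever container still references the volume):
docker volume ls                      # the named volume survives image/container removal
docker rm -f old-samplemap-container  # placeholder: remove any container still using the volume
docker volume rm openstreetmap-data   # delete the volume; the next 'import' starts with an empty DB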