For example, every time I build, Docker re-runs these steps:
Copy package.json
Install the dependencies from package.json
Add the current directory
My question is:
Why doesn't it use the cache? For example, it should not reinstall the dependencies from scratch if package.json has not changed.
It should use the cache and rebuild only the changed code.
Update:
Dockerfile
FROM ubuntu:18.04
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get update && apt-get upgrade -y \
&& apt-get install -y --no-install-recommends \
build-essential \
ca-certificates \
gcc \
git \
libpq-dev \
make \
python-pip \
python2.7 \
python2.7-dev \
apt-transport-https \
curl \
g++ \
sudo \
wget \
bzip2 \
chrpath \
libssl-dev \
libxft-dev \
libfreetype6 \
libfreetype6-dev \
libfontconfig1 \
libfontconfig1-dev \
libfontconfig \
poppler-utils \
imagemagick \
&& apt-get clean \
&& rm -rf /tmp/* /var/lib/apt/lists/* \
&& apt-get -y autoclean
RUN apt-get update && apt-get install -y --no-install-recommends software-properties-common && add-apt-repository ppa:malteworld/ppa && apt update && apt install -y --no-install-recommends pdftk \
&& apt-get clean \
&& rm -rf /tmp/* /var/lib/apt/lists/* \
&& apt-get -y autoclean
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 10.6.0
# Install nvm with node and npm
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.29.0/install.sh | bash \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# Set up our PATH correctly so we don't have to long-reference npm, node, &c.
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Set the work directory
RUN mkdir -p /var/www/app/jobsaf-website
RUN mkdir /data
RUN mkdir /data/db
WORKDIR /var/www/app/jobsaf-website
RUN npm install -g node-gyp @angular/cli@6.2.3 nodemon request
# Add our package.json and install *before* adding our application files
COPY package.json ./
# RUN npm install --force
RUN npm install --force
RUN npm rebuild node-sass
# Add application files
ADD . .
EXPOSE 3000 5858 4200 35729 27017 6379 49153
.dockerignore
# See http://help.github.com/ignore-files/ for more about ignoring files.
# compiled output
/tmp
/public/__build__/
/src/*/__build__/
/__build__/**
/public/dist/
/src/*/dist/
/dist/**
/.awcache
.webpack.json
/compiled/
dll/
package-lock.json
# dependencies
/node_modules
*/node_modules
# IDEs and editors
/.idea
.project
.classpath
.c9/
*.launch
**.js.map
.settings/
# IDE - VSCode
.vscode/
# misc
/.sass-cache
/connect.lock
/coverage/*
/libpeerconnection.log
npm-debug.log
testem.log
/typings
# e2e
/e2e/*.js
/e2e/*.map
#System Files
.DS_Store
Thumbs.db
*.csv
*.dat
*.iml
*.log
*.out
*.pid
*.seed
*.sublime-*
*.swo
*.swp
*.tgz
*.xml
.strong-pm
coverage
npm-debug*
/admin/dist
npm
/.cache-loader/*
stats.json
!/src/assets/js/admin-header.js
!/src/assets/js/website-custom.js
webpack-cache/
web/
/src/app/**/*.map
/src/app/**/*.js
--force should be removed from the following line, as it ignores any cache and does a fresh installation of your packages, which produces a new Docker build layer starting from the installation step:
RUN npm install --force
The -f or --force argument will force npm to fetch remote resources even if a local copy exists on disk.
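A minimal sketch of the relevant section, assuming the rest of the Dockerfile stays as posted: copying package.json before the application files and dropping --force lets Docker reuse the install layer whenever package.json is unchanged.
# Add package.json and install *before* adding the application files
COPY package.json ./
# No --force: this layer comes from the cache when package.json has not changed
RUN npm install
RUN npm rebuild node-sass
# Application files change often, so add them last
ADD . .
With that ordering, editing source code only invalidates the final layer, and npm install is skipped on rebuild.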
I'm not a Docker expert and I'm trying to create a container for machine learning projects. This is mostly for academic purposes, as I'm studying machine learning.
I wrote a Dockerfile (and a devcontainer.json file to open the container in VS Code) that runs fine until I add the lines to build TensorFlow. I ran into three problems and I don't know what I'm missing:
I get a warning that the Bazel installation isn't a release version
When the build reaches ./configure, I can't answer the configuration prompts
The build itself fails with
ERROR: The project you're trying to build requires Bazel 5.3.0 (specified in /docklearning/tensorflow/.bazelversion), but it wasn't found in /usr/bin.
This is the Dockerfile:
FROM nvidia/cuda:12.0.0-runtime-ubuntu22.04 as base
ARG USER_UID=1000
#switch to non-interactive frontend
ENV DEBIAN_FRONTEND=noninteractive
WORKDIR /docklearning
ADD . /docklearning
# Install packages
RUN apt-get update -q && apt-get install -q -y --no-install-recommends \
apt-transport-https curl gnupg apt-utils wget gcc g++ npm unzip build-essential ca-certificates curl git gh \
make nano iproute2 nano openssh-client openssl procps \
software-properties-common bzip2 subversion neofetch \
fontconfig && \
curl -fsSL https://bazel.build/bazel-release.pub.gpg | gpg --dearmor >bazel-archive-keyring.gpg && \
mv bazel-archive-keyring.gpg /usr/share/keyrings && \
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/bazel-archive-keyring.gpg] https://storage.googleapis.com/bazel-apt stable jdk1.8" \
| tee /etc/apt/sources.list.d/bazel.list && \
apt update && apt install -q -y bazel && \
apt-get full-upgrade -q -y && \
cd ~ && \
wget https://github.com/ryanoasis/nerd-fonts/releases/download/v2.1.0/Meslo.zip && \
mkdir -p .local/share/fonts && \
unzip Meslo.zip -d .local/share/fonts && \
cd .local/share/fonts && rm *Windows* && \
cd ~ && \
rm Meslo.zip && \
fc-cache -fv && \
apt-get install -y zsh zsh-doc chroma
# Install anaconda
RUN wget https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-x86_64.sh -O Anaconda.sh && \
/bin/bash Anaconda.sh -b -p /opt/conda && \
rm Anaconda.sh
# Install LSD for ls substitute and clean up
RUN wget https://github.com/Peltoche/lsd/releases/download/0.23.1/lsd_0.23.1_amd64.deb -P /tmp && \
dpkg -i /tmp/lsd_0.23.1_amd64.deb && \
rm /tmp/lsd_0.23.1_amd64.deb && \
apt-get autoremove -y && \
apt-get autoclean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*/apt/lists/* && \
useradd -r -m -s /bin/bash -u ${USER_UID} docklearning
# Add conda to PATH
ENV PATH=/opt/conda/bin:$PATH
ENV HOME=/home/docklearning
# Make zsh default shell
RUN chsh -s /usr/bin/zsh docklearning
# link conda to /etc/profile.d
RUN ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh
# Update Conda Base env
RUN conda update -n base --all -y && \
conda install -c conda-forge bandit opt_einsum keras-preprocessing && \
conda clean -a -q -y
# Download Tensorflow from github and build from source
RUN git clone https://github.com/tensorflow/tensorflow.git && \
cd tensorflow && \
./configure && \
bazel build --config=opt --config=cuda --cxxopt="-mavx2" //tensorflow/tools/pip_package:build_pip_package && \
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg && \
pip install /tmp/tensorflow_pkg/tensorflow-*.whl && \
cd .. && \
rm -rf tensorflow
# Create shell history file
RUN mkdir ${HOME}/zsh_history && \
chown docklearning ${HOME}/zsh_history && \
mkdir ${HOME}/.ssh
# Switch to internal user
USER docklearning
WORKDIR ${HOME}
# Copy user configuration files
COPY --chown=docklearning ./config/.aliases.sh ./
COPY --chown=docklearning ./config/.bashrc ./
COPY --chown=docklearning ./config/.nanorc ./
# Configure Zsh for internal user
ENV ZSH=${HOME}/.oh-my-zsh
ENV ZSH_CUSTOM=${ZSH}/custom
ENV ZSH_PLUGINS=${ZSH_CUSTOM}/plugins
ENV ZSH_THEMES=${ZSH_CUSTOM}/themes
RUN wget -qO- https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh | zsh || true
RUN git clone --single-branch --branch 'master' --depth 1 https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_PLUGINS}/zsh-syntax-highlighting \
&& git clone --single-branch --branch 'master' --depth 1 https://github.com/zsh-users/zsh-autosuggestions ${ZSH_PLUGINS}/zsh-autosuggestions \
&& git clone --single-branch --depth 1 https://github.com/romkatv/powerlevel10k.git ${ZSH_THEMES}/powerlevel10k
COPY --chown=docklearning ./config/.p10k.zsh ./
COPY --chown=docklearning ./config/.zshrc ./
CMD [ "/bin/zsh" ]
I wouldn't say this question is about Docker or TensorFlow; it is about package installation and OS configuration.
In this case, you need to fix the Bazel version first by specifying it explicitly:
apt install -q -y bazel-5.3.0
But that package is quite weird and doesn't create a symlink, so you need to create it yourself:
ln -s /bin/bazel-5.3.0 /bin/bazel
You can validate that Bazel is installed by running it:
bazel --version
bazel 5.3.0
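Put together, the Bazel part of the install step could look like this (a sketch; bazel-5.3.0 is the versioned package from the Bazel apt repository configured earlier in the Dockerfile, and the symlink target matches the path that package installs to):
RUN apt update && apt install -q -y bazel-5.3.0 && \
    ln -s /bin/bazel-5.3.0 /bin/bazel && \
    bazel --version
The last command should report 5.3.0, matching the version pinned in tensorflow/.bazelversion.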
I am new to Docker and not an IT specialist. I am trying to install Kartoza GeoServer in Docker on Heroku, but so far without success. Does anyone have experience with this and can explain the settings in the Dockerfile and the steps specific to a Heroku install?
So far I have tried a build with a modified Dockerfile, but I always get the same error (in the log trail) when opening/launching GeoServer on Heroku:
"Error: groupadd: cannot open /etc/group".
I guess it is a permissions/privileges issue.
Any shared experience on modifying the Dockerfile so that the image runs on Heroku would be helpful.
Changes I made to the Dockerfile:
Removed the port forwarding from the Dockerfile
Added RUN adduser -D myuser and USER myuser to the Dockerfile
Resulting Dockerfile:
#--------- Generic stuff all our Dockerfiles should start with so we get caching ------------
ARG IMAGE_VERSION=9.0.65-jdk11-openjdk-slim-buster
ARG JAVA_HOME=/usr/local/openjdk-11
FROM tomcat:$IMAGE_VERSION
LABEL maintainer="Tim Sutton <tim@linfiniti.com>"
ARG GS_VERSION=2.22.0
ARG WAR_URL=https://downloads.sourceforge.net/project/geoserver/GeoServer/${GS_VERSION}/geoserver-${GS_VERSION}-war.zip
ARG STABLE_PLUGIN_BASE_URL=https://sourceforge.net/projects/geoserver/files/GeoServer
ARG DOWNLOAD_ALL_STABLE_EXTENSIONS=1
ARG DOWNLOAD_ALL_COMMUNITY_EXTENSIONS=1
ARG HTTPS_PORT=8443
ENV DEBIAN_FRONTEND=noninteractive
#Install extra fonts to use with sld font markers
RUN adduser -D myuser
USER myuser
RUN set -eux; \
apt-get update; \
apt-get -y --no-install-recommends install \
locales gnupg2 wget ca-certificates rpl pwgen software-properties-common iputils-ping \
apt-transport-https curl gettext fonts-cantarell lmodern ttf-aenigma \
ttf-bitstream-vera ttf-sjfonts tv-fonts libapr1-dev libssl-dev \
wget zip unzip curl xsltproc certbot cabextract gettext postgresql-client figlet gosu gdal-bin; \
# Install gdal3 - bullseye doesn't build libgdal-java anymore so we can't upgrade
curl https://deb.meteo.guru/velivole-keyring.asc | apt-key add - \
&& echo "deb https://deb.meteo.guru/debian buster main" > /etc/apt/sources.list.d/meteo.guru.list \
&& apt-get update \
&& apt-get -y --no-install-recommends install gdal-bin libgdal-java; \
dpkg-divert --local --rename --add /sbin/initctl \
&& (echo "Yes, do as I say!" | apt-get remove --force-yes login) \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*; \
# verify that the binary works
gosu nobody true
ENV \
JAVA_HOME=${JAVA_HOME} \
DEBIAN_FRONTEND=noninteractive \
GEOSERVER_DATA_DIR=/opt/geoserver/data_dir \
GDAL_DATA=/usr/share/gdal \
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/tomcat/native-jni-lib:/usr/lib/jni:/usr/local/apr/lib:/opt/libjpeg-turbo/lib64:/usr/lib:/usr/lib/x86_64-linux-gnu" \
FOOTPRINTS_DATA_DIR=/opt/footprints_dir \
GEOWEBCACHE_CACHE_DIR=/opt/geoserver/data_dir/gwc \
CERT_DIR=/etc/certs \
RANDFILE=/etc/certs/.rnd \
FONTS_DIR=/opt/fonts \
GEOSERVER_HOME=/geoserver \
EXTRA_CONFIG_DIR=/settings \
COMMUNITY_PLUGINS_DIR=/community_plugins \
STABLE_PLUGINS_DIR=/stable_plugins
WORKDIR /scripts
ADD resources /tmp/resources
ADD build_data /build_data
ADD scripts /scripts
RUN echo $GS_VERSION > /scripts/geoserver_version.txt ;\
chmod +x /scripts/*.sh;/scripts/setup.sh \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN echo 'figlet -t "Kartoza Docker GeoServer"' >> ~/.bashrc
WORKDIR ${GEOSERVER_HOME}
ENTRYPOINT ["/bin/bash", "/scripts/entrypoint.sh"]
I then built it with --platform linux/amd64 (I am on an arm64 machine) and pushed it to Heroku. I get the same error every time.
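For reference, the build and push follow the usual Heroku container-registry flow, roughly like this (the app name is a placeholder for my actual Heroku app):
heroku container:login
docker build --platform linux/amd64 -t registry.heroku.com/<my-app>/web .
docker push registry.heroku.com/<my-app>/web
heroku container:release web --app <my-app>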
I am trying to run a Nextflow pipeline that uses an older version of Nextflow (21.04.3) and Java 8. Since I have to use this pipeline on a remote server, I can only use Singularity.
Because this Nextflow pipeline also makes singularity pull calls, I need Singularity installed inside the Docker image as well. I can then convert the Docker image to a Singularity image and move it to the remote server.
I am trying to install Singularity in the Dockerfile, but I am getting errors.
This is the Dockerfile that I am using:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update
RUN apt-get install -y singularity-container
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
These are the errors I am getting
Step 9/17 : RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update
---> Running in afc3dcbbd1ee
--2022-03-17 17:40:19-- http://neuro.debian.net/lists/xenial.us-ca.full
Resolving neuro.debian.net (neuro.debian.net)... 129.170.233.11
Connecting to neuro.debian.net (neuro.debian.net)|129.170.233.11|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 262
Saving to: ‘STDOUT’
0K 100% 18.4M=0s
deb http://neurodeb.pirsquared.org data main contrib non-free
#deb-src http://neurodeb.pirsquared.org data main contrib non-free
deb http://neurodeb.pirsquared.org xenial main contrib non-free
#deb-src http://neurodeb.pirsquared.org xenial main contrib non-free
2022-03-17 17:40:19 (18.4 MB/s) - written to stdout [262/262]
/bin/sh: 1: apt-key: not found
The command '/bin/sh -c wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update'
returned a non-zero code: 127
Is there a way to install Singularity using a Dockerfile?
Thanks
I made some changes to the Dockerfile based on the method for installing Singularity on Linux given here.
The complete Dockerfile, with which I was able to successfully run Nextflow, Java, and Singularity within Singularity, is given below:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
      description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN apt-get update && apt-get install -y \
build-essential \
libssl-dev \
uuid-dev \
libgpgme11-dev \
squashfs-tools \
libseccomp-dev \
wget \
pkg-config \
procps
# Download Go source version 1.16.3, install them and modify the PATH
ENV VERSION=1.16.3
ENV OS=linux
ENV ARCH=amd64
RUN wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz && \
tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz && \
rm go$VERSION.$OS-$ARCH.tar.gz && \
echo 'export PATH=$PATH:/usr/local/go/bin' | tee -a /etc/profile
# Download Singularity from version 3.7.3 (security version)
ENV VERSION=3.7.3
RUN wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz && \
tar -xzf singularity-${VERSION}.tar.gz
# Compile Singularity sources and install it
RUN export PATH=$PATH:/usr/local/go/bin && \
cd singularity && \
./mconfig --without-suid && \
make -C ./builddir && \
make -C ./builddir install
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
The requirements.txt file used in the above Dockerfile is given below:
click
GitPython
jinja2
jsonschema
packaging
prompt_toolkit>=3.0.3
pyyaml
pytest-workflow
questionary>=1.8.0
requests_cache
requests
rich>=10.0.0
tabulate
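As a sketch of the workflow described above (the image and file names are placeholders), the Docker image can be built locally and then converted into a Singularity image before moving it to the remote server:
docker build -t nfcore-tools .
singularity build nfcore-tools.sif docker-daemon://nfcore-tools:latest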
I'm building an image with the Dockerfile below.
But after building, when I run npm install it says bash: npm: command not found.
I have two RUN commands building two layers: the first one basically builds the Python environment for the image, including installing the pyodbc driver and the necessary packages.
The second layer installs Node.
Which step in the Dockerfile is wrong?
ARG VARIANT=3
FROM mcr.microsoft.com/vscode/devcontainers/python:0-${VARIANT}
# Avoid warnings by switching to noninteractive
ENV DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED 1
ARG USERNAME=vscode
ARG USER_UID=1000
ARG USER_GID=$USER_UID
# [Optional] Update UID/GID if needed
RUN if [ "$USER_GID" != "1000" ] || [ "$USER_UID" != "1000" ]; then \
groupmod --gid $USER_GID $USERNAME \
&& usermod --uid $USER_UID --gid $USER_GID $USERNAME \
&& chown -R $USER_UID:$USER_GID /home/$USERNAME; \
fi
# [Optional] If your requirements rarely change, uncomment this section to add them to the image.
#
COPY requirements.txt /tmp/pip-tmp/
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
&& curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list \
&& apt-get -y update \
&& ACCEPT_EULA=Y apt-get -y install msodbcsql17 \
&& ACCEPT_EULA=Y apt-get -y install mssql-tools \
&& echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile \
&& echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc \
&& /bin/bash -c "source ~/.bashrc" \
&& apt-get install -y unixodbc-dev \
&& pip3 install django-mssql-backend \
&& apt-get install -y libgssapi-krb5-2 \
&& pip3 --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \
&& rm -rf /tmp/pip-tmp
# ** [Optional] Uncomment this section to install additional packages. **
#
RUN apt-get update \
&& apt-get -y install --no-install-recommends apt-utils dialog 2>&1 \
#
# Verify git, process tools, lsb-release (common in install instructions for CLIs) installed
&& apt-get -y install git iproute2 procps lsb-release \
#
# Install pylint
#&& pip install pylint \
#
# Update Python environment based on requirements.txt
# && pip --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \
# && rm -rf /tmp/pip-tmp \
#
# Install node for building front-ends
&& apt-get update \
&& apt-get -y install curl gnupg \
&& curl -sL https://deb.nodesource.com/setup_14.x | bash - \
&& apt-get -y install nodejs \
#
# Clean up
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
# Switch back to dialog for any ad-hoc use of apt-get
ENV DEBIAN_FRONTEND=
The Debian Node.js packages can be problematic. Use the official Node.js instructions at https://github.com/nodesource/distributions/blob/master/README.md#debinstall to install Node.js.
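A minimal sketch of that approach as its own Dockerfile layer, assuming Node 14.x as in the question (the setup script URL is the NodeSource one already used above); checking node and npm in the same layer confirms that npm lands on the PATH:
RUN apt-get update \
    && apt-get -y install curl gnupg \
    && curl -sL https://deb.nodesource.com/setup_14.x | bash - \
    && apt-get -y install nodejs \
    && node --version \
    && npm --version \
    && rm -rf /var/lib/apt/lists/*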
The Docker image builds, but when I try to run it, it shows this error:
Error: Unable to access jarfile rest-service-1.0.jar
My OS is Ubuntu 18.04.1 LTS, and I use docker build -t doc-service and docker run doc-service.
This is my Dockerfile:
FROM ubuntu:16.04
MAINTAINER Frederico Apostolo <frederico.apostolo@blockfactory.com> (@fapostolo)
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y software-properties-common python-software-properties language-pack-en-base
RUN add-apt-repository ppa:webupd8team/java
RUN apt-get update && apt-get update --fix-missing && apt-get -y --allow-downgrades --allow-remove-essential --allow-change-held-packages upgrade \
&& echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections \
&& apt-get install -y --allow-downgrades --allow-remove-essential --allow-change-held-packages curl vim unzip wget oracle-java8-installer \
&& apt-get clean && rm -rf /var/cache/* /var/lib/apt/lists/*
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle/
RUN java -version
RUN echo $JAVA_HOME
#use locate for debug
RUN apt-get update && apt-get install -y locate mlocate && updatedb
#LIBREOFFICE START
RUN apt-get update && apt-get update --fix-missing && apt-get install -y -q libreoffice \
libreoffice-writer ure libreoffice-java-common libreoffice-core libreoffice-common \
fonts-opensymbol hyphen-fr hyphen-de hyphen-en-us hyphen-it hyphen-ru fonts-dejavu \
fonts-dejavu-core fonts-dejavu-extra fonts-noto fonts-dustin fonts-f500 fonts-fanwood \
fonts-freefont-ttf fonts-liberation fonts-lmodern fonts-lyx fonts-sil-gentium \
fonts-texgyre fonts-tlwg-purisa
#LIBREOFFICE END
#font configuration
COPY 00-odt-template-renderer-fontconfig.conf /etc/fonts/conf.d
RUN mkdir /document-service /document-service/fonts /document-service/module /document-service/logs
# local settings
RUN echo "127.0.0.1 http://www.arbs.local http://arbs.local www.arbs.local arbs.local" >> /etc/hosts
# && mkdir /logs/ && echo "dummy" >> /logs/errors.log
#EXPOSE 2115
COPY document-service-java_with_user_arg.sh /
RUN chmod +x /document-service-java_with_user_arg.sh
RUN apt-get update && apt-get -y --no-install-recommends install \
ca-certificates \
curl
RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu
ENV LANG="en_US.UTF-8"
# In case someone loses the Dockerfile
# Needs to be in the end so it doesn't invalidate unaltered cache whenever the file is updated.
RUN rm -rf /etc/Dockerfile
ADD Dockerfile /etc/Dockerfile
ENTRYPOINT ["/document-service-java_with_user_arg.sh"]
This is document-service-java_with_user_arg.sh:
#!/bin/bash
USER_ID=${LOCAL_USER_ID:-9001}
USER_NAME=${LOCAL_USER_NAME:-jetty}
echo "Starting user: $USER_NAME with UID : $USER_ID"
useradd --shell /bin/bash --home-dir /document-service/dockerhome --non-unique --uid $USER_ID $USER_NAME
cd /document-service
/usr/local/bin/gosu $USER_NAME "$@" java -jar rest-service-1.0.jar
Can anyone help me on this?
Based on the comments, you must add the JAR when building the image by adding this to your Dockerfile:
COPY rest-service-1.0.jar /document-service/rest-service-1.0.jar
You could also just use:
COPY rest-service-1.0.jar /rest-service-1.0.jar
and remove cd /document-service from your entrypoint script, since on ubuntu:16.04 images the default working directory is /. In my opinion, setting the working directory in the script is safer, so you should go for the first solution.
Note that you could also use ADD instead of COPY (as you already do elsewhere in your Dockerfile), but here only COPY is necessary (read this post if you want more info: What is the difference between the `COPY` and `ADD` commands in a Dockerfile?).
Finally, I suggest adding the COPY line near the end of your Dockerfile, so that when a new JAR is built the image is rebuilt from an existing layer rather than from scratch, speeding up build time.
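For example, the end of the Dockerfile could then look like this (a sketch of the first solution; it assumes rest-service-1.0.jar sits next to the Dockerfile in the build context):
# Copy the application JAR last so earlier layers stay cached when only the JAR changes
COPY rest-service-1.0.jar /document-service/rest-service-1.0.jar
ENTRYPOINT ["/document-service-java_with_user_arg.sh"]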
This looks like an error related to the working directory.
You must set the working directory for this copy to resolve correctly.
Try WORKDIR /yourpath/