I've made the following Docker container for Plink and Peddy, but whenever I try to build it, I get the following error:
Executing transaction: ...working... WARNING conda.core.envs_manager:register_env(46): Unable to register environment. Path not writable or missing.
environment location: /root/identity_check/anaconda
registry file: /root/.conda/environments.txt
done
installation finished.
Removing intermediate container cdf60f5bf1a5
---> be254b7571be
Step 7/10 : RUN conda update -y conda && conda config --add channels bioconda && conda install -y peddy
---> Running in aa2e91da28b4
/bin/sh: 1: conda: not found
The command '/bin/sh -c conda update -y conda && conda config --add channels bioconda && conda install -y peddy' returned a non-zero code: 127
Dockerfile:
FROM ubuntu:19.04
WORKDIR /identity_check
RUN apt-get update && \
apt-get install -y \
python-pip \
tabix \
wget \
unzip
RUN apt-get update && apt-get install -y sudo && rm -rf /var/lib/apt/lists/* \
&& sudo apt-get -y update \
&& sudo pip install --upgrade pip \
&& sudo pip install awscli --upgrade --user \
&& sudo pip install boto3 \
&& sudo pip install pyyaml \
&& sudo pip install sqlitedict
# Install PLINK
RUN wget http://s3.amazonaws.com/plink1-assets/plink_linux_x86_64_20190617.zip \
&& mv plink_linux_x86_64_20190617.zip /usr/local/bin/ \
&& unzip /usr/local/bin/plink_linux_x86_64_20190617.zip
# Install Peddy
RUN INSTALL_PATH=~/anaconda \
&& wget http://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh \
&& bash Miniconda2-latest* -fbp $INSTALL_PATH \
&& PATH=$INSTALL_PATH/bin:$PATH
RUN conda update -y conda \
&& conda config --add channels bioconda \
&& conda install -y peddy
ENV PATH=$PATH:/identity_check/
ADD . /identity_check
CMD bash /identity_check/identity_setup.sh
I've tried changing INSTALL_PATH to see if that makes a difference, and I even launched a virtual machine to test these installation steps manually, where they work fine. I don't understand why conda isn't found.
# Install Peddy
RUN INSTALL_PATH=~/anaconda \
&& wget http://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh \
&& bash Miniconda2-latest* -fbp $INSTALL_PATH \
&& PATH=$INSTALL_PATH/bin:$PATH
The last part of the above updates a PATH variable that will only exist in the shell running the command. That shell exits immediately after setting the PATH variable, and the temporary container used to execute the RUN command exits. The result of the RUN command is to gather the filesystem changes into a layer of the docker image being created. Any environment variable changes, background processes launched, or anything else not part of the container filesystem is lost.
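A minimal illustration of that behavior (FOO is just a hypothetical variable, not part of the original Dockerfile):
RUN FOO=bar && echo "FOO is $FOO"   # prints "FOO is bar", but only during this build step
RUN echo "FOO is now '$FOO'"        # prints "FOO is now ''" because the assignment did not survive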
Instead, you'll want to update the image environment with:
# Install Peddy
RUN INSTALL_PATH=~/anaconda \
&& wget http://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh \
&& bash Miniconda2-latest* -fbp $INSTALL_PATH
ENV PATH=/root/anaconda/bin:$PATH
If the software permits it, I would avoid installing in the /root home directory and instead install it somewhere like /usr/local/bin, making it available if you change the container to run as a different user.
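For example, a rough sketch along those lines (the /opt/conda location and reuse of the Miniconda2 installer are my assumptions, adjust to taste) installs Miniconda system-wide and exposes conda to every later instruction:
# Install Peddy via Miniconda in a system-wide location (sketch)
RUN wget -q http://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh -O /tmp/miniconda.sh \
 && bash /tmp/miniconda.sh -fbp /opt/conda \
 && rm /tmp/miniconda.sh
# ENV persists into later RUN instructions and into the running container
ENV PATH=/opt/conda/bin:$PATH
RUN conda update -y conda \
 && conda config --add channels bioconda \
 && conda install -y peddy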
Related
I am trying to run a Nextflow pipeline which uses an older version of Nextflow (21.04.3) and Java version 8. Since I have to use this pipeline on a remote server, I can only use Singularity.
As this Nextflow pipeline also uses singularity pull calls, I need Singularity installed inside the Docker image as well. Then I can convert the Docker image to a Singularity image and move it to the remote server.
I am trying to install Singularity inside the Dockerfile, but I am getting errors.
This is the Dockerfile that I am using:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update
RUN apt-get install -y singularity-container
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
These are the errors I am getting
Step 9/17 : RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update
---> Running in afc3dcbbd1ee
--2022-03-17 17:40:19-- http://neuro.debian.net/lists/xenial.us-ca.full
Resolving neuro.debian.net (neuro.debian.net)... 129.170.233.11
Connecting to neuro.debian.net (neuro.debian.net)|129.170.233.11|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 262
Saving to: ‘STDOUT’
0K 100% 18.4M=0s
deb http://neurodeb.pirsquared.org data main contrib non-free
#deb-src http://neurodeb.pirsquared.org data main contrib non-free
deb http://neurodeb.pirsquared.org xenial main contrib non-free
#deb-src http://neurodeb.pirsquared.org xenial main contrib non-free
2022-03-17 17:40:19 (18.4 MB/s) - written to stdout [262/262]
/bin/sh: 1: apt-key: not found
The command '/bin/sh -c wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update'
returned a non-zero code: 127
Is there a way to install Singularity using a Dockerfile?
Thanks
I made some changes in the Dockerfile based on the method for installing Singularity on Linux given here.
The complete Dockerfile, with which I was able to successfully run Nextflow, Java, and Singularity within Singularity, is given below:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN apt-get update && apt-get install -y \
build-essential \
libssl-dev \
uuid-dev \
libgpgme11-dev \
squashfs-tools \
libseccomp-dev \
wget \
pkg-config \
procps
# Download Go version 1.16.3, install it and modify the PATH
ENV VERSION=1.16.3
ENV OS=linux
ENV ARCH=amd64
RUN wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz && \
tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz && \
rm go$VERSION.$OS-$ARCH.tar.gz && \
echo 'export PATH=$PATH:/usr/local/go/bin' | tee -a /etc/profile
# Download Singularity from version 3.7.3 (security version)
ENV VERSION=3.7.3
RUN wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz && \
tar -xzf singularity-${VERSION}.tar.gz
# Compile Singularity sources and install it
RUN export PATH=$PATH:/usr/local/go/bin && \
cd singularity && \
./mconfig --without-suid && \
make -C ./builddir && \
make -C ./builddir install
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
The requirements.txt file used in the above Dockerfile is given below:
click
GitPython
jinja2
jsonschema
packaging
prompt_toolkit>=3.0.3
pyyaml
pytest-workflow
questionary>=1.8.0
requests_cache
requests
rich>=10.0.0
tabulate
I have the following Dockerfile, currently working locally on my machine:
FROM python:3.7-slim-buster
WORKDIR /app
COPY . /app
VOLUME /app
RUN chmod +x /app/cat/sitemap_download.py
COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh
ARG VERSION=3.7.4
RUN apt update && \
apt install -y bash wget && \
wget -O /tmp/nordrepo.deb https://repo.nordvpn.com/deb/nordvpn/debian/pool/main/nordvpn-release_1.0.0_all.deb && \
apt install -y /tmp/nordrepo.deb && \
apt update && \
apt install -y nordvpn=$VERSION && \
apt remove -y wget nordvpn-release
RUN apt-get clean \
&& apt-get -y update
RUN apt-get -y install python3-dev \
python3-psycopg2 \
&& apt-get -y install build-essential
RUN pip install --upgrade pip
RUN pip install -r cat/requirements.txt
RUN pip install awscli
ENTRYPOINT ["sh", "-c", "./entrypoint.sh"]
But when I deploy it to Fargate, the container stops before reaching the steady state with:
sh: 1: ./entrypoint.sh: not found
Edit: Adding entrypoint.sh file for clarification:
#!/bin/env sh
# start process, but it should exit once the file is in S3
/app/cat/sitemap_download.py
# Once the process is done, we are good to scale down the service
aws ecs update-service --cluster cluster_name --region eu-west-1 --service service-name --desired-count 0
I have tried modifying ENTRYPOINT to use the exec form, or the full path, but I always get the same issue. Any ideas on what I am doing wrong?
I've managed to fix it now.
Changing the Dockerfile to look as follows solves the issue:
COPY . /app
VOLUME /app
RUN chmod +x /app/cat/sitemap_download.py
COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh
ARG VERSION=3.7.4
RUN apt update && \
apt install -y bash wget && \
wget -O /tmp/nordrepo.deb https://repo.nordvpn.com/deb/nordvpn/debian/pool/main/nordvpn-release_1.0.0_all.deb && \
apt install -y /tmp/nordrepo.deb && \
apt update && \
apt install -y nordvpn=$VERSION && \
apt remove -y wget nordvpn-release
RUN apt-get clean \
&& apt-get -y update
RUN apt-get -y install python3-dev \
python3-psycopg2 \
&& apt-get -y install build-essential
RUN pip install --upgrade pip
RUN pip install -r cat/requirements.txt
RUN pip install awscli
ENTRYPOINT ["/bin/bash"]
CMD ["./entrypoint.sh"]
I tried this after reading: What is the difference between CMD and ENTRYPOINT in a Dockerfile?
I believe this syntax fixes it because with ENTRYPOINT I'm indicating that bash should run at start, and then passing the script to it as a parameter.
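For what it's worth, an exec-form equivalent should behave the same way here (a sketch, assuming the script is still copied to /app/entrypoint.sh and has the execute bit set):
# bash is invoked explicitly, so the script's shebang never comes into play
ENTRYPOINT ["/bin/bash", "/app/entrypoint.sh"]
The practical difference is that with ENTRYPOINT plus CMD the script path can still be overridden by arguments to docker run, while a single exec-form ENTRYPOINT fixes it.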
I'm on a Mac and trying to build a new container (I'm new to Docker). I can get Anaconda installed fine, and updating Anaconda from within the container works; however, when I try to run conda update conda from the Dockerfile, I get the error below:
What am I doing wrong?
Thanks!
The command '/bin/sh -c conda update conda' returned a non-zero code: 127
FROM ubuntu:18.04
RUN apt-get update && \
apt-get -y install curl && \
apt-get -y install python3 && \
apt-get -y install python3-pip && \
python3 -m pip install --upgrade pip && \
apt-get -y install wget && \
apt-get -y install vim && \
pip3 install tensorflow && \
pip3 install keras
RUN wget --quiet https://repo.anaconda.com/archive/Anaconda3-5.3.0-Linux-x86_64.sh -O ~/anaconda.sh && \
/bin/bash ~/anaconda.sh -b -p /opt/conda && \
rm ~/anaconda.sh && \
ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
echo "conda activate base" >> ~/.bashrc
RUN conda update conda
RUN conda install opencv
RUN conda install matplotlib
RUN conda install pandas
RUN conda install seaborn
I installed Elasticsearch in my image, which is based on ubuntu:16.04, and start the service using
RUN service elasticsearch start
but it does not start.
If I go into the container and run it manually, it starts.
I want to run the service and dump the index when I create the image; below is part of my Dockerfile.
How do I start Elasticsearch from the Dockerfile?
#install OpenJDK-8
RUN apt-get update && apt-get install -y openjdk-8-jdk && apt-get install -y ant && apt-get clean
RUN apt-get update && apt-get install -y ca-certificates-java && apt-get clean
RUN update-ca-certificates -f
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
RUN export JAVA_HOME
#download ES
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN apt-get install -y apt-transport-https
RUN echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-6.x.list
RUN apt-get update && apt-get install -y elasticsearch
RUN service elasticsearch start
The RUN instruction executes only during the build phase; anything it starts is stopped once that build step completes. You should use CMD (or ENTRYPOINT) instead:
CMD service elasticsearch start && /bin/bash
It's better to wrap the startup commands in your own script and then execute only that file:
CMD /start_elastic.sh
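A minimal sketch of what such a start_elastic.sh could contain (the wait loop and log path are assumptions, and curl must be available in the image):
#!/bin/bash
# Start Elasticsearch, wait until it answers, then keep the container in the foreground
service elasticsearch start
until curl -s http://localhost:9200 >/dev/null; do sleep 1; done
tail -f /var/log/elasticsearch/*.log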
I don't know why you don't take the official OSS image, but this Dockerfile based on Debian works:
FROM java:8-jre
ENV ES_NAME=elasticsearch \
ELASTICSEARCH_VERSION=6.6.1
ENV ELASTICSEARCH_URL=https://artifacts.elastic.co/downloads/$ES_NAME/$ES_NAME-$ELASTICSEARCH_VERSION.tar.gz
RUN apt-get update && apt-get install -y --assume-yes openssl bash curl wget \
&& mkdir -p /opt \
&& echo '[i] Start create elasticsearch' \
&& wget -T 15 -O /tmp/$ES_NAME-$ELASTICSEARCH_VERSION.tar.gz $ELASTICSEARCH_URL \
&& tar -xzf /tmp/$ES_NAME-$ELASTICSEARCH_VERSION.tar.gz -C /opt/ \
&& ln -s /opt/$ES_NAME-$ELASTICSEARCH_VERSION /opt/$ES_NAME \
&& useradd elastic \
&& mkdir -p /var/lib/elasticsearch /opt/$ES_NAME/plugins /opt/$ES_NAME/config/scripts \
&& chown -R elastic /opt/$ES_NAME-$ELASTICSEARCH_VERSION/
ENV PATH=/opt/elasticsearch/bin:$PATH
USER elastic
CMD [ "/bin/sh", "-c", "/opt/elasticsearch/bin/elasticsearch --E cluster.name=test --E network.host=0 $ELASTIC_CMD_OPTIONS" ]
I believe you'll be able to use most of these commands on Ubuntu as well.
Don't forget to run sudo sysctl -w vm.max_map_count=262144 on your host.
The Docker image builds, but when I try to run it, it shows this error:
Error: Unable to access jarfile rest-service-1.0.jar
My OS is Ubuntu 18.04.1 LTS and I use docker build -t doc-service & docker run doc-service.
This is my Dockerfile:
FROM ubuntu:16.04
MAINTAINER Frederico Apostolo <frederico.apostolo@blockfactory.com> (@fapostolo)
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y software-properties-common python-software-properties language-pack-en-base
RUN add-apt-repository ppa:webupd8team/java
RUN apt-get update && apt-get update --fix-missing && apt-get -y --allow-downgrades --allow-remove-essential --allow-change-held-packages upgrade \
&& echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections \
&& apt-get install -y --allow-downgrades --allow-remove-essential --allow-change-held-packages curl vim unzip wget oracle-java8-installer \
&& apt-get clean && rm -rf /var/cache/* /var/lib/apt/lists/*
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle/
run java -version
run echo $JAVA_HOME
#use locate for debug
RUN apt-get update && apt-get install -y locate mlocate && updatedb
#LIBREOFFICE START
RUN apt-get update && apt-get update --fix-missing && apt-get install -y -q libreoffice \
libreoffice-writer ure libreoffice-java-common libreoffice-core libreoffice-common \
fonts-opensymbol hyphen-fr hyphen-de hyphen-en-us hyphen-it hyphen-ru fonts-dejavu \
fonts-dejavu-core fonts-dejavu-extra fonts-noto fonts-dustin fonts-f500 fonts-fanwood \
fonts-freefont-ttf fonts-liberation fonts-lmodern fonts-lyx fonts-sil-gentium \
fonts-texgyre fonts-tlwg-purisa
#LIBREOFFICE END
#font configuration
COPY 00-odt-template-renderer-fontconfig.conf /etc/fonts/conf.d
RUN mkdir /document-service /document-service/fonts /document-service/module /document-service/logs
# local settings
RUN echo "127.0.0.1 http://www.arbs.local http://arbs.local www.arbs.local arbs.local" >> /etc/hosts
# && mkdir /logs/ && echo "dummy" >> /logs/errors.log
#EXPOSE 2115
COPY document-service-java_with_user_arg.sh /
RUN chmod +x /document-service-java_with_user_arg.sh
RUN apt-get update && apt-get -y --no-install-recommends install \
ca-certificates \
curl
RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu
ENV LANG="en_US.UTF-8"
# In case someone loses the Dockerfile
# Needs to be in the end so it doesn't invalidate unaltered cache whenever the file is updated.
RUN rm -rf /etc/Dockerfile
ADD Dockerfile /etc/Dockerfile
ENTRYPOINT ["/document-service-java_with_user_arg.sh"]
This is document-service-java_with_user_arg.sh:
#!/bin/bash
USER_ID=${LOCAL_USER_ID:-9001}
USER_NAME=${LOCAL_USER_NAME:-jetty}
echo "Starting user: $USER_NAME with UID : $USER_ID"
useradd --shell /bin/bash --home-dir /document-service/dockerhome --non-unique --uid $USER_ID $USER_NAME
cd /document-service
/usr/local/bin/gosu $USER_NAME "$@" java -jar rest-service-1.0.jar
Can anyone help me on this?
Based on the comments, you must add the JAR when building the image by defining this in your Dockerfile:
COPY rest-service-1.0.jar /document-service/rest-service-1.0.jar
You could also just use:
COPY rest-service-1.0.jar /rest-service-1.0.jar
and remove cd /document-service from your entrypoint script, since on ubuntu:16.04 images the default working directory is /. My opinion is that setting the working directory in the script is safer, so you should just go for the first solution.
Note that you could also use ADD instead of COPY (as you already did in your Dockerfile), but here only COPY is necessary (read this post if you want more info: What is the difference between the `COPY` and `ADD` commands in a Dockerfile?).
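As a quick illustration of the difference (file names are hypothetical): COPY copies files exactly as they are, while ADD additionally extracts local tar archives and can fetch remote URLs:
COPY app.tar.gz /opt/   # ends up as /opt/app.tar.gz, still an archive
ADD  app.tar.gz /opt/   # automatically unpacked into /opt/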
Finally, I suggest adding the COPY line at the end of your Dockerfile, so that if a new JAR is built, the image won't be rebuilt from scratch but from an existing layer, speeding up build time.
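Concretely, that just means keeping the JAR copy as one of the last layers, for example (a sketch of the tail of the Dockerfile only):
# ...apt-get, LibreOffice, fonts and gosu layers above stay cached...
COPY rest-service-1.0.jar /document-service/rest-service-1.0.jar
ENTRYPOINT ["/document-service-java_with_user_arg.sh"]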
It looks like an error related to the working directory.
You must set a working directory for this COPY format.
Try WORKDIR /yourpath/