I am trying to convert a bash script to a Dockerfile, since we are going the containerization route with AWS Batch.
Basically, I install CPLEX (an optimization library) and Anaconda, install some related packages, check that my environment is good to go, and then kick off a shell script to run the batch job.
Here is a snippet of my Dockerfile:
FROM amazonlinux:latest
# Download packages for container
RUN yum update -y
RUN yum -y install which unzip aws-cli
RUN yum install -y tar.x86_64
RUN yum install gzip -y
RUN yum install ncompress -y
RUN yum -y install wget
RUN yum install -y nano
# Set working directory
WORKDIR /setup
#: Copy CPLEX installer binary and installation script.
COPY cplex_odee1210.linux-x86-64.bin /setup/
COPY cplex_installer_input.sh /setup/
#: Install CPLEX and update .bashrc
RUN chmod +x /setup/cplex_odee1210.linux-x86-64.bin
RUN chmod +x cplex_installer_input.sh
RUN ./cplex_installer_input.sh | bash cplex_odee1210.linux-x86-64.bin
RUN echo 'export PATH=$PATH:/opt/ibm/ILOG/CPLEX_Optimizer1210/cplex/bin/x86-64_linux' >>/root/.bashrc \
&& /bin/bash -c "source ~/.bashrc"
ENV PATH $PATH:/opt/ibm/ILOG/CPLEX_Optimizer1210/cplex/bin/x86-64_linux
#: Download Anaconda
COPY Anaconda3-2019.10-Linux-x86_64.sh /setup/
RUN bash Anaconda3-2019.10-Linux-x86_64.sh -b -p /home/ec2-user/anaconda3
RUN echo 'export PATH=$PATH:/home/ec2-user/anaconda3/bin' >>/root/.bashrc \
&& /bin/bash -c "source ~/.bashrc"
ENV PATH $PATH:/home/ec2-user/anaconda3/bin
RUN conda install pandas -y \
&& conda install numpy -y \
&& conda install ujson -y \
&& pip install docplex \
&& pip install boto3 \
&& pip install grpcio \
&& pip install grpcio-tools
RUN python3 -m docplex.mp.environment
ADD fetch_and_run.sh /usr/local/bin/fetch_and_run.sh
ENTRYPOINT ["/usr/local/bin/fetch_and_run.sh"]
From there, I kick off a bash script:
#!/bin/bash
date
echo "Args: $#"
env
echo "script_path: $1"
echo "script_name: $2"
echo "path_prefix: $3"
echo "jobID: $AWS_BATCH_JOB_ID"
echo "jobQueue: $AWS_BATCH_JQ_NAME"
echo "computeEnvironment: $AWS_BATCH_CE_NAME"
echo "current directory: $(pwd)"
mkdir /tmp/scripts/
aws s3 cp $1 /tmp/scripts/$2
python3 /tmp/scripts/${@:2}
But for some reason, I keep getting
/tmp/tmp.hQlWYBEFs/batch-file-temp: line 20: python3: command not found
Do I need to change some PATH variables? Why isn't Docker picking up my Python 3 version?
The image needs to have python3 installed. A Docker build only works with the files and programs that exist inside the container; the python3 you have installed on your own system is not available.
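For example, on Amazon Linux you could install a system python3 near the top of the Dockerfile so that it sits on the default PATH for every process, including the fetch_and_run.sh entrypoint. A minimal sketch, assuming the amazonlinux base image used above:
# Install a system python3 so it is available on the default PATH
RUN yum install -y python3
# Sanity check at build time
RUN python3 --version
Alternatively, double-check that the Anaconda bin directory you appended to PATH actually contains python3 inside the running container, e.g. by running which python3 there.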
I'm trying to build the following sightglass benchmarking suite Dockerfile:
FROM ubuntu:22.04
RUN echo 'APT::Install-Suggests "0";' >> /etc/apt/apt.conf.d/00-docker
RUN echo 'APT::Install-Recommends "0";' >> /etc/apt/apt.conf.d/00-docker
RUN DEBIAN_FRONTEND=noninteractive \
apt-get update \
&& apt-get install -y python3 \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src
ADD rust-benchmark rust-benchmark
WORKDIR /usr/src/rust-benchmark
RUN apt update --yes
RUN apt install clang lldb lld wget curl git xz-utils bzip2 --yes
RUN apt-get install --reinstall ca-certificates --yes
RUN apt-get install libgl1-mesa-glx libegl1-mesa libxrandr2 libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2 libxi6 libxtst6 -y
RUN mkdir /usr/local/share/ca-certificates/cacert.org
RUN wget -P /usr/local/share/ca-certificates/cacert.org http://www.cacert.org/certs/root.crt http://www.cacert.org/certs/class3.crt
RUN update-ca-certificates
RUN git config --global http.sslCAinfo /etc/ssl/certs/ca-certificates.crt
RUN wget https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-x86_64.sh --no-check-certificate
RUN cd / && find . -name cargo
RUN chmod +x Anaconda3-2022.10-Linux-x86_64.sh
RUN yes yes | ./Anaconda3-2022.10-Linux-x86_64.sh
RUN rm Anaconda3-2022.10-Linux-x86_64.sh
RUN echo "export PATH=./yes/bin:$PATH" >> ~/.bashrc
ENV CONDA ./yes/bin/
ENV PATH="${CONDA}:${PATH}"
RUN ln -s ./yes/bin/conda /usr/local/bin/conda
RUN eval $(conda shell.bash hook)
RUN conda init bash
RUN conda update --all
RUN cd / && find . -name cargo
RUN conda create -c conda-forge -n rustenv rust
RUN activate rustenv
SHELL ["./yes/bin/conda", "run", "-n", "rustenv", "/bin/bash", "-c"]
RUN rustc --version
ENV GIT_SSL_NO_VERIFY=1
RUN git clone https://github.com/emscripten-core/emsdk.git
RUN cd emsdk && git pull
RUN chmod +x ./emsdk/emsdk
RUN ./emsdk/emsdk install latest
RUN ./emsdk/emsdk activate latest
RUN chmod +x ./emsdk/emsdk_env.sh
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN cd emsdk && source ./emsdk_env.sh
RUN ./emsdk/emsdk_env.sh
ENV EMSDK ./emsdk
ENV EMSCRIPTEN=${EMSDK}/emscripten/sdk
ENV EM_DATA ${EMSDK}/.data
ENV EM_CONFIG ${EMSDK}/.emscripten
ENV EM_CACHE ${EM_DATA}/cache
ENV EM_PORTS ${EM_DATA}/ports
ENV PATH="${EMSDK}:${EMSDK}/emscripten/sdk:${EMSDK}/llvm/clang/bin:${EMSDK}/node/current/bin:${EMSDK}/binaryen/bin:${PATH}"
RUN curl https://sh.rustup.rs -ksSf | sh -s -- -y
RUN chmod +x $HOME/.cargo/env
RUN $HOME/.cargo/env
ENV RUST ~/.cargo/bin
ENV PATH="${RUST}:${PATH}"
RUN rustup default nightly
RUN rustup target add wasm32-wasi --toolchain nightly
RUN ./yes/envs/rustenv/bin/cargo build --release --target wasm32-wasi
RUN cp target/wasm32-wasi/release/bls-381-wasm-benchmark.wasm /benchmark.wasm
The build process always aborts on the compile step with the following error:
error[E0463]: can't find crate for `core`
|
= note: the `wasm32-wasi` target may not be installed
= help: consider downloading the target with `rustup target add wasm32-wasi`
error[E0463]: can't find crate for `compiler_builtins`
My full setup can be found here: https://github.com/achimcc/arkworks-wasmtime-benchmarks/tree/main/benchmarks/bls12-381
It seems you are compiling something that targets wasm32-wasi.
Rust can compile source code for different "targets", but only a few of them are enabled by default.
To install the wasm32-wasi target, you can run this command:
rustup target add wasm32-wasi
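Note that the target has to be added to the same toolchain that actually performs the build. Your Dockerfile installs Rust twice (once via conda into rustenv and once via rustup), and rustup target add only affects the rustup-managed toolchain, while your final build step uses the conda-installed cargo. A sketch of the last steps using the rustup toolchain throughout (the paths are assumptions based on your Dockerfile):
RUN $HOME/.cargo/bin/rustup default nightly
RUN $HOME/.cargo/bin/rustup target add wasm32-wasi --toolchain nightly
# Build with the rustup-managed cargo so the added wasm32-wasi target is visible
RUN $HOME/.cargo/bin/cargo +nightly build --release --target wasm32-wasi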
If you have any other questions about compiling or environments, feel free to comment here.
I tried running cron through a Dockerfile, but the container exits as soon as it runs. Below are my Dockerfile and the error. Any help would be really appreciated.
Error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "cron": executable file not found in $PATH: unknown.
Dockerfile:
# Pull base image.
FROM amazonlinux:2
ARG TERRAFORM_VERSION=1.2.6
RUN \
yum update -y && \
yum install unzip -y && \
yum install wget -y && \
yum install vim -y && \
yum install bash -y
################################
# Install Terraform
################################
# Download terraform for linux
RUN wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
RUN unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip
RUN mv terraform /usr/local/bin/
################################
# Install python
################################
RUN yum install -y python3-pip
RUN pip3 install --upgrade pip
################################
# Install AWS CLI
################################
RUN pip install awscli --upgrade --user
# add aws cli location to path
ENV PATH=~/.local/bin:$PATH
RUN mkdir ~/.aws && touch ~/.aws/credentials
################################
# Install Cron
################################
RUN yum -y install ca-certificates shadow-utils cronie && yum -y clean all
# Creating crontab
COPY ./automation.sh /var/automation.sh
# Giving executable permission to script file.
RUN chmod +x /var/automation.sh \
&& echo "* * * * * /bin/bash /var/automation.sh" >> /var/crontab
# Ensure sudo group users are not asked for a password
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> \
/etc/sudoers
#run cron process through cmd
CMD ["cron", "-f"]
Update the CMD to this:
CMD ["/usr/bin/crontab", "/var/crontab"]
For anyone looking for an answer to this: I had to change my approach to running cron a bit, and it finally worked. Here is the Dockerfile with the updated approach.
# Pull base image.
FROM amazonlinux:2
ARG TERRAFORM_VERSION=1.2.6
################################
# Install Dependencies
################################
RUN yum update -y && yum -y install unzip wget vim bash procps python3-pip jq git && pip3 install --upgrade pip
################################
# Install AWS CLI
################################
RUN pip install awscli --upgrade --user
# add aws cli location to path
# Note: ~ is not expanded in ENV, so use the absolute path (pip --user installs to /root/.local when running as root)
ENV PATH=/root/.local/bin:$PATH
RUN mkdir ~/.aws && touch ~/.aws/credentials
################################
# Install Terraform
################################
# Download terraform for linux
RUN wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
RUN unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip
RUN mv terraform /usr/local/bin/
################################
# Install Cron
################################
RUN yum -y install ca-certificates shadow-utils cronie && yum -y clean all
# Creating crontab
COPY ./automation.sh /var/automation.sh
# Giving executable permission to script file.
RUN chmod +x /var/automation.sh \
&& echo "* * * * * /bin/bash /var/automation.sh" >> /var/crontab
RUN crontab /var/crontab
# Ensure sudo group users are not asked for a password
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> \
/etc/sudoers
#run cron process through cmd
CMD ["/usr/sbin/crond", "-n"]
I am trying to run a Nextflow pipeline that uses an older version of Nextflow (21.04.3) and Java version 8. Since I have to use this pipeline on a remote server, I can only use Singularity there.
This Nextflow pipeline also makes singularity pull calls, so I need Singularity installed inside the Docker image as well. Then I can convert the Docker image to a Singularity image and move it to the remote server.
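For reference, the conversion step I have in mind looks roughly like this (the image name, remote host, and paths are placeholders):
# Build the Docker image, then convert it into a Singularity image (SIF)
docker build -t nf-tools .
singularity build nf-tools.sif docker-daemon://nf-tools:latest
# Move the SIF to the remote server
scp nf-tools.sif user@remote-server:/path/to/pipelines/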
I am trying to install Singularity inside the Dockerfile, but I am getting errors.
This is the Dockerfile that I am using:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \
    apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \
    apt-get update
RUN apt-get install -y singularity-container
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
These are the errors I am getting
Step 9/17 : RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && apt-get update
---> Running in afc3dcbbd1ee
--2022-03-17 17:40:19-- http://neuro.debian.net/lists/xenial.us-ca.full
Resolving neuro.debian.net (neuro.debian.net)... 129.170.233.11
Connecting to neuro.debian.net (neuro.debian.net)|129.170.233.11|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 262
Saving to: ‘STDOUT’
0K 100% 18.4M=0s
deb http://neurodeb.pirsquared.org data main contrib non-free
#deb-src http://neurodeb.pirsquared.org data main contrib non-free
deb http://neurodeb.pirsquared.org xenial main contrib non-free
#deb-src http://neurodeb.pirsquared.org xenial main contrib non-free
2022-03-17 17:40:19 (18.4 MB/s) - written to stdout [262/262]
/bin/sh: 1: apt-key: not found
The command '/bin/sh -c wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && apt-get update' returned a non-zero code: 127
Is there a way to install Singularity using a Dockerfile?
Thanks
I made some changes to the Dockerfile, based on the method for installing Singularity on Linux given here.
The complete Dockerfile, with which I was able to successfully run Nextflow, Java, and Singularity within Singularity, is given below:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN apt-get update && apt-get install -y \
build-essential \
libssl-dev \
uuid-dev \
libgpgme11-dev \
squashfs-tools \
libseccomp-dev \
wget \
pkg-config \
procps
# Download the Go 1.16.3 release, install it, and add it to the PATH
ENV VERSION=1.16.3
ENV OS=linux
ENV ARCH=amd64
RUN wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz && \
tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz && \
rm go$VERSION.$OS-$ARCH.tar.gz && \
echo 'export PATH=$PATH:/usr/local/go/bin' | tee -a /etc/profile
# Download Singularity version 3.7.3 (a security release)
ENV VERSION=3.7.3
RUN wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz && \
tar -xzf singularity-${VERSION}.tar.gz
# Compile Singularity sources and install it
RUN export PATH=$PATH:/usr/local/go/bin && \
cd singularity && \
./mconfig --without-suid && \
make -C ./builddir && \
make -C ./builddir install
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
The requirements.txt file used in the above Dockerfile is given below:
click
GitPython
jinja2
jsonschema
packaging
prompt_toolkit>=3.0.3
pyyaml
pytest-workflow
questionary>=1.8.0
requests_cache
requests
rich>=10.0.0
tabulate
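A quick way to check that everything ended up in the image (the image name is a placeholder):
docker build -t nfcore-tools .
# Singularity and Nextflow are installed under /usr/local, so both should be on the default PATH
docker run --rm nfcore-tools singularity --version
docker run --rm nfcore-tools nextflow -version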
I'm trying to use asdf-direnv in Docker. Following the README of asdf-direnv, I made this Dockerfile:
FROM nvidia/cuda:10.2-devel-ubuntu18.04
RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-ic", "-l"]
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update
# Python
RUN apt-get install -y make build-essential libssl-dev zlib1g-dev \
libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \
libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev
# Utils
RUN apt-get install -y git
RUN apt-get clean
WORKDIR /venv
RUN git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.8.1
RUN echo ". $HOME/.asdf/asdf.sh" >> ~/.bashrc
RUN echo ". $HOME/.asdf/completions/asdf.bash" >> ~/.bashrc
RUN asdf plugin add direnv
RUN asdf install direnv 2.28.0
RUN asdf local direnv 2.28.0
RUN echo "eval \"\$(asdf exec direnv hook bash)\"" >> ~/.bashrc
RUN echo "direnv() { asdf exec direnv \"\$#\"; }" >> ~/.bashrc
RUN mkdir -p ~/.config/direnv/
RUN echo "source \"\$(asdf direnv hook asdf)\"" >> ~/.config/direnv/direnvrc
RUN echo "export DIRENV_LOG_FORMAT=" >> ~/.config/direnv/direnvrc
RUN asdf plugin add python
RUN asdf install python 3.8.7
RUN asdf local python 3.8.7
RUN echo "use asdf" >> .envrc
RUN echo "layout python" >> .envrc
RUN direnv allow
RUN echo $(which python)
CMD ["/bin/bash"]
The issue is that the line RUN echo $(which python) gives the right result when I run the container but not when I build the image. I got:
/root/.asdf/shims/python when building with docker build . -t venv-gpu -f docker-gpu/Dockerfile
/venv/.direnv/python-3.8.7/bin/python when running docker run --gpus all -it venv-gpu
How can I fix this?
So I have a Dockerfile, which I use to create an image. The instructions in the Dockerfile are:
#This is a docker file
FROM ubuntu:14.04
MAINTAINER amit
# Install python-pip
RUN apt-get update && apt-get install -y python-pip
# Install virtual-env
RUN mkdir ~/.virtualenvs
RUN pip install virtualenv
RUN pip install virtualenvwrapper
RUN touch ~/.bashrc
RUN echo "export WORKON_HOME=$HOME/.virtualenvs" >> ~/.bashrc
RUN echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.bashrc
RUN /bin/bash -c "source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv be"
# INSTALL REQUIRED PACKAGES
RUN apt-get update && apt-get install -y \
xclip \
python-dev \
libffi-dev \
libpam0g-dev \
sqlite3 \
libsqlite3-dev \
subversion \
g++ \
libxslt1-dev \
libxml2-dev \
zlib1g-dev \
swig \
node \
git \
libssl-dev
# Expose port
EXPOSE 5000
# Get the source ideally one should do a get on source release
COPY src /src
WORKDIR /src
RUN touch installer.sh
RUN echo "#!/bin/bash" >> installer.sh
RUN echo "source `which virtualenvwrapper.sh`" >> installer.sh
RUN echo "workon be" >> installer.sh
RUN echo "./tools/install_dependencies" >> installer.sh
RUN echo "deactivate be" >> installer.sh
RUN chmod +x installer.sh
Now I build an image from this. I execute installer.sh by running the Docker container, and everything works great.
But when someone pulls this very image from the repository and runs installer.sh, there is an error stating that the virtualenv "be" is not present, and they have to run "mkvirtualenv be" once again.
What is wrong here? Shouldn't the virtualenv "be" automatically be present in the container?