Cannot get asdf-direnv to work in Docker when building - docker

I'm trying to use asdf-direnv in Docker. Following the README of asdf-direnv, I made this Dockerfile:
FROM nvidia/cuda:10.2-devel-ubuntu18.04
RUN chsh -s /bin/bash
SHELL ["/bin/bash", "-ic", "-l"]
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update
# Python
RUN apt-get install -y make build-essential libssl-dev zlib1g-dev \
libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \
libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev
# Utils
RUN apt-get install -y git
RUN apt-get clean
WORKDIR /venv
RUN git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.8.1
RUN echo ". $HOME/.asdf/asdf.sh" >> ~/.bashrc
RUN echo ". $HOME/.asdf/completions/asdf.bash" >> ~/.bashrc
RUN asdf plugin add direnv
RUN asdf install direnv 2.28.0
RUN asdf local direnv 2.28.0
RUN echo "eval \"\$(asdf exec direnv hook bash)\"" >> ~/.bashrc
RUN echo "direnv() { asdf exec direnv \"\$#\"; }" >> ~/.bashrc
RUN mkdir -p ~/.config/direnv/
RUN echo "source \"\$(asdf direnv hook asdf)\"" >> ~/.config/direnv/direnvrc
RUN echo "export DIRENV_LOG_FORMAT=" >> ~/.config/direnv/direnvrc
RUN asdf plugin add python
RUN asdf install python 3.8.7
RUN asdf local python 3.8.7
RUN echo "use asdf" >> .envrc
RUN echo "layout python" >> .envrc
RUN direnv allow
RUN echo $(which python)
CMD ["/bin/bash"]
The issue is that the line RUN echo $(which python) resolves differently when I run the container and when I build the image. I got:
/root/.asdf/shims/python when building with docker build . -t venv-gpu -f docker-gpu/Dockerfile
/venv/.direnv/python-3.8.7/bin/python when running docker run --gpus all -it venv-gpu
How can I fix this?
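A likely explanation, as a hedged aside: direnv loads .envrc from its shell prompt hook, and no prompt is ever shown while a RUN step executes, so at build time which python still resolves to the asdf shim; the interactive shell started by docker run does fire the hook, which is why the .direnv path appears there. A minimal sketch of a build-time check using direnv's exec subcommand, which evaluates an allowed .envrc before running a command:
RUN asdf exec direnv exec /venv which python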

Related

Error during installation of Node.js: node -v outputs "node not found"

I run the following Dockerfile in order to build an image for my TeamCity agent:
FROM jetbrains/teamcity-agent:2022.10.1-linux-sudo
RUN curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
RUN sudo sh -c 'echo deb https://apt.kubernetes.io/ kubernetes-xenial main > /etc/apt/sources.list.d/kubernetes.list'
RUN curl -sL https://deb.nodesource.com/setup_16.x | sudo -E bash -
# https://github.com/AdoptOpenJDK/openjdk-docker/blob/master/12/jdk/ubuntu/Dockerfile.hotspot.releases.full
RUN sudo apt-get update && \
sudo apt-get install -y ffmpeg gnupg2 git sudo kubectl \
binfmt-support qemu-user-static mc jq
#RUN wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | sudo tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null
#RUN sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ focal main' && \
# sudo apt-get update && \
RUN sudo apt install -y cmake build-essential wget
RUN sudo curl -L https://nodejs.org/dist/v14.17.3/node-v14.17.3-linux-x64.tar.gz --output node-v14.17.3-linux-x64.tar.gz
RUN sudo tar -xvf node-v14.17.3-linux-x64.tar.gz
RUN echo 'export PATH="$HOME/node-v14.17.3-linux-x64/bin:$PATH"' >> ~/.bashrc
RUN echo "The version of Node.js is $(node -v)"
All the code was right, but then I decided to add a Node.js installation to the Dockerfile, which begins at this line:
RUN sudo curl -L https://nodejs.org/dist/v14.17.3/node-v14.17.3-linux-x64.tar.gz --output node-v14.17.3-linux-x64.tar.gz
However, I now get the following error during execution of the last line of the Dockerfile:
RUN echo "The version of Node.js is $(node -v)"
Output for this line is:
Step 10/22 : RUN echo "The version of Node.js is $(node -v)"
21:07:41 ---> Running in 863b0e75e45a
21:07:42 /bin/sh: 1: node: not found
You need to make the following two changes in your Dockerfile for your Node installation to be included in your $PATH env var:
Remove the $HOME variable from the path you're concatenating, as you are currently unpacking Node into the root folder and not the $HOME folder:
RUN echo 'export PATH="/node-v14.17.3-linux-x64/bin:$PATH"' >> ~/.bashrc
Either source ~/.bashrc explicitly for the $PATH changes to take effect, or run the export command as part of the Dockerfile.
Once you apply these two changes, the error should go away.
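A sketch of that second option, using Docker's ENV instruction instead of editing .bashrc (my phrasing, assuming the tarball really was unpacked at /):
ENV PATH="/node-v14.17.3-linux-x64/bin:${PATH}"
RUN echo "The version of Node.js is $(node -v)"
ENV persists for every subsequent RUN step and for the running container, whereas a .bashrc edit only affects interactive bash shells.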

Why does my container seem to be empty when starting as root?

When I get into my container, nothing seems to have been installed.
docker pull brandojazz/iit-term-synthesis:test
then
docker run -u root -ti brandojazz/iit-term-synthesis:test_arm bash
see:
(base) root@897a4007076f:/home/bot# opam switch
[WARNING] Running as root is not recommended
[ERROR] Opam has not been initialised, please run `opam init'
It should have been initialized.
FROM continuumio/miniconda3
# FROM --platform=linux/amd64 continuumio/miniconda3
MAINTAINER Brando Miranda "me@gmail.com"
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ssh \
git \
m4 \
libgmp-dev \
opam \
wget \
ca-certificates \
rsync \
strace \
gcc
# rlwrap \
# sudo
# https://github.com/giampaolo/psutil/pull/2103
RUN useradd -m bot
# format for chpasswd user_name:password
# RUN echo "bot:bot" | chpasswd
# RUN && adduser docker sudo
WORKDIR /home/bot
USER bot
ADD https://api.github.com/repos/IBM/pycoq/git/refs/heads/main version.json
# -- setup opam like VP's PyCoq
RUN opam init --disable-sandboxing
# compiler + '_' + coq_serapi + '.' + coq_serapi_pin
RUN opam switch create ocaml-variants.4.07.1+flambda_coq-serapi.8.11.0+0.11.1 ocaml-variants.4.07.1+flambda
RUN opam switch ocaml-variants.4.07.1+flambda_coq-serapi.8.11.0+0.11.1
RUN eval $(opam env)
RUN opam repo add coq-released https://coq.inria.fr/opam/released
# RUN opam pin add -y coq 8.11.0
# ['opam', 'repo', '--all-switches', 'add', '--set-default', 'coq-released', 'https://coq.inria.fr/opam/released']
RUN opam repo --all-switches add --set-default coq-released https://coq.inria.fr/opam/released
RUN opam update --all
RUN opam pin add -y coq 8.11.0
#RUN opam install -y --switch ocaml-variants.4.07.1+flambda_coq-serapi_coq-serapi_8.11.0+0.11.1 coq-serapi 8.11.0+0.11.1
RUN opam install -y coq-serapi
#RUN eval $(opam env)
#
## makes sure dependencies for pycoq are installed once already in the docker image
#RUN pip install https://github.com/ddelange/psutil/releases/download/release-5.9.1/psutil-5.9.1-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
#ENV WANDB_API_KEY="SECRET"
#RUN pip install wandb --upgrade
#
#RUN pip install ultimate-utils
## RUN pip install pycoq # do not uncomment on arm, unless serlib is removed from setup.py in the pypi pycoq version.
## RUN pip install ~/iit-term-synthesis # likely won't work because we don't have iit or haven't pushed it to pypi
#
## then make sure editable mode is done to be able to use changing pycoq from system
#RUN echo "pip install -e /home/bot/ultimate-utils" >> ~/.bashrc
#RUN echo "pip install -e /home/bot/pycoq" >> ~/.bashrc
#RUN echo "pip install -e /home/bot/iit-term-synthesis" >> ~/.bashrc
#RUN echo "pip install wandb --upgrade" >> ~/.bashrc
#
#RUN echo "eval $(opam env)" >> ~/.bashrc
## - set env variable for bash terminal prompt p1 to be nicely colored
#ENV force_color_prompt=yes
#
#RUN mkdir -p /home/bot/data/
# RUN pytest --pyargs pycoq
#CMD /bin/bash
NB: This may not be your only problem (I have no idea what opam is or how it works), but one thing jumps out:
This...
RUN eval $(opam env)
...doesn't do anything. Each RUN instruction executes in a new shell in a new container; environment variables set in one RUN command aren't going to be visible in a subsequent RUN command.
Rather than a list of single-command RUN commands, chain everything together in a single command:
RUN eval $(opam env) && \
opam repo add coq-released https://coq.inria.fr/opam/released && \
opam repo --all-switches add --set-default coq-released https://coq.inria.fr/opam/released && \
opam update --all && \
opam pin add -y coq 8.11.0 && \
opam install -y coq-serapi
Because the above runs in a single shell, the environment set by eval $(opam env) will be available to all the following commands.
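Two further hedged notes beyond the answer above. First, opam keeps its per-user state under ~/.opam, and this Dockerfile runs opam init as USER bot, while the docker run command in the question forces -u root, a user for whom opam was never initialised; a quick check is to run the container as its default user instead:
docker run -ti brandojazz/iit-term-synthesis:test_arm opam switch
Second, opam can resolve the switch environment itself on each invocation via opam exec, which avoids relying on eval $(opam env) surviving across RUN steps, e.g.:
RUN opam exec -- coqc --version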

Dockerfile: Python3 not found

I am trying to convert a bash script to a Dockerfile, since we are going the containerization route with AWS Batch.
Basically, I install CPLEX (an optimization library) and Anaconda, install some related packages, check that my environment is good to go, and then kick off a shell script to run the batch job.
Here is a snippet of my Dockerfile:
FROM amazonlinux:latest
# Download packages for container
RUN yum update -y
RUN yum -y install which unzip aws-cli
RUN yum install -y tar.x86_64
RUN yum install gzip -y
RUN yum install ncompress -y
RUN yum -y install wget
RUN yum install -y nano
# Set working directory
WORKDIR /setup
#: Copy CPLEX installer binary and installation script.
COPY cplex_odee1210.linux-x86-64.bin /setup/
COPY cplex_installer_input.sh /setup/
#: Install CPLEX and update .bashrc
RUN chmod +x /setup/cplex_odee1210.linux-x86-64.bin
RUN chmod +x cplex_installer_input.sh
RUN ./cplex_installer_input.sh | bash cplex_odee1210.linux-x86-64.bin
RUN echo 'export PATH=$PATH:/opt/ibm/ILOG/CPLEX_Optimizer1210/cplex/bin/x86-64_linux' >>/root/.bashrc \
&& /bin/bash -c "source ~/.bashrc"
ENV PATH $PATH:/opt/ibm/ILOG/CPLEX_Optimizer1210/cplex/bin/x86-64_linux
#: Download Anaconda
COPY Anaconda3-2019.10-Linux-x86_64.sh /setup/
RUN bash Anaconda3-2019.10-Linux-x86_64.sh -b -p /home/ec2-user/anaconda3
RUN echo 'export PATH=$PATH:/home/ec2-user/anaconda3/bin' >>/root/.bashrc \
&& /bin/bash -c "source ~/.bashrc"
ENV PATH $PATH:/home/ec2-user/anaconda3/bin
RUN conda install pandas -y \
&& conda install numpy -y \
&& conda install ujson -y \
&& pip install docplex \
&& pip install boto3 \
&& pip install grpcio \
&& pip install grpcio-tools
RUN python3 -m docplex.mp.environment
ADD fetch_and_run.sh /usr/local/bin/fetch_and_run.sh
ENTRYPOINT ["/usr/local/bin/fetch_and_run.sh"]
From there, I kick off a bash script
#!/bin/bash
date
echo "Args: $#"
env
echo "script_path: $1"
echo "script_name: $2"
echo "path_prefix: $3"
echo "jobID: $AWS_BATCH_JOB_ID"
echo "jobQueue: $AWS_BATCH_JQ_NAME"
echo "computeEnvironment: $AWS_BATCH_CE_NAME"
echo "current directory: $(pwd)"
mkdir /tmp/scripts/
aws s3 cp $1 /tmp/scripts/$2
python3 /tmp/scripts/${@:2}
But for some reason, I keep getting
/tmp/tmp.hQlWYBEFs/batch-file-temp: line 20: python3: command not found
Do I need to change some PATH variables? Why isn't Docker picking up my Python 3 version?
The image needs to have python3 installed and on the PATH of the shell that runs your script. Building images works only off of files and programs that exist inside the image; the python3 you have installed on your own system is not available.
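Two hedged checks, with a placeholder image tag: confirm what the built image actually has on its PATH, and, if python3 doesn't turn up, install a system one (the python3 package is available on Amazon Linux 2):
docker run --rm my-batch-image sh -c 'echo $PATH; command -v python3'
# or, in the Dockerfile:
RUN yum install -y python3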

Not able to install Groovy using SDKMAN in a Docker image

I use a script to build the Dockerfile. Below is my script...
echo "FROM ubuntu:14.04" >> Dockerfile
echo "RUN rm /bin/sh && ln -s /bin/bash /bin/sh" >> Dockerfile
echo "RUN apt-get -y update && apt-get upgrade -y" >> Dockerfile
echo "RUN apt-get install -y software-properties-common" >> Dockerfile
echo "RUN apt-get -y update && add-apt-repository -y ppa:webupd8team/java" >> Dockerfile
echo "RUN echo debconf shared/accepted-oracle-license-v1-1 select true | debconf-set-selections" >> Dockerfile
echo "RUN echo debconf shared/accepted-oracle-license-v1-1 seen true | debconf-set-selections" >> Dockerfile
echo "RUN apt-get -y update && apt-get install -y oracle-java8-installer" >> Dockerfile
echo "RUN apt-get install -y curl " >> Dockerfile
echo "RUN apt-get install -y unzip " >> Dockerfile
echo "RUN apt-get -y update && curl -s get.sdkman.io | bash" >> Dockerfile
echo 'RUN source "$HOME/.sdkman/bin/sdkman-init.sh"' >> Dockerfile
echo 'RUN source ~/.profile' >> Dockerfile
echo "RUN yes | sdk install groovy" >> Dockerfile
...
docker build -t imagename:version ./
...
but I get the below error
RUN yes | sdk install groovy
---> Running in 09056add5ab7
/bin/sh: sdk: command not found
The command '/bin/sh -c yes | sdk install groovy' returned a non-zero code: 127
If I don't use the command "sdk install groovy", the build is successful; I can then run the image, issue the same command, and it works.
Any help? Any idea why this is happening?
RUN yes | /bin/bash -l -c 'sdk install groovy'
worked.
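This works because sdk is not a binary but a shell function defined by sdkman-init.sh, and the shell that RUN uses (here /bin/sh, symlinked to bash) is neither a login nor an interactive shell, so it reads neither ~/.profile nor ~/.bashrc; the earlier RUN source lines don't help either, since each RUN starts a fresh shell. A login bash (-l) reads the profile chain, which on Ubuntu typically ends up sourcing the SDKMAN init line. An equivalent sketch is to switch the build shell once with Docker's SHELL instruction:
SHELL ["/bin/bash", "-lc"]
RUN yes | sdk install groovy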

Docker container does not reflect changes on machines other than the one where it was created

So I have a Dockerfile, using which I create an image. The instructions in the Dockerfile are:
#This is a docker file
FROM ubuntu:14.04
MAINTAINER amit
# Install python-pip
RUN apt-get update && apt-get install -y python-pip
# Install virtual-env
RUN mkdir ~/.virtualenvs
RUN pip install virtualenv
RUN pip install virtualenvwrapper
RUN touch ~/.bashrc
RUN echo "export WORKON_HOME=$HOME/.virtualenvs" >> ~/.bashrc
RUN echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.bashrc
RUN /bin/bash -c "source /usr/local/bin/virtualenvwrapper.sh && mkvirtualenv be"
# INSTALL REQUIRED PACKAGES
RUN apt-get update && apt-get install -y \
xclip \
python-dev \
libffi-dev \
libpam0g-dev \
sqlite3 \
libsqlite3-dev \
subversion \
g++ \
libxslt1-dev \
libxml2-dev \
zlib1g-dev \
swig \
node \
git \
libssl-dev
# Expose port
EXPOSE 5000
# Get the source ideally one should do a get on source release
COPY src /src
WORKDIR /src
RUN touch installer.sh
RUN echo "#!/bin/bash" >> installer.sh
RUN echo "source `which virtualenvwrapper.sh`" >> installer.sh
RUN echo "workon be" >> installer.sh
RUN echo "./tools/install_dependencies" >> installer.sh
RUN echo "deactivate be" >> installer.sh
RUN chmod +x installer.sh
Now I build an image from this. I execute installer.sh by running the Docker container, and everything works great.
But when someone pulls this very image from the repository and runs installer.sh, there is an error stating that the virtualenv be is not present, and one has to run "mkvirtualenv be" once again.
What is wrong here? Shouldn't the virtualenv be automatically be present in the container?
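One hedged first check, with the image name as a placeholder: verify whether the virtualenv was actually baked into the image, independently of how the other machine runs it. Since the build runs as root, mkvirtualenv be should have landed under /root/.virtualenvs:
docker run --rm --entrypoint ls your-image /root/.virtualenvs
If be shows up there, the image itself is fine, and the difference lies in how the container is run on the other machine (user, volume mounts, or a different tag).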
