I have a Docker container with two conda environments in it. One is used the vast majority of the time, so I would like to start automatically in that environment (currently it starts in the base environment). Based on this website, I tried adding this to the end of my Dockerfile:
# Activate env
SHELL ["conda", "run", "-n", "py3", "/bin/bash", "-c"]
ENTRYPOINT ["conda", "run", "-n", "py3", "python", "pass.py"]
where pass.py just contains print("hello world").
This causes the script to run, but the container does not stay open afterwards, although it does when I remove these lines. How do I get the container to start, and stay open, in a specified environment?
My Dockerfile looks like this:
FROM nvidia/cuda:10.1-cudnn7-devel-centos7
WORKDIR /app/
COPY ./*.* ./
ENV CONDA_DIR "/opt/conda"
ENV PATH "$CONDA_DIR"/bin:$PATH
ONBUILD ENV PATH "$CONDA_DIR"/bin:$PATH
RUN \
yum -y install epel-release && \
yum -y update && \
yum install -y \
bzip2 \
curl \
which \
libXext \
libSM \
libXrender \
git \
cuda-nvcc-10-1 \
openssh-server \
postgresql-devel && \
yum clean all && rm -rf /var/cache/yum/*
RUN CONDA_VERSION="4.7.12" && \
curl -L \
https://repo.continuum.io/miniconda/Miniconda3-${CONDA_VERSION}-Linux-x86_64.sh -o miniconda.sh && \
mkdir -p "$CONDA_DIR" && \
bash miniconda.sh -f -b -p "$CONDA_DIR" && \
echo "export PATH=$CONDA_DIR/bin:\$PATH" > /etc/profile.d/conda.sh && \
rm miniconda.sh && \
conda config --add channels conda-forge && \
conda config --set auto_update_conda False && \
pip install --upgrade pip && \
rm -rf /root/.cache/pip/*
RUN conda env create -f py2_env.yaml
RUN conda env create -f py3_env.yaml
# Activate env
SHELL ["conda", "run", "-n", "py3", "/bin/bash", "-c"]
ENTRYPOINT ["conda", "run", "-n", "py3", "python", "pass.py"]
I'm building a Docker image that includes a ready-to-use terminal with all my usual tools.
I'm running a 2020 MacBook Air M1 on Monterey 12.5.1.
I'd like to start the container directly in a tmux session, but the character display behavior is inconsistent.
When ENTRYPOINT is ["zsh"] and I execute tmux in the interactive container, the characters display as expected, but when I change the ENTRYPOINT to ["zsh", "-c", "tmux"], the display becomes garbled.
Here is my Dockerfile:
FROM ubuntu:22.04
ARG USER=ben
ENV GROUP=${USER}
ENV HOME=/home/${USER}
ENV TMUX_SESSION_NAME=devops
RUN groupadd ${GROUP}
RUN useradd -m -g ${GROUP} ${USER}
RUN apt-get update -y && apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends tzdata
RUN apt-get install -y \
ca-certificates \
curl \
git \
wget \
docker \
vim \
fzf \
zsh \
fd-find \
zsh-syntax-highlighting \
tmux \
locales \
locales-all
RUN usermod -s /bin/zsh ${USER}
# Configuring locales
RUN ln -fs /usr/share/zoneinfo/Europe/Paris /etc/localtime \
&& dpkg-reconfigure --frontend noninteractive tzdata
USER ${USER}
WORKDIR /home/${USER}
# Oh-My-Zsh configuration
RUN wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh -O - | zsh || true
# ZSH plugins
RUN git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
RUN git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-${HOME}/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
RUN git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-${HOME}/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
COPY --chown=${USER}:${GROUP} zshrc ${HOME}/.zshrc
COPY --chown=${USER}:${GROUP} tmux.conf ${HOME}/.tmux.conf
COPY --chown=${USER}:${GROUP} p10k.zsh ${HOME}/.p10k.zsh
# ENTRYPOINT ["zsh", "-c", "tmux"]
ENTRYPOINT ["zsh"]
I couldn't find the reason for this behavior, but I investigated starting tmux directly from zsh rather than in the ENTRYPOINT, and what solved my issue was setting the environment variable ZSH_TMUX_AUTOSTART=true.
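For anyone else landing here: ZSH_TMUX_AUTOSTART is read by Oh My Zsh's tmux plugin, so it only takes effect if that plugin is enabled and the variable is set before Oh My Zsh is sourced. A minimal sketch (the plugin list is illustrative, not my exact .zshrc):

# ~/.zshrc -- the variable must be set before oh-my-zsh.sh is sourced
export ZSH="$HOME/.oh-my-zsh"
ZSH_TMUX_AUTOSTART=true
plugins=(git tmux zsh-autosuggestions zsh-syntax-highlighting)
source $ZSH/oh-my-zsh.sh

The Dockerfile keeps the plain ENTRYPOINT ["zsh"], and every interactive shell then starts or re-attaches to a tmux session on its own.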
Thank you all for your help!
I'm trying to build an image that will have both Jupyter Notebook and Keras to run my created ML model in a docker container. Here's what I have as of right now for my Dockerfile.
Dockerfile
ARG cuda_version=9.0
ARG cudnn_version=7
FROM nvidia/cuda:${cuda_version}-cudnn${cudnn_version}-devel
# Install system packages
RUN apt-get update && apt-get install -y --no-install-recommends \
bzip2 \
g++ \
git \
graphviz \
libgl1-mesa-glx \
libhdf5-dev \
openmpi-bin \
wget && \
rm -rf /var/lib/apt/lists/*
# Install conda
ENV CONDA_DIR /opt/conda
ENV PATH $CONDA_DIR/bin:$PATH
RUN wget --quiet --no-check-certificate https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh && \
echo "c59b3dd3cad550ac7596e0d599b91e75d88826db132e4146030ef471bb434e9a *Miniconda3-4.2.12-Linux-x86_64.sh" | sha256sum -c - && \
/bin/bash /Miniconda3-4.2.12-Linux-x86_64.sh -f -b -p $CONDA_DIR && \
rm Miniconda3-4.2.12-Linux-x86_64.sh && \
echo export PATH=$CONDA_DIR/bin:'$PATH' > /etc/profile.d/conda.sh
# Install Python packages and keras
ENV NB_USER keras
ENV NB_UID 1000
RUN useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \
chown $NB_USER $CONDA_DIR -R && \
mkdir -p /src && \
chown $NB_USER /src
USER $NB_USER
ARG python_version=3.6
RUN conda config --append channels conda-forge
RUN conda install -y python=${python_version} && \
pip install --upgrade pip && \
pip install \
sklearn_pandas \
tensorflow-gpu \
cntk-gpu && \
conda install \
bcolz \
h5py \
json \
cx_Oracle \
matplotlib \
numpy \
mkl \
nose \
notebook \
Pillow \
pandas \
pydot \
pygpu \
pyyaml \
scikit-learn \
six \
theano \
mkdocs \
&& \
git clone git://github.com/keras-team/keras.git /src && pip install -e /src[tests] && \
pip install git+git://github.com/keras-team/keras.git && \
conda clean -yt
ADD theanorc /home/keras/.theanorc
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
ENV PYTHONPATH='/src/:$PYTHONPATH'
# Working directory
WORKDIR /data
# COPYING CURRENT DIRECTORY INTO /data folder in container
COPY ./lstm_real_deal.ipynb /data
EXPOSE 8888
CMD jupyter notebook --port=8888 --ip=0.0.0.0
I then run the docker commands in this order:
docker build . (I'm in the directory with the Dockerfile so it's building correctly)
docker run -d -p 8888:8888 [IMAGE]
I then go to 0.0.0.0:8888 on my local machine, but I can't seem to connect to the Jupyter notebook.
Any suggested improvements to the Dockerfile would also be much appreciated.
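One thing worth checking first (an assumption on my part, since the question doesn't show the container logs): Jupyter Notebook protects itself with an auth token by default and prints the login URL to stdout, which docker run -d hides, and 0.0.0.0 is the bind address inside the container rather than an address to browse to. A sketch of how to verify, with keras-notebook as an example tag:

docker build -t keras-notebook .
docker run -d -p 8888:8888 keras-notebook
docker logs <container-id>    # look for a line like http://127.0.0.1:8888/?token=...
# then browse to http://localhost:8888/?token=<token>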
When Chromium launches successfully, its debugging WebSocket URL should look like ws://127.0.0.1:9222/devtools/browser/ec261e61-0e52-4016-a5d7-d541e82ecb0a.
127.0.0.1:9222 should then be browsable in Chrome to inspect the headless Chromium. However, I cannot access the remote debugger URL from Chrome after I dockerize my application.
The launchOption object for launching Chromium with Puppeteer:
{
  "args": [
    "--remote-debugging-port=9222",
    "--window-size=1920,1080",
    "--mute-audio",
    "--disable-notifications",
    "--force-device-scale-factor=0.8",
    "--no-sandbox",
    "--disable-setuid-sandbox"
  ],
  "defaultViewport": {
    "height": 1080,
    "width": 1920
  },
  "headless": true
}
Dockerfile:
FROM node:10.16.3-slim
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/* \
&& wget --quiet https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh -O /usr/sbin/wait-for-it.sh \
&& chmod +x /usr/sbin/wait-for-it.sh
WORKDIR /usr/app
COPY ./ ./
VOLUME ["......." ]
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser \
&& chown -R pptruser:pptruser /usr/app \
&& npm install
USER pptruser
CMD npm run start
EXPOSE 3000 9222
I run the new container with:
docker run \
-p 3000:3000 \
-p 9222:9222 \
pptr
Port 9222 should be accessible on my host machine, but Chrome shows ERR_EMPTY_RESPONSE when I browse 127.0.0.1:9222, and DOCKER-INTERNAL-IP:9222 times out.
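The short version of both answers below: Chromium binds its DevTools listener to 127.0.0.1 inside the container by default, while docker run -p forwards traffic to the container's external interface, so the host connection gets an empty response. Binding the listener to all interfaces fixes it; a sketch of the adjusted args:

"args": [
  "--remote-debugging-port=9222",
  "--remote-debugging-address=0.0.0.0",
  "--window-size=1920,1080",
  "--mute-audio",
  "--disable-notifications",
  "--force-device-scale-factor=0.8",
  "--no-sandbox",
  "--disable-setuid-sandbox"
]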
I managed to make this work with puppeteer using the following Dockerfile, docker run and puppeteer config:
FROM ubuntu:18.04
RUN apt update \
&& apt install -y \
curl \
wget \
gnupg \
gcc \
g++ \
make \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash - \
&& apt install -y nodejs \
&& rm -rf /var/lib/apt/lists/*
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser
ADD . /app
WORKDIR /app
RUN chown -R pptruser:pptruser /app
RUN rm -rf node_modules
RUN rm -rf build/*
USER pptruser
RUN npm install --dev
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT /app/entrypoint.sh
Docker run:
docker run -p 9223:9222 -it myimage
Puppeteer launch:
this.browser = await puppeteer.launch(
  {
    headless: true,
    args: [
      '--remote-debugging-port=9222',
      '--remote-debugging-address=0.0.0.0',
      '--no-sandbox'
    ]
  }
);
The entrypoint just launches the platform like: node build/main.js
After that I just had to connect to localhost:9223 on Chrome to see the browser. Hope it helps!
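A quick way to verify the mapping from the host (given the 9223:9222 publish above) is the DevTools HTTP endpoint, which also reports the browser-level WebSocket URL:

curl http://localhost:9223/json/version
# {"Browser": "...", "webSocketDebuggerUrl": "ws://...", ...}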
I know there is already an accepted answer, but let me add to it in hopes of greatly reducing your image size; one shouldn't add too many extras to the Dockerfile if one can help it. Ultimately, adding --remote-debugging-port=9222 and --remote-debugging-address=0.0.0.0 is what allows you to access it.
Dockerfile
FROM ubuntu:latest
LABEL maintainer="Full Name <email@email.com> https://yourwebsite.com"
WORKDIR /home/
COPY wrapper-script.sh wrapper-script.sh
# install chromium-browser and cleanup.
RUN apt update && apt install chromium-browser --no-install-recommends -y && apt autoremove && apt clean && apt autoclean && rm -rf /var/lib/apt/lists/*
# Run your commands and add environment variables from your compose file.
CMD ["sh", "wrapper-script.sh"]
I use a wrapper script so that I can include environment variables here. You can see URL and USERNAME used so that I can configure them from the compose file. I'm sure there is a better way to do this, but it lets me scale my containers horizontally with ease.
wrapper-script.sh
#!/bin/bash
# Start the process
chromium-browser --headless --disable-gpu --no-sandbox --remote-debugging-port=9222 --remote-debugging-address=0.0.0.0 ${URL}${USERNAME}
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start chromium-browser: $status"
  exit $status
fi
# Naive check: runs once a minute to see if the browser process has exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that the process has exited.
# Otherwise it loops forever, waking up every 60 seconds.
while sleep 60; do
  ps aux | grep chromium-browser | grep -q -v grep
  PROCESS_1_STATUS=$?
  # If the grep above finds anything, it exits with 0 status.
  # If it is not 0, the browser is gone and something is wrong.
  if [ $PROCESS_1_STATUS -ne 0 ]; then
    echo "The chromium-browser process has already exited."
    exit 1
  fi
done
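To try the image without compose (my own commands; the headless-chrome tag matches the compose file below):

docker build -t headless-chrome .
docker run --rm -p 9222:9222 -e URL=https://example.com/ -e USERNAME=test headless-chrome
curl http://localhost:9222/json/version    # should return the DevTools metadata as JSON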
Lastly, I have the docker-compose file. This is where I define all my settings so that I can configure my wrapper-script.sh with what I need and scale horizontally. Notice the environment section of the docker-compose file. USERNAME and URL are environment variables, and they can be called from the wrapper script.
docker-compose.yml
version: '3.7'
services:
  chrome:
    command: [ 'sh', 'wrapper-script.sh' ]
    image: headless-chrome
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - USERNAME=eaglejs
      - URL=https://teamtreehouse.com/
    ports:
      - 9222:9222
If you are wondering what my folder structure looks like: all three files are at the root of the folder. For example:
My_Docker_Repo:
  Dockerfile
  docker-compose.yml
  wrapper-script.sh
After that is all said and done, I simply run docker-compose up and I have one container running. Right now, with a fixed ports mapping, you'll have to do something extra to scale: if you were to run docker-compose up --scale chrome=5, your host ports will clash, but let me know if you want to try that and I'll see what I can do for scaling (see also the note after this answer). Other than that, if it is for testing, this should work well the way it is. :) Happy coding!
eaglejs
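A note on the port clash mentioned above (my own sketch, not part of eaglejs's answer): if you publish only the container port, Docker assigns a free ephemeral host port to each replica, and docker port reports what it picked:

ports:
  - "9222"    # no host port given: Docker chooses one per replica

docker-compose up -d --scale chrome=5
docker port <container-name> 9222    # e.g. 0.0.0.0:49153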
I've built a Docker image that sets a number of environment variables, including one called SPARK_HOME. Here is the line from the Dockerfile that declares that env var:
ENV SPARK_HOME="/opt/spark"
When I issue docker run I can see that the env var exists but any reference to it doesn't return anything, as demonstrated in a simple echo:
$ docker run --rm myimage /bin/bash -c "env | grep SPARK_HOME ; echo SPARK_HOME=$SPARK_HOME"
SPARK_HOME=/opt/spark
SPARK_HOME=
$
Am I missing something obvious here? Why can I not refer to the value of an existing env var?
EDIT 1: As requested in the comments, the Dockerfile content is included below the break.
EDIT 2: I discovered that the var can be referred to if I run the container interactively:
$ docker run --rm -it myimage /bin/bash
root@419dd5f13a6f:/tmp# echo $SPARK_HOME
/opt/spark
FROM our.internal.artifact.store/python:3.7-stretch
WORKDIR /tmp
ENV SPARK_VERSION=2.2.1
ENV HADOOP_VERSION=2.8.4
ARG ARTIFACTORY_USER
ARG ARTIFACTORY_ENCRYPTED_PASSWORD
ARG ARTIFACTORY_PATH=our.internal.artifact.store/artifactory/generic-dev/ceng/external-dependencies
ARG SPARK_BINARY_PATH=https://${ARTIFACTORY_PATH}/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz
ARG HADOOP_BINARY_PATH=https://${ARTIFACTORY_PATH}/hadoop-${HADOOP_VERSION}.tar.gz
ADD files/apt-transport-https_1.4.8_amd64.deb /tmp
RUN echo "deb https://username:password#our.internal.artifact.store/artifactory/debian-main-remote stretch main" >/etc/apt/sources.list.d/main.list &&\
echo "deb https://username:password#our.internal.artifact.store/artifactory/maria-db-debian stretch main" >>/etc/apt/sources.list.d/main.list &&\
echo 'Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/02update &&\
echo 'Acquire::http::Timeout "10";' > /etc/apt/apt.conf.d/99timeout &&\
echo 'Acquire::ftp::Timeout "10";' >> /etc/apt/apt.conf.d/99timeout &&\
dpkg -i /tmp/apt-transport-https_1.4.8_amd64.deb &&\
apt-get install --allow-unauthenticated -y /tmp/apt-transport-https_1.4.8_amd64.deb &&\
apt-get update --allow-unauthenticated -y -o Dir::Etc::sourcelist="sources.list.d/main.list" -o Dir::Etc::sourceparts="-" -o APT::Get::List-Cleanup="0"
RUN apt-get update && \
apt-get -y install default-jdk
# Detect JAVA_HOME and export in bashrc.
# This will result in something like this being added to /etc/bash.bashrc
# export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
RUN echo export JAVA_HOME="$(readlink -f /usr/bin/java | sed "s:/jre/bin/java::")" >> /etc/bash.bashrc
# Configure Spark-${SPARK_VERSION}
RUN curl --fail -u "${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPTED_PASSWORD}" -X GET "${SPARK_BINARY_PATH}" -o /opt/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
&& cd /opt \
&& tar -xvzf /opt/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
&& rm spark-${SPARK_VERSION}-bin-hadoop2.7.tgz \
&& ln -s spark-${SPARK_VERSION}-bin-hadoop2.7 spark \
&& sed -i '/log4j.rootCategory=INFO, console/c\log4j.rootCategory=CRITICAL, console' /opt/spark/conf/log4j.properties.template \
&& mv /opt/spark/conf/log4j.properties.template /opt/spark/conf/log4j.properties \
&& mkdir /opt/spark-optional-jars/ \
&& mv /opt/spark/conf/spark-defaults.conf.template /opt/spark/conf/spark-defaults.conf \
&& printf "spark.driver.extraClassPath /opt/spark-optional-jars/*\nspark.executor.extraClassPath /opt/spark-optional-jars/*\n">>/opt/spark/conf/spark-defaults.conf \
&& printf "spark.driver.extraJavaOptions -Dderby.system.home=/tmp/derby" >> /opt/spark/conf/spark-defaults.conf
# Configure Hadoop-${HADOOP_VERSION}
RUN curl --fail -u "${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPTED_PASSWORD}" -X GET "${HADOOP_BINARY_PATH}" -o /opt/hadoop-${HADOOP_VERSION}.tar.gz \
&& tar -xvzf /opt/hadoop-${HADOOP_VERSION}.tar.gz \
&& rm /opt/hadoop-${HADOOP_VERSION}.tar.gz \
&& ln -s hadoop-${HADOOP_VERSION} hadoop
# Set Environment Variables.
ENV SPARK_HOME="/opt/spark" \
HADOOP_HOME="/opt/hadoop" \
PYSPARK_SUBMIT_ARGS="--master=local[*] pyspark-shell --executor-memory 1g --driver-memory 1g --conf spark.ui.enabled=false spark.executor.extrajavaoptions=-Xmx=1024m" \
PYTHONPATH="/opt/spark/python:/opt/spark/python/lib/py4j-0.10.7-src.zip:$PYTHONPATH" \
PATH="$PATH:/opt/spark/bin:/opt/hadoop/bin" \
PYSPARK_DRIVER_PYTHON="/usr/local/bin/python" \
PYSPARK_PYTHON="/usr/local/bin/python"
# Upgrade pip and setuptools
RUN pip install --index-url https://username:password@our.internal.artifact.store/artifactory/api/pypi/pypi-virtual-all/simple --upgrade pip setuptools
# Install core python packages
RUN pip install --index-url https://username:password@our.internal.artifact.store/artifactory/api/pypi/pypi-virtual-all/simple pipenv
ADD Pipfile /tmp
ADD pysparkdf_helloworld.py /tmp
OK, contrary to my comment, that's not weird at all.
The issue is just that your local shell already interpolates $SPARK_HOME before sending it to the container, so you're basically calling echo SPARK_HOME=
To fix it, just escape the env var in the command: $SPARK_HOME -> \$SPARK_HOME
Demo:
$ export SPARK_HOME=foo
$ docker run ... /bin/bash -c "env | grep SPARK_HOME ; echo SPARK_HOME=$SPARK_HOME"
> SPARK_HOME=/opt/spark
> SPARK_HOME=foo
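An equivalent fix is to single-quote the command so the host shell leaves the variable alone and the shell inside the container does the expansion (my own variant of the same demo):

$ docker run --rm myimage /bin/bash -c 'env | grep SPARK_HOME ; echo SPARK_HOME=$SPARK_HOME'
SPARK_HOME=/opt/spark
SPARK_HOME=/opt/spark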
I have a simple Spring Boot app that is dockerized, with this simple Dockerfile:
FROM openjdk
MAINTAINER matteoroxis
ADD target/example-service.jar example-service.jar
ENTRYPOINT ["java", "-jar", "/example-service.jar"]
EXPOSE 2222
I need to use Filebeat to send logs to a Logstash environment; how can I launch Filebeat from my Dockerfile?
FROM openjdk
MAINTAINER matteoroxis
ENV FILEBEAT_VERSION=1.2.3 \
FILEBEAT_SHA1=3fde7f5f5ea837140965a193bbb387c131c16d9c
COPY my-config/filebeat.yml /filebeat.yml
RUN set -x && \
apt-get update && \
apt-get install -y wget && \
wget https://download.elastic.co/beats/filebeat/filebeat-${FILEBEAT_VERSION}-x86_64.tar.gz -O /opt/filebeat.tar.gz && \
cd /opt && \
echo "${FILEBEAT_SHA1} filebeat.tar.gz" | sha1sum -c - && \
tar xzvf filebeat.tar.gz && \
cd filebeat-* && \
cp filebeat /bin && \
cd /opt && \
rm -rf filebeat* && \
apt-get purge -y wget && \
apt-get autoremove -y && \
apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ADD target/example-service.jar example-service.jar
ENTRYPOINT ["java", "-jar", "/example-service.jar"]
CMD [ "filebeat", "-e" ]
EXPOSE 2222
The Dockerfile above combines the Spring Boot service with the Filebeat install, for your reference.
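One caveat with the combined Dockerfile above: when ENTRYPOINT and CMD are both exec-form arrays, Docker appends the CMD elements as extra arguments to the ENTRYPOINT, so ["filebeat", "-e"] is handed to java instead of being started as a second process. A sketch of a common workaround (the file name start.sh is my own invention), backgrounding Filebeat and exec'ing the JVM as the main process:

#!/bin/sh
# start.sh -- run Filebeat in the background, keep the JVM as PID 1
filebeat -e -c /filebeat.yml &
exec java -jar /example-service.jar

with the tail of the Dockerfile becoming:

COPY start.sh /start.sh
RUN chmod +x /start.sh
ENTRYPOINT ["/start.sh"]
EXPOSE 2222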