Using latest Docker for Mac, on latest macOS.
I have a Dockerfile:
FROM debian:8
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update -y -q \
&& apt-get install -y -q apt-utils \
&& apt-get upgrade -y -q \
&& apt-get install -y -q ssh build-essential libssl-dev libffi-dev python-dev python-pip python-six openjdk-7-jdk \
&& mkdir -p /etc/ansible \
&& echo -e "[ssh_connection]\nssh_args = -o ControlMaster=no -o ControlPersist=60s\n" > /etc/ansible/ansible.cfg
The problem is with the echo command. The content of the file produced by that command is:
-e [ssh_connection]
ssh_args = -o ControlMaster=no -o ControlPersist=60s
The -e option is printed as well! What's even crazier, the option has been picked up by echo, as evidenced by the newlines being parsed. In fact, if I attach to a container and run the same command again, I get the correct file content. I thought this might be a problem with docker build quoting each argument in RUN, but even if I run echo "-e" "X\nY" the command prints:
X
Y
Does anyone have any idea why this would happen?
Try running:
RUN bash -c 'echo -e ...'
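For example, applied to the Dockerfile above, the last steps of the RUN chain would become (a sketch; the earlier apt-get steps are unchanged):
&& mkdir -p /etc/ansible \
&& bash -c 'echo -e "[ssh_connection]\nssh_args = -o ControlMaster=no -o ControlPersist=60s\n" > /etc/ansible/ansible.cfg'
This works because bash's built-in echo understands -e, while the default /bin/sh that RUN uses may not.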
Reading https://github.com/docker/docker/issues/8949 carefully, I understand that the reason for the weird behavior depends on the interpreting shell.
In my case, running an Ubuntu image, it was enough to remove the -e to have the line properly formatted:
RUN echo "# User rules for foobar\n\
foobar ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/nopasswd
And the result is:
# User rules for foobar
foobar ALL=(ALL) NOPASSWD:ALL
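The likely explanation (consistent with the linked issue): on Debian- and Ubuntu-based images, RUN invokes /bin/sh, which is dash, and dash's built-in echo interprets backslash escapes by default while treating -e as an ordinary argument to print. A quick check from the host:
docker run --rm debian:8 sh -c 'echo "X\nY"'      # dash: prints X and Y on two lines
docker run --rm debian:8 bash -c 'echo "X\nY"'    # bash: prints the literal X\nY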
Better not to use double quotes in RUN. Always use single quotes, or use ENV to set environment variables. For example:
echo '******* Installing PPA pack ********'
ENV JAVA_HOME /usr/bin/java
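A quick illustration of the quoting difference (a sketch; HOME is just an example variable):
RUN echo "$HOME"   # double quotes: the build-time shell expands the variable
RUN echo '$HOME'   # single quotes: the literal string $HOME is printed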
I'm building a Docker image that includes a ready-to-use terminal with all my usual tools.
I'm running a 2020 MacBook Air M1 on Monterey 12.5.1.
I'd like to start the container directly in a tmux session, but the character display behavior is inconsistent.
When ENTRYPOINT is ["zsh"], the characters in the interactive container are as expected (screenshot omitted), and likewise when executing tmux manually (screenshot omitted), but when changing the ENTRYPOINT to ["zsh", "-c", "tmux"], the characters no longer display correctly (screenshot omitted).
Here is my Dockerfile :
FROM ubuntu:22.04
ARG USER=ben
ENV GROUP=${USER}
ENV HOME=/home/${USER}
ENV TMUX_SESSION_NAME=devops
RUN groupadd ${GROUP}
RUN useradd -m -g ${GROUP} ${USER}
RUN apt-get update -y && apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends tzdata
RUN apt-get install -y \
ca-certificates \
curl \
git \
wget \
docker \
vim \
fzf \
zsh \
fd-find \
zsh-syntax-highlighting \
tmux \
locales \
locales-all
RUN usermod -s /bin/zsh ${USER}
# Configuring locales
RUN ln -fs /usr/share/zoneinfo/Europe/Paris /etc/localtime \
&& dpkg-reconfigure --frontend noninteractive tzdata
USER ${USER}
WORKDIR /home/${USER}
# Oh-My-Zsh configuration
RUN wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh -O - | zsh || true
# ZSH plugins
RUN git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
RUN git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-${HOME}/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
RUN git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-${HOME}/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
COPY --chown=${USER}:${GROUP} zshrc ${HOME}/.zshrc
COPY --chown=${USER}:${GROUP} tmux.conf ${HOME}/.tmux.conf
COPY --chown=${USER}:${GROUP} p10k.zsh ${HOME}/.p10k.zsh
# ENTRYPOINT ["zsh", "-c", "tmux"]
ENTRYPOINT ["zsh"]
I couldn't find the reason for this behavior, but I investigated starting tmux directly from zsh rather than in the ENTRYPOINT, and the solution that fixed my issue was to set the environment variable ZSH_TMUX_AUTOSTART=true.
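For reference, a sketch of the change, assuming the copied zshrc enables oh-my-zsh's tmux plugin (which is what reads this variable):
# In the Dockerfile; picked up by oh-my-zsh's tmux plugin at shell startup
ENV ZSH_TMUX_AUTOSTART=true
ENTRYPOINT ["zsh"]
# And in zshrc, the plugin must be enabled:
plugins=(... tmux)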
Thank you all for your help!
I am building a Docker image and I'd like to increase the maximum number of files that can be opened. I tried several things, but none of them worked when I opened a new SSH session connected to the container. They did work when executing bash inside the container.
I tried, in the docker build:
RUN echo "DefaultLimitNOFILE=65535" >> /etc/systemd/system.conf
Also tried:
RUN set ulimit -n 65535
RUN set ulimit -Sn 65535
RUN set ulimit -Hn 65535
I tried to add --ulimit nofile=65535:65535 both to the docker run and docker build command.
After I start the image and I log into it through SSH, the soft limit is never the one I set.
Dockerfile:
FROM nvcr.io/nvidia/deepstream:6.0-triton
ENV GIT_SSL_NO_VERIFY=1
# SETUP PYTHON
RUN sh docker_python_setup.sh
RUN update-alternatives --set python3 /usr/bin/python3.8
RUN apt install --fix-broken -y
RUN apt -y install python3-gi python3-gst-1.0 python-gi-dev git python3 python3-pip cmake g++ build-essential \
libglib2.0-dev python3-dev python3.8-dev libglib2.0-dev-bin python-gi-dev libtool m4 autoconf automake
# DEEPSTREAM PYTHON BINDINGS
RUN cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps && \
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git
RUN cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps && \
git submodule update --init
RUN cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps/3rdparty/gst-python/ && \
./autogen.sh && \
make && \
make install
RUN pip3 install --upgrade pip
RUN cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps/bindings && \
mkdir build && \
cd build && \
cmake -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=8 -DPIP_PLATFORM=linux_x86_64 -DDS_PATH=/opt/nvidia/deepstream/deepstream-6.0 .. && \
make && \
pip3 install pyds-1.1.0-py3-none-linux_x86_64.whl
RUN cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps && \
mv apps/* ./
# RTSP DEPENDENCIES
RUN apt update && \
apt install -y python3-gi python3-dev python3-gst-1.0
RUN apt update && \
apt install -y libgstrtspserver-1.0-0 gstreamer1.0-rtsp && \
apt install -y libgirepository1.0-dev && \
apt-get install -y gobject-introspection gir1.2-gst-rtsp-server-1.0
# DEVELOPMENT AND DEBUGGING TOOLS
RUN apt install -y ipython3 graphviz graphviz-dev ffmpeg
# SSH AND REMOTE LOGIN FOR DEVELOPMENT PURPOSES
RUN apt update && apt install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:230idsjfjzJNJK3' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
RUN sed -i 's/\(^Port\)/#\1/' /etc/ssh/sshd_config && echo Port 2222 >> /etc/ssh/sshd_config
# Export 2222 for SSH server
EXPOSE 2222
# SET ULIMIT USING THE COMMANDS ABOVE ....
# STARTUP
# Disable previous entrypoint.
ENTRYPOINT []
# Set default dir
WORKDIR /src
# Enable SSH for debug on remote server
CMD ["/usr/sbin/sshd", "-D"]
In the SSH session I always get the value:
root@ip-x-x-x-x:~# ulimit -n
1024
root@ip-x-x-x-x:~# ulimit -Sn
1024
root@ip-x-x-x-x:~# ulimit -Hn
1048576
I'd like to set the limit for all future SSH sessions.
EDIT: I noticed that if I open a shell in the container, the soft limit is actually equal to the hard limit even without specifying anything, so the default is 1048576. But if I open an SSH session into the container, the soft limit is 1024. How can I solve this?
You should also use prlimit to update the value for the current bash session you are in. Try running the script below.
echo "add openfiles limit..........................."
sudo cp /etc/security/limits.conf /etc/security/orig_limits.conf
sudo cat <<EOT >> /etc/security/limits.conf
* hard nofile 33000
* soft nofile 33000
root hard nofile 33000
root soft nofile 33000
EOT
sudo echo "session required pam_limits.so" > /etc/pam.d/common-session
sudo ulimit -n 33000
ulimit -u unlimited
update_ulimit_per_pid(){
sudo echo "prlimit for pid "$pid" before updating is "$(ulimit -n)
sudo echo "Updating ulimit for pid: "$pid
sudo prlimit --pid $pid --nofile=33000:33000
sudo echo "prlimit for pid "$pid" after updating is "$(ulimit -n)
}
for pid in `ps -ef | grep 'bash' | awk '{print $2}'` ; do update_ulimit_per_pid ; done
This should work. It will not only update the ulimit when you log in again, but also in the bash sessions you are already in.
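To verify, a sketch (assuming the container's sshd listens on port 2222 as in the Dockerfile above; the address is a placeholder):
ssh -p 2222 root@<container-ip> 'ulimit -Sn; ulimit -Hn'
# Expect 33000 for both once pam_limits applies the new limits.conf entries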
I have this Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
RUN export EOSIO_LOCATION=~/eosio/eos \
export EOSIO_INSTALL_LOCATION=$EOSIO_LOCATION/../install \
mkdir -p $EOSIO_INSTALL_LOCATION
RUN git clone https://github.com/EOSIO/eos.git $EOSIO_LOCATION \
cd $EOSIO_LOCATION && git submodule update --init --recursive
ENTRYPOINT ["/bin/bash"]
And error is: /bin/sh: 1: export: -p: bad variable name
How can I fix it?
You currently don't have any separation between the export and mkdir commands in the RUN statement.
You probably want to concatenate the commands with &&. This ensures that each subsequent command runs only if the prior command succeeds. You may also use ; to separate commands, i.e.
RUN export EOSIO_LOCATION=~/eosio/eos && \
export EOSIO_INSTALL_LOCATION=$EOSIO_LOCATION/../install && \
mkdir -p $EOSIO_INSTALL_LOCATION
NOTE: You probably don't need to export these variables at all and could instead write:
EOSIO_LOCATION=... && EOSIO_INSTALL_LOCATION=... && mkdir ...
There's a Dockerfile ENV command that may be preferable:
ENV EOSIO_LOCATION=${PWD}/eosio/eos
ENV EOSIO_INSTALL_LOCATION=${EOSIO_LOCATION}/../install
RUN mkdir -p ${EOSIO_INSTALL_LOCATION}
Personal preference is to wrap env vars in ${...} and to use ${PWD} instead of ~ as it feels more explicit.
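Note that the git clone RUN in the question has the same missing-separator problem (cd and its argument are passed as extra arguments to git clone); a sketch of the fix:
RUN git clone https://github.com/EOSIO/eos.git ${EOSIO_LOCATION} && \
    cd ${EOSIO_LOCATION} && \
    git submodule update --init --recursive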
I build my project with a Dockerfile. The project needs an installation of OpenVINO. OpenVINO needs to set some environment variables dynamically, using a script that depends on the architecture. The script is: script to set environment variables
As far as I have learned, a Dockerfile can't set environment variables in the image from a script.
Which way should I follow to solve the problem?
I need to set the variables because later I continue with installing OpenCV, which looks at those environment variables.
My first thought: if I put the script into ~/.bashrc so that the variables are set whenever bash starts, and I find some trick to start bash for a second, that could solve my problem.
My second thought: build an OpenVINO image, create a container from it, connect to it and initialize the variables by running the script manually in the container. After that, convert the container to an image, create a new Dockerfile, and continue the building steps using this image.
OpenVINO Dockerfile example and the line that runs the script
My Dockerfile:
FROM ubuntu:18.04
ARG DOWNLOAD_LINK=http://registrationcenter-download.intel.com/akdlm/irc_nas/16612/l_openvino_toolkit_p_2020.2.120.tgz
ENV INSTALLDIR /opt/intel/openvino
# openvino download
RUN curl -LOJ "${DOWNLOAD_LINK}"
# opencv download
RUN wget -O opencv.zip https://github.com/opencv/opencv/archive/4.3.0.zip && \
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.3.0.zip
RUN apt-get -y install sudo
# openvino installation
RUN tar -xvzf ./*.tgz && \
cd l_openvino_toolkit_p_2020.2.120 && \
sed -i 's/decline/accept/g' silent.cfg && \
./install.sh -s silent.cfg && \
# rm -rf /tmp/* && \
sudo -E $INSTALLDIR/install_dependencies/install_openvino_dependencies.sh
WORKDIR /home/sa
RUN /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh" && \
echo "source /opt/intel/openvino/bin/setupvars.sh" >> /home/sa/.bashrc && \
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc && \
$INSTALLDIR/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
$INSTALLDIR/deployment_tools/demo/demo_squeezenet_download_convert_run.sh
RUN bash
# opencv installation
RUN unzip opencv.zip && \
unzip opencv_contrib.zip && \
# rm opencv.zip opencv_contrib.zip && \
mv opencv-4.3.0 opencv && \
mv opencv_contrib-4.3.0 opencv_contrib && \
cd ./opencv && \
mkdir build && \
cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_INF_ENGINE=ON -D ENABLE_CXX11=ON -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_PYTHON_EXAMPLES=OFF -D INSTALL_C_EXAMPLES=OFF -D ENABLE_PRECOMPILED_HEADERS=OFF -D OPENCV_ENABLE_NONFREE=ON -D OPENCV_EXTRA_MODULES_PATH=/home/sa/opencv_contrib/modules -D PYTHON_EXECUTABLE=/usr/bin/python3 -D WIDTH_GTK=ON -D BUILD_TESTS=OFF -D BUILD_DOCS=OFF -D WITH_GSTREAMER=OFF -D WITH_FFMPEG=ON -D BUILD_EXAMPLES=OFF .. && \
make && \
make install && \
ldconfig
You need to cause the shell to load that file in every RUN command where you use it, and also at container startup time.
For startup time, you can use an entrypoint wrapper script:
#!/bin/sh
# Load the script of environment variables
. /opt/intel/openvino/bin/setupvars.sh
# Run the main container command
exec "$#"
Then in the Dockerfile, you need to include the environment variable script in RUN commands, and make this script be the image's ENTRYPOINT.
RUN . /opt/intel/openvino/bin/setupvars.sh && \
/opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites.sh && \
/opt/intel/openvino/deployment_tools/demo/demo_squeezenet_download_convert_run.sh
RUN ... && \
. /opt/intel/openvino/bin/setupvars.sh && \
cmake ... && \
make && \
...
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD same as the command you set in the original image
If you docker exec debugging shells in the container, they won't see these environment variables and you'll need to manually re-read the environment variable script. If you use docker inspect to look at low-level details of the container, it also won't show the environment variables.
It looks like that script just sets a couple of environment variables (especially $LD_LIBRARY_PATH and $PYTHONPATH), if to somewhat long-winded values, and you could just set these with ENV statements in the Dockerfile.
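For example (the values below are hypothetical placeholders for illustration; use whatever the diff described next reveals for your image):
# Hypothetical paths; replace with the real values from the env diff below
ENV LD_LIBRARY_PATH=/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64
ENV PYTHONPATH=/opt/intel/openvino/python/python3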
If you look at the docker build output, there are lines like ---> 0123456789ab after each build step; those are valid image IDs that you can docker run. You could run
docker run --rm 0123456789ab \
env \
| sort > env-a
docker run --rm 0123456789ab \
sh -c '. /opt/intel/openvino/bin/setupvars.sh && env' \
| sort > env-b
This will give you two local files with the environment variables with and without running this setup script. Find the differences (say, with comm(1)), put ENV before each line, and add that to your Dockerfile.
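For example, a sketch of extracting just the added variables:
# Lines unique to env-b are the variables the script sets; prefix them for the Dockerfile
comm -13 env-a env-b | sed 's/^/ENV /'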
You can't really use .bashrc in Docker. Many common paths don't invoke its startup files: in the language of that documentation, neither a Dockerfile RUN command nor a docker run instruction is an "interactive shell", so those don't read dot files, and usually a docker run ... command doesn't invoke a shell at all.
You also don't need sudo (you are already running as root, and an interactive password prompt will fail); RUN sh -c is redundant (Docker inserts it on its own); and source isn't a standard shell command (prefer the standard ., which will work even on Alpine-based images that don't have shell extensions).
I'm experimenting for the first time with creating a Docker container to run ROS. I am getting a confusing error and I can't figure out how to troubleshoot it.
bash-3.2$ docker run -ti --name turtlebot3 rosdocker
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
bash: /home/ros/catkin_ws/devel/setup.bash: No such file or directory
I am creating rosdocker with this dockerfile, from inside vscode. I am using the Docker plugin and using the "Build Image" command. Here's the Dockerfile:
FROM ros:kinetic-robot-xenial
RUN apt-get update && apt-get install --assume-yes \
sudo \
python-pip \
ros-kinetic-desktop-full \
ros-kinetic-turtlebot3 \
ros-kinetic-turtlebot3-bringup \
ros-kinetic-turtlebot3-description \
ros-kinetic-turtlebot3-fake \
ros-kinetic-turtlebot3-gazebo \
ros-kinetic-turtlebot3-msgs \
ros-kinetic-turtlebot3-navigation \
ros-kinetic-turtlebot3-simulations \
ros-kinetic-turtlebot3-slam \
ros-kinetic-turtlebot3-teleop
# install python packages
RUN pip install -U scikit-learn numpy scipy
RUN pip install --upgrade pip
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
USER $USERNAME
# create catkin_ws
RUN mkdir /home/$USERNAME/catkin_ws
WORKDIR /home/$USERNAME/catkin_ws
# add catkin env
RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
I am not sure where the error is coming from and I don't know how to debug or troubleshoot it. I would appreciate any pointers!
You are creating a user ros and then in the last line doing this:
RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
So the system will look for /home/ros/catkin_ws/devel/setup.bash, which is not created anywhere inside the Dockerfile.
Either create this file, or if you are planning to mount it from the host into the container, run with the -v flag placed before the image name (anything after the image name is passed to the container as arguments):
docker run -ti --name turtlebot3 -v sourcevolume:destinationvolume rosdocker
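Alternatively, a sketch of generating the file at image build time, assuming catkin is available in the ros:kinetic base image:
# Initialize the workspace so devel/setup.bash exists before .bashrc sources it
RUN mkdir -p /home/$USERNAME/catkin_ws/src
WORKDIR /home/$USERNAME/catkin_ws
RUN /bin/bash -c 'source /opt/ros/kinetic/setup.bash && catkin_make'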