Accepting license terms while running Dockerfile - docker

I am having trouble building an image on Docker when trying to install Berryconda. I need to accept the terms and conditions before I can continue (as shown in the image below). Has anyone experienced this issue before and know how to tackle this scenario?
Berryconda T&C
Berryconda installation portion of the Dockerfile
RUN : \
&& echo "Installing Berryconda" \
&& wget -O berryconda3-2.0.0.sh "https://github.com/jjhelmus/berryconda/releases/download/v2.0.0/Berryconda3-2.0.0-Linux-armv7l.sh" \
&& chmod +x berryconda3-2.0.0.sh \
&& ./berryconda3-2.0.0.sh /usr/bin/berryconda3

You should be able to run the Berryconda install script with the -b argument to execute the script in batch mode.
-b run install in batch mode (without manual intervention),
it is expected the license terms are agreed upon
The available command line arguments for the installer are described near the top of the install script.
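Applied to the Dockerfile in the question, that would look something like this (a sketch; the -p flag to set the install prefix follows the usual conda installer convention, so verify it against the script's own help text):
# -b runs the installer in batch mode, i.e. the license terms are accepted without a prompt
RUN : \
 && echo "Installing Berryconda" \
 && wget -O berryconda3-2.0.0.sh "https://github.com/jjhelmus/berryconda/releases/download/v2.0.0/Berryconda3-2.0.0-Linux-armv7l.sh" \
 && chmod +x berryconda3-2.0.0.sh \
 && ./berryconda3-2.0.0.sh -b -p /usr/bin/berryconda3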

Related

Docker build - use the same shell for all RUN commands

Is it possible to use the same shell in all RUN commands when building a docker image, as opposed to each RUN command running in its own shell?
Use case: at some point, I need to source some file containing environment variables that are used later on. I cannot do this, because the commands run in different shells:
RUN source something.sh
RUN ./install.sh
RUN ... more commands
Instead I have to do:
RUN source something.sh && \
./install.sh && \
... more commands
I'm trying to avoid this since it hurts readability, is error-prone, and does not allow inserting comments between commands.
Any ideas?
Thanks!
It's not possible to have separate RUN statements run in the same shell.
If you don't like the look of concatenated commands, you could write a shell script and RUN that.
You'll have to get it into the container by using a COPY statement.
Or you can use wget or curl to fetch it and pipe it into a shell. That requires that wget or curl is present in the container, so you might have to install them first.
If you use curl and Debian, it could look like this
RUN apt update && \
apt install -y curl && \
curl -sL https://github.com/link/to/my/install-script.sh | bash
If you COPY it in, it'd look like this
COPY install-script.sh .
RUN ./install-script.sh
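The script itself can then combine the steps from the question in a single shell, so sourced variables persist across commands. A hypothetical sketch (something.sh and install.sh are the files from the question):
#!/bin/bash
set -e  # abort on the first failing command, like && chaining would
# These all run in the same shell, so variables from something.sh
# are still set when install.sh runs
source something.sh
./install.sh
# ... more commands, with room for comments in between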

Error when starting custom Airflow Docker Image GROUP_OR_COMMAND

I created a custom image with the following Dockerfile:
FROM apache/airflow:2.1.1-python3.8
USER root
RUN apt-get update \
&& apt-get -y install gcc gnupg2 \
&& curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
&& curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update \
&& ACCEPT_EULA=Y apt-get -y install msodbcsql17 \
&& ACCEPT_EULA=Y apt-get -y install mssql-tools
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc \
&& echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc \
&& source ~/.bashrc
RUN apt-get -y install unixodbc-dev \
&& apt-get -y install python-pip \
&& pip install pyodbc
RUN echo -e “AIRFLOW_UID=$(id -u) \nAIRFLOW_GID=0” > .env
USER airflow
The image creates successfully, but when I try to run it, I get this error:
"airflow command error: the following arguments are required: GROUP_OR_COMMAND, see help above."
I have tried supplying a group ID with the --user flag, but I can't figure it out.
How can I start this custom Airflow Docker image?
Thanks!
First of all, this line is wrong:
RUN echo -e “AIRFLOW_UID=$(id -u) \nAIRFLOW_GID=0” > .env
If you are running it with Docker Compose (I presume you took it from https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html), this is something you should run on the host machine, not in the image. Remove that line; it has no effect there.
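On the host, next to your docker-compose.yaml, that step looks something like this (following the quick-start guide linked above):
echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env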
Secondly, it really depends what command you run. The "GROUP_OR_COMMAND" message you got is the output of the airflow command. You have not copied the whole output of your command, but this is the message you get when you run airflow without telling it what to do. When you run the image, by default it runs the airflow command, which has a number of subcommands that can be executed. So the "see help above" message tells you the very thing you should do: look at the help and see which subcommand you wanted to run (and possibly run it).
docker run -it apache/airflow:2.1.2
usage: airflow [-h] GROUP_OR_COMMAND ...
positional arguments:
GROUP_OR_COMMAND
Groups:
celery Celery components
config View configuration
connections Manage connections
dags Manage DAGs
db Database operations
jobs Manage jobs
kubernetes Tools to help run the KubernetesExecutor
pools Manage pools
providers Display providers
roles Manage roles
tasks Manage tasks
users Manage users
variables Manage variables
Commands:
cheat-sheet Display cheat sheet
info Show information about current Airflow and environment
kerberos Start a kerberos ticket renewer
plugins Dump information about loaded plugins
rotate-fernet-key
Rotate encrypted connection credentials and variables
scheduler Start a scheduler instance
sync-perm Update permissions for existing roles and optionally DAGs
version Show the version
webserver Start a Airflow webserver instance
optional arguments:
-h, --help show this help message and exit
airflow command error: the following arguments are required: GROUP_OR_COMMAND, see help above.
When you extend the official image, any arguments you pass are forwarded to the "airflow" command, which is what causes this problem. Check this out: https://airflow.apache.org/docs/docker-stack/entrypoint.html#entrypoint-commands
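So to start your custom image, pass the subcommand you want as the argument, for example (assuming your image is tagged my-airflow):
# run the webserver from your custom image
docker run -it my-airflow:latest webserver
# or the scheduler
docker run -it my-airflow:latest scheduler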

Docker file owners and groups

I think I have a dilemma. I am trying to create a Dockerfile to reproduce a long and complicated installation process (of ROS) so that my students can get it running with less headache.
I am combining various scripts provided with manual steps that are documented. The manual steps often say to do "sudo" but I am told that doing sudo inside a Dockerfile is to be avoided. So I move those steps to before the USER command in the Dockerfile because I am told that those commands run as root. However as a result the files and directories created are owned by root and I believe subsequent steps are failing.
I have two choices I think: move the commands to after the USER command and include sudo or try to make the install scripts create directories and files of the right ownership. Of course a priori I dont know what files and directories are going to be created.
Here is my Dockerfile (actually one of many I have been experimenting with). Also, if you see any other things that need to be improved or fixed, please let me know!
FROM ubuntu:16.04
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
RUN apt-get update && apt-get install --assume-yes wget sudo && \
wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh && \
chmod 755 ./install_ros_kinetic.sh && \
bash ./install_ros_kinetic.sh
RUN apt-get install --assume-yes ros-kinetic-joy ros-kinetic-teleop-twist-joy ros-kinetic-teleop-twist-keyboard ros-kinetic-laser-proc ros-kinetic-rgbd-launch ros-kinetic-depthimage-to-laserscan ros-kinetic-rosserial-arduino ros-kinetic-rosserial-python ros-kinetic-rosserial-server ros-kinetic-rosserial-client ros-kinetic-rosserial-msgs ros-kinetic-amcl ros-kinetic-map-server ros-kinetic-move-base ros-kinetic-urdf ros-kinetic-xacro ros-kinetic-compressed-image-transport ros-kinetic-rqt-image-view ros-kinetic-gmapping ros-kinetic-navigation ros-kinetic-interactive-markers
USER $USERNAME
WORKDIR /home/$USERNAME
RUN cd /home/$USERNAME/catkin_ws/src/ && \
git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs.git && \
git clone https://github.com/ROBOTIS-GIT/turtlebot3.git && \
git clone https://github.com/ROBOTIS-GIT/turtlebot3_simulations.git
# add catkin env
RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
RUN echo 'source /home/ros/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
# RUN . /home/ros/.bashrc && \
# cd /home/$USERNAME/catkin_ws && \
# catkin_make
USER $USERNAME
ENTRYPOINT /bin/bash
Out of interest, it would be good to know why sudo should be avoided in containers.
Historically, our team has used Docker to automate build, test and deploy processes, and we have always tried to write Dockerfiles as close as possible to the original process.
Let's say you build some app on your host and launch some commands with sudo and some without; we managed to create exactly the same Dockerfiles. The benefit is that you are no longer obliged to write READMEs on how to build the code - you just supply the Dockerfile, and whenever someone wants to repeat all the steps in a non-container environment, they just follow (copy/paste) the commands from the file.
So my proposal is: in the Dockerfile, install packages first, then switch to the user and proceed with all remaining steps, using sudo when necessary. That way all artifacts end up owned by the user, not root.
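A minimal sketch of that layout, based on the Dockerfile from the question (the passwordless-sudo rule and install-as-user.sh are illustrative assumptions, not from the question):
FROM ubuntu:16.04
# Install base packages while still root
RUN apt-get update && apt-get install --assume-yes sudo wget git
# Create the non-root user and let it sudo without a password
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash ros \
    && echo 'ros ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/ros
USER ros
WORKDIR /home/ros
# Remaining steps run as the user, so artifacts are owned by "ros",
# with sudo only where the documented manual steps require it
RUN wget https://example.com/install-as-user.sh \
    && bash ./install-as-user.sh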
UPD
Found the original discussion and this one. So it sounds like you should choose the best approach based on your particular case and needs.

How to access root folder inside a Docker container

I am new to docker, and am attempting to build an image that involves performing an npm install. Some of our dependencies come from private repos we have, and I am hitting an SSH-related issue.
I realised I was not supplying any form of SSH details to my Dockerfile, and came across various posts online about how to do this by passing args to the docker build command.
So, taken from here, I have added the following to my Dockerfile before the npm install command runs:
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && \
apt-get install -y \
git \
openssh-server \
libmysqlclient-dev
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh && \
ssh-keyscan github.com > /root/.ssh/known_hosts
# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
chmod 600 /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa.pub
So, running the docker build command again with the correct args supplied, I do see further activity in the console that suggests my SSH key is being utilised. But as you can see, I am getting no hostkey alg messages, and I am still getting the same 'Host key verification failed' error. I was wondering if I could view the log file it references in the error.
Do I need to get the image running in order to be able to connect to it and browse the 'root' folder?
I hope I have made sense, please be gentle I am a docker noob!
Thanks
The lines that start with ---> in the docker build output are valid Docker image IDs. You can pick any of these and docker run them:
docker run --rm -it 59c45dac474a sh
If a step is actually failing, one useful debugging trick is to launch the image built in the step before it and run the command by hand.
Remember that anyone who has your image can do this; the way you’ve built it, if you ever push your image to any repository, your ssh private key is there for the taking, and you should probably consider it compromised. That’s doubly true since it will also be there in plain text in docker history output.
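If you only need the key at build time, BuildKit's secret mounts avoid baking it into a layer at all. A sketch, assuming Docker 18.09+ with BuildKit enabled (DOCKER_BUILDKIT=1) and a key at ~/.ssh/id_rsa on the host:
# syntax=docker/dockerfile:1
FROM node:16
RUN mkdir -p /root/.ssh \
    && chmod 0700 /root/.ssh \
    && ssh-keyscan github.com > /root/.ssh/known_hosts
# The key is mounted only for the duration of this RUN; it is not stored
# in any image layer and does not show up in docker history
RUN --mount=type=secret,id=ssh_prv_key,target=/root/.ssh/id_rsa \
    npm install
Build it with:
docker build --secret id=ssh_prv_key,src=$HOME/.ssh/id_rsa .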

Successfully created a virtualenv (using "mkproject") in Dockerfile, but can't run "workon" properly

Edit: Solved - typo
I have a Dockerfile that successfully creates a virtualenv using virtualenvwrapper (along with setting up a heap of "standard" settings/packages in our normal environment). I am using the resulting image as a "base image" for further use. All good so far. However, the following Dockerfile (based of the first image, "base_image_14.04") falls down at the last line:
FROM base_image_14.04
USER root
RUN DEBIAN_FRONTEND=noninteractive \
apt-get update && apt-get install -y \
libproj0 libproj-dev \
libgeos-c1v5 libgeos-dev \
libjpeg62 libjpeg-dev \
zlib1g zlib1g-dev \
libfreetype6 libfreetype6-dev \
libgdal20 libgdal-dev \
&& rm -rf /var/lib/apt/lists
USER webdev
RUN ["/bin/bash", "-ic", "mkproject maproxy"]
EXPOSE 80
WORKDIR $PROJECT_HOME/mapproxy
ADD ./requirements.txt .
RUN ["/bin/bash", "-ic", "workon mapproxy && pip install -r requirements.txt"]
The "mkproject mapproxy" works fine. If I comment out the last line it builds successfully and I can spin up the container and run "workon mapproxy" manually, not a problem. But when I try and build with the last line, it gives a workon error:
ERROR: Environment 'mapproxy' does not exist. Create it with 'mkvirtualenv mapproxy'.
workon is being called, but for some reason it can't find the mapproxy virtualenv.
WORKON_HOME & PROJECT_HOME both exist (defined in the parent image) and point to the correct locations (and are used successfully by "mkproject mapproxy").
So why is workon returning an error when the mapproxy virtualenv exists? The same error happens when I isolate that last line into a third Dockerfile building on the second.
Solved: It was a simple typo. mkproject maproxy instead of mapproxy. :sigh:
I am trying to build a docker image and am running into similar problems.
The first question is: why use a virtualenv in docker at all? The main reason, in a nutshell, is to minimize the effort of migrating an existing and working approach into a docker container. I will eventually use docker-compose, but I wanted to start by getting my feet wet with it all in a single docker container.
In my first attempt I installed almost everything with apt-get, including uwsgi. I installed my app "globally" with pip3. The app has command line functionality and a separate flask web app, hence the need for uwsgi. The command line functionality works, but when I make a request of the flask app, uwsgi / python has a problem with locale: Fatal Python error: Py_Initialize: Unable to get the locale encoding and ImportError: No module named 'encodings'.
I have stripped away all my app specific additions to narrow down the problem. This is the Dockerfile I'm using:
# Docker image definition for testing
FROM ubuntu:xenial
# Create a user
RUN useradd -G sudo -ms /bin/bash tester
RUN echo 'tester:password' | chpasswd
WORKDIR /home/tester
# Skipping apt-get update to save some build time. Some are kept
# to insure they are the same as on host setup.
RUN apt-get install -y python3 python3-dev python3-pip \
virtualenv virtualenvwrapper sudo nano && \
apt-get clean -qy
# After above, can we use those installed in rest of Dockerfile?
# Yes, but not always, such as with virtualenvwrapper. What about
# virtualenv? How do you "source" the script? Doesn't appear to be
# installed, as bash complains "source needs a single parameter"
ENV VIRTUALENVWRAPPER_PYTHON /usr/bin/python3
ENV VIRTUALENVWRAPPER_VIRTUALENV /usr/bin/virtualenv
RUN ["/bin/bash", "-c", "source", "/usr/share/virtualenvwrapper/virtualenvwrapper.sh"]
# Create a virtualenv so uwsgi can find locale
# RUN mkdir /home/tester/.virtualenv && virtualenv -p`which python3` /home/bts_tools/.virtualenv/bts_tools
RUN mkvirtualenv -p`which python3` bts_tools && \
workon bts_tools && \
pip3 --disable-pip-version-check install --upgrade bts_tools
USER tester
ENTRYPOINT ["/bin/bash"]
CMD ["--login"]
The build fails on the line where I try to source the virtualenvwrapper script. Bash complains that source needs an argument - the file to be sourced. So I comment out the RUN lines, and it builds without error. When I run the resulting container I see all the additions to the ENV that virtualenvwrapper makes (you can see all of them by executing the "set" command without any args), and the script to be sourced is there too.
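For what it's worth, in the exec form ["/bin/bash", "-c", "source", "/usr/share/virtualenvwrapper/virtualenvwrapper.sh"] only the word immediately after -c is the command string; the file name becomes a positional parameter ($0), so bash really does run source with no argument. A corrected line would put the whole command in one string (note, though, that each RUN starts a fresh shell, so anything sourced here will not persist into later RUN steps):
# the entire command must be a single string after -c
RUN ["/bin/bash", "-c", "source /usr/share/virtualenvwrapper/virtualenvwrapper.sh"]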
So my question is: why doesn't docker find them? How does the docker build process work if the results of any previous RUNs or ENVs aren't applied for subsequent use in the Dockerfile? I know some things are applied and work; for example, if you apt-get nginx you can refer to /etc/nginx or alter things under that folder. You can create a user and set its password, or cd into its home folder, for example. If I move the WORKDIR before the RUN useradd -G, I see a warning from useradd that the home folder already exists. I tried to use the "time" program to time how long various things in the Dockerfile take, and docker complains it can't find 'time'.
So what exactly is going on? I have spent the last 3 days trying to figure this out. It just shouldn't be this difficult. What am I missing?
Parts of the bts_tools flask app worked when I wasn't using virtual envs. Most of the app didn't work, and the issue was this locale problem. Since everything works on the host outside of docker, and after trying to alter the PATH, PYTHONHOME and PYTHONPATH in my uwsgi start script to overcome the dreaded "locale encoding" fatal error, I decided to try to replicate the host setup as closely as possible, since that didn't have the locale issue. When I have had that problem before, I could run dpkg-reconfigure python3 or fix it with changes to PATH or ENV settings. If you google the problem you'll see many people have difficulties with Python and locale. It's almost enough reason to avoid using Python!
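As an aside (a common workaround, not something from the original post): on Ubuntu-based images this class of locale error can often be fixed by generating and exporting a UTF-8 locale in the Dockerfile:
# generate a UTF-8 locale and make it the default
RUN apt-get update && apt-get install -y locales \
    && locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8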
I posted about this locale issue elsewhere, if it helps.
