Pass arguments to interactive shell in Docker Container

Currently I'm trying to create a Docker image for jitsi-meet.
I installed jitsi-meet on my test system and noticed that I get prompted for user input. That is absolutely fine when installing jitsi manually.
However, the installation process is supposed to run during the build of the image, which means there is no way for me to manually type in the necessary data.
Is there any way to pass values as an environment variable in the Dockerfile and use the variable in the container when I get prompted to enter some additional information?
This is what my Dockerfile looks like:
FROM debian:latest
WORKDIR /opt/jitsi-meet
RUN apt-get update -y && \
    apt-get upgrade -y && \
    apt-get install -y ssh sudo ufw apt-utils apt-transport-https wget gnupg2 && \
    wget -qO - https://download.jitsi.org/jitsi-key.gpg.key | sudo apt-key add - && \
    sh -c "echo 'deb https://download.jitsi.org stable/' > /etc/apt/sources.list.d/jitsi-stable.list" && \
    apt-get -y update && \
    apt-get -y install jitsi-meet
EXPOSE 80 443
EXPOSE 10000/udp
Thanks in advance!

Yes, you can set ENV vars in a Dockerfile using ENV, see:
https://docs.docker.com/engine/reference/builder/#environment-replacement
Whether you can use them when you get prompted depends on the implementation of the prompt. A prompt upon container run is not really advisable, as interactive container startup doesn't make sense in most cases.
In bash, however, you might be able to redirect something to stdin using <, or send it to a command with a pipe (|). How to solve the issue ultimately depends on how the prompting is implemented in the source code.
In general, the best practice is to skip the prompt when an env var has been set.
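In this particular case the prompts come from debconf during apt-get install, so they can be pre-answered at build time instead of typed in. A minimal sketch for the Dockerfile above; the debconf key jitsi-videobridge/jvb-hostname and the example hostname are assumptions, so check debconf-get-selections on your test system for the real ones:
# Pre-seed the answers, then install with debconf's noninteractive frontend
RUN echo "jitsi-videobridge jitsi-videobridge/jvb-hostname string meet.example.com" \
        | debconf-set-selections \
    && apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get -y install jitsi-meet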

Related

How to include Webots in a Docker container build?

I want to add Webots to my Dockerfile, but I'm running into an issue. My current manual installation steps (from here) are:
$ # launch my Docker container without Webots
$ wget -qO- https://cyberbotics.com/Cyberbotics.asc | sudo apt-key add -
$ sudo apt update
$ sudo apt install -y software-properties-common
$ sudo apt-add-repository 'deb https://cyberbotics.com/debian/ binary-amd64/'
$ sudo apt update
$ sudo apt-get install webots
$ # now I have a Docker container with Webots
I want to include this process in the build of the Docker container. I can't just use the same steps in the Dockerfile, though, because while webots is installing, it prompts on stdin asking for the keyboard's country of origin. Since Docker doesn't listen to stdin while building, I have no way to answer these prompts. I tried piping echo output like so, but it doesn't work:
# Install Webots (a robot simulator)
RUN wget -qO- https://cyberbotics.com/Cyberbotics.asc | sudo apt-key add -
RUN apt-get update && sudo apt-get install -y \
    software-properties-common \
    libxtst6
RUN sudo apt-add-repository 'deb https://cyberbotics.com/debian/ binary-amd64/'
RUN apt-get update && echo 31 1 | sudo apt-get install -y \
    webots # the echo fills the "keyboard country of origin" prompts
How can I get Webots included in the Docker container? I don't want to just use someone else's container (e.g. cyberbotics/webots-docker), since I need to add other things to the container, like ROS2.
Edit: this answer is incorrect. FROM doesn't work like this, and only the last FROM statement will be utilized.
Original answer:
It turns out to be simpler than I expected. You can include more than one FROM $IMAGE statement in a Dockerfile to combine base images. Here's a sample explaining what I did (note that all the ARG statements must come before the first FROM statement):
ARG BASE_IMAGE_WEBOTS=cyberbotics/webots:R2021a-ubuntu20.04
ARG IMAGE2=other/image:latest
FROM $BASE_IMAGE_WEBOTS AS base
FROM $IMAGE2 AS image2
# other things needed
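Since the stacked-FROM approach above turned out not to work, the prompt itself is what needs suppressing; the abcde answer further down covers this technique in detail. A minimal sketch for this case, assuming the questions are ordinary debconf prompts (e.g. from keyboard-configuration) that fall back to defaults when no interactive frontend is available:
# DEBIAN_FRONTEND=noninteractive is set per command, not as ENV,
# so it does not leak into interactive use of the container
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y software-properties-common wget \
    && wget -qO- https://cyberbotics.com/Cyberbotics.asc | apt-key add - \
    && apt-add-repository 'deb https://cyberbotics.com/debian/ binary-amd64/' \
    && apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y webots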

Why can't I run command in Dockerfile but I can from within the my Docker container?

I have the following Dockerfile. ("n" is a Node.js version manager.)
FROM ubuntu:18.04
SHELL ["/bin/bash", "-c"]
# Need to install curl, git, build-essential
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y curl
RUN apt-get install -y git
# Per docs, the following allows automated installation of n without installing node https://github.com/mklement0/n-install
RUN curl -L https://git.io/n-install | bash -s -- -y
# This refreshes the terminal to use "n"
RUN . /root/.bashrc
# Install node version 6.9.0
RUN /root/n/bin/n 6.9.0
This works perfectly and does everything I expect.
Unfortunately, after refreshing the terminal via RUN . /root/.bashrc, I can't seem to call "n" directly and instead I have to reference the exact binary using RUN /root/n/bin/n 6.9.0.
However, when I docker run -it container /bin/bash into the container and run the above sequence of commands, I am able to call "n" like so with no issues: n 6.9.0
Why does the following command not work in the Dockerfile?
RUN n 6.9.0
I get the following error when I try to build my image:
/bin/bash: n: command not found
Each RUN command runs a separate shell and a separate container; any environment variables set in a RUN command are lost at the end of that RUN command. You must use the ENV command to permanently change environment variables like $PATH.
# Does nothing
RUN export FOO=bar
# Does nothing, if all the script does is set environment variables
RUN . ./vars.sh
# Needed to set variables
ENV FOO=bar
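For the question's specific case, that means persisting n's install directory on $PATH with ENV instead of sourcing .bashrc; a minimal sketch (the /root/n prefix comes from the question's own binary path, and N_PREFIX is an assumption about how n-install names its prefix variable):
# Make the n-install prefix visible to every later RUN step
ENV N_PREFIX=/root/n
ENV PATH=$N_PREFIX/bin:$PATH
# "n" now resolves without the absolute path
RUN n 6.9.0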
Since a Docker image generally only contains one prepackaged application and its runtime, you don't need version managers like this. Install the single version of the language runtime you need, or use a prepackaged image with it preinstalled.
# Easiest
FROM node:6.9.0
# The hard way
FROM ubuntu:18.04
ARG NODE_VERSION=6.9.0
ENV NODE_VERSION=${NODE_VERSION}
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive \
       apt-get install --assume-yes --no-install-recommends \
       curl
RUN cd /usr/local \
    && curl -LO https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.xz \
    && tar xJf node-v${NODE_VERSION}-linux-x64.tar.xz \
    && rm node-v${NODE_VERSION}-linux-x64.tar.xz \
    && for f in node npm npx; do \
         ln -s ../node-v${NODE_VERSION}-linux-x64/bin/$f bin/$f; \
       done

Docker commands require keyboard interaction

I'm trying to create a Docker image for ripping CDs (using abcde).
Here's the relevant portion of the Dockerfile:
FROM ubuntu:17.10
MAINTAINER Graham Nicholls <graham@rockcons.co.uk>
RUN apt update && apt -y install eject vim ruby abcde
...
Unfortunately, the package "abcde" pulls in a mail client (not sure which), and apt tries to configure that by asking what type of mail connection to configure (smarthost/relay etc).
When docker build runs, it doesn't appear to read from stdin, so I can't redirect input into the docker process.
I've tried using --nodeps with apt (and replacing apt with apt-get); unfortunately --nodeps seems no-longer to be a supported option and returns:
E: Command line option --nodeps is not understood in combination with the other options
Someone has suggested using expect in response to a similar question, which I'd rather avoid. This seems to be a "difficult to google" problem - I can't find anything.
So, is there a way of passing in the answer to apt's configuration prompt, or better, of preventing apt from pulling in a mail client at all? I'm not planning on sending updates to cddb.
The typical template to install apt packages in a docker container looks like:
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
       eject \
       vim \
       ruby \
       abcde \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
Running it with the "noninteractive" value removes any prompts. You don't want to set that as an ENV since that would also impact any interactive commands you run inside the container.
You also want to cleanup the package database when finished to reduce the layer size and avoid reusing a stale cached package database in a later step.
The no-install-recommends option will reduce the number of packages installed by only installing the required dependencies, not the additional recommended packages. This cuts the size of the root filesystem down by half for me.
If you need to pass a non-default configuration to a package, then use debconf. First, run your install somewhere interactively and enter the options you want to save. Install debconf-utils. Then run:
debconf-get-selections | grep "${package_name}"
to view all the options you configured for that package. You can then pipe these options to debconf-set-selections in your container before running your install, e.g.:
RUN echo "postfix postfix/main_mailer_type select No configuration" \
| debconf-set-selections \
&& apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
....
or save your selections to a file that you copy in:
COPY debconf-selections /
RUN debconf-set-selections </debconf-selections \
    && apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
    ....

docker-compose update from S3 bucket

Our Dockerfile invokes a python script which copies a binary from S3 to /usr/bin. This works fine the first time. But from then on "docker-compose build" does nothing because everything is cached. This is a problem if the binary has changed.
Short of building with --no-cache, what is the best way to make sure "docker-compose build" will always pick up the new binary if there is one? We don't mind if it unnecessarily downloads the binary even when unchanged, so long as it does pick up the binary when it has changed.
Seems like we want a Dockerfile step that always executes?
FROM ubuntu:trusty
RUN apt-get update
RUN apt-get -y install software-properties-common
RUN apt-get -y install --reinstall ca-certificates
RUN add-apt-repository ppa:fkrull/deadsnakes
RUN apt-get update && apt-get install -y \
    curl \
    wget \
    vim \
    git \
    python3.5 \
    python3-pip \
    python3-setuptools \
    libpcap0.8-dev
RUN ln -sf /usr/bin/python3.5 /usr/bin/python3
ADD . /app
WORKDIR /app
# Install Python Requirements
RUN pip3 install -r etc/python/requirements.txt
# Download/Install processor and associated libs
RUN python3 setup_processor.py
RUN mkdir -p /logs
ENTRYPOINT ["/app/entrypoint.sh"]
Where setup_processor.py downloads directly from S3 to /usr/bin.
As of now there is no direct feature for this, but there is a workaround.
Add a build argument before your download step:
ARG BUILD_ON=now
# Download/Install processor and associated libs
RUN python3 setup_processor.py
When building the image, pass a fresh value:
docker build --build-arg BUILD_ON="$(date)" ....
This always makes sure the value at the ARG step changes, so the cache of every step after it is invalidated.
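Since the build goes through docker-compose, the argument can also be threaded through the compose file; a minimal sketch, assuming compose file format 2+ (the service name app is a placeholder):
version: "3"
services:
  app:
    build:
      context: .
      args:
        BUILD_ON: ${BUILD_ON}   # substituted from the host environment
and then build with:
BUILD_ON="$(date)" docker-compose build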
A feature for this has already been requested and is being discussed in the thread below:
https://github.com/moby/moby/issues/1996

Silently Installing pecl modules (e.g. pecl_http) Inside a Docker Container?

I am attempting to install pecl_http inside a docker container. Currently my Dockerfile looks something like this:
FROM fun:5000/apache-php:0.1.0
# Install dependencies
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
    apt-get -y install \
        php5-dev \
        libcurl4-openssl-dev && \
    yes "\n" | pecl install pecl_http-1.7.6 && \
    echo "extension=http.so" > /etc/php5/mods-available/http.ini && \
    cd /etc/php5/apache2/conf.d/ && \
    ln -s ../../mods-available/http.ini 20-http.ini && \
    ...
Initially I was simply using pecl install pecl_http-1.7.6 in the Dockerfile, and the container built successfully - without pecl_http installed.
If I attach to the container, I can install pecl_http with the interactive pecl install pecl_http-1.7.6 by simply hitting enter after every prompt. I just learned about yes, and it seemed to fit my needs. Online searches indicated that many people have used it to perform unattended pecl installs, including pecl_http; however, when I attempt to use it in my docker container it fails with configure: error: could not find magic.h.
How can I perform a silent pecl_http install in Docker?
Your pecl install is asking you this question:
whether to enable response content type guessing; specify libmagic directory [no] :
And yes "\n" isn't doing what you think it is - it's actually outputting:
\n
\n
\n
\n
\n
\n
So because you're saying \n in response to the above question, the installer thinks you're telling it to look in \n for libmagic, and of course it's failing because \n is nonsense.
yes has an implicit newline after each string you tell it to output, so if you just want it to hit return and accept the defaults, use yes ''.
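The difference is easy to see in any shell (a quick illustrative sketch):
$ yes "\n" | head -n 2    # each line is the two literal characters \ and n
\n
\n
$ yes '' | head -n 2      # each line is empty: just the implicit newline
(two blank lines of output)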
Working Dockerfile:
FROM ubuntu:14.04
# Install dependencies
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get -y install php5-dev
RUN apt-get -y install libcurl4-openssl-dev
RUN apt-get -y install libevent-dev
RUN echo "extension=http.so" > /etc/php5/mods-available/http.ini
RUN yes "" | pecl install pecl_http-1.7.6
# cd in its own RUN does not persist; the cd and ln must share one shell
RUN cd /etc/php5/apache2/conf.d/ && \
    ln -s ../../mods-available/http.ini 20-http.ini
...
Extra tip: Don't be afraid to split your commands out into separate RUN statements to make full use of the docker cache.
