How to include Webots in a Docker container build?

I want to add Webots to my Dockerfile, but I'm running into an issue. My current manual installation steps (from here) are:
$ # launch my Docker container without Webots
$ wget -qO- https://cyberbotics.com/Cyberbotics.asc | sudo apt-key add -
$ sudo apt update
$ sudo apt install -y software-properties-common
$ sudo apt-add-repository 'deb https://cyberbotics.com/debian/ binary-amd64/'
$ sudo apt update
$ sudo apt-get install webots
$ # now I have a Docker container with Webots
I want to include this process in the build of the Docker container. I can't just use the same steps in Dockerfile though, because while webots is installing, it prompts for some stdin responses asking for the keyboard's country of origin. Since Docker doesn't listen to stdin while building, I have no way to answer these prompts. I tried piping echo output like so, but it doesn't work:
# Install Webots (a robot simulator)
RUN wget -qO- https://cyberbotics.com/Cyberbotics.asc | sudo apt-key add -
RUN apt-get update && sudo apt-get install -y \
    software-properties-common \
    libxtst6
RUN sudo apt-add-repository 'deb https://cyberbotics.com/debian/ binary-amd64/'
RUN apt-get update && echo 31 1 | sudo apt-get install -y \
    webots  # the echo fills the "keyboard country of origin" prompts
How can I get Webots included in the Docker container? I don't want to just use someone else's container (e.g. cyberbotics/webots-docker), since I need to add other things to the container, like ROS2.

Edit: this answer is incorrect. Multiple FROM statements don't combine images; each one starts a new build stage, only the last stage ends up in the final image, and earlier stages are only useful as sources for COPY --from.
Original answer:
It turns out to be simpler than I expected. You can include more than one FROM $IMAGE statement in a Dockerfile to combine base images. Here's a sample explaining what I did (note that all the ARG statements must come before the first FROM statement):
ARG BASE_IMAGE_WEBOTS=cyberbotics/webots:R2021a-ubuntu20.04
ARG IMAGE2=other/image:latest
FROM $BASE_IMAGE_WEBOTS AS base
FROM $IMAGE2 AS image2
# other things needed
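Since multiple FROM lines don't actually merge images, the prompts themselves need suppressing instead. A minimal sketch of a non-interactive install (assuming the prompts come from debconf, as the keyboard-configuration prompt does):

```dockerfile
FROM ubuntu:20.04

# Tell debconf not to open any interactive prompts during apt installs.
ARG DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y wget gnupg software-properties-common
RUN wget -qO- https://cyberbotics.com/Cyberbotics.asc | apt-key add - \
 && apt-add-repository 'deb https://cyberbotics.com/debian/ binary-amd64/' \
 && apt-get update \
 && apt-get install -y webots
```

Using ARG rather than ENV keeps the setting build-only, so it is not baked into the final image's environment.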

Related

Pass arguments to interactive shell in Docker Container

Currently I'm trying to create a Docker image for jitsi-meet.
I installed jitsi-meet on my test system and noticed that I get prompted for user input. This is absolutely fine when installing jitsi manually.
However, the installation process is supposed to run during the build of the image, which means there is no way for me to manually type in the necessary data.
Is there any way to pass values as environment variables in the Dockerfile and use them in the container when I get prompted for additional information?
This is what my Dockerfile looks like:
FROM debian:latest
WORKDIR /opt/jitsi-meet
RUN apt-get update -y && \
    apt-get upgrade -y && \
    apt-get install -y ssh sudo ufw apt-utils apt-transport-https wget gnupg2 && \
    wget -qO - https://download.jitsi.org/jitsi-key.gpg.key | sudo apt-key add - && \
    sh -c "echo 'deb https://download.jitsi.org stable/' > /etc/apt/sources.list.d/jitsi-stable.list" && \
    apt-get -y update && \
    apt-get -y install jitsi-meet
EXPOSE 80 443
EXPOSE 10000/udp
Thanks in advance!
Yes, you can set environment variables in a Dockerfile using ENV; see:
https://docs.docker.com/engine/reference/builder/#environment-replacement
Whether you can use one to answer a prompt depends on how the prompt is implemented. A prompt on container startup is not really advisable anyway, since an interactive container startup doesn't make sense in most cases.
In bash you may be able to redirect something to stdin with <, or send it to a command with a pipe (|), but how to solve the issue really depends on how the prompting is implemented in the source code.
In general it's best practice for the program to skip the prompt when a corresponding environment variable has been set.
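That last pattern, skipping the prompt when an environment variable is set, can be sketched in a few lines of shell (JITSI_HOSTNAME is an illustrative name for this sketch, not a variable jitsi-meet actually reads):

```shell
#!/bin/sh
# Skip the prompt when the env var is already set; otherwise ask interactively.
# JITSI_HOSTNAME is a hypothetical variable used only for this illustration.
get_hostname() {
    if [ -n "${JITSI_HOSTNAME:-}" ]; then
        echo "$JITSI_HOSTNAME"           # non-interactive path (works in docker build)
    else
        printf 'Enter hostname: ' >&2    # interactive fallback
        read -r answer && echo "$answer"
    fi
}

JITSI_HOSTNAME=meet.example.com
get_hostname    # prints meet.example.com without prompting
```

During a build you would set the variable with ENV or a build ARG; at run time with docker run -e JITSI_HOSTNAME=... .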

Why can't I run command in Dockerfile but I can from within the my Docker container?

I have the following Dockerfile, which installs the "n" Node.js version manager.
FROM ubuntu:18.04
SHELL ["/bin/bash", "-c"]
# Need to install curl, git, build-essential
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y curl
RUN apt-get install -y git
# Per docs, the following allows automated installation of n without installing node https://github.com/mklement0/n-install
RUN curl -L https://git.io/n-install | bash -s -- -y
# This refreshes the terminal to use "n"
RUN . /root/.bashrc
# Install node version 6.9.0
RUN /root/n/bin/n 6.9.0
This works perfectly and does everything I expect.
Unfortunately, after refreshing the terminal via RUN . /root/.bashrc, I can't seem to call "n" directly and instead I have to reference the exact binary using RUN /root/n/bin/n 6.9.0.
However, when I docker run -it container /bin/bash into the container and run the same sequence of commands by hand, I am able to call "n" directly (n 6.9.0) with no issues.
Why does the following command not work in the Dockerfile?
RUN n 6.9.0
I get the following error when I try to build my image:
/bin/bash: n: command not found
Each RUN command runs in a separate shell and a separate container; any environment variables set in a RUN command are lost at the end of that RUN command. You must use the ENV directive to permanently change environment variables like $PATH.
# Does nothing
RUN export FOO=bar
# Does nothing, if all the script does is set environment variables
RUN . ./vars.sh
# Needed to set variables
ENV FOO=bar
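The throwaway nature of a RUN command's shell is easy to reproduce outside Docker, since each RUN line is roughly a fresh sh -c invocation:

```shell
#!/bin/sh
# Each RUN line behaves like a fresh `sh -c`: exports made in one do not
# survive into the next, just like FOO below.
unset FOO
sh -c 'export FOO=bar; echo "first RUN:  FOO=$FOO"'     # prints FOO=bar
sh -c 'echo "second RUN: FOO=${FOO:-unset}"'            # prints FOO=unset
```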
Since a Docker image generally only contains one prepackaged application and its runtime, you don't need version managers like this. Install the single version of the language runtime you need, or use a prepackaged image with it preinstalled.
# Easiest
FROM node:6.9.0
# The hard way
FROM ubuntu:18.04
ARG NODE_VERSION=6.9.0
ENV NODE_VERSION=${NODE_VERSION}
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --assume-yes --no-install-recommends \
    curl ca-certificates xz-utils
RUN cd /usr/local \
 && curl -LO https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.xz \
 && tar xJf node-v${NODE_VERSION}-linux-x64.tar.xz \
 && rm node-v${NODE_VERSION}-linux-x64.tar.xz \
 && for f in node npm npx; do \
      ln -s ../node-v${NODE_VERSION}-linux-x64/bin/$f bin/$f; \
    done

Pass Docker run command through dockerfile

I am trying to run Docker inside my container. I saw in some articles that I need to pass --privileged=true to make this possible.
But for some reason I do not have the option to pass this parameter at run time, because the container is started by some automation that I do not have access to.
So I was wondering if it's possible to set the above option in the Dockerfile, so that I don't have to pass it as a parameter.
Right now this is the content of my Dockerfile:
FROM my-repo/jenkinsci/jnlp-slave:2.62
USER root
#RUN --privileged=true this doesn't work for obvious reasons
MAINTAINER RD_TOOLS "abc#example.com"
RUN apt-get update
RUN apt-get remove docker docker-engine docker.io || echo "No worries"
RUN apt-get --assume-yes install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN apt-key fingerprint 0EBFCD88
RUN cat /etc/*-release
RUN apt-get --assume-yes install docker.io
RUN docker --version
RUN service docker start
Without passing the privileged=true param, it seems I can't run Docker inside Docker.
Any help in this regard is highly appreciated.
You can't force a container to run as privileged from within the Dockerfile.
As a general rule, you can't run Docker inside a Docker container; the more typical setup is to share the host's Docker socket. There's an official Docker image that attempts this at https://hub.docker.com/_/docker/ with some fairly prominent suggestions to not actually use it.
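The socket-sharing setup looks roughly like this (a sketch only; my-jnlp-slave is a placeholder image name, and it assumes whoever starts the container can add the -v flag, which the question says is not possible here):

```shell
# In the image: install only the docker CLI client, not the daemon.
# At run time: mount the host's Docker socket into the container, so that
# `docker` commands inside the container talk to the *host* daemon.
docker run -v /var/run/docker.sock:/var/run/docker.sock my-jnlp-slave \
    docker ps    # lists containers running on the host
```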

boot2docker / docker "Error. image library/.:latest not found"

I'm trying to create a VM with docker and boot2docker. I've made the following Dockerfile, which I'm trying to run through the command line
docker run Dockerfile
Immediately it says exactly this:
Unable to find image 'Dockerfile:latest' locally
FATA[0000] Invalid repository name <Dockerfile>, only [a-z0-9_.] are allowed
Dockerfile:
FROM ubuntu:latest
#Oracle Java7 install
RUN apt-get install software-properties-common -y
RUN apt-get update
RUN add-apt-repository -y ppa:webupd8team/java
RUN apt-get update
RUN echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections
RUN apt-get install -y oracle-java7-installer
#Jenkins install
RUN wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
RUN sudo echo "deb http://pkg.jenkins-ci.org/debian binary/" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install --force-yes -y jenkins
RUN sudo service jenkins start
#Zip support install
RUN apt-get update
RUN apt-get -y install zip
#Unzip hang.zip
RUN unzip -o /var/jenkins/hang.zip -d /var/lib/jenkins/
RUN chown -R jenkins:jenkins /vaR/lib/jenkins
RUN service jenkins restart
EXEC tail -f /etc/passwd
EXPOSE 8080
I am in the directory where the Dockerfile is, when trying to run this command.
Ignore the zip part, as that's for later use
You should run docker build first (which actually uses your Dockerfile):
docker build --tag=imagename .
Or
docker build --tag=imagename -f yourDockerfile .
Then you would use that image tag to docker run it:
docker run imagename
There are tools that can provide this type of feature. We achieved it using Docker Compose
(https://docs.docker.com/compose/overview/):
docker-compose up
But as a workaround you can also do:
$ docker build -t foo . && docker run foo

Supervisor is not starting up

I am following cloudera cdh4 installation guide.
My base file
FROM ubuntu:precise
RUN apt-get update -y
#RUN apt-get install -y curl
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository ppa:webupd8team/java
RUN apt-get update -y
RUN echo debconf shared/accepted-oracle-license-v1-1 select true | \
debconf-set-selections
RUN apt-get install -y oracle-java7-installer
#Checking java version
RUN java -version
My hadoop installation file
java_ubuntu is the image built from my base file.
FROM java_ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y curl
RUN curl http://archive.cloudera.com/cdh4/one-click-install/precise/amd64/cdh4-repository_1.0_all.deb > cdh4-repository_1.0_all.deb
RUN dpkg -i cdh4-repository_1.0_all.deb
RUN curl -s http://archive.cloudera.com/cdh4/ubuntu/precise/amd64/cdh/archive.key | apt-key add -
RUN apt-get update -y
RUN apt-get install -y hadoop-0.20-conf-pseudo
#Check for /etc/hadoop/conf.pseudo.mrl to verify hadoop packages
RUN echo "dhis"
RUN dpkg -L hadoop-0.20-conf-pseudo
Supervisor part
hadoop_ubuntu is the image built from my hadoop installation Dockerfile.
FROM hadoop_ubuntu:latest
USER hdfs
RUN hdfs namenode -format
USER root
RUN apt-get install -y supervisor
RUN echo "[supervisord] nodameon=true [program=namenode] command=/etc/init.d/hadoop-hdfs-namenode -D" > /etc/supervisorconf.d
CMD ["/usr/bin/supervisord"]
The image builds successfully, but the namenode is not starting up. How do I use supervisor correctly?
You have your config in /etc/supervisorconf.d and I don't believe that's the right location.
It should be /etc/supervisor/conf.d/supervisord.conf instead.
Also it's easier to maintain if you make a file locally and then use the COPY instruction to put it in the image.
Then as someone mentioned you can connect to the container after it's running (docker exec -it <container id> /bin/bash) and then run supervisorctl to see what's running and what might be wrong.
Perhaps you need line breaks in your supervisor.conf. Try hand crafting one and COPY it into your dockerfile for testing.
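For reference, a hand-written version of the config the echo line was trying to produce, with its typos fixed ([supervisord] with nodaemon, [program:namenode] with a colon; the command is the one from the question), to be added with COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf:

```ini
[supervisord]
nodaemon=true

[program:namenode]
command=/etc/init.d/hadoop-hdfs-namenode -D
```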
Docker and supervisord
