Pass Docker run command through dockerfile - docker

I am trying to run Docker inside my container. I saw in some articles that I need to pass --privileged=true to make this possible.
But for some reason, I do not have the option to pass this parameter while running, because that is taken care of by some automation which I do not have access to.
So, I was wondering if it's possible to pass the above option in the Dockerfile, so that I do not have to pass it as a param.
Right now this is the content of my Dockerfile:
FROM my-repo/jenkinsci/jnlp-slave:2.62
USER root
#RUN --privileged=true this doesn't work for obvious reasons
MAINTAINER RD_TOOLS "abc@example.com"
RUN apt-get update
RUN apt-get remove docker docker-engine docker.io || echo "No worries"
RUN apt-get --assume-yes install \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common curl
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN apt-key fingerprint 0EBFCD88
RUN cat /etc/*-release
RUN apt-get --assume-yes install docker.io
RUN docker --version
RUN service docker start
Without passing the --privileged=true param, it seems I can't run Docker inside Docker.
Any help in this regard is highly appreciated.

You can't force a container to run as privileged from within the Dockerfile.
As a general rule, you can't run Docker inside a Docker container; the more typical setup is to share the host's Docker socket with the container. There's an official Docker image that attempts Docker-in-Docker at https://hub.docker.com/_/docker/ with some fairly prominent suggestions not to actually use it.
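For reference, socket sharing just means bind-mounting the host's socket when the container is started; the docker CLI inside then talks to the host's daemon rather than to a nested one. A minimal sketch, assuming an image (my-image is a placeholder) that has the docker CLI installed:
docker run -v /var/run/docker.sock:/var/run/docker.sock my-image docker ps
Containers started this way are siblings of your container, not children, and anything with access to the socket effectively has root on the host.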

Related

Docker in Docker | Github actions - Self Hosted Runner

I am trying to create a self-hosted runner for GitHub Actions on Kubernetes. As a first step, I was trying with the Dockerfile below:
FROM ubuntu:18.04
# set the github runner version
ARG RUNNER_VERSION="2.283.1"
# update the base packages and add a non-sudo user
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
RUN useradd -r -g docker nonroot
# install python and the packages your code depends on, along with jq so we can parse JSON
# add additional packages as necessary
RUN apt-get install -y curl jq build-essential libssl-dev apt-transport-https ca-certificates curl software-properties-common
# install docker
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" \
&& apt update \
&& apt-cache policy docker-ce \
&& apt install docker-ce -y
ENV TINI_VERSION v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
RUN usermod -aG docker nonroot
USER nonroot
# set tini as the entrypoint
ENTRYPOINT ["/tini", "--"]
CMD ["/bin/bash"]
After doing a build, I run the container with the below command:
docker run -v /var/run/docker.sock:/var/run/docker.sock -it srunner
When I try to pull an image, I get the below error:
nonroot@0be0cdccb29b:/$ docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create": dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
nonroot@0be0cdccb29b:/$
Please advise if there is a possible way to run docker as non-root inside a docker container.
Instead of using sockets, there is also a way to connect to the outer Docker daemon, from Docker inside the container, over TCP.
Linux example:
Run ifconfig; it will print the network interface that is created when you install Docker on a host node. It is usually named docker0; note down the IP address of this interface.
Now, modify /etc/docker/daemon.json and add tcp://IP:2375 to the hosts section. Restart the Docker service.
Run containers with the extra option: --add-host=host.docker.internal:host-gateway
Inside any such container, the address tcp://host.docker.internal:2375 now points to the outside Docker engine.
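A minimal sketch of that setup (172.17.0.1 is the common default docker0 address and is an assumption; substitute the IP you noted down):
# /etc/docker/daemon.json on the host
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://172.17.0.1:2375"]
}
# restart the Docker service, then start the container
docker run --add-host=host.docker.internal:host-gateway -it my-image
# inside the container, point the client at the host daemon
export DOCKER_HOST=tcp://host.docker.internal:2375
docker ps
Keep in mind that an unauthenticated TCP socket like this exposes the host's daemon to anything that can reach that port.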
Try adding your username to the docker group as suggested here.
Additionally, you should check your kernel compatibility.
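Because the socket is bind-mounted from the host, the group ID that owns it inside the container must be one that nonroot belongs to. A sketch of the check and fix, run as root inside the container (the GID 998 is purely illustrative; use whatever stat reports):
stat -c '%g' /var/run/docker.sock   # prints the socket's owning GID, e.g. 998
groupadd -g 998 docker-host         # create a group with that GID
usermod -aG docker-host nonroot     # let nonroot access the socket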

Installing Kubernetes in Docker container

I want to use Kubeflow to check it out and see if it fits my projects. I want to deploy it locally as a development server so I can check it out, but I have Windows on my computer and Kubeflow only works on Linux. I'm not allowed to dual-boot this computer. I could install a virtual machine, but I thought it would be easier to use Docker, and oh boy, was I wrong. So, the problem is, I want to install Kubernetes in a Docker container; right now this is the Dockerfile I've written:
# Docker file with local deployment of Kubeflow
FROM ubuntu:18.04
ENV USER=Joao
ENV PASSWORD=Password
ENV WK_DIR=/home/${USER}
# Setup Ubuntu
RUN apt-get update -y
RUN apt-get install -y conntrack sudo wget
RUN useradd -rm -d /home/${USER} -s /bin/bash -g root -G sudo -u 1001 -p ${PASSWORD} ${USER}
WORKDIR ${WK_DIR}
# Installing Docker CE
RUN apt-get install -y apt-transport-https ca-certificates curl software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
RUN apt-get update -y
RUN apt-get install -y docker-ce docker-ce-cli containerd.io
# Installing Kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
# Installing Minikube
RUN curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
RUN install minikube-linux-amd64 /usr/local/bin/minikube
ENV PATH="${PATH}:${WK_DIR}"
COPY start.sh start.sh
CMD sh start.sh
With this, just to make the deployment easier, I also have a docker-compose.yaml that looks like this:
services:
kf-local:
build: .
volumes:
- path/to/folder:/usr/kubeflow
privileged: true
And start.sh looks like this:
service docker start
minikube start \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/apiserver.key \
--extra-config=apiserver.service-account-api-audiences=api \
--driver=docker
The problem is, whenever I try running this I get the error:
X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
I've tried creating a user and running it from there also, but then I'm not able to run sudo. Any idea how I could install Kubernetes in a Docker container?
As you thought, you are right: using a VM would be the easy way to test it out.
Instead of setting up Kubernetes on Docker, you can use a Linux-based container for development testing.
There is a Linux container technology available called LXC. Docker is a kind of application container, while, in simple words, LXC is like a VM for local development testing: you can install your stuff into it directly, rather than setting up an application inside an image.
Read some details about LXC: https://medium.com/@harsh.manvar111/lxc-vs-docker-lxc-101-bd49db95933a
You can also run it on Windows; try it out at: https://linuxcontainers.org/
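For instance, with the lxc client from linuxcontainers.org, the workflow is roughly as follows (a sketch; the image and container name are illustrative):
lxc launch ubuntu:18.04 kubeflow-dev   # create a VM-like Ubuntu container
lxc exec kubeflow-dev -- bash          # open a shell inside it and install your tooling there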
If you have read the documentation of Kubeflow, there is also another option: Multipass.
Multipass creates a Linux virtual machine on Windows, Mac or Linux
systems. The VM contains a complete Ubuntu operating system which can
then be used to deploy Kubernetes and Kubeflow.
Learn more about Multipass: https://multipass.run/#install
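For example (a sketch; the resource sizes are illustrative, and the exact memory flag name varies between Multipass versions):
multipass launch --name kubeflow --mem 8G --disk 40G --cpus 4
multipass shell kubeflow   # then install kubectl, minikube and Kubeflow inside the VM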
Insufficient user permissions on the docker group and the minikube directory cause this error ("X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.").
You can fix that error by adding your user to the docker group and setting permissions to the minikube profile directory (change the $USER with your username in the two commands below):
sudo usermod -aG docker $USER && newgrp docker
sudo chown -R $USER $HOME/.minikube; chmod -R u+wrx $HOME/.minikube
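Then, in a fresh shell as that user, verify that the socket is reachable and retry (a quick sanity check, not part of the original fix):
docker ps                        # should list containers without sudo
minikube start --driver=docker   # should no longer exit with DRV_AS_ROOT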

How to include Webots in a Docker container build?

I want to add Webots to my Dockerfile, but I'm running into an issue. My current manual installation steps (from here) are:
$ # launch my Docker container without Webots
$ wget -qO- https://cyberbotics.com/Cyberbotics.asc | sudo apt-key add -
$ sudo apt update
$ sudo apt install -y software-properties-common
$ sudo apt-add-repository 'deb https://cyberbotics.com/debian/ binary-amd64/'
$ sudo apt update
$ sudo apt-get install webots
$ # now I have a Docker container with Webots
I want to include this process in the build of the Docker container. I can't just use the same steps in Dockerfile though, because while webots is installing, it prompts for some stdin responses asking for the keyboard's country of origin. Since Docker doesn't listen to stdin while building, I have no way to answer these prompts. I tried piping echo output like so, but it doesn't work:
# Install Webots (a robot simulator)
RUN wget -qO- https://cyberbotics.com/Cyberbotics.asc | sudo apt-key add -
RUN apt-get update && sudo apt-get install -y \
software-properties-common \
libxtst6
RUN sudo apt-add-repository 'deb https://cyberbotics.com/debian/ binary-amd64/'
RUN apt-get update && echo 31 1 | sudo apt-get install -y \
webots # the echo fills the "keyboard country of origin" prompts
How can I get Webots included in the Docker container? I don't want to just use someone else's container (e.g. cyberbotics/webots-docker), since I need to add other things to the container, like ROS2.
Edit: this answer is incorrect. FROM doesn't work like this, and only the last FROM statement will be utilized.
Original answer:
It turns out to be simpler than I expected. You can include more than one FROM $IMAGE statement in a Dockerfile to combine base images. Here's a sample explaining what I did (note that all the ARG statements must come before the first FROM statement):
ARG BASE_IMAGE_WEBOTS=cyberbotics/webots:R2021a-ubuntu20.04
ARG IMAGE2=other/image:latest
FROM $BASE_IMAGE_WEBOTS AS base
FROM $IMAGE2 AS image2
# other things needed
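Since that approach doesn't work, a commonly used alternative for non-interactive builds, not from the original answer, is to set debconf's noninteractive frontend so apt never prompts (this assumes the keyboard question comes from debconf; using ARG keeps the setting build-only):
# Dockerfile fragment, a sketch based on the steps in the question
ARG DEBIAN_FRONTEND=noninteractive
RUN wget -qO- https://cyberbotics.com/Cyberbotics.asc | apt-key add - \
 && apt-add-repository 'deb https://cyberbotics.com/debian/ binary-amd64/' \
 && apt-get update \
 && apt-get install -y webots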

Pass arguments to interactive shell in Docker Container

Currently I'm trying to create a Docker image for jitsi-meet.
I installed jitsi-meet on my test system and noticed that I get prompted for user input. Well, this is absolutely fine when installing jitsi manually.
However, the installation process is supposed to run during the build of the image, which means there is no way for me to manually type in the necessary data.
Is there any way to pass values as an environment variable in the Dockerfile and use the variable in the container when I get prompted to enter some additional information?
This is what my Dockerfile looks like:
FROM debian:latest
WORKDIR /opt/jitsi-meet
RUN apt-get update -y && \
apt-get upgrade -y && \
apt-get install -y ssh sudo ufw apt-utils apt-transport-https wget gnupg2 && \
wget -qO - https://download.jitsi.org/jitsi-key.gpg.key | sudo apt-key add - && \
sh -c "echo 'deb https://download.jitsi.org stable/' > /etc/apt/sources.list.d/jitsi-stable.list" && \
apt-get -y update && \
apt-get -y install jitsi-meet
EXPOSE 80 443
EXPOSE 10000/udp
Thanks in advance!
Yes, you can set ENV vars in a Dockerfile using ENV; see:
https://docs.docker.com/engine/reference/builder/#environment-replacement
Whether you can use such a variable when you get prompted depends on the implementation.
A prompt upon container run is not really advisable, as interactive container startup doesn't make sense in most cases.
However, in bash you might be able to redirect something to stdin using <, or send it with a pipe (|) to a command.
But how to solve that issue depends on how it is implemented in the source code where it prompts.
In general, it's best practice to skip the prompt if an env var has been set.
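For debconf-driven installers like jitsi-meet, the usual build-time pattern is to preseed the answers instead of typing them. A sketch (the debconf key and hostname here are assumptions; list the real ones with debconf-get-selections after a manual install):
RUN echo 'jitsi-videobridge jitsi-videobridge/jvb-hostname string meet.example.com' | debconf-set-selections && \
    DEBIAN_FRONTEND=noninteractive apt-get -y install jitsi-meet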

How to build on a host when running Jenkins inside a Docker container

I am running a Jenkins instance within a Docker container, and it is connected to a Bitbucket repository. When something changes in the online repository, Jenkins downloads the new source. Based on the new source, I want to create a new Docker image, but that needs to happen on the host since it is where I have Docker installed.
I haven't figured out how I can run something on the host, but at the same time I understand that Docker is used for isolating processes, so this is by design. Is there a way to achieve this?
I would create a separate Jenkins slave that maps to either the host running Jenkins, or a separate host that has Docker installed. Then run your job to create Docker images on the slave rather than the master.
The documentation for the official Jenkins Docker image has details on how to connect up a slave.
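For example, an agent could be started on the Docker-capable host roughly like this (a sketch; the URL, secret, and agent name are placeholders you get from the Jenkins UI when creating the node):
docker run --init jenkins/inbound-agent \
  -url http://my-jenkins:8080 <secret> my-docker-agent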
It is possible to use the host's Docker from inside your Jenkins container. You need to install Docker in the container and then map the host's Docker socket descriptor.
A sample Dockerfile that will achieve this could look like this:
FROM jenkins/jenkins:2.102
USER root
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
software-properties-common
# Docker
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" && \
apt-get update && apt-get install -y docker-ce && \
usermod -aG docker,staff jenkins
# Set the SGID bit so docker runs with its group's privileges
RUN chmod g+s /usr/bin/docker
USER jenkins
To run a container based on this image with Docker working, you need to map the docker.sock file descriptor:
docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock my-image
If you don't want to build a new image, you can use my prebuilt image: https://hub.docker.com/r/mdobak/docker-jenkins/
