Installing Kubernetes in a Docker container - docker

I want to try out Kubeflow and see if it fits my projects. I want to deploy it locally as a development server, but I have Windows on my computer and Kubeflow only works on Linux. I'm not allowed to dual boot this computer. I could install a virtual machine, but I thought it would be easier to use Docker, and oh boy was I wrong. So, the problem is: I want to install Kubernetes in a Docker container. Right now this is the Dockerfile I've written:
# Docker file with local deployment of Kubeflow
FROM ubuntu:18.04
ENV USER=Joao
ENV PASSWORD=Password
ENV WK_DIR=/home/${USER}
# Setup Ubuntu
RUN apt-get update -y
RUN apt-get install -y conntrack sudo wget
RUN useradd -rm -d /home/${USER} -s /bin/bash -g root -G sudo -u 1001 -p ${PASSWORD} ${USER}
WORKDIR ${WK_DIR}
# Installing Docker CE
RUN apt-get install -y apt-transport-https ca-certificates curl software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
RUN apt-get update -y
RUN apt-get install -y docker-ce docker-ce-cli containerd.io
# Installing Kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
# Installing Minikube
RUN curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
RUN install minikube-linux-amd64 /usr/local/bin/minikube
ENV PATH="${PATH}:${WK_DIR}"
COPY start.sh start.sh
CMD sh start.sh
With this, just to make the deployment easier, I also have a docker-compose.yaml that looks like this:
services:
  kf-local:
    build: .
    volumes:
      - path/to/folder:/usr/kubeflow
    privileged: true
And start.sh looks like this:
service docker start
minikube start \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/apiserver.key \
--extra-config=apiserver.service-account-api-audiences=api \
--driver=docker
The problem is, whenever I try running this I get the error:
X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
I've also tried creating a user and running it from there, but then I'm not able to run sudo. Any idea how I could install Kubernetes in a Docker container?

As you thought, you are right that using a VM would be the easy way to test this out.
Instead of setting up Kubernetes inside Docker, you can use a Linux system container for development and testing.
There are Linux system containers, known as LXC containers. Docker is a kind of application container, while in simple terms LXC is more like a VM for local development and testing: you can install your stack into it directly rather than setting the application up inside a Docker image.
Read some details about LXC: https://medium.com/@harsh.manvar111/lxc-vs-docker-lxc-101-bd49db95933a
You can also run it on Windows; try it out at https://linuxcontainers.org/
If you have read the Kubeflow documentation, there is also another option: Multipass.
Multipass creates a Linux virtual machine on Windows, Mac or Linux
systems. The VM contains a complete Ubuntu operating system which can
then be used to deploy Kubernetes and Kubeflow.
Learn more about Multipass: https://multipass.run/#install
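Getting an Ubuntu VM up with Multipass only takes a couple of commands; a minimal sketch (the VM name kubeflow is just an example):
multipass launch --name kubeflow   # create the Ubuntu VM
multipass shell kubeflow           # open a shell inside it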

Insufficient user permissions on the docker group and the minikube directory cause this error ("X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.").
You can fix it by adding your user to the docker group and setting permissions on the minikube profile directory (replace $USER with your username in the two commands below):
sudo usermod -aG docker $USER && newgrp docker
sudo chown -R $USER $HOME/.minikube; chmod -R u+wrx $HOME/.minikube
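If you want to apply the same idea inside the container from the question, a rough sketch (assuming the Joao user created in that Dockerfile) is to add that user to the docker group and start minikube as the non-root user instead of root:
# run as root inside the container, e.g. from start.sh
usermod -aG docker Joao
su - Joao -c "minikube start --driver=docker"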

Related

mkdir: cannot create directory ‘cpuset’: Read-only file system when running a "service docker start" in Dockerfile

I have a Dockerfile that extends the Apache Airflow 2.5.1 base image. What I want to do is be able to use docker inside my airflow containers (i.e. docker-in-docker) for testing and evaluation purposes.
My docker-compose.yaml has the following mount:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
My Dockerfile looks as follows:
FROM apache/airflow:2.5.1
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release nano
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
RUN groupadd -f docker
RUN usermod -a -G docker airflow
RUN service docker start
USER airflow
Basically:
Install docker.
Add the airflow user to the docker group.
Start the docker service.
Continue as airflow.
Unfortunately, this does not work. During RUN service docker start, I encounter the following error:
Step 11/12 : RUN service docker start
---> Running in 77e9b044bcea
mkdir: cannot create directory ‘cpuset’: Read-only file system
I have another Dockerfile for building a local jenkins image, which looks as follows:
FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release nano
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
RUN groupadd -f docker
RUN usermod -a -G docker jenkins
RUN service docker start
USER jenkins
I.e. it is exactly the same, except that I am using the jenkins user. Building this image works.
I have not set any extraneous permissions on my /var/run/docker.sock:
$ ls -la /var/run/docker.sock
srw-rw---- 1 root docker 0 Jan 18 17:14 /var/run/docker.sock
My questions are:
Why does RUN service docker start not work when building my airflow image?
Why does the exact same command in my jenkins Dockerfile work?
I've tried most of the answers to similar questions, e.g. here and here, but they have unfortunately not helped.
I'd rather try to avoid the chmod 777 /var/run/docker.sock solution if at all possible, and it should be since my jenkins image can build correctly...
Just delete the RUN service docker start line.
The docker CLI tool needs to connect to a Docker daemon, which it normally does through the /var/run/docker.sock Unix socket file. Bind-mounting the socket into the container is enough to make the host's Docker daemon accessible; you do not need to separately start Docker in the container.
There are several issues with the RUN service ... line specifically. Docker has a kind of complex setup internally, and some of the things it does aren't normally allowed in a container; that's probably related to the "cannot create directory" error. In any case, a Docker image doesn't persist running processes, so if you were able to start Docker inside the build, it wouldn't still be running when the container eventually ran.
More conceptually, a container doesn't "run services", it is a wrapper around only a single process (and its children). Commands like service or systemctl often won't work the way you expect, and I'd generally avoid them in a Docker context.
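In other words, the tail of the Dockerfile from the question would simply lose that one line; a sketch of the end of the file, with nothing else changed:
RUN groupadd -f docker
RUN usermod -a -G docker airflow
# (the "RUN service docker start" line is removed; the bind-mounted /var/run/docker.sock supplies the daemon)
USER airflow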

Docker in Docker | Github actions - Self Hosted Runner

I'm trying to create a self-hosted runner for GitHub Actions on Kubernetes. As a first step I was trying with the Dockerfile below:
FROM ubuntu:18.04
# set the github runner version
ARG RUNNER_VERSION="2.283.1"
# update the base packages and add a non-sudo user
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
RUN useradd -r -g docker nonroot
# install python and the packages your code depends on, along with jq so we can parse JSON
# add additional packages as necessary
RUN apt-get install -y curl jq build-essential libssl-dev apt-transport-https ca-certificates curl software-properties-common
# install docker
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" \
&& apt update \
&& apt-cache policy docker-ce \
&& apt install docker-ce -y
ENV TINI_VERSION v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
RUN usermod -aG docker nonroot
USER nonroot
# set the entrypoint to the start.sh script
ENTRYPOINT ["/tini", "--"]
CMD ["/bin/bash"]
After doing a build, I run the container with the below command:
docker run -v /var/run/docker.sock:/var/run/docker.sock -it srunner
When I try to pull an image, I get the error below:
nonroot@0be0cdccb29b:/$ docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create": dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
nonroot@0be0cdccb29b:/$
Please advise if there is a possible way to run docker as non-root inside a docker container.
Instead of using sockets, there is also a way to connect to the outer Docker daemon, from Docker inside a container, over TCP.
Linux example:
Run ifconfig; it will print the network interface that Docker creates when you install it on the host node. It's usually named docker0; note down the IP address of this interface.
Now modify /etc/docker/daemon.json and add tcp://IP:2375 to the hosts section, then restart the Docker service.
Run containers with the extra option: --add-host=host.docker.internal:host-gateway
Inside any such container, the address tcp://host.docker.internal:2375 now points to the outside Docker engine (see the sketch below).
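A rough sketch of those steps (172.17.0.1 is only the typical docker0 address, some-image is a placeholder, exposing port 2375 without TLS is insecure, and on systemd hosts a hosts entry in daemon.json can conflict with the -H flag in the docker.service unit):
# /etc/docker/daemon.json on the host
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://172.17.0.1:2375"]
}

# start a container so host.docker.internal resolves to the host gateway
docker run --add-host=host.docker.internal:host-gateway -it some-image

# inside that container, point the docker CLI at the outer engine
export DOCKER_HOST=tcp://host.docker.internal:2375
docker ps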
Try adding your username to the docker group as suggested here.
Additionally, you should check your kernel compatibility.

Can Docker client(within container) talk to docker daemon on EC2 using UNIX socket?

As part of a Jenkins Docker image, I am supposed to install the Docker client (only) so it can talk to the Docker daemon installed on the underlying EC2 instance.
By UNIX socket, I mean socket(AF_UNIX,,).
Background
As per the instructions given here, I do not see the necessity of installing the Docker daemon within the Jenkins image, because the author uses a UNIX socket to talk to the underlying Docker daemon running on the EC2 instance, as shown here.
My understanding is that installing the Docker client (only) within the Jenkins image would suffice to talk to the Docker daemon running on the EC2 instance, using the UNIX socket (/var/run/docker.sock).
1)
Can the Docker client running in the Jenkins image communicate with the Docker daemon running on the underlying EC2 instance, with the mapping below?
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
2)
How do I install only the Docker client in the Jenkins image below?
FROM jenkins:1.642.1
# Suppress apt installation warnings
ENV DEBIAN_FRONTEND=noninteractive
# Official Jenkins image does not include sudo, change to root user
USER root
# Used to set the docker group ID
# Set to 497 by default, which is the groupID used by AWS Linux ECS instance
ARG DOCKER_GID=497
# Create Docker Group with GID
# Set default value of 497 if DOCKER_GID set to blank string by Docker compose
RUN groupadd -g ${DOCKER_GID:-497} docker
To use Docker in Jenkins, Jenkins must have access to docker.sock.
What you are proposing here is a Docker-in-Docker approach, installing Docker inside the Jenkins container, but that is actually not necessary. You only need a valid Docker daemon, and for that reason the usual approach is to map /var/run/docker.sock from the host into the container.
Have a look at this amazing post: https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
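A minimal sketch of that approach, assuming a current jenkins/jenkins:lts base and the docker-ce-cli package from Docker's Debian repository (only the client goes into the image; the daemon stays on the host and is reached through the mounted socket):
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && apt-get install -y \
    apt-transport-https ca-certificates curl gnupg2 lsb-release software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
# client only -- no docker-ce daemon or containerd inside the image
RUN apt-get update && apt-get install -y docker-ce-cli
# 497 matches the docker group GID on AWS ECS instances per the question; adjust for your host
ARG DOCKER_GID=497
RUN groupadd -g ${DOCKER_GID} docker && usermod -aG docker jenkins
USER jenkins
Run it with -v /var/run/docker.sock:/var/run/docker.sock (or the compose mapping above), and docker commands inside the container will go to the EC2 instance's daemon.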
You need to install Docker inside the Jenkins image and then bind-mount /var/run/docker.sock so that you can run sidecar containers, as explained in Jérôme Petazzoni's blog post on the subject. This is my Jenkins Dockerfile:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
apt-get install -y \
maven \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
lsb-release \
software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
RUN apt-get update && \
apt-get install -y \
docker-ce \
docker-ce-cli \
containerd.io
RUN usermod -a -G docker jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/plugins.txt
USER jenkins
WORKDIR /var/jenkins_home
Note: you can install your plugins during the build using the plugins.sh as explained here.
Build the Jenkins image, e.g.: docker build --rm -t so:58652650 .
Run the container mounting /var/run/docker.sock, e.g.: docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock --entrypoint bash so:58652650
Inside the container, as the jenkins user, the docker commands should work as expected.
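For example, a quick check along these lines (not part of the original answer) should succeed without permission errors:
# inside the container, as the jenkins user
docker version              # client and host daemon info, via the mounted socket
docker run --rm hello-world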

/var/run/docker.sock: connect: permission denied Jenkins slave on ecs cluster

I'm using the AWS EC2 plugin for Jenkins to spin up Jenkins slaves when tasks are generated. I'm running into permission issues when trying to build Docker images inside a Docker container. I've looked at dozens of other posts, and people frequently provide this as the answer:
create docker group
add jenkins user to docker group
restart
everything magically works
The thing is that I can't restart, because the Jenkins slave gets spawned using the plugin, and I'm not sure how to restart it properly so it handles the build correctly upon restart. Also, that would mean running the restart on the host despite being in a container, which sounds like a bad idea.
I've tried:
Adding jenkins to the sudo users in the Dockerfile: RUN adduser jenkins sudo, followed by RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
Changing the docker socket file owner: RUN chown root:jenkins /var/run/docker.sock
Changing the docker socket permissions: chmod 777 /var/run/docker.sock
Using newgrp so I don't have to restart docker from outside the container
Basically, how do I avoid restarting the Docker service while still providing the sudo permissions needed to build Dockerfiles inside the Jenkins slave container? Or, if I can actually restart while still using the EC2 plugin, how would I best go about that?
Current dockerfile:
FROM jenkins/jnlp-slave
USER root
RUN apt-get update && \
apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable" && \
apt-get update && \
apt-get -y install docker-ce && \
apt-get -y install sudo
VOLUME /var/run/docker.sock
RUN adduser jenkins sudo
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
RUN usermod -aG docker jenkins
RUN chmod 777 /var/run/docker.sock
RUN chown root:jenkins /var/run/docker.sock
USER jenkins
Thank you!
I had also faced this issue; I had to do the steps below to get it working:
Update the Dockerfile to install the docker client.
Update the ECS Cloud Jenkins configuration to do the volume mount, i.e. /var/run/docker.sock.
The most essential part is being able to run the docker commands inside the container. There are two options:
i. Run the containerUser as the root user.
This is the most convenient option if running the container as root is fine for you.
ii. Add the jenkins user to the docker group on the host:
usermod -a -G docker jenkins
Get the GID of the docker group on the host and map it inside the container, i.e. update the GID of the container's docker group to match the host's. This way you won't get the permission denied error and don't need to do chmod 777 /var/run/docker.sock (a sketch follows below).
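A rough sketch of that GID mapping, not from the original answer (DOCKER_GID is a hypothetical build argument; its value comes from getent group docker on the ECS host):
# in the agent's Dockerfile, after docker-ce has been installed
ARG DOCKER_GID=497
RUN groupmod -g ${DOCKER_GID} docker && usermod -aG docker jenkins
# build with the host's real GID, e.g.:
#   docker build --build-arg DOCKER_GID=$(getent group docker | cut -d: -f3) -t jnlp-docker .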

"Can not connect to Docker Daemon"

I have a project in which I need to use CircleCI to build a Docker application image and then upload it to the Amazon container registry.
Given that CircleCI also runs on Docker, I created a Docker image for it, which contains a version of Ubuntu together with the AWS CLI, Node and Docker. See the Dockerfile below:
FROM ubuntu:16.04
# update libraries
RUN apt-get update
RUN apt-get install -y apt-transport-https ca-certificates curl software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# install docker
RUN apt-get update
RUN apt-cache policy docker-ce
RUN apt-get install -y docker-ce
# <---
RUN systemctl status docker # <--- TROUBLE HERE
# <---
# install node
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt install -y nodejs
# install aws cli
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
RUN pip install awscli --upgrade
I am currently having some problems working with this CircleCI Docker image because, if I keep the command RUN systemctl status docker, I get the following error:
Failed to connect to bus: No such file or directory The command '/bin/sh -c systemctl status docker' returned a non-zero code: 1
If, on the other hand, I remove that command, the build is successful. However, when I go inside the container with sudo docker run -it unad16 and run any docker command, e.g. docker images, I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I have been trying to debug this error since yesterday, but have been unsuccessful. Thus, any help would be truly appreciated.
Notes:
the "daemon" error occurs even when I run docker in privileged mode with sudo docker run -ti --privileged=true unad16
You don't need to run a Docker daemon if you want to build a Docker image in CircleCI. Instead you just need an image with the Docker client, and a CircleCI config with - setup_remote_docker.
Read more in
https://circleci.com/docs/2.0/building-docker-images/
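A minimal sketch of such a config (the image and tag names are placeholders; any executor image with the docker CLI works):
version: 2
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker   # provisions a separate remote Docker engine for the job
      - run: docker build -t myapp:latest .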
If for some other reason you still want to run a Docker service inside a Docker image, please refer to the DockerInDocker repo, especially its README.md.
