How to operate the k8s cluster from inside a docker container - docker

How does a docker container that runs on a plain Docker host (rather than as a k8s pod) operate the k8s cluster? For example, suppose I need to do something like this inside a container:
kubectl get pods
In my Dockerfile, I installed kubectl:
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
# no sudo needed: RUN already executes as root (and sudo is absent from most base images)
RUN mv ./kubectl /usr/local/bin/kubectl
When I run kubectl get pods, the result is as follows:
kubectl get pod
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
So I mounted the config into the docker container in the docker run command:
docker run -v /root/.kube/config:/root/.kube/config my-images
the result is as follows:
kubectl get pod
Error in configuration:
* unable to read client-cert /root/.minikube/profiles/minikube/client.crt for minikube due to open /root/.minikube/profiles/minikube/client.crt: no such file or directory
* unable to read client-key /root/.minikube/profiles/minikube/client.key for minikube due to open /root/.minikube/profiles/minikube/client.key: no such file or directory
* unable to read certificate-authority /root/.minikube/ca.crt for minikube due to open /root/.minikube/ca.crt: no such file or directory
This seems to be due to the current-context: minikube in the kubeconfig, whose entries reference certificate paths under /root/.minikube.
After mounting those authentication files as well, it ran successfully.
Now I can run kubectl get pods (or otherwise manipulate the cluster from inside the container) when I mount -v /root/.kube/config:/root/.kube/config -v /root/.minikube/:/root/.minikube/. However, those mounts are specific to minikube; they do not carry over to clusters created by kubeadm or other tools.
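For reference, the full working invocation in the minikube case then looks like this (using the image name from the example above):
docker run \
-v /root/.kube/config:/root/.kube/config \
-v /root/.minikube/:/root/.minikube/ \
my-images kubectl get pods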
But I want to mount the required configuration files into the container in a uniform way, so that the same command can manipulate a k8s cluster created by minikube, Rancher k3s, or kubeadm alike.
In summary, I want one fixed set of files or directories to mount in all cases, such as -v file:file -v dir:dir, that lets me operate a k8s cluster created in any way: getting pod status, creating and deleting various types of resources, and so on.
I need to have the maximum permission to operate on k8s.
Can someone please tell me what I need to do?
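One way to get the uniform mount you describe (a sketch, assuming kubectl is available on the host) is to have kubectl emit a self-contained kubeconfig with the certificate data embedded inline, so a single file works no matter which tool created the cluster:
# on the host: flatten the current context into one self-contained file
kubectl config view --flatten --minify > /root/.kube/config-flat
# a single mount now suffices for minikube, k3s, or kubeadm clusters
docker run -v /root/.kube/config-flat:/root/.kube/config my-images kubectl get pods
As for maximum permissions: the mount itself grants nothing; the credentials inside the kubeconfig must belong to a cluster-admin user.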

I think you can set the Docker user when running your container
You can run (in this example, an ubuntu image) with an explicit user id and group id.
$ docker run -it --rm \
--mount "type=bind,src=$(pwd)/shared,dst=/opt/shared" \
--workdir /opt/shared \
--user "$(id -u):$(id -g)" \
ubuntu bash
The difference is --user "$(id -u):$(id -g)": it tells the container to run with the current user id and group id, which are obtained dynamically through bash command substitution by running id -u and id -g and passing on their values.
This can already be good enough. The problem here is that the user and group don't really exist in the container. This approach works for terminal commands, but the session looks broken and you'll see some ugly error messages like:
"groups: cannot find name for group ID"
"I have no name!"
- your container, complaining
While bash works, some apps might refuse to run if those configs look fishy.
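A common workaround for those cosmetic errors (an assumption on my part, not part of the suggestion above) is to bind-mount the host's user databases read-only, so that the numeric ids resolve to names inside the container:
$ docker run -it --rm \
--mount "type=bind,src=$(pwd)/shared,dst=/opt/shared" \
--workdir /opt/shared \
--user "$(id -u):$(id -g)" \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
ubuntu bash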
Next, you have to configure and run your Docker containers correctly, so that you don't have to fight permission errors and can access your files easily.
As you should create a non-root user in your Dockerfile in any case, this is a nice thing to do. You might as well set the user id and group id explicitly.
Below is a minimal Dockerfile which expects to receive build-time arguments, and creates a new user called “user”:
FROM ubuntu
ARG USER_ID
ARG GROUP_ID
RUN addgroup --gid $GROUP_ID user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user
USER user
Take a look: add-user-to-container.
You can use this Dockerfile to build a fresh image with the host uid and gid. This image needs to be built specifically for each machine it will run on, to make sure the ids match.
Then you can use this image for our command. The user id and group id are correct without having to specify them when running the container.
$ docker build -t your-image \
--build-arg USER_ID=$(id -u) \
--build-arg GROUP_ID=$(id -g) .
$ docker run -it --rm \
--mount "type=bind,src=$(pwd)/shared,dst=/opt/shared" \
--workdir /opt/shared \
your-image bash
There is no need to use chown, and you will no longer get annoying permission errors.
Please take a look at these very interesting articles: kubernetes-management-docker, docker-shared-permissions.

Related

Starting docker container inside a docker with non-root permission [duplicate]

I have this Dockerfile:
FROM chekote/gulp:latest
USER root
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y sudo libltdl-dev
ARG dockerUser='my-user-name';
ARG group='docker';
# create group if it does not exist
RUN if ! grep -q -E "^$group:" /etc/group; then groupadd $group; fi
# create user if not exists
RUN if ! grep -q -E "^$dockerUser:" /etc/passwd; then useradd -c 'Docker image creator' -m -s '/bin/bash' -g $group $dockerUser; fi
# add user to the group (if it was present and not created at the line above)
RUN usermod -a -G ${group} ${dockerUser}
# set default user that runs the container
USER ${dockerUser}
That I build this way:
docker build --tag my-gulp:latest .
and finally run by script this way:
#!/bin/bash
image="my-gulp:latest";
workDir='/home/gulp/project';
docker run -it --rm \
-v $(pwd):${workDir} \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
${image} /bin/bash
That logs me into the docker container properly, but when I want to list images
docker images
or try to pull an image
docker pull hello-world:latest
I get this error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.38/images/json: dial unix /var/run/docker.sock: connect: permission denied
How to create docker image from chekote/gulp:latest so I can use docker inside it without the error?
Or maybe the error is because of wrong docker run command?
A quick way to avoid that: add your user to the docker group.
sudo gpasswd -a $USER docker
Then set the proper permissions.
sudo setfacl -m "user:$USER:rw" /var/run/docker.sock
Should be good from there.
The permission matching happens only on numeric user ID and group ID. If the socket file is mode 0660 and owned by user ID 0 and group ID 32, and you're calling it as a user with user ID 1000 and group IDs 1000 and 16, it doesn't matter that the host's /etc/group names gid 32 docker while the container's /etc/group gives that name to gid 16; the numeric gids are different and you can't access the file. Also, since the actual numeric gid of the Docker group will vary across systems, this isn't something you can bake into the Dockerfile.
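To see the mismatch concretely (a hypothetical illustration; the numbers will differ on your system):
# on the host: numeric owner, group, and mode of the socket
stat -c '%u:%g %a' /var/run/docker.sock    # e.g. 0:999 660
# inside the container: the numeric ids your user actually has
id    # e.g. uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)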
Many Docker images just run as root; if they do, they can access a bind-mounted Docker socket file regardless of its permissions.
If you run as a non-root user, you can use the docker run --group-add option to add a (numeric) gid to the effective user; it doesn't specifically need to be mentioned in the /etc/groups file. On a Linux host you might run:
docker run --group-add $(stat -c '%g' /var/run/docker.sock) ...
You wouldn't usually install sudo in a Dockerfile: it doesn't work well for non-interactive programs, you usually don't do a whole lot in interactive shells because of the ephemeral nature of containers, and you can always docker exec -u 0 to get a root shell. Installing some non-root user, though, is often considered a best practice. You could reduce the Dockerfile to
FROM node:8
RUN apt-get update
# Trying to use the host's `docker` binary may not work well
RUN apt-get install -y docker.io
# Install the single node tool you need
RUN npm install -g gulp
# Get your non-root user
RUN adduser --disabled-password --gecos '' myusername
# Normal Dockerfile bits
WORKDIR ...
COPY ...
RUN gulp
USER myusername
CMD ["npm", "run", "start"]
(That Docker base image has a couple of things that don't really match Docker best practices, and doesn't seem to be updated routinely; I'd just use the standard node image as a base and add the one build tool you need on top of it.)
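Combining the pieces above, a sketch of building that image and running it with access to the host's Docker socket (the tag my-gulp is just an example):
docker build -t my-gulp .
docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
--group-add "$(stat -c '%g' /var/run/docker.sock)" \
my-gulp bash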
Open a terminal and type this command:
sudo chmod 666 /var/run/docker.sock
Let me know the results...
You need the --privileged flag with your docker run command.
By the way, you can just use the docker-in-docker image from Docker for this kind of use case.
https://asciinema.org/a/24707
https://hub.docker.com/_/docker/
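For reference, a minimal docker-in-docker setup along the lines of the docker:dind image documentation (a sketch; TLS is disabled here only for brevity, and the names dind and dind-net are arbitrary):
docker network create dind-net
docker run --privileged -d --name dind --network dind-net \
-e DOCKER_TLS_CERTDIR="" docker:dind
docker run --rm --network dind-net \
-e DOCKER_HOST=tcp://dind:2375 docker docker version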
The error has nothing to do with the docker pull or docker image subcommands; rather, you need to call the docker command as a user with write access to the docker socket (for example, by being root, using sudo, or by being in the docker group).

Need guidance for creating temporary docker container for generating file on host machine

I'd like to create a docker container that can read a file named helloworld.proto and run the command
protoc --dart_out=grpc:lib/src/generated -Iprotos protos/helloworld.proto
The container would start with all the dependencies and generate the required file using gRPC, with the output accessible from the HOST machine. Is this achievable?
If the primary goal of your process is to read a file from the host system and write a file back to the host system, you will find it much easier to run the process directly on the host system and not involve Docker.
You can in principle use Docker bind mounts to make a host directory visible to the container. Especially for a one-off command, though, the length of the command line required to start the container will be longer than the length of the command you're trying to run:
sudo docker run \
--rm \
-v "$PWD:/data" \
-w /data \
-u $(id -u):$(id -g) \
some-image-containing-protoc \
protoc --dart_out=grpc:lib/src/generated -Iprotos protos/helloworld.proto
Here sudo is required because docker run can take over the host; --rm deletes the container when it is done; -v makes the host directory visible to the container; -w makes the mounted directory the current working directory; and -u writes files as the current host user.
Getting protoc installed globally on your host system is usually a single command (sudo apt-get install protobuf-compiler, brew install protobuf), and that will usually be easier to manage.
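If you do go the container route, a minimal sketch of the some-image-containing-protoc image assumed above (hypothetical; the Dart plugin that --dart_out=grpc requires is omitted and would need to be installed as well):
FROM debian:bullseye-slim
RUN apt-get update \
&& apt-get install -y --no-install-recommends protobuf-compiler \
&& rm -rf /var/lib/apt/lists/*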

Jenkins Docker image, to use bind mounts or not?

I am reading through this bit of the Jenkins Docker README and there seems to be a section that contradicts itself from my current understanding.
https://github.com/jenkinsci/docker/blob/master/README.md
It seems to me that it says NOT to use a bind mount, and then says that using a bind mount is highly recommended?
NOTE: Avoid using a bind mount from a folder on the host machine into /var/jenkins_home, as this might result in file permission issues (the user used inside the container might not have rights to the folder on the host machine). If you really need to bind mount jenkins_home, ensure that the directory on the host is accessible by the jenkins user inside the container (jenkins user - uid 1000) or use -u some_other_user parameter with docker run.
docker run -d -v jenkins_home:/var/jenkins_home -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts will run Jenkins in detached mode with port forwarding and volume added. You can access logs with command 'docker logs CONTAINER_ID' in order to check first login token. ID of container will be returned from output of command above.
Backing up data
If you bind mount in a volume - you can simply back up that directory (which is jenkins_home) at any time.
This is highly recommended. Treat the jenkins_home directory as you would a database - in Docker you would generally put a database on a volume.
Do you use bind mounts? Would you recommend them? Why or why not? The documentation seems to be ambiguous.
As commented, the syntax used is for a named volume:
docker run -d -v jenkins_home:/var/jenkins_home --name jenkins ...
That defines a Docker volume named jenkins_home, which will be created under:
/var/lib/docker/volumes/jenkins_home.
The idea being that you can easily backup said volume:
$ mkdir ~/backup
$ docker run --rm --volumes-from jenkins -v ~/backup:/backup ubuntu bash -c "cd /var/jenkins_home && tar cvf /backup/jenkins_home.tar ."
And reload it to another Docker instance.
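Restoring that backup into a fresh volume is symmetric (a sketch; the volume name jenkins_home_restored is arbitrary):
$ docker volume create jenkins_home_restored
$ docker run --rm -v jenkins_home_restored:/var/jenkins_home -v ~/backup:/backup ubuntu bash -c "cd /var/jenkins_home && tar xvf /backup/jenkins_home.tar"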
This differs from bind mounts, which do involve building a new Docker image, in order to be able to mount a local folder owned by your local user (instead of the default user defined in the official Jenkins image: 1000:1000):
FROM jenkins/jenkins:lts-jdk11
USER root
ENV JENKINS_HOME /var/lib/jenkins
ENV COPY_REFERENCE_FILE_LOG=/var/lib/jenkins/copy_reference_file.log
RUN groupmod -g <yourGid> jenkins
RUN usermod -u <yourUid> jenkins
RUN mkdir "${JENKINS_HOME}"
RUN usermod -d "${JENKINS_HOME}" jenkins
RUN chown jenkins:jenkins "${JENKINS_HOME}"
VOLUME /var/lib/jenkins
USER jenkins
Note that you have to declare a new volume (here /var/lib/jenkins), because, as seen in jenkinsci/docker issue 112, the official /var/jenkins_home path is already declared as a VOLUME in the official Jenkins image, and you cannot chown or chmod it.
The advantage of that approach would be to see the content of Jenkins home without having to use Docker.
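That image would be built, for example, with a tag matching the run command below:
docker build -t myjenkins:lts-jdk11-2.190.3 .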
You would run it with:
docker run -d -p 8080:8080 -p 50000:50000 \
--mount type=bind,source=/my/local/host/jenkins_home_dev1,target=/var/lib/jenkins \
--name myjenkins \
myjenkins:lts-jdk11-2.190.3
sleep 3
docker logs --follow --tail 10 myjenkins

Run Docker as jenkins-agent, in a docker-container, as non-root user

Similar Questions
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock
How to solve Docker permission error when trigger by Jenkins
https://github.com/jenkinsci/docker/issues/263
Dockerfile
FROM jenkins/jenkins:lts
USER root
RUN apt-get -qq update && apt-get -qq -y install --no-install-recommends curl
RUN curl -sSL https://get.docker.com/ | sh
RUN usermod -aG docker jenkins
USER jekins
Terminal command
docker run -p 8080:8080 -p 50000:50000 \
-v jenkins_home:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-ti bluebrown/docker-in-jenkins-in-docker /bin/bash
Inside the container
docker image ls
Output
Got permission denied while trying to connect to the Docker daemon
socket at unix:///var/run/docker.sock: Get
http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/json: dial unix
/var/run/docker.sock: connect: permission denied
When I comment the last line of the dockerfile out, to run the instance as root user,
# USER jenkins
I can access the docker socket without issues, for obvious reasons. However, I think this is not a proper solution. That is why I want to ask if anyone managed to access the docker socket as non root user.
You've added the docker group to the Jenkins user inside the container. However, that will not necessarily work because the mapping of users and groups to uids and gids can be different between the host and container. That's normally not an issue, but with host volumes and other bind mounts into the container, the files are mapped with the same uid/gid along with the permissions. Therefore, inside the container, the docker group will not have access to the docker socket unless the gid happens to be identical between the two environments.
There are several solutions, including manually passing the host gid as the gid to use inside the container. Or you can get the gid of the host and build the image with that value hard coded in.
My preferred solution is to start an entrypoint as root, fix the docker group inside the container to match the gid of the mounted docker socket, and then switch to the Jenkins user to launch the app. This works especially well in development environments where control of uid/gids may be difficult. All the steps/scripts for this are in my repo: https://github.com/sudo-bmitch/jenkins-docker
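The idea, as a rough sketch (the real, more careful scripts are in that repo; gosu, or an equivalent like su-exec, is assumed to be available in the image):
#!/bin/sh
# runs as root: align the docker group's gid with the mounted socket,
# then drop privileges to the jenkins user
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
  SOCK_GID=$(stat -c '%g' "$SOCK")
  groupmod -g "$SOCK_GID" docker 2>/dev/null || groupadd -g "$SOCK_GID" docker
  usermod -aG docker jenkins
fi
exec gosu jenkins "$@"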
For production in a controlled environment, I try to get standardized uid/gid values, both on the host and in containers, for anything that mounts host volumes. Then I can run the container without the root entrypoint steps.
In your dockerfile you are enabling docker access for the user jenkins but dropping down to the user jekins, not jenkins?
Is this just a typo on this page?
I use this approach as you've described and it works correctly.

Start and attach a docker container with X11 forwarding

There are various articles like this, this and this and many more, that explains how to use X11 forwarding to run GUI apps on Docker. I am using a Centos Docker container.
However, all of these approaches use
docker run
with all appropriate options in order to visualize the result. Any use of docker run creates a new container and performs the operation inside that fresh container.
A way to keep working in the same container is to use docker start followed by docker attach and then execute the commands at the container's prompt. Additionally, the script (let's say xyz.sh) that I intend to run on the Docker container resides inside a folder MyFiles in the root directory of the container and accepts a parameter as well.
So is there a way to run the script using docker start and/or docker attach while also X11-forwarding it?
This is what I have tried, although would like to avoid docker run and instead use docker start and docker attach
sudo docker run -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
centos \
cd MyFiles \
./xyz.sh param1
export containerId=$(docker ps -l -q)
This in turn throws up an error as below -
/usr/bin/cd: line 2: cd: MyFiles/: No such file or directory
How can I run the script xyz.sh under MyFiles on the Docker container using docker start and docker attach?
Also since the location and the name of the script may vary, I would like to know if it is mandatory to include each of these path in the system path variable on the Docker container or can it be done at runtime also?
It looks to me your problem is not with X11 forwarding but with general Docker syntax.
You could do it rather simply:
sudo docker run -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-w /MyFiles \
--rm \
centos \
bash -c './xyz.sh param1'
I added:
--rm to avoid stacking old dead containers.
-w workdir, obvious meaning (note that it must be an absolute path, hence /MyFiles).
bash -c to make sure your script is interpreted by bash.
How to do without docker run:
run is actually like create then start. You can split it in two steps if you prefer.
If you want to attach to a container, it must be running first. And for it to be running, there must be a process currently running inside.
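A sketch of the split, using the same options as the run command above:
containerId=$(sudo docker create -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-w /MyFiles \
centos bash)
sudo docker start "$containerId"
sudo docker attach "$containerId"
# at the container's prompt you can now run: ./xyz.sh param1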
