Due to a requirement, I am using dind (Docker in Docker); its docker daemon is used by another container inside the same pod.
I want to run this dind container's docker daemon on any port other than 2375.
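For reference, the daemon itself accepts a non-default port via dockerd's standard -H flag, so the setup I'm aiming for looks roughly like this (2376 is an arbitrary example port):

```shell
# Inside the dind container: listen on TCP 2376 instead of the default 2375,
# keeping the unix socket for local clients.
dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock

# In the sibling container of the same pod: point the docker client at that port.
export DOCKER_HOST=tcp://localhost:2376
docker version
```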
There is another pod on the Kubernetes node which serves all other pods with its docker daemon.
But some inconsistencies occur because the same docker daemon is used by pods of different services.
For the above reason I decided to use a dind container in every pod that requires one. Thus I had to set hostNetwork to false, but now the pod is not able to
pull any public image,
pull any public Debian package, etc.
I have tried setting the DNS values to 8.8.8.8 and 8.8.4.4, but the issue persists.
The only constraint is that hostNetwork=false is mandatory.
Is there any way to make the pod able to pull public images and artifacts while the above flag is set to false?
Thanks in advance.
Related
I have a pod running Linux on which I have installed many software tools. If I restart the pod, K8s will start a new pod and I'll lose everything I installed. Is there any way to save the pod as a docker image, or any other way to make it persistent even after restarting the pod?
Is there a way to download the container image from a pod in a Kubernetes environment? I tried the solution there, but it wasn't helpful.
The answer in the link is not wrong, but you will probably have to jump through some hoops. One method I can think of is to:
Run a container that has the docker CLI installed, mounts the docker socket from the host, and has a node affinity rule so that it is scheduled on the same node as the container you want to capture.
From within this container, you should be able to access the docker daemon running on the node and issue docker commands to capture, tag, and push the updated image.
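The capture step itself would be a couple of docker commands along these lines (the container name "myapp" and the registry URL are placeholders, not names from the question):

```shell
# Snapshot the running container's filesystem as a new image.
docker commit myapp myregistry.example.com/myapp:captured

# Push the snapshot to your own registry so new pods can use it.
docker push myregistry.example.com/myapp:captured
```

Note that docker commit only captures the filesystem, not any data in mounted volumes or running process state.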
I wouldn't advise doing this, though... I have not tested it myself, but I have done something "similar" before.
It would be better to create your own Dockerfile, install the software there, and use that image for your containers.
I was handed a Kubernetes cluster to manage, but on the same node I can see running docker containers (via docker ps) that I cannot find or relate to any pods/deployments (via kubectl get pods/deployments).
I have tried kubectl describe and docker inspect but could not pick out any differentiating parameters.
How to differentiate which is which?
There will be many. At a minimum you'll see all the pod sandbox "pause" containers, which are normally not visible. Plus possibly anything you run directly, such as the control plane if it's not run as static pods.
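One practical way to tell them apart: the kubelet labels every container it starts, so you can filter and format on those labels (assuming a dockershim-based node, where the standard io.kubernetes.* labels are set):

```shell
# Show only Kubernetes-managed containers.
docker ps --filter "label=io.kubernetes.pod.name"

# Map each container back to its namespace/pod.
docker ps --format '{{.ID}}  {{.Label "io.kubernetes.pod.namespace"}}/{{.Label "io.kubernetes.pod.name"}}'
```

Anything docker ps shows that lacks these labels was started outside of Kubernetes.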
I created a Kubernetes cluster with kops (on AWS), and I want to access one of my nodes as root. According to this post, it's possible only with a Docker command.
When I type docker image ls I get nothing. When I was using minikube I solved this issue with minikube docker-env: from its output I just copied the last line into a new CMD window: @FOR /f "tokens=*" %i IN ('minikube docker-env') DO @%i
(I'm using Windows 10.) Using the above procedure, after typing docker image ls or docker ps I was able to see all the minikube pods. Is there any way to do the same for pods created with kops?
I'm able to do it by connecting to a Kubernetes node, installing docker on it, and then connecting to a pod with the -u root switch, but I wonder whether it is possible to do the same from the host machine (Windows 10 in my case).
It's a bit unclear what you're trying to do, so I'm going to give some general info here based on this scenario: you've created a K8S cluster on AWS using kops.
I want to access one of my AWS node as root
This has nothing to do with kops, nor with Docker. This is basic AWS management. You need to check your AWS management console to get all the info needed to connect to your node.
I want to see all the docker image from my windows laptop
Again, this has nothing to do with kops. Kops is a Kubernetes distribution. In Kubernetes, the smallest unit of computing that can be managed is the pod. You cannot manage docker containers or images directly with Kubernetes.
So if you want to see your docker images, you'll need to somehow connect to your AWS node and then execute
docker image ls
In fact, that's what you're doing in your minikube example: you're just executing the docker command on the VM managed by minikube.
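For a kops cluster the equivalent is an SSH hop to the node; something like this should work from your laptop (the admin user and key path assume kops' default Debian image, so adjust for your AMI):

```shell
# Run docker on the node over ssh; <node-public-ip> comes from the AWS console.
ssh -i ~/.ssh/id_rsa admin@<node-public-ip> docker image ls
```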
More info on what's a pod here
I want to see all the pods created with kops
Well, assuming that you've successfully configured your system to access AWS with kops (more info on that here), then you'll just have to execute any kubectl command directly. For example, to list all the pods located in the kube-system namespace:
kubectl -n kube-system get po
Hope this helps !
That is not possible. A pod is an abstraction created and managed by Kubernetes; the docker daemon has no idea what a pod is. You can only see the containers using the docker command, but then again, you won't be able to tell which container is associated with which pod.
Answered by Marc ABOUCHACRA
I have a Kubernetes cluster running on AWS EC2 instances with Weave as the networking (CNI) plugin. I have disabled docker networking (ip-masq and iptables) as it is managed by Weave (to avoid network conflicts).
I have deployed Jenkins on this cluster as a K8s pod, and this Jenkins uses the Jenkins Kubernetes plugin to spawn dynamic slaves based on the pod and container template I have defined. These slave containers have a docker client in them which connects to the host docker engine via docker.sock.
So when I run any job in Jenkins, it starts a slave, which clones a git repo and starts building the Dockerfile present inside the repo.
My sample dockerfile looks like this:
FROM abc:123
RUN yum update
So when the build reaches this step, it tries connecting to the Red Hat repo to update the local package index and fails there. To debug, I logged in to this container and tried to wget/curl some packages, and found that there is no internet connectivity in this container.
I suspect that while building, docker starts intermediate containers, and those containers are not managed by Weave, so they have no internet connectivity.
Need suggestions.
Related question: Internet connection inside Docker container in Kubernetes
OK, finally after a lot of struggle I found the solution.
Whenever K8s starts a pod, it starts a "pause" (sandbox) container whose role is basically to hold the network namespace for the pod's containers.
So while running docker build, if I pass that pause container's ID as the network, then my intermediate containers start getting internet connectivity via it.
So the change looks something like this:
docker build -t "some name" --network container:$(docker ps | grep $(hostname) | grep k8s_POD | cut -d" " -f1) -f infra/docker/Dockerfile .
Hope this helps. :D
You can try to attach Weave networking dynamically as part of your build job. It is definitely possible to change the active network of a container on the fly with Weave.
Maybe you will need an additional container running the Weave Docker API Proxy, or you can use a different way to communicate with the Weave network on your nodes.
So the main idea is just to attach the containers where you run your builds to the Kubernetes pod network, where you have external access.
Also, and maybe this would be better, you can create another Weave virtual network with access to the internet and attach your containers to it.
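If you go the dynamic-attach route, the Weave CLI can do it in one command per container (the CIDR below is an example address on the Weave subnet, and $BUILD_CID is a placeholder for the build container's ID):

```shell
# Attach an already-running container to the Weave network...
weave attach 10.32.0.100/12 $BUILD_CID

# ...and detach it again once the build is done.
weave detach 10.32.0.100/12 $BUILD_CID
```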
You're right - the docker build process runs in a different context, and Weave Net doesn't attach those automatically.
Even more complicated: Kubernetes connects via CNI, whereas Docker has its own plugin API. I believe it's possible to have both on a machine at the same time, but it's rather complicated.
Maybe look at some of the ways to build images without using Docker?
I'm new to Docker and Drone but I'm liking what I've found so far :)
Can you run Dind as a service on Tutum so that Drone can use it?
Drone CI is designed to run on a Docker host and to spin up whatever containers it needs.
It seems that drone itself can be run in a container but it must have access to the host docker daemon.
As far as I can see on Tutum you don't really have access to the docker daemon from the host.
It's possible to run drone in Dind (Docker in Docker).
But could I just run a container running Dind that I could point my drone container at via DOCKER_HOST, or am I completely misunderstanding the relationship between Drone and Docker?
It turns out you can and it all seems to work just fine :)
I have my "node", in Tutum speak, which has docker running on it; but it's Tutum's docker, which you can interact with to some extent using their API.
Inside that I have an off-the-shelf dind (Docker in Docker) container running as a daemon, with its listening port specified in the PORT environment variable (which wrapdocker picks up). That port is exposed (not publicly) using Tutum's interface.
Drone is configured from another off-the-shelf container (for GitHub etc.), and it's linked to the dind service so that Drone's DOCKER_HOST environment variable can be set to: {linked dind alias}:{port number}
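Stripped of the Tutum UI, the wiring is roughly equivalent to these two docker runs (image names here are assumptions: the classic jpetazzo/dind image, which is where wrapdocker comes from, and a stock Drone image; adjust to your stack):

```shell
# 1) The dind service: must be privileged; wrapdocker reads PORT and listens there.
docker run -d --privileged --name dind -e PORT=2375 jpetazzo/dind

# 2) Drone, linked to dind so the "dind" hostname resolves inside its container.
docker run -d --link dind:dind -e DOCKER_HOST=tcp://dind:2375 drone/drone
```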
...and it works :)
I feel like this should have been clear from the start but I just don't think that I believed it!