I have a dockerized application. When I run it through docker-compose up, it runs fine and its image appears in docker images. But when I try to start a minikube cluster with --vm-driver=none, the cluster gives an error and does not start. However, when I quit my docker application and start the minikube cluster again, the cluster starts successfully. But then I can't find the docker application image I just ran. Instead I find images like the ones below:
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 5 weeks ago 39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.0 0dab2435c100 5 weeks ago 122MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 6 months ago 97MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed1555 6 months ago 148MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.0 af20925d51a3 6 months ago 225MB
k8s.gcr.io/kube-scheduler-amd64 v1.10.0 704ba848e69a 6 months ago 50.4MB
Is this expected behavior? What is the reason if so?
minikube start --vm-driver=none
Update: I am working in an Ubuntu VM.
From the Minikube documentation:
minikube was designed to run Kubernetes within a dedicated VM, and assumes that it has complete control over the machine it is executing on. With the none driver, minikube and Kubernetes run in an environment with very limited isolation, which could result in:
Decreased security
Decreased reliability
Data loss
It is not expected behavior for Minikube to delete your docker images. I tried to reproduce your issue. I had a few docker images on my Ubuntu VM:
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest e445ab08b2be 13 days ago 126MB
busybox latest db8ee88ad75f 2 weeks ago 1.22MB
perl latest bbac4a88d400 3 weeks ago 889MB
alpine latest b7b28af77ffe 3 weeks ago 5.58MB
Later I tried to run minikube:
$ sudo minikube start --vm-driver=none
minikube v1.2.0 on linux (amd64)
Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
...
Verifying: apiserver proxy etcd scheduler controller dns
Done! kubectl is now configured to use "minikube"
I still have all docker images and minikube is working as expected.
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-4vd2q 1/1 Running 8 21d
coredns-5c98db65d4-xjx22 1/1 Running 8 21d
etcd-minikube 1/1 Running 5 21d
...
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest e445ab08b2be 13 days ago 126MB
busybox latest db8ee88ad75f 2 weeks ago 1.22MB
perl latest bbac4a88d400 3 weeks ago 889MB
alpine latest b7b28af77ffe 3 weeks ago 5.58MB
After exiting minikube, I still had all my docker images.
As you mentioned in the original thread, you used minikube start --vm-driver=none. If you run minikube start without sudo, you will receive an error like:
$ minikube start --vm-driver=none
minikube v1.2.0 on linux (amd64)
Unable to load config: open /home/$user/.minikube/profiles/minikube/config.json: permission denied
or, if you want to stop minikube without sudo:
$ minikube stop
Unable to stop VM: open /home/$user/.minikube/machines/minikube/config.json: permission denied
Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
https://github.com/kubernetes/minikube/issues/new
Please try using sudo with the minikube commands.
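For example, a minimal sequence with sudo (assuming the none driver on the same Ubuntu VM):
$ sudo minikube start --vm-driver=none
$ sudo minikube status
$ sudo minikube stop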
Let me know if that helped. If not please provide your error message.
Related
I have just installed Ubuntu 20.04 and installed docker using snap. I'm trying to run different docker images for hbase and rabbitmq, but each time I start a container, it immediately exits with status 126.
$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4d58720fce3a dajobe/hbase "/opt/hbase-server" 5 seconds ago Exited (126) 4 seconds ago hbase-docker
b7a84731a05b harisekhon/hbase "/entrypoint.sh" About a minute ago Exited (126) 59 seconds ago optimistic_goldwasser
294b95ef081a harisekhon/hbase "/entrypoint.sh" About a minute ago Exited (126) About a minute ago goofy_tu
I have tried everything, including docker inspect on the individual images, but nothing reveals why the containers exit immediately. Any suggestions?
EDIT
When I start the container, I run the following:
$ sudo bash start-hbase.sh
It gives exactly the output it should:
Starting HBase container
Container has ID 3c3e36e1e0fbc59aa0783a4c7f3cb8690781b2d04e8f842749d629a9c25e0604
Updating /etc/hosts to make hbase-docker point to (hbase-docker)
Now connect to hbase at localhost on the standard ports
ZK 2181, Thrift 9090, Master 16000, Region 16020
Or connect to host hbase-docker (in the container) on the same ports
For docker status:
$ id=3c3e36e1e0fbc59aa0783a4c7f3cb8690781b2d04e8f842749d629a9c25e0604
$ docker inspect $id
I think the issue might be due to permissions, because I checked the logs as suggested in the comments and got this error:
/bin/bash: /opt/hbase-server: Permission denied
Check whether the filesystem is mounted with the noexec option, either with the mount command or in /etc/fstab. If it is, remove the option and remount the filesystem (or reboot).
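A minimal sketch of that check (the /var/lib/docker path is an assumption; use whatever filesystem actually holds your Docker data):
$ mount | grep noexec
$ grep noexec /etc/fstab
$ sudo mount -o remount,exec /var/lib/docker   # remount with exec enabled if it was listed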
A quick solution is to restart the docker and network-manager services.
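On a systemd-based Ubuntu that would be, for example (unit names are assumptions and may differ by release; note the question above installed Docker from snap, in which case the snap itself is restarted instead):
$ sudo systemctl restart docker        # or: sudo snap restart docker
$ sudo systemctl restart NetworkManager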
I am using kops for my Kubernetes deployment.
I noticed that whenever an image with the same tag is entered in the deployment file, the system keeps using the previous image if imagePullPolicy is not set to Always.
Is there any way in which I can see all the cached images of a container in a Kubernetes environment?
Suppose I have an image test:56 currently running in a deployment and test:1 to test:55 were used previously; does Kubernetes cache those images? And if yes, where can they be found?
Comments on your environment:
I noticed that whenever an image with the same tag is entered in the deployment file, the system keeps using the previous image if imagePullPolicy is not set to Always.
Pre-pulled images can be used to preload certain images for speed, or as an alternative to authenticating to a private registry.
Docker will always cache all images that were used locally.
Since you are running Kubernetes on AWS with kops, keep in mind that if you have node health management (meaning a node will be replaced if it fails), the new node won't have the images cached from the old one, so it's always a good idea to store your images in a registry, such as your cloud provider's registry or a local registry.
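For example, a quick sketch of tagging and pushing such an image to a registry so a replacement node can pull it again (the registry host and repository path here are hypothetical):
$ docker tag test:56 registry.example.com/myteam/test:56
$ docker push registry.example.com/myteam/test:56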
Let's address your first question:
Is there any way in which I can see all the cached images of a container in a Kubernetes environment?
Yes, you can use docker images to list the images stored on each node.
Second question:
Suppose I have an image test:56 currently running in a deployment and test:1 to test:55 were used previously; does Kubernetes cache those images? And if yes, where can they be found?
I prepared an example for you:
I deployed several pods based on the official busybox image:
$ kubectl run busy284 --generator=run-pod/v1 --image=busybox:1.28.4
pod/busy284 created
$ kubectl run busy293 --generator=run-pod/v1 --image=busybox:1.29.3
pod/busy293 created
$ kubectl run busy28 --generator=run-pod/v1 --image=busybox:1.28
pod/busy28 created
$ kubectl run busy29 --generator=run-pod/v1 --image=busybox:1.29
pod/busy29 created
$ kubectl run busy30 --generator=run-pod/v1 --image=busybox:1.30
pod/busy30 created
$ kubectl run busybox --generator=run-pod/v1 --image=busybox
pod/busybox created
Now let's check the images stored locally with docker images:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.17.3 ae853e93800d 5 weeks ago 116MB
k8s.gcr.io/kube-controller-manager v1.17.3 b0f1517c1f4b 5 weeks ago 161MB
k8s.gcr.io/kube-apiserver v1.17.3 90d27391b780 5 weeks ago 171MB
k8s.gcr.io/kube-scheduler v1.17.3 d109c0821a2b 5 weeks ago 94.4MB
kubernetesui/dashboard v2.0.0-beta8 eb51a3597525 3 months ago 90.8MB
k8s.gcr.io/coredns 1.6.5 70f311871ae1 4 months ago 41.6MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 4 months ago 288MB
kubernetesui/metrics-scraper v1.0.2 3b08661dc379 4 months ago 40.1MB
busybox latest 83aa35aa1c79 10 days ago 1.22MB
busybox 1.30 64f5d945efcc 10 months ago 1.2MB
busybox 1.29 758ec7f3a1ee 15 months ago 1.15MB
busybox 1.29.3 758ec7f3a1ee 15 months ago 1.15MB
busybox 1.28 8c811b4aec35 22 months ago 1.15MB
busybox 1.28.4 8c811b4aec35 22 months ago 1.15MB
You can see all the pulled images listed.
It's good to clean old resources from your system from time to time using the docker system prune command to free up space on your server.
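For example (docker image prune -a is an extra, more aggressive step that also removes unused rather than just dangling images, so double-check before running it):
$ docker system prune
$ docker image prune -a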
If you have any doubt, let me know in the comments.
I was following this URL: How to use local docker images with Minikube?
I couldn't add a comment, so I thought of putting my question here:
On my laptop, I have Linux Mint OS. Details as below:
Mint version: 19
Code name: Tara
Package base: Ubuntu Bionic
Cinnamon (64-bit)
As per one of the answers on the above-referenced link:
I started minikube and checked pods and deployments
xxxxxxxxx:~$ pwd
/home/sj
xxxxxxxxxx:~$ minikube start
xxxxxxxxxx:~$ kubectl get pods
xxxxxxxxxx:~$ kubectl get deployments
I ran the docker images command:
xxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<username>/spring-docker-01 latest e10f88e1308d 6 days ago 640MB
openjdk 8 81f83aac57d6 4 weeks ago 624MB
mysql 5.7 563a026a1511 4 weeks ago 372MB
I ran the below command:
eval $(minikube docker-env)
Now when I check docker images, it looks like, as the README describes, the shell reuses the Docker daemon from Minikube after eval $(minikube docker-env).
xxxxxxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx alpine 33c5c6e11024 9 days ago 17.7MB
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 5 weeks ago 39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.0 0dab2435c100 5 weeks ago 122MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 6 months ago 97MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed1555 6 months ago 148MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.0 af20925d51a3 6 months ago 225MB
k8s.gcr.io/kube-scheduler-amd64 v1.10.0 704ba848e69a 6 months ago 50.4MB
k8s.gcr.io/etcd-amd64 3.1.12 52920ad46f5b 6 months ago 193MB
k8s.gcr.io/kube-addon-manager v8.6 9c16409588eb 7 months ago 78.4MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.8 c2ce1ffb51ed 9 months ago 41MB
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.8 6f7f2dc7fab5 9 months ago 42.2MB
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.8 80cc5ea4b547 9 months ago 50.5MB
k8s.gcr.io/pause-amd64 3.1 da86e6ba6ca1 9 months ago 742kB
gcr.io/k8s-minikube/storage-provisioner v1.8.1 4689081edb10 11 months ago 80.8MB
k8s.gcr.io/echoserver 1.4 a90209bb39e3 2 years ago 140MB
Note: notice that the docker images command lists different images before and after step 2, because the shell is now pointed at Minikube's Docker daemon.
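As a side note, to point the shell back at the host's Docker daemon afterwards you can undo the environment change (a sketch; the -u/--unset flag reverts minikube docker-env in the same shell):
$ eval $(minikube docker-env -u)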
As I didn't see the image that I wanted to put on minikube, I pulled it from my Docker Hub repository.
xxxxxxxxxxxxx:~$ docker pull <username>/spring-docker-01
Using default tag: latest
latest: Pulling from <username>/spring-docker-01
05d1a5232b46: Pull complete
5cee356eda6b: Pull complete
89d3385f0fd3: Pull complete
80ae6b477848: Pull complete
40624ba8b77e: Pull complete
8081dc39373d: Pull complete
8a4b3841871b: Pull complete
b919b8fd1620: Pull complete
2760538fe600: Pull complete
48e4bd518143: Pull complete
Digest: sha256:277e8f7cfffdfe782df86eb0cd0663823efc3f17bb5d4c164a149e6a59865e11
Status: Downloaded newer image for <username>/spring-docker-01:latest
I verified that I can see that image using the docker images command.
xxxxxxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<username>/spring-docker-01 latest e10f88e1308d 6 days ago 640MB
nginx alpine 33c5c6e11024 10 days ago 17.7MB
Then I tried to build the image as stated in the referenced link's steps.
xxxxxxxxxx:~$ docker build -t <username>/spring-docker-01 .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/sj/Dockerfile: no such file or directory
As the error states that the Dockerfile doesn't exist at that location, I am not sure where exactly I can find the Dockerfile for the image I pulled from Docker Hub.
It looks like I have to go to the location where the image was pulled and run the above-mentioned command from there. Please correct me if I'm wrong.
Below are the steps I will be doing after I fix the above-mentioned issue.
# Run in minikube
kubectl run hello-foo --image=myImage --image-pull-policy=Never
# Check that it's running
kubectl get pods
UPDATE-1
There is a mistake in the above steps.
Step 6 is not needed. The image has already been pulled from Docker Hub, so there is no need for the docker build command.
With that, I went ahead and followed the instructions mentioned by @aurelius in the response.
xxxxxxxxx:~$ kubectl run sdk-02 --image=<username>/spring-docker-01:latest --image-pull-policy=Never
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/sdk-02 created
Checked pods and deployments
xxxxxxxxx:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sdk-02-b6db97984-2znlt 1/1 Running 0 27s
xxxxxxxxx:~$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
sdk-02 1 1 1 1 35s
Then I exposed the deployment on port 8084, as I was already using other ports like 8080 through 8083.
xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8084
service/sdk-02 exposed
Then I verified that the service had started, checked that there was no issue on the Kubernetes dashboard, and then checked the URL.
xxxxxxxxx:~$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h
sdk-02 NodePort 10.100.125.120 <none> 8084:30362/TCP 13s
xxxxxxxxx:~$ minikube service sdk-02 --url
http://192.168.99.101:30362
When I tried to open the URL http://192.168.99.101:30362 in a browser, I got the message:
This site can't be reached
192.168.99.101 refused to connect.
Search Google for 192 168 101 30362
ERR_CONNECTION_REFUSED
So the question: is there any issue with the steps performed?
UPDATE-2
The issue was with the below step:
xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8084
service/sdk-02 exposed
Upon checking the Dockerfile of my image <username>/spring-docker-01:latest, I saw that I was exposing port 8083, with something like EXPOSE 8083.
Maybe that was causing the issue.
So I went ahead and changed the expose command:
xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8083
service/sdk-02 exposed
And then it started working.
If anyone has something to add to this, please feel free.
However, I am still not sure where exactly I can find the Dockerfile for the image I pulled from Docker Hub.
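One thing worth noting (an aside, not from the original thread): a pulled image does not ship with its Dockerfile, but you can see roughly how it was built from its layer history, for example:
$ docker history --no-trunc <username>/spring-docker-01:latest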
docker build does not know what you mean by your command, because the -t flag requires a specific format:
--tag , -t Name and optionally a tag in the βname:tagβ format
xxxxxxxxxx:~/Downloads$ docker build -t shivnilesh1109/spring-docker-01 .
So the proper command here should be:
docker build -t shivnilesh1109/spring-docker-01:v1(1) .(2)
(1) the desired name of your image:tag
(2) the directory containing your Dockerfile
After you proceed to minikube deployment, it will be enough just to run:
kubectl run <desired name of deployment/pod> --image=<name of the image with tag> --image-pull-policy=Never
If this does not fix your issue, try adding the path to the Dockerfile manually. I tested this on my machine: the error stopped after tagging the image properly, and it also worked with the full path to the Dockerfile; otherwise I got the same error as you.
For your UPDATE-2 question, here is some help understanding the port exposed in the Dockerfile versus the ports in the kubectl expose command.
Dockerfile:
The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published.
For more details, see EXPOSE.
Kubectl expose:
--port: The port that the service should serve on. Copied from the resource being exposed, if unspecified
--target-port: Name or number for the port on the container that the service should direct traffic to. Optional.
For more details, see kubectl expose.
So I think you should add the --target-port parameter with the port that you exposed in the Dockerfile; then the port mapping will be correct.
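Applied to this thread's deployment, that would look something like the following (port numbers taken from the updates above; the existing service has to be deleted first, since kubectl expose will not overwrite it):
$ kubectl delete service sdk-02
$ kubectl expose deployment sdk-02 --type=NodePort --port=8084 --target-port=8083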
You can just create a Dockerfile with this content:
FROM shivnilesh1109/spring-docker-01
Then run:
docker build -t my-spring-docker-01 .
Try adding your local docker image to minikube's cache, like so:
minikube cache add docker-image-name:latesttag
Then set imagePullPolicy: Never in the YAML file.
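A minimal sketch of what that looks like in a pod manifest, applied inline (the pod name is hypothetical and the image placeholder matches the command above):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: docker-image-name:latesttag
    imagePullPolicy: Never
EOF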
I currently have 8 containers across 4 different host machines in my docker setup.
ubuntu@ip-192-168-1-8:~$ docker service ls
ID NAME MODE REPLICAS IMAGE
yyyytxxxxx8 mycontainers global 8/8 myapplications:latest
Running a ps -a on the service yields the following.
ubuntu@ip-192-168-1-8:~$ docker service ps -a yy
NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
mycontainers1 myapplications:latest ip-192-168-1-5 Running Running 3 hours ago
\_ mycontainers2 myapplications:latest ip-192-168-1-6 Running Running 3 hours ago
\_ mycontainers3 myapplications:latest ip-192-168-1-7 Running Running 3 hours ago
\_ mycontainers4 myapplications:latest ip-192-168-1-8 Running Running 3 hours ago
\_ mycontainers1 myapplications:latest ip-192-168-1-5 Running Running 3 hours ago
\_ mycontainers2 myapplications:latest ip-192-168-1-6 Running Running 3 hours ago
\_ mycontainers3 myapplications:latest ip-192-168-1-7 Running Running 3 hours ago
\_ mycontainers4 myapplications:latest ip-192-168-1-8 Running Running 3 hours ago
My question is: how can I restart all the containers using the service ID? I don't want to manually log into every node and execute a restart.
In the latest stable version of Docker, 1.12.x, it is possible to restart containers by updating the service configuration. In Docker 1.13.0, which will be released soon, you can restart the containers even if the service settings have not changed, by passing the --force flag. If you do not mind using the 1.13.0 RC4, you can do it now.
$ docker service update --force mycontainers
Update: Docker 1.13.0 has been released.
https://github.com/docker/docker/releases/tag/v1.13.0
Before Docker 1.13, I found that scaling the service down to 0, waiting for shutdown, and then scaling it back up to the previous level works:
docker service scale mycontainers=0
# wait
docker service scale mycontainers=8
Updating an existing service causes swarm to recreate all of its containers. For example, you can simply update a property of the service to achieve a restart.
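For example, a sketch that forces a rolling restart by changing a harmless property (the environment variable name here is hypothetical):
$ docker service update --env-add RESTART_TRIGGER=$(date +%s) mycontainers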
I noticed that Docker appears to be using a large amount of disk space. I can see the directory /Users/me/.docker/machine/machines/default is 27.4GB
I recently cleaned up all of the images and containers I wasn't using. Now, when I run docker ps I see that no containers are running.
~ $ docker ps
I can also see that I have 2 containers available.
~ $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
42332s42d3 cds-abm "/bin/bash" 2 weeks ago Exited (130) 2 weeks ago evil_shockley
9ssd64ee41 tes-abm "/bin/bash" 2 weeks ago Exited (130) 2 weeks ago lonely_brattain
I can then see I have 3 images.
~ $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ghr/get latest 6708fffdd4dfa 2 weeks ago 2.428 GB
<none> <none> 96c5974ddse18 2 weeks ago 2.428 GB
fdbh/ere latest bf1da53766b1 2 weeks ago 2.225 GB
How can these be taking up nearly 30GB?
It is because you are not removing the volumes created by the containers when you remove them. In the future, use -v when you remove containers:
docker rm -v <container-id>
Regarding cleaning up the space, you have to SSH into the docker-machine VM and remove all the created volumes. To do so:
docker-machine ssh default
sudo -i   # you may not otherwise have permission to enter the volumes directory
cd /var/lib/docker/volumes
rm -rf *
Make sure none of your containers are currently running. Also, make sure that you don't need any of these volumes for later use (like DB containers).
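On Docker 1.13 and newer there is also a gentler option that removes only the volumes no container references (a sketch, run from inside the docker-machine VM):
$ docker-machine ssh default
$ docker volume prune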