Use local docker image with minikube - docker

I was following this URL: How to use local docker images with Minikube?
I couldn't add a comment, so thought of putting my question here:
On my laptop, I have Linux Mint OS. Details as below:
Mint version 19,
Code name : Tara,
PackageBase : Ubuntu Bionic
Cinnamon (64-bit)
As per one of the answers on the above-referenced link:
I started minikube and checked pods and deployments
xxxxxxxxx:~$ pwd
/home/sj
xxxxxxxxxx:~$ minikube start
xxxxxxxxxx:~$ kubectl get pods
xxxxxxxxxx:~$ kubectl get deployments
I ran command docker images
xxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<username>/spring-docker-01 latest e10f88e1308d 6 days ago 640MB
openjdk 8 81f83aac57d6 4 weeks ago 624MB
mysql 5.7 563a026a1511 4 weeks ago 372MB
I ran below command:
eval $(minikube docker-env)
Now when I check docker images, it looks like, as the README describes, docker reuses the Docker daemon from minikube after eval $(minikube docker-env).
xxxxxxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx alpine 33c5c6e11024 9 days ago 17.7MB
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 5 weeks ago 39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.0 0dab2435c100 5 weeks ago 122MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 6 months ago 97MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed1555 6 months ago 148MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.0 af20925d51a3 6 months ago 225MB
k8s.gcr.io/kube-scheduler-amd64 v1.10.0 704ba848e69a 6 months ago 50.4MB
k8s.gcr.io/etcd-amd64 3.1.12 52920ad46f5b 6 months ago 193MB
k8s.gcr.io/kube-addon-manager v8.6 9c16409588eb 7 months ago 78.4MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.8 c2ce1ffb51ed 9 months ago 41MB
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.8 6f7f2dc7fab5 9 months ago 42.2MB
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.8 80cc5ea4b547 9 months ago 50.5MB
k8s.gcr.io/pause-amd64 3.1 da86e6ba6ca1 9 months ago 742kB
gcr.io/k8s-minikube/storage-provisioner v1.8.1 4689081edb10 11 months ago 80.8MB
k8s.gcr.io/echoserver 1.4 a90209bb39e3 2 years ago 140MB
Note: notice that the docker images command lists different images before and after step 2.
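The daemon switch in step 2 can be sketched as below. The --unset flag for pointing docker back at the host daemon is an assumption based on the minikube CLI; the guard makes this a no-op on machines without minikube.

```shell
# Sketch: switch the docker CLI between the host daemon and minikube's daemon.
# Guarded so it does nothing on machines without minikube installed.
if command -v minikube >/dev/null 2>&1; then
  eval "$(minikube docker-env)"          # docker now talks to the daemon inside minikube
  docker images                          # lists the k8s.gcr.io/* images from the VM
  eval "$(minikube docker-env --unset)"  # point docker back at the host daemon
fi
DONE="yes"
```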
As I didn't see the image that I wanted to put on minikube, I pulled it from my docker hub.
xxxxxxxxxxxxx:~$ docker pull <username>/spring-docker-01
Using default tag: latest
latest: Pulling from <username>/spring-docker-01
05d1a5232b46: Pull complete
5cee356eda6b: Pull complete
89d3385f0fd3: Pull complete
80ae6b477848: Pull complete
40624ba8b77e: Pull complete
8081dc39373d: Pull complete
8a4b3841871b: Pull complete
b919b8fd1620: Pull complete
2760538fe600: Pull complete
48e4bd518143: Pull complete
Digest: sha256:277e8f7cfffdfe782df86eb0cd0663823efc3f17bb5d4c164a149e6a59865e11
Status: Downloaded newer image for <username>/spring-docker-01:latest
Verified if I can see that image using "docker images" command.
xxxxxxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<username>/spring-docker-01 latest e10f88e1308d 6 days ago 640MB
nginx alpine 33c5c6e11024 10 days ago 17.7MB
Then I tried to build the image as stated in the referenced link's steps.
xxxxxxxxxx:~$ docker build -t <username>/spring-docker-01 .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/sj/Dockerfile: no such file or directory
As the error states that the Dockerfile doesn't exist at that location, I am not sure where exactly I can find the Dockerfile for the image I pulled from Docker Hub.
It looks like I have to go to the location where the image was pulled and run the above-mentioned command from there. Please correct me if I am wrong.
Below are the steps, I will be doing after I fix the above-mentioned issue.
# Run in minikube
kubectl run hello-foo --image=myImage --image-pull-policy=Never
# Check that it's running
kubectl get pods
UPDATE-1
There is a mistake in the above steps.
Step 6 is not needed. The image has already been pulled from Docker Hub, so there is no need for the docker build command.
With that, I went ahead and followed the instructions mentioned by @aurelius in the response.
xxxxxxxxx:~$ kubectl run sdk-02 --image=<username>/spring-docker-01:latest --image-pull-policy=Never
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/sdk-02 created
Checked pods and deployments
xxxxxxxxx:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sdk-02-b6db97984-2znlt 1/1 Running 0 27s
xxxxxxxxx:~$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
sdk-02 1 1 1 1 35s
Then I exposed the deployment on port 8084, as I was already using other ports like 8080 through 8083.
xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8084
service/sdk-02 exposed
Then I verified that the service had started, checked that there were no issues on the Kubernetes dashboard, and then checked the URL:
xxxxxxxxx:~$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h
sdk-02 NodePort 10.100.125.120 <none> 8084:30362/TCP 13s
xxxxxxxxx:~$ minikube service sdk-02 --url
http://192.168.99.101:30362
When I tried to open URL: http://192.168.99.101:30362 in browser I got message:
This site can’t be reached
192.168.99.101 refused to connect.
Search Google for 192 168 101 30362
ERR_CONNECTION_REFUSED
So the question: is there any issue with the steps performed?
UPDATE-2
The issue was with below step:
xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8084
service/sdk-02 exposed
Upon checking the Dockerfile of my image <username>/spring-docker-01:latest, I saw that it was exposing port 8083 (EXPOSE 8083).
Maybe that was causing the issue.
So I went ahead and changed expose command:
xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8083
service/sdk-02 exposed
And then it started working.
If anyone has something to add to this, please feel free.
However, I am still not sure where exactly I can see the Dockerfile for the image I pulled from Docker Hub.
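A pulled image never ships with its Dockerfile as a file on disk, but the layer metadata records the instructions that built it. A guarded sketch; the image name is the placeholder from the question, and the inspect call is how the EXPOSE 8083 from UPDATE-2 could have been discovered:

```shell
# Sketch: inspect a pulled image's build instructions and exposed ports.
# Guarded so it does nothing on machines without docker installed.
IMAGE='<username>/spring-docker-01:latest'
if command -v docker >/dev/null 2>&1; then
  docker history --no-trunc "$IMAGE" || true                             # one row per build instruction
  docker inspect --format '{{json .Config.ExposedPorts}}' "$IMAGE" || true  # e.g. {"8083/tcp":{}}
fi
CHECKED="yes"
```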

docker build does not know what you mean by your command, because the -t flag requires a specific format:
--tag , -t Name and optionally a tag in the β€˜name:tag’ format
xxxxxxxxxx:~/Downloads$ docker build -t shivnilesh1109/spring-docker-01 .
So the proper command here should be:
docker build -t shivnilesh1109/spring-docker-01:v1 .
Here shivnilesh1109/spring-docker-01:v1 is the desired name:tag of your image, and . is the directory containing your Dockerfile.
After you proceed to minikube deployment, it will be enough just to run:
kubectl run <desired name of deployment/pod> --image=<name of the image with tag> --image-pull-policy=Never
If this does not fix your issue, try adding the path to the Dockerfile manually. I've tested this on my machine: the error stopped after tagging the image properly, and it also worked with the full path to the Dockerfile; otherwise I got the same error as you.

For your UPDATE-2 question: this should help you understand the relationship between the port exposed in the Dockerfile and the port in the kubectl expose command.
Dockerfile:
The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published.
For more details, see EXPOSE.
Kubectl expose:
--port: The port that the service should serve on. Copied from the resource being exposed, if unspecified
--target-port: Name or number for the port on the container that the service should direct traffic to. Optional.
For more details, see kubectl expose.
So I think you should add the --target-port parameter set to the port that you exposed in the Dockerfile. Then the port mapping will be correct.
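With both flags, the service could have kept 8084 on the outside while routing to the container's 8083; a guarded sketch using the deployment name from the question, a no-op without a cluster:

```shell
# Sketch: expose the service on 8084 externally while directing traffic
# to container port 8083 (the port from the Dockerfile's EXPOSE).
if command -v kubectl >/dev/null 2>&1; then
  kubectl expose deployment sdk-02 --type=NodePort --port=8084 --target-port=8083 || true
fi
EXPOSED="yes"
```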

You can just create a Dockerfile with this content:
FROM shivnilesh1109/spring-docker-01
Then run:
docker build -t my-spring-docker-01 .

Try adding your local docker image to minikube's cache, like so:
minikube cache add docker-image-name:tag
Then set imagePullPolicy: Never in the YAML file.
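In a manifest, that setting looks roughly like this; a minimal sketch assuming a deployment named sdk-02 and the image from the question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sdk-02
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sdk-02
  template:
    metadata:
      labels:
        app: sdk-02
    spec:
      containers:
      - name: sdk-02
        image: <username>/spring-docker-01:latest
        imagePullPolicy: Never   # use the image already cached in minikube
```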

Related

docker won't let push image to google cloud despite logged in as owner and authenticated as credential helper

As you can see, I tagged my most recent image to belong to the Google Cloud registry:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/chess-king-council/council-kings latest e82b2f44af48 4 hours ago 1.05GB
<none> <none> 63a6c4d89d29 4 hours ago 1.05GB
<none> <none> b4637ec645fa 4 hours ago 1.05GB
<none> <none> 466bb4fd8026 4 hours ago 332MB
ubuntu 20.04 4e2eef94cd6b 2 weeks ago 73.9MB
ubuntu latest 4e2eef94cd6b 2 weeks ago 73.9MB
python 3.8.2 4f7cd4269fa9 4 months ago 934MB
node 13.12.0-alpine 483343d6c5f5 5 months ago 114MB
when I run the command:
docker push gcr.io/chess-king-council/council-kings
It gives the message:
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed the steps at the link it gave to authenticate as a Docker credential helper, but nothing changed.
Also, I am logged in as the owner of the project. Why won't Google Cloud let me push this Docker image? Am I missing something obvious?
This error can happen when the SDK is missing from the PATH; it is not necessarily a permissions issue. Check that your ~/.bash_profile includes the location of the SDK in $PATH, as mentioned here in step #5.
example:
export PATH="/usr/local/yourfolder/google-cloud-sdk/latest/google-cloud-sdk/bin:$PATH"
I have lost many hours with this problem. Like you, I was sure that I had followed all the steps in the gcloud documentation. I was also able to run docker without root because I did the proper setup after installing it. I checked the permissions in the gcloud project and image bucket; all correct.
In the end it was the most stupid thing: I had installed docker through snap instead of apt, and apparently the snap installation stores the config.json file in a different path than $HOME/.docker/config.json. I noticed this by luck while simply running docker:
$ docker
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
Options:
---> --config string Location of client config files (default "<PATH_TO_DOCKER_FOLDER_WITHIN_SNAP>")
The default path was not $HOME/.docker, but one within the snap installation. So when I was running gcloud auth configure-docker, the modified config was the one in $HOME/.docker/config.json, but when trying to push the image docker was loading the config stored within the snap path, that didn't have gcloud configured as a credential helper.
The solution after realizing this was straightforward: removing the snap installation and using apt like I should have done from the beginning fixed it. Hopefully this saves someone some pain...
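To check which client config the docker CLI actually reads, a small guarded sketch (DOCKER_CONFIG, when set, overrides the default ~/.docker directory; the credHelpers key is where gcloud auth configure-docker registers itself):

```shell
# Sketch: locate the docker client config and look for a credential-helper entry.
CONFIG_DIR="${DOCKER_CONFIG:-$HOME/.docker}"
CONFIG_PATH="$CONFIG_DIR/config.json"
if [ -f "$CONFIG_PATH" ]; then
  grep -c credHelpers "$CONFIG_PATH" || echo "no credHelpers configured"
fi
```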

How can we see cached images in kubernetes?

I am using kops as kubernetes deployment.
I noticed that whenever an image with the same tag is entered in the deployment file, the system takes the previous image if imagePullPolicy is not set to Always.
Is there any way in which I can see all the cached images of a container in kubernetes environment ?
Like suppose I have an image test:56 currently running in a deployment and test:1 to test:55 were used previously, so does kubernetes cache those images ? and if yes where can those be found ?
Comments on your environment:
I noticed that whenever an image with the same tag is entered in the deployment file, the system takes the previous image if imagePullPolicy is not set to Always.
A pre-pulled image can be used to preload certain images for speed, or as an alternative to authenticating to a private registry.
Docker will always cache all images that were used locally.
Since you are using kops, keep in mind that if you have node health management (meaning a failed node will be replaced), the new node won't have the images cached from the old one, so it's always a good idea to store your images in a registry, such as your cloud provider's registry or a local registry.
Let's address your first question:
Is there any way in which I can see all the cached images of a container in kubernetes environment ?
Yes, you can use docker images to list the images stored in your environment.
Second question:
Like suppose I have an image test:56 currently running in a deployment and test:1 to test:55 were used previously, so does Kubernetes cache those images ? and if yes where can those be found ?
I prepared an example for you:
I deployed several pods based on the official busybox image:
$ kubectl run busy284 --generator=run-pod/v1 --image=busybox:1.28.4
pod/busy284 created
$ kubectl run busy293 --generator=run-pod/v1 --image=busybox:1.29.3
pod/busy293 created
$ kubectl run busy28 --generator=run-pod/v1 --image=busybox:1.28
pod/busy28 created
$ kubectl run busy29 --generator=run-pod/v1 --image=busybox:1.29
pod/busy29 created
$ kubectl run busy30 --generator=run-pod/v1 --image=busybox:1.30
pod/busy30 created
$ kubectl run busybox --generator=run-pod/v1 --image=busybox
pod/busybox created
Now let's check the images stored locally with docker images:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.17.3 ae853e93800d 5 weeks ago 116MB
k8s.gcr.io/kube-controller-manager v1.17.3 b0f1517c1f4b 5 weeks ago 161MB
k8s.gcr.io/kube-apiserver v1.17.3 90d27391b780 5 weeks ago 171MB
k8s.gcr.io/kube-scheduler v1.17.3 d109c0821a2b 5 weeks ago 94.4MB
kubernetesui/dashboard v2.0.0-beta8 eb51a3597525 3 months ago 90.8MB
k8s.gcr.io/coredns 1.6.5 70f311871ae1 4 months ago 41.6MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 4 months ago 288MB
kubernetesui/metrics-scraper v1.0.2 3b08661dc379 4 months ago 40.1MB
busybox latest 83aa35aa1c79 10 days ago 1.22MB
busybox 1.30 64f5d945efcc 10 months ago 1.2MB
busybox 1.29 758ec7f3a1ee 15 months ago 1.15MB
busybox 1.29.3 758ec7f3a1ee 15 months ago 1.15MB
busybox 1.28 8c811b4aec35 22 months ago 1.15MB
busybox 1.28.4 8c811b4aec35 22 months ago 1.15MB
You can see all the pulled images listed.
It's good practice to clean old resources from time to time with docker system prune to free space on your server.
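To narrow the listing to one repository, docker images accepts a repository argument and a Go-template --format; a guarded sketch, a no-op on machines without docker:

```shell
# Sketch: list only the cached busybox images, one name:tag per line.
if command -v docker >/dev/null 2>&1; then
  docker images busybox --format '{{.Repository}}:{{.Tag}}'
fi
LISTED="yes"
```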
If you have any doubt, let me know in the comments.

Starting minikube in Ubuntu deletes local docker images

I have a dockerized application. When I run it through docker-compose up, it runs fine and appears in docker images. But when I try to start a minikube cluster with --vm-driver=none, the cluster gives an error and does not start. However, when I quit my docker application and start the minikube cluster again, it starts successfully. But then I couldn't find the image of the docker application I had just run. Instead I find images like the ones below:
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 5 weeks ago 39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.0 0dab2435c100 5 weeks ago 122MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 6 months ago 97MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed1555 6 months ago 148MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.0 af20925d51a3 6 months ago 225MB
k8s.gcr.io/kube-scheduler-amd64 v1.10.0 704ba848e69a 6 months ago 50.4MB
Is this expected behavior? What is the reason if so?
minikube start --vm-driver=none
Update: I am working in an Ubuntu VM.
From the Minikube documentation:
minikube was designed to run Kubernetes within a dedicated VM, and assumes that it has complete control over the machine it is executing on. With the none driver, minikube and Kubernetes run in an environment with very limited isolation, which could result in:
Decreased security
Decreased reliability
Data loss
It is not expected behavior that minikube will delete your docker images. I have tried to reproduce your issue. I had a few docker images on my Ubuntu VM:
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest e445ab08b2be 13 days ago 126MB
busybox latest db8ee88ad75f 2 weeks ago 1.22MB
perl latest bbac4a88d400 3 weeks ago 889MB
alpine latest b7b28af77ffe 3 weeks ago 5.58MB
Then I tried to run minikube:
$ sudo minikube start --vm-driver=none
πŸ˜„ minikube v1.2.0 on linux (amd64)
πŸ’‘ Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
...
βŒ› Verifying: apiserver proxy etcd scheduler controller dns
πŸ„ Done! kubectl is now configured to use "minikube"
I still have all docker images and minikube is working as expected.
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-4vd2q 1/1 Running 8 21d
coredns-5c98db65d4-xjx22 1/1 Running 8 21d
etcd-minikube 1/1 Running 5 21d
...
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest e445ab08b2be 13 days ago 126MB
busybox latest db8ee88ad75f 2 weeks ago 1.22MB
perl latest bbac4a88d400 3 weeks ago 889MB
alpine latest b7b28af77ffe 3 weeks ago 5.58MB
After exiting minikube, I still had all my docker images.
As you mentioned in the original thread, you used minikube start --vm-driver=none. If you use minikube start without sudo, you will receive an error like:
$ minikube start --vm-driver=none
πŸ˜„ minikube v1.2.0 on linux (amd64)
πŸ’£ Unable to load config: open /home/$user/.minikube/profiles/minikube/config.json: permission denied
Or, if you want to stop minikube without sudo:
$ minikube stop
πŸ’£ Unable to stop VM: open /home/$user/.minikube/machines/minikube/config.json: permission denied
😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
πŸ‘‰ https://github.com/kubernetes/minikube/issues/new
Please try using sudo with minikube commands.
Let me know if that helped; if not, please provide your error message.

Docker getting started 3 - Docker Compose unable to connect - connection refused

I am following the Docker Getting Started tutorials and I am stuck at part 3 (Docker Compose). I am working on a fresh Ubuntu 16.04 installation, following the tutorial from part 1 and part 2, except for logging in to a Docker account and pushing the newly created image to a remote repository.
The Dockerfile and Python file are the same as in part 2, and the .yml file is the same as in part 3 (I copy-pasted them with vim).
I can apparently deploy a stack with the compose file just fine. However, when I get to the part where I am supposed to send a request via curl, I get the following response:
curl: (7) Failed to connect to localhost port 80: Connection refused
This is the output of docker service ls right after returning from docker stack deploy:
ID NAME MODE REPLICAS IMAGE PORTS
m3ux2u3i6cpv getstartedlab_web replicated 1/5 username/repo:tag *:80->80/tcp
This is the output of docker container ls (fired right after docker service ls):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bd870fcb64f4 username/repo:tag "python app.py" 7 seconds ago Up 1 second 80/tcp getstartedlab_web.2.p9v9p34kztmu8rvht3ndg2xtb
db73404d495f username/repo:tag "python app.py" 7 seconds ago Up 1 second 80/tcp getstartedlab_web.1.z3o2t10oiidtzofsonv9cwcvd
While docker ps returns no lines.
And this is the output of docker network ls:
NETWORK ID NAME DRIVER SCOPE
5776b070996c bridge bridge local
47549d9b2e88 docker_gwbridge bridge local
59xa0454g133 getstartedlab_webnet overlay swarm
e27f62ede27d host host local
ramvt1h8ueg7 ingress overlay swarm
f0fe862c5dcc none null local
I can still run the image as a single container and get the expected result, i.e. I can connect to it via browser or curl and get an error message related to Redis; but I do not understand why it doesn't work when I deploy a stack.
As far as Ubuntu firewall settings are concerned, I have tinkered with none since the installation, and as for Docker and Docker Compose, I have only followed the steps in the "getting started" tutorials in chapters 1-to-3, including downloading Docker Compose binaries and changing permissions with chmod as described. I also added my user to the Docker group so I don't have to sudo every time I need to run a command. I am not behind a proxy server (I am running all tests in local) and I haven't tinkered with any defaults either.
I think this may be a duplicate of this question, though it hasn't been answered nor commented on yet. It is not a duplicate of this question, as I am following a different tutorial.
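For stacks stuck at partial replicas like the 1/5 above, these two commands usually show why tasks are failing (image pull errors, crash loops); a guarded sketch using the service name from the transcript, a no-op without a swarm:

```shell
# Sketch: inspect per-task state and logs of a struggling swarm service.
if command -v docker >/dev/null 2>&1; then
  docker service ps getstartedlab_web --no-trunc 2>/dev/null || true  # task states and error column
  docker service logs getstartedlab_web 2>/dev/null || true           # aggregated container logs
fi
DEBUGGED="yes"
```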
UPDATE:
As it turned out, I was using EXACTLY the same docker-compose.yml file. The key issue was a name mismatch with the docker image, as is visible in the docker service ls output. I thank Janshair Khan for the inspiration. What is strange is that there was a username/repo image apparently created 9 months ago:
REPOSITORY TAG IMAGE ID CREATED SIZE
<my getting-started image>
python 2.7-slim 4fd30fc83117 7 weeks ago 138MB
hello-world latest f2a91732366c 2 months ago 1.85kB
username/repo <none> c7f5ee4d4030 9 months ago 182MB

Deploying latest docker containers with ansible

I have been trying to get Ansible to deploy my containers. I have been successful with the following config, but the problem I am running into is that it will not start the most recent version of my container.
- name: Deploy
hosts: staging
tasks:
- name: Install docker-py
pip: name=docker-py
- name: Pull latest container
raw: docker pull org/proj:latest
- name: Stop container
docker:
image="org/proj:latest"
name=proj-rails
state=stopped
- name: Deploy container
docker:
image="org/proj:latest"
name=proj-rails
ports=80:80
state=running
I can build and push new containers to Docker Hub and pull them down. On the server, docker images lists the latest images:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
org/proj latest 9f0de94df28c 2 hours ago 675.5 MB
<none> <none> 15f4bbbeebca 2 days ago 670.6 MB
<none> <none> f7958247ed52 2 days ago 670.3 MB
My problem is that ansible keeps starting 15f4bbbeebca (which is not the latest container). Can anyone help me figure out what is wrong?
Shot in the dark, but that syntax looks odd to me. Shouldn't it be
- name: Stop container
docker:
image: org/proj:latest
name: proj-rails
state: stopped
?
Also, why are you using raw rather than command (or shell)?
Is it possible that the issue arises from the misunderstood latest tag? See https://medium.com/@mccode/the-misunderstood-docker-tag-latest-af3babfd6375#.hlzfep7mn
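With the newer community.docker.docker_container module (an assumption; the playbook above uses the older docker module), a fresh pull can be forced explicitly; a hedged sketch of the deploy task:

```yaml
# Sketch, assuming the community.docker collection is installed.
- name: Deploy container, always pulling the newest org/proj:latest
  community.docker.docker_container:
    name: proj-rails
    image: org/proj:latest
    pull: true          # re-pull even if a local image with this tag exists
    recreate: true      # replace the running container so the new image is used
    ports:
      - "80:80"
    state: started
```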
I fixed my issues by writing my own deployment solution and then open sourced it, https://github.com/mrinterweb/freighter
I designed freighter to be easily configurable and flexible at the same time. I hope someone finds it to be helpful. I am not finished with freighter yet, but it is usable at this point.
I realize that this does not answer the question of how to deploy docker with ansible, but I gave up on deploying containers with ansible. I just did not find the ansible docker module mature enough, documented enough, or configurable enough for my needs.
