Why can't I reach this helm chart defined app? (in Minikube) - docker

I used helm create helloworld-chart to create an application using a local docker image I created. I think the issue is that I have the ports all messed up.
DOCKER PIECES
--------------------------
Docker File
FROM busybox
ADD index.html /www/index.html
EXPOSE 8008
CMD httpd -p 8008 -h /www; tail -f /dev/null
(I also have an index.html file in the same directory as my Dockerfile)
Create Docker Image (and publish locally)
docker build -t hello-world .
I then ran this with docker run -p 8080:8008 hello-world and verified I was able to reach it at localhost:8080. (I then stopped that docker container.)
I also verified this image was in docker locally with docker image ls and got the output:
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
hello-world   latest   8640a285e98e   20 minutes ago   1.23MB
HELM PIECES
--------------------------
Created a helm chart via helm create helloworld-chart.
Edited the files:
values.yaml
# ...elided because left the same as default...
image:
  repository: hello-world
  tag: latest
  pullPolicy: IfNotPresent
# ...elided because left the same as default...
service:
  name: hello-world
  type: NodePort # Chose this because Minikube doesn't have a LoadBalancer installed
  externalPort: 30007
  internalPort: 8008
  port: 80
service.yaml
# ...elided because left the same as default...
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.internalPort }}
      nodePort: {{ .Values.service.externalPort }}
deployment.yaml
# ...elided because left the same as default...
spec:
  # ...elided because left the same as default...
  containers:
    ports:
      - name: http
        containerPort: {{ .Values.service.internalPort }}
        protocol: TCP
I verified this "looked" correct with both helm lint helloworld-chart and helm template ./helloworld-chart
HELM AND MINIKUBE COMMANDS
--------------------------
# Packaging my helm
helm package helloworld-chart
# Installing into Kubernetes (Minikube)
helm install helloworld helloworld-chart-0.1.0.tgz
# Getting an external IP
minikube service helloworld-helloworld-chart
When I do that, it gives me an external IP like http://172.23.13.145:30007 and opens it in a browser, but it just says the site cannot be reached. What do I have mismatched?
UPDATE/MORE INFO
---------------------------------------
When I check the pod, it's in a CrashLoopBackOff state. However, I see nothing in the logs:
kubectl logs -f helloworld-helloworld-chart-6c886d885b-grfbc
Logs:
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
I'm not sure why it's exiting.

The issue was that Minikube was actually looking in the public Docker registry and finding something else also called hello-world. It was not finding my docker image, because "local" to Minikube is not local to the host computer's Docker: Minikube has its own Docker daemon running internally.
You have to add your image to Minikube's local cache: minikube cache add hello-world:latest.
You also need to change the pull policy: imagePullPolicy: Never.
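As a sketch, the whole fix looks like this, using the chart and image names from the question. An alternative to the cache command is to build the image directly against Minikube's internal Docker daemon; both routes are shown, and either way the pull policy must stop Kubernetes from consulting a remote registry:

```shell
# Option 1: copy the host-built image into Minikube's cache
minikube cache add hello-world:latest

# Option 2: point the docker CLI at Minikube's internal daemon and rebuild there
eval $(minikube docker-env)
docker build -t hello-world .

# In values.yaml, set the pull policy so no registry is contacted:
#   image:
#     pullPolicy: Never

# Then reinstall the chart
helm upgrade --install helloworld helloworld-chart-0.1.0.tgz
```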

Related

Paperspace: Docker pull private image

I'd like to pull a private image from Docker Hub in a Paperspace deployment.
It uses a YAML file, in which I thought the command section could overwrite the default pull command.
This is the yaml file:
image: image_name/ref
port: xxxx
command:
  - docker login -u 'docker_user' -p 'docker_password'
  - docker pull image_name/ref:latest
resources:
  replicas: 1
  instanceType: C4
I have the following error:
Node State: errored
Error: An error occurred when pulling image:[image_name/ref] from deployment
Note: the commands
- docker login -u 'docker_user' -p 'docker_password'
- docker pull image_name/ref:latest
work from my PC.
command in docker-compose.yaml is used to:
overrides the default command declared by the container image
It does not overwrite the default pull command, as you thought.
So, to let docker-compose pull a private docker image, you need to do an initial docker login before running compose; for details, see docker login.
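A minimal sketch of that order of operations (registry user and image names are the placeholders from the question; pw.txt is a hypothetical file holding the password):

```shell
# Log in once on the machine that runs compose; the credential is cached
# in ~/.docker/config.json and used by subsequent pulls
docker login -u docker_user --password-stdin < pw.txt

# Now a plain pull (or compose itself) can fetch the private image
docker pull image_name/ref:latest
docker compose up -d
```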
In fact, there is a container menu to specify a user and password (see team settings in the top right menu -> containers).
Then you have to use the option "containerRegistry" to pull the image properly:
image: image_name/ref
containerRegistry: name_in_paperspace
port: 5000
resources:
  replicas: 1
  instanceType: C4
Everything is explained in this video:
https://www.youtube.com/watch?v=kPQ7AKwNlWU

How to refer local docker images loaded from tar file in Kubernetes deployment?

I am trying to create a Kubernetes deployment from local docker images, using imagePullPolicy: Never so that Kubernetes picks the image up from the local docker images imported via tar.
Environment
SingleNodeMaster # one node deployment
But Kubernetes always tries to fetch from the private repository, although the local docker images are present.
Any pointers on how to debug and resolve the issue so that Kubernetes picks the images from the local docker registry? Thank you.
Steps performed
docker load -i images.tar
docker images # displays images from myprivatehub.com/nginx/nginx-custom:v1.1.8
kubectl create -f local-test.yaml with imagePullPolicy set to Never
Error
Pulling pod/nginx-custom-6499765dbc-2fts2 Pulling image "myprivatehub.com/nginx/nginx-custom:v1.1.8"
Failed pod/nginx-custom-6499765dbc-2fts2 Error: ErrImagePull
Failed pod/nginx-custom-6499765dbc-2fts2 Failed to pull image "myprivatehub.com/nginx/nginx-custom:v1.1.8": rpc error: code = Unknown desc = failed to pull and unpack image "myprivatehub.com/nginx/nginx-custom:v1.1.8": failed to resolve reference "myprivatehub.com/nginx/nginx-custom:v1.1.8": failed to do request: Head "https://myprivatehub.com/v2/nginx/nginx-custom/manifests/v1.1.8": dial tcp: lookup myprivatehub.com: no such host
docker pull <imagename>
Error response from daemon: Get https://myprivatehub.com/v2/: dial tcp: lookup myprivatehub.com on 172.31.0.2:53: no such host
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-custom
  namespace: default
spec:
  selector:
    matchLabels:
      run: nginx-custom
  replicas: 5
  template:
    metadata:
      labels:
        run: nginx-custom
    spec:
      containers:
      - image: myprivatehub.com/nginx/nginx-custom:v1.1.8
        imagePullPolicy: Never
        name: nginx-custom
        ports:
        - containerPort: 80
This happens because the container runtime is different from Docker. I am using containerd; after switching the container runtime to docker, it started working.
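If you want to stay on containerd rather than switching runtimes, the tar from docker save can be imported into containerd's own image store. This is a sketch, assuming the file and image names from the question; ctr ships with containerd, and k8s.io is the namespace the kubelet reads images from:

```shell
# Import the saved images into containerd's k8s.io namespace
sudo ctr -n k8s.io images import images.tar

# Verify the image is now visible to the runtime
sudo ctr -n k8s.io images ls | grep nginx-custom
```

With the image present in the runtime's store, imagePullPolicy: Never should find it without any registry lookup.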
This is an update with another approach that can achieve a similar result: using a Docker Registry. Docker Registry Doc
We can create a Docker registry on the machine where Kubernetes is running and docker is installed too. One of the easiest ways to achieve this is the following:
Create a local private docker registry. If the registry:2 image is not present, this will download and run it.
sudo docker run -d -p 5000:5000 --restart=always --name registry registry:2
Build the image or load it from a tar as required. For my example, I am building it in order to add it to the local repository.
sudo docker build -t coolapp:v1 .
Once the build is done, tag this image so that its name includes the host and port.
sudo docker tag coolapp:v1 localhost:5000/coolapp:v1
Push the new tag to the local private registry
sudo docker push localhost:5000/coolapp:v1
Now in the Kubernetes YAML, we can specify the deployment as following:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mycoolapp
spec:
replicas: 1
selector:
matchLabels:
app: mycoolapp
template:
metadata:
labels:
app: mycoolapp
spec:
containers:
- name: mycoolapp
image: localhost:5000/coolapp:v1
ports:
- containerPort: 3000
and we apply the YAML
sudo kubectl apply -f deployment.yaml
Once this is done, we will be able to see that Kubernetes has pulled the image from the local private repository and is running it.

Gitlab CI - exposing port/service of spawned docker container

I have set up a test plant of GitLab CI:
Gitlab-CE on an Ubuntu 18.04 VM
Docker gitlab runner
Microk8s cluster
I am able to install the GitLab-managed ingress controller.
As I am running dind, how should I expose port 4000 to my host machine (VM), and what is the best way to do it?
I tried to play around with the GitLab-installed ingress controller, but I am not sure where the config files/YAML for GitLab managed apps are.
I tried a simple NodePort expose and it did not help:
kubectl -n gitlab-managed-apps expose deployment <Gitlab Runner> --type=NodePort --port=4000
Below is my gitlab-ci.yaml file:
image: docker:19.03.13
services:
  - name: docker:18.09.7-dind
    command:
      [
        '--insecure-registry=gitlab.local:32000',
      ]
stages:
  - testing
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""
  CI_REGISTRY_IMAGE: img1
before_script:
  - echo "$REG_PASSWORD" | docker -D login "$CI_REGISTRY" -u "$REG_USER" --password-stdin
testing:
  stage: testing
  tags: [docker]
  script:
    - docker pull "gitlab.local:32000/$CI_REGISTRY_IMAGE:latest"
    - docker images
    - hostname
    - docker run --rm -d -p 4000:4000 "gitlab.local:32000/$CI_REGISTRY_IMAGE:latest"
    - netstat -na | grep -w 4000
    - sleep 3600
  only:
    - master
I managed to figure out what the issue was with exposing it via a Kubernetes Service: the selector was not clearly defined. Some key points to note:
I could see that the port was listening on the IPv6 interface (::4000) within the pod; however, this was not the problem.
I added podLabels to the config.toml of the GitLab runner config (e.g. app: myapp). This way, each pod spawned by the runner had a predefined label.
I used that label in the selector section of the LB service.
Hope it's useful to anyone.
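As a sketch, these are the two pieces that have to line up; the label key/value and Service name are placeholders, and the pod_labels table is the Kubernetes-executor setting in the runner's config.toml:

```yaml
# config.toml of the runner (excerpt):
#   [runners.kubernetes.pod_labels]
#     app = "myapp"

# Service selecting the runner-spawned pods via that label
apiVersion: v1
kind: Service
metadata:
  name: ci-job-port
  namespace: gitlab-managed-apps
spec:
  type: NodePort
  selector:
    app: myapp       # must match the pod_labels entry above
  ports:
    - port: 4000
      targetPort: 4000
```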

kubernetes deploy with tar docker image

I have a problem deploying a docker image via Kubernetes.
One issue is that we cannot use any docker image registry service (e.g. Docker Hub or any cloud service), but I do have the docker images as .tar files.
However, it always fails with the following message:
Warning Failed 1s kubelet, dell20
Failed to pull image "test:latest": rpc
error: code = Unknown
desc = failed to resolve image "docker.io/library/test:latest":
failed to do request: Head https://registry-1.docker.io/v2/library/test/manifests/latest: dial tcp i/o timeout
I also changed the deployment description to use IfNotPresent or Never. In that case it fails anyway, with ErrImageNeverPull.
My guess is: Kubernetes tries to use Docker Hub anyway, since it contacts https://registry-1.docker.io in order to pull the image. I just want to use the tar docker image on local disk, rather than pulling from some service.
And yes the image is in docker:
docker images
REPOSITORY   TAG      IMAGE ID       CREATED      SIZE
test         latest   9f4916a0780c   6 days ago   1.72GB
Can anyone give me any advices on this problem?
I was successful in using a local image with a Kubernetes cluster. I provide an explanation with an example below.
The only prerequisite is that you need to make sure you have access to upload this image directly to the nodes.
Create the image
Pull the default nginx image from docker registry with below command:
$ docker pull nginx:1.17.5
Nginx image is used only for demonstration purposes.
Tag this image with new name as nginx-local with command:
$ docker tag nginx:1.17.5 nginx-local:1.17.5
Save this image as nginx-local.tar executing command:
$ docker save nginx-local:1.17.5 > nginx-local.tar
Link to documentation: docker save
File nginx-local.tar is used as your image.
Copy the image to all of the nodes
The problem with this technique is that you need to ensure all of the nodes have this image.
Lack of image will result in failed pod creation.
To copy it you can use scp. It's a secure way to transfer files between machines.
Example command for scp:
$ scp /path/to/your/file/nginx-local.tar user@ip_address:/where/you/want/it/nginx-local.tar
Once the image is on the node, you will need to load it into the local docker image repository with the command:
$ docker load -i nginx-local.tar
To ensure that the image is loaded, invoke the command:
$ docker images | grep nginx-local
Link to documentation: docker load
It should show something like this:
$ docker images | grep nginx
nginx-local   1.17.5   540a289bab6c   3 weeks ago   126MB
Creating deployment with local image
The last part is to create deployment with use of nginx-local image.
Please note that:
The image version is explicitly typed inside the yaml file.
imagePullPolicy is set to Never. ImagePullPolicy
Without these options the pod creation will fail.
Below is example deployment which uses exactly that image:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-local
  namespace: default
spec:
  selector:
    matchLabels:
      run: nginx-local
  replicas: 5
  template:
    metadata:
      labels:
        run: nginx-local
    spec:
      containers:
      - image: nginx-local:1.17.5
        imagePullPolicy: Never
        name: nginx-local
        ports:
        - containerPort: 80
Create this deployment with command:
$ kubectl create -f local-test.yaml
The result was that pods were created successfully as shown below:
NAME                           READY   STATUS    RESTARTS   AGE
nginx-local-84ddb99b55-7vpvd   1/1     Running   0          2m15s
nginx-local-84ddb99b55-fgb2n   1/1     Running   0          2m15s
nginx-local-84ddb99b55-jlpz8   1/1     Running   0          2m15s
nginx-local-84ddb99b55-kzgw5   1/1     Running   0          2m15s
nginx-local-84ddb99b55-mc7rw   1/1     Running   0          2m15s
This operation was successful, but I would recommend that you use a local docker repository. It will make managing images easier, and it will live inside your infrastructure.
Link to documentation about it: Local Docker Registry

How to pass docker run parameter via kubernetes pod

Hi, I am running a Kubernetes cluster where I run a Logstash container.
But I need to run it with my own docker run parameter. If I ran it in docker directly, I would use the command:
docker run --log-driver=gelf logstash -f /config-dir/logstash.conf
But I need to run it via a Kubernetes pod. My pod looks like:
spec:
  containers:
    - name: logstash-logging
      image: "logstash:latest"
      command: ["logstash", "-f", "/config-dir/logstash.conf"]
      volumeMounts:
        - name: configs
          mountPath: /config-dir/logstash.conf
How can I run the Docker container with the parameter --log-driver=gelf via Kubernetes? Thanks.
Kubernetes does not expose docker-specific options such as --log-driver. A higher abstraction of logging behavior might be added in the future, but it is not in the current API yet. This issue was discussed in https://github.com/kubernetes/kubernetes/issues/15478, and the suggestion was to change the default logging driver for docker daemon in the per-node configuration/salt template.
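For reference, the per-node daemon change suggested there would look roughly like this in /etc/docker/daemon.json; Docker's daemon.json does support log-driver and log-opts, but the gelf address below is a placeholder for your own Graylog endpoint:

```json
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://graylog.example.com:12201"
  }
}
```

After editing the file, the docker daemon on each node must be restarted (e.g. systemctl restart docker); this applies the driver to all containers on the node, not just the Logstash pod.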
