Kubernetes Helm pod restarts infinitely - docker

I'm trying to deploy Spinnaker into a Kubernetes cluster. To do that, I use Halyard, which uses Helm.
When I try to run my Helm pod I get the following output:
Cluster "default" set.
Context "default" created.
User "user" set.
Context "default" modified.
Switched to context "default".
Creating /home/spinnaker/.helm
Creating /home/spinnaker/.helm/repository
Creating /home/spinnaker/.helm/repository/cache
Creating /home/spinnaker/.helm/repository/local
Creating /home/spinnaker/.helm/plugins
Creating /home/spinnaker/.helm/starters
Creating /home/spinnaker/.helm/cache/archive
Creating /home/spinnaker/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/spinnaker/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
Everything seems correct, but my pod raises a CrashLoopBackOff event without any other error, and the pod restarts again for no apparent reason.
The Dockerfile I'm using to build the Helm Docker image is the following:
FROM gcr.io/spinnaker-marketplace/halyard:stable
ARG GCP_SPINNAKER_GCR_KEY
# install helm
WORKDIR /home/spinnaker
# get helm
RUN curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
RUN sed -i 's/\/usr\/local\/bin/\/home\/spinnaker/g' get_helm.sh
# sudo user workaround
RUN sed -i 's/sudo //g' get_helm.sh
RUN chmod u+x get_helm.sh
# add the current folder to the path
ENV PATH="/home/spinnaker:${PATH}"
# install helm
RUN ./get_helm.sh
# importing the configuration script
ADD shell/halyard-configure.sh .
# authorize the spinnaker user to execute the configuration script
USER root
RUN chown -R spinnaker halyard-configure.sh
USER spinnaker
# create the gcp key directory for docker registry
RUN mkdir -p ~/.gcp
RUN echo $GCP_SPINNAKER_GCR_KEY | base64 -d > ~/.gcp/gcr-account.json
ENTRYPOINT [ "./halyard-configure.sh" ]
CMD "/opt/halyard/bin/halyard"
And here is the content of the halyard-configure.sh shell script:
#!/usr/bin/env bash
set -e
# configure kubectl
kubectl config set-cluster default --server=https://kubernetes.default --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl config set-context default --cluster=default
token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-credentials user --token=$token
kubectl config set-context default --user=user
kubectl config use-context default
# configure helm
helm init --service-account tiller --upgrade

Your entrypoint script needs to end with the magic line exec "$@".
In Docker in general, a container startup launches the container entrypoint, passing it the command as parameters. (A Kubernetes pod spec refers to these parts as "command" and "args".) Once the entrypoint completes, the container exits. Since your entrypoint script just runs kubectl config and helm init commands which all complete promptly, the container exits almost immediately; when it does, Kubernetes restarts it; and when it has to restart it more than two or three times, it goes into CrashLoopBackOff state.
The usual way to get around this is to set up the entrypoint script to do any required first-time setup, then exec the command that was passed to it as parameters. Then the command (in your case, /opt/halyard/bin/halyard) winds up being "the main container process", and has the magic process ID 1, and will receive signals at container termination time.
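As a sketch (assuming the kubectl config and helm init steps stay exactly as posted), the end of halyard-configure.sh would look like this:
#!/usr/bin/env bash
set -e
# ... the kubectl config and helm init steps from above ...
# Hand control over to whatever was passed as the command
# (here /opt/halyard/bin/halyard), so it becomes PID 1 and
# keeps the container running:
exec "$@"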
Also note that there is a reasonably standard pattern for accessing the Kubernetes API from a pod that involves configuring a service account for the pod and using an official API, or else launching a kubectl proxy sidecar. You might be able to use that in place of the manual setup steps you have here. (I've never tried to launch Helm from inside a Kubernetes pod, though.)
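For completeness, a hedged sketch of that service-account pattern; the halyard account name and spinnaker namespace below are placeholders, not something from the question:
# Create a service account and give it the permissions it needs
# (cluster-admin here only for brevity; scope it down in practice).
kubectl create serviceaccount halyard -n spinnaker
kubectl create clusterrolebinding halyard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=spinnaker:halyard
# Reference it from the pod spec via spec.serviceAccountName: halyard.
# With the token mounted, kubectl inside the pod falls back to the
# in-cluster configuration when no kubeconfig exists, so the manual
# kubectl config steps in halyard-configure.sh can usually be dropped.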

Related

Kubernetes deployment required to provide application configuration path

I'm trying to deploy my application, which needs the configuration from the /root/properties folder.
With Docker
docker build -t config .
docker run -p8080:8080 -v /root/properties:/root/properties --name config -d config
This runs OK.
Now, with a Kubernetes cluster, I'm not able to attach -v as I did with docker run.
kubectl create deployment deploy-config --image=localhost:5000/config --port=8080 -v /root/properties
With -v the pod is not created. How do I provide the path to the properties folder?
Thanks in advance.
You can do it with a ConfigMap,
or possibly a PVC.
It depends on the details, which have not really been shared.
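A minimal sketch of the ConfigMap route, assuming the files under /root/properties are plain configuration files; the names app-properties and deploy-config are just examples:
# Pack the folder into a ConfigMap:
kubectl create configmap app-properties --from-file=/root/properties

# kubectl create deployment cannot mount volumes, so use a manifest:
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-config
spec:
  replicas: 1
  selector:
    matchLabels: {app: config}
  template:
    metadata:
      labels: {app: config}
    spec:
      containers:
      - name: config
        image: localhost:5000/config
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: properties
          mountPath: /root/properties
      volumes:
      - name: properties
        configMap:
          name: app-properties
EOF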

Connection refused error on worker node in kubernetes

I'm setting up a 2-node cluster in Kubernetes: 1 master node and 1 worker node.
After setting up the master node, I did the installation steps for docker, kubeadm, kubelet and kubectl on the worker node and then ran the join command. On the master node I see 2 nodes in the Ready state (master and worker), but when I try to run any kubectl command on the worker node, I get a connection refused error as shown below. I do not see any admin.conf, and nothing is set in .kube/config. Do these files also need to be on the worker node? If so, how do I get them, and how do I resolve the error below? I appreciate your help.
root@kubework:/etc/kubernetes# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kubework:/etc/kubernetes# kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kubework:/etc/kubernetes#
kubectl is configured and working on the master by default. It requires a running kube-apiserver pod and a ~/.kube/config file.
The worker nodes don't run the kube-apiserver themselves; what we want is for kubectl on them to reuse the master's configuration and talk to the master's API server.
To achieve that, copy the ~/.kube/config file from the master to ~/.kube/config on the worker (~ being the home directory of the user executing kubectl on each machine, which may of course differ).
Once that is done, you can use the kubectl command from the worker node exactly as you do from the master node.
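As a sketch, assuming the master is reachable over SSH from the worker (user and hostname are placeholders):
# On the worker node:
mkdir -p $HOME/.kube
scp <user>@<master-host>:~/.kube/config $HOME/.kube/config
kubectl get nodes   # should now reach the master's API server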
Yes, these files are needed. Copy them to the respective ~/.kube/config on the worker nodes.
This is expected behavior even when using kubectl on the master node as a non-root account; by default this config file is stored for the root account in /etc/kubernetes/admin.conf.
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively on the master, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
Optionally, to control your cluster from machines other than the control-plane node:
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
Note:
The KUBECONFIG environment variable holds a list of kubeconfig files. For Linux and Mac, the list is colon-delimited. For Windows, the list is semicolon-delimited. The KUBECONFIG environment variable is not required. If the KUBECONFIG environment variable doesn't exist, kubectl uses the default kubeconfig file, $HOME/.kube/config.
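For example (a sketch, on Linux), you can point a single kubectl invocation at several kubeconfig files at once:
# Colon-delimited list on Linux/Mac; kubectl merges the files.
KUBECONFIG=$HOME/.kube/config:/etc/kubernetes/admin.conf kubectl get nodes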
I tried many of the solutions which just copy the /etc/kubernetes/admin.conf to ~/.kube/config. But none worked for me.
My OS is Ubuntu, and the issue was resolved by removing, purging and re-installing the following:
sudo dpkg -r kubeadm kubectl
sudo dpkg -P kubeadm kubectl
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" (downloading kubectl again, this actually worked)
kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
mymaster   Ready    control-plane,master   31h   v1.20.4
myworker   Ready    <none>                 31h   v1.20.4
Davidxxx's solution worked for me.
In my case, I found out that there is a file that exists in the worker nodes at the following path:
/etc/kubernetes/kubelet.conf
You can copy this to ~/.kube/config and it works as well. I tested it myself.
If I try the following commands,
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
I get this error:
[vikram#node2 ~]$ kubectl version
Error in configuration:
* unable to read client-cert /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: permission denied
* unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: permission denied
Then this works, which is really a workaround and not a fix.
sudo kubectl --kubeconfig /etc/kubernetes/kubelet.conf version
I was able to fix it by copying kubelet-client-current.pem from /var/lib/kubelet/pki/ to a location inside $HOME and modifying $HOME/.kube/config to reflect the new path of the certs. Is this normal?
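A sketch of that workaround, using the paths from the error above and assuming a non-root user on the worker:
mkdir -p $HOME/.kube/pki
sudo cp /var/lib/kubelet/pki/kubelet-client-current.pem $HOME/.kube/pki/
sudo cp /etc/kubernetes/kubelet.conf $HOME/.kube/config
sudo chown -R $(id -u):$(id -g) $HOME/.kube
# Point the client-certificate/client-key entries in the copied config at the new location:
sed -i "s|/var/lib/kubelet/pki|$HOME/.kube/pki|g" $HOME/.kube/config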
I came across the same issue: I found that my 3 VMs were sharing the same IP (since I was using a NAT network on VirtualBox), so I switched to a bridged network to get 3 different IPs for the 3 VMs and then followed the installation guide for a successful installation of the k8s cluster.

kubectl run fails in interactive mode on CI

I have an issue with
kubectl run -ti
in GitLab CI.
For testing in CI we run a Docker container with the "npm t" command in interactive mode, and it worked perfectly with Docker.
After migrating to Kubernetes we have an issue, as kubectl run gives the following error: Unable to use a TTY - input is not a terminal or the right kind of file
The job runs in the image lachlanevenson/k8s-kubectl.
If I run kubectl run from my local machine, everything works.
Please help.
The PodSpec container: has a tty attribute, which defaults to false but which one can set to true (that's what the -t option, which is a shortcut for --tty=true, does in kubectl exec). You can experiment with setting stdin: true but at your peril, since it can hang the Pod waiting for "someone" to type something.
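In a CI job there is usually no real terminal attached, so as a hedged sketch (the pod name npm-test and the node:lts image are placeholders), dropping -t and keeping only -i is often enough:
kubectl run npm-test -i --rm --restart=Never --image=node:lts -- npm t
# The PodSpec attributes mentioned above would look like:
#   spec:
#     containers:
#     - name: npm-test
#       stdin: true
#       tty: true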

how to debug container images using openshift

Let's say I have a docker image created using a Dockerfile. At the time of writing the Dockerfile I had to test it repeatedly to realize what I did wrong. To debug a docker image I can simply run a test container and look at its stdout/stderr to see what's wrong with the image.
IMAGE_NAME=authoritative-dns-bind
IMAGE_OPTIONS="
-v $(pwd)/config.yaml:/config.yaml:ro
-p 127.0.0.1:53:53
-p 127.0.0.1:53:53/udp"
docker run -t -i $IMAGE_OPTIONS $IMAGE_NAME
Learning the above was good enough to iteratively create and debug a minimal working Docker container. Now I'm looking for a way to do the same for OpenShift.
I'm pretty much aware of the fact that the container is not ready for OpenShift. My plan is to run it and watch its stdout/stderr like I did with Docker. One of the people I asked for help came up with a command that looked exactly like what I need.
oc run -i -t --image $IMAGE_NAME --command test-pod -- bash
And the above command seemed to work for me with the fedora:24 and fedora:latest images from the Docker registry, and I got a working shell. But the same wouldn't happen for my derived image with a containerized service. My explanation is that it probably does an entirely different thing: instead of starting the command interactively, it starts it non-interactively and then tries to run bash inside a failed container.
So what I'm looking for is a reasonable way to debug a container image in OpenShift. I expected that I would be able to at least capture and view stdin/stdout of OpenShift containers.
Any ideas?
Update
According to the comment by Graham, oc run should indeed work like docker run, but it doesn't seem to be the case. With the original Fedora images, bash always appears, at least upon hitting enter.
# oc run -i -t --image authoritative-dns-bind --command test-auth13 -- bash
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
...
Waiting for pod myproject/test-auth13-1-lyng3 to be running, status is Pending, pod ready: false
^C
#
I wasn't able to try out the suggested oc debug yet, as it seems to require more configuration than just a simple image. There's another problem with oc run: that command keeps creating new containers that I don't really need. I hope there is a way to start debugging easily and get the container automatically destroyed afterwards.
There are three main commands to debug pods:
oc describe pod $pod-name -- detailed info about the pod
oc logs $pod-name -- stdout and stderr of the pod
oc exec -ti $pod-name -- bash -- get a shell in running pod
To your specific problem: the default pull policy of oc run is set to Always. This means that OpenShift will try to pull the image until successful and refuse to use the local one.
Once this Kubernetes patch lands in OpenShift Origin, the pull policy will be easily configurable.
Please do not consider this a final answer to the question and supersede it with your own better answers...
I'm now using a pod configuration file like the following...
apiVersion: v1
kind: Pod
metadata:
  name: "authoritative-dns-server"   # pod name, your reference from command line
  namespace: "myproject"             # default namespace in `oc cluster up`
spec:
  containers:
  - command:
    - "bash"
    image: "authoritative-dns-bind"          # use your image!
    name: "authoritative-dns-bind-container" # required
    imagePullPolicy: "Never"                 # important! you want openshift to use your local image
    stdin: true
    tty: true
  restartPolicy: "Never"
Note that the command is explicitly set to bash. You can then create the pod, attach to the container and run the container's original command yourself.
oc create -f pod.yaml
oc attach -t -i authoritative-dns-server
/files/run-bind.py
This looks far from ideal and it doesn't really help you debug an ordinary OpenShift container with a standard pod configuration, but at least it's possible to debug now. Looking forward to better answers.

How do I run curl command from within a Kubernetes pod

I have the following questions:
I am logged into a Kubernetes pod using the following command:
./cluster/kubectl.sh exec my-nginx-0onux -c my-nginx -it bash
The ip addr show command shows it is assigned the IP of the pod. Since a pod is a logical concept, I am assuming I am logged into a Docker container and not a pod, in which case the pod IP is the same as the Docker container IP. Is that understanding correct?
From a Kubernetes node, I do sudo docker ps and then the following:
sudo docker exec 71721cb14283 -it '/bin/bash'
This doesn't work. Does someone know what I am doing wrong?
I want to access the nginx service I created from within the pod using curl. How can I install curl within this pod or container to access the service from inside? I want to do this to understand the network connectivity.
Here is how you get a curl command line within a kubernetes network to test and explore your internal REST endpoints.
To get a prompt of a busybox running inside the network, execute the following command. (A tip is to use one unique container per developer.)
kubectl run curl-<YOUR NAME> --image=radial/busyboxplus:curl -i --tty --rm
You may omit the --rm and keep the instance running for later re-usage. To reuse it later, type:
kubectl attach <POD ID> -c curl-<YOUR NAME> -i -t
Using the command kubectl get pods you can see all running pods. The <POD ID> is something similar to curl-yourname-944940652-fvj28.
EDIT: Note that you need to log in to Google Cloud from your terminal (once) before you can do this! Here is an example; make sure to put in your zone, cluster and project:
gcloud container clusters get-credentials example-cluster --zone europe-west1-c --project example-148812
The idea of Kubernetes is that pods are scheduled onto a host, but nothing about that is certain or permanent, so you should NOT try to look up the IP of a container or pod from your container, but rather use what Kubernetes calls a Service.
A Kubernetes Service is a path to a pod with a defined set of selectors, through the kube-proxy, which will load balance the request to all pods with the given selectors.
In short:
Create a Pod with a label called name, for example name=mypod.
Create a Service with the selector name=mypod, which you call myService for example, and to which you assign port 9000, for example.
Then you can curl from a pod to the pods served by this Service using
curl http://myService:9000
This is assuming you have the DNS pod running of course.
If you ask for a LoadBalancer type of Service when creating it, and run on AWS or GKE, this service will also be available from outside your cluster. For an internal-only service, keep the default ClusterIP type (or set clusterIP: None for a headless service) and it will not be reachable from outside the cluster.
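A minimal sketch of that pattern (note that actual Service names must be lowercase, so myservice rather than myService; port 9000 as in the example above):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    name: mypod
  ports:
  - port: 9000
    targetPort: 9000
EOF
# Then, from any pod in the cluster (DNS add-on assumed):
curl http://myservice:9000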
see reference here:
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/tutorials/services/
Kubernetes uses the IP-per-pod model. All containers in the same pod share the same IP address as if they are running on the same host.
The command should follow docker exec [OPTIONS] CONTAINER COMMAND [ARG...]. In your case, sudo docker exec -it 71721cb14283 '/bin/bash' should work. If not, you should provide the output of your command.
It depends on what image you use. There is nothing special about installing software in a container. For nginx, try apt-get update && apt-get install curl
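For example, a sketch using the pod name from the question and assuming a Debian-based nginx image:
kubectl exec -it my-nginx-0onux -c my-nginx -- bash -c 'apt-get update && apt-get install -y curl'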
There's an official curl team image these days:
https://hub.docker.com/r/curlimages/curl
Run it with:
kubectl run -it --rm --image=curlimages/curl curly -- sh
