Kubernetes - Ingress ERR_EMPTY_RESPONSE every time - docker

I'm trying to build my first Kubernetes project, but I may have some configuration issues.
For example i wanted to run this project:
https://gitlab.com/codeching/kubernetes-multicontainer-application-react-nodejs-postgres-nginx
I did:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/cloud/deploy.yaml
Then
kubectl apply -f k8s
But when I open http://localhost I just get ERR_EMPTY_RESPONSE.
Does anyone know why? I have a fresh install of Docker Desktop & Kubernetes, everything is green & working, but somehow I can't run even this simple project.

The ingress-nginx controller service is deployed with the LoadBalancer service type. If no LoadBalancer is attached to it, you can port-forward the service to access applications in the cluster, as sketched below.
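For example, assuming the default names created by the v0.46.0 manifest above (namespace ingress-nginx, service ingress-nginx-controller - verify with kubectl get svc -n ingress-nginx), the port-forward would look like this:
kubectl port-forward --namespace ingress-nginx service/ingress-nginx-controller 8080:80
Then open http://localhost:8080 instead of plain http://localhost.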

Related

Have one container access another with helm/Docker

The context
Let me know if I've gone down a rabbit hole here.
I have a simple web app with a frontend and backend component, deployed using Docker/Helm inside a Kubernetes cluster. The frontend is served via nginx, and the backend runs a NodeJS microservice.
I had been thinking of having both run in the same pod, but ran into some problems getting both nginx and Node to run in the background. I could write a startup script that runs both, but the Internet says it's a best practice for each container to be responsible for running only one service - so one container runs nginx and another runs the microservice.
The problem
That's fine, but then say the nginx server's HTML pages need to know where to send a POST request in the backend - how can the HTML pages know what IP to hit for the backend's container? Articles like this one talk about manually creating a Docker network for the two containers to speak to one another, but how can I configure this with Helm so that the frontend container knows how to reach the backend container each time a new container is deployed, without manually configuring any network service each time? I want the deployments to be automated.
You mention that your frontend is based on Nginx.
Accordingly, the frontend must hit a reachable URL of the backend.
Thus, the backend must be exposed by choosing a service type:
NodePort -> the frontend communicates with the backend at http://<any-node-ip>:<node-port>
or LoadBalancer -> the frontend communicates with the backend at http://<loadbalancer-external-ip>:<service-port>
or keep it ClusterIP, but add an Ingress resource on top of it -> the frontend communicates with the backend through its ingress host, e.g. http://ingress.host.com.
We recommend the last approach, but it requires an ingress controller.
Once you have tested one of them and it works, you can extend your Helm chart to update the service and add the Ingress resource if needed.
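As a rough sketch of the last option (the host api.example.com and the service name backend-service are placeholders, not names from your chart; assumes a cluster with the networking.k8s.io/v1 API):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
spec:
  rules:
  - host: api.example.com            # placeholder ingress host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-service    # placeholder: your backend's ClusterIP Service
            port:
              number: 80
The frontend would then call the backend at http://api.example.com/.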
You may try to set up two containers in one pod and then communicate between the containers via localhost (but on different ports!). A good example is here - Kubernetes multi-container pods and container communication.
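A minimal sketch of that pattern, with illustrative image names and ports (your real images and ports will differ):
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: frontend
    image: nginx                # serves static pages on port 80
    ports:
    - containerPort: 80
  - name: backend
    image: my-node-service:1.0  # placeholder: your NodeJS microservice image
    ports:
    - containerPort: 3000       # nginx can proxy to http://localhost:3000
Since both containers share the pod's network namespace, nginx reaches the Node service simply at localhost:3000.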
Another option is to create two separate deployments and a service for each. Instead of using IP addresses (they won't be the same across re-deployments of your app), use the DNS names of the services to connect to them.
Example - two NGINX services communication.
First create two NGINX deployments:
kubectl create deployment nginx-one --image=nginx --replicas=3
kubectl create deployment nginx-two --image=nginx --replicas=3
Let's expose them using the kubectl expose command. It's the same as if I had created the services from YAML files (a YAML equivalent is sketched below):
kubectl expose deployment nginx-one --name=my-service-one --port=80
kubectl expose deployment nginx-two --name=my-service-two --port=80
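For reference, a YAML equivalent of the first expose command would be roughly this (kubectl create deployment labels the pods with app=nginx-one, which the selector relies on):
apiVersion: v1
kind: Service
metadata:
  name: my-service-one
spec:
  selector:
    app: nginx-one    # label set by kubectl create deployment
  ports:
  - port: 80
    targetPort: 80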
Now let's check the services - as you can see, both of them are of ClusterIP type:
user@shell:~$ kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.36.0.1      <none>        443/TCP   66d
my-service-one   ClusterIP   10.36.6.59     <none>        80/TCP    60s
my-service-two   ClusterIP   10.36.15.120   <none>        80/TCP    59s
I will exec into a pod from the nginx-one deployment and curl the second service:
user@shell:~$ kubectl exec -it nginx-one-5869965455-44cwm -- sh
# curl my-service-two
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
If you have problems, make sure you have a proper CNI plugin installed in your cluster - also check the Cluster Networking article for more details.
Also check these:
My similar answer, but with a wider explanation and an example of communication between two namespaces.
Access Services Running on Clusters | Kubernetes
Service | Kubernetes
Debug Services | Kubernetes
DNS for Services and Pods | Kubernetes

Is it possible to run kubectl within a container within the cluster?

I have a Kubernetes cluster, and I basically have an authenticated API for deploying tasks within the cluster without having kubectl etc. set up locally. I'm aware of the client libraries for the Kubernetes API, however they don't seem to support all of the different primitives (including some custom ones like Argo). So I just wondered if there was a way I could effectively run $ kubectl apply -f ./file.yml within a container on the cluster?
Obviously I can create a container with kubectl installed, but I just wondered how that could then 'connect' to the Kubernetes controller?
Yes, it is possible. Refer to the Halyard container; Spinnaker is deployed from the Halyard container.
You can choose from existing ones: https://hub.docker.com/search?q=kubectl&type=image
I found that the roffe/kubectl image works well for running a kubectl apply -f to deploy to my cluster. However, you do need a ClusterRole and ClusterRoleBinding attached to your ServiceAccount (and that ServiceAccount named in your kubectl container) to deploy to any other namespaces.
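As a sketch of that wiring (the deployer name and the rule list are made up here; scope the verbs and resources down to what your deployments actually need):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployer
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployer
subjects:
- kind: ServiceAccount
  name: deployer
  namespace: default
Set serviceAccountName: deployer in the pod spec that runs the kubectl container; kubectl then picks up the in-cluster config automatically from the token mounted at /var/run/secrets/kubernetes.io/serviceaccount, which also answers the "how does it connect" question above.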

Not able to access service on minikube cluster | Istio

[screenshot: pod startup logs]
I am not able to access a Spring Boot service on my minikube cluster.
On my local machine, I configured a minikube cluster and built the Docker image of my service. My service contains some simple REST endpoints.
I configured minikube to use my local Docker image, or should I say, pull my Docker image. But now when I do
kubectl get services -n istio-system
I get the below services
[screenshots: services list in the minikube cluster; kubectl get pods --all-namespaces; kubectl describe service output]
I am trying to access my service through the below command
minikube service producer-service --url
which gives http://192.168.99.100:30696
I have a ping URL in my service, so ideally I should be getting a response by hitting http://192.168.99.100:30696/ping
I am not getting any response here. Can you guys please let me know what I am missing?
The behaviour you describe suggests a port mapping problem. Is your Spring Boot service on the default port of 8080? Does the internal port of your Service match the port the Spring Boot app is running on (it will be in your app startup logs)? The port in your screenshot seems to be 8899. It's also possible your pod is in a different namespace from your service. It would be useful to include your app startup logs and the output of kubectl get pods --all-namespaces and kubectl describe service producer-service.
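For illustration only (the 8899 and 30696 values are read off the screenshots, and the selector and target port are guesses, not your actual manifests), the Service would need to line up like this:
apiVersion: v1
kind: Service
metadata:
  name: producer-service
spec:
  type: NodePort
  selector:
    app: producer      # must match the labels on the Spring Boot pod
  ports:
  - port: 8899         # port other cluster workloads use
    targetPort: 8080   # must equal the port the Spring Boot app listens on
    nodePort: 30696    # port exposed on the minikube node
If targetPort does not match the app's server.port, requests will hang or be refused exactly as described.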

Connect non-dockerised application to Kubernetes pod

I have a non-dockerised application that needs to connect to a dockerised application running inside a Kubernetes pod.
Given that pods may die and come back with a different IP address, how can my application detect this? Is there any way to assign a hostname that redirects to whatever pods currently exist?
You will have to use a Kubernetes Service. A Service gives you a way to talk to your pods via a static IP and DNS name (if your client app is inside the cluster).
https://kubernetes.io/docs/concepts/services-networking/service/
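A minimal sketch with placeholder names: a ClusterIP Service gives the pods a stable virtual IP and a DNS name such as my-app.default.svc.cluster.local, no matter how often the pods are replaced:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app      # matches the labels on your pods
  ports:
  - port: 80         # port clients connect to
    targetPort: 8080 # port the container listens on
In-cluster clients can then simply call http://my-app (same namespace) or http://my-app.default.svc.cluster.local.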
You can do it in several ways:
Easiest: use a Kubernetes Service with type: NodePort. Then you can access the pod using http://[node-host]:[node-port] (see the NodePort sketch after this list).
Use a Kubernetes Ingress. See this link for more details (https://kubernetes.io/docs/concepts/services-networking/ingress/).
If you are running in a cloud like AWS, Azure or GCE, you can use a Kubernetes Service of type LoadBalancer.
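For the NodePort option, a Service sketch could look like this (names and ports are again placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080   # optional; must fall in the default 30000-32767 range
An external, non-dockerised client can then call http://[any-node-ip]:30080.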
In addition to Bal Chua's work and suggestions from silverfox, I would like to show you the method I used for Kubernetes to expose and manage incoming traffic from the outside:
Step 1: Deploy an application
In this example, the Kubernetes sample hello application will run on port 8080/tcp.
kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080
Step 2: Expose your Deployment as a Service internally
This command tells Kubernetes to expose port 8080/tcp to interact with the world outside:
kubectl expose deployment web --target-port=8080 --type=NodePort
Afterwards, please check that it was exposed by running:
kubectl get service web
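The output should look roughly like this (the ClusterIP and the allocated NodePort will differ in your cluster):
NAME   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
web    NodePort   10.36.10.123   <none>        8080:31234/TCP   15s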
Step 3: Manage Ingress resource
Ingress sends traffic to a proper service working inside Kubernetes.
Open a text editor and create a file basic-ingress.yaml with the following content:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web
    servicePort: 8080
Apply the configuration:
kubectl apply -f basic-ingress.yaml
and that's all. It is time to test. Get the external IP address of your Kubernetes installation:
kubectl get ingress basic-ingress
and open a web browser at this address to see the hello application working.
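You can also test it from the command line; the hello-app sample replies with a short text response along these lines (version and hostname will vary):
$ curl http://<external-ip>/
Hello, world!
Version: 1.0.0
Hostname: web-xxxxxxxxxx-xxxxx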

Kubernetes pods not starting, running behind a proxy

I am running Kubernetes on minikube. I am behind a proxy, so I set the env variables (HTTP_PROXY & NO_PROXY) for Docker in /etc/systemd/system/docker.service.d/http-proxy.conf.
I was able to do docker pull, but when I run the below example
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
kubectl get pod
the pod never starts and I get the error
desc = unable to pull sandbox image \"gcr.io/google_containers/pause-amd64:3.0\"
docker pull gcr.io/google_containers/echoserver:1.4 works fine
I ran into the same problem and am sharing what I learned after making a couple of wrong turns. This is with minikube v0.19.0. If you have an older version you might want to update.
Remember, there are two things we need to accomplish:
Make sure kubectl does not go through the proxy when connecting to minikube on your desktop.
Make sure that the docker daemon in minikube does go through the proxy when it needs to connect to image repositories.
First, make sure your proxy settings are correct in your environment. Here is an example from my .bashrc:
export {http,https,ftp}_proxy=http://${MY_PROXY_HOST}:${MY_PROXY_PORT}
export {HTTP,HTTPS,FTP}_PROXY=${http_proxy}
export no_proxy="localhost,127.0.0.1,localaddress,.your.domain.com,192.168.99.100"
export NO_PROXY=${no_proxy}
A couple things to note:
I set both lower and upper case. Sometimes this matters.
192.168.99.100 comes from minikube ip. You can add it after your cluster has started.
OK, so that should take care of kubectl working correctly. Now for the next issue: making sure that the Docker daemon in minikube is configured with your proxy settings. You do this, as mentioned by PMat, like this:
$ minikube delete
$ minikube start --docker-env HTTP_PROXY=${http_proxy} --docker-env HTTPS_PROXY=${https_proxy} --docker-env NO_PROXY=192.168.99.0/24
To verify that these settings have taken effect, do this:
$ minikube ssh -- systemctl show docker --property=Environment --no-pager
You should see the proxy environment variables listed.
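If the flags took effect, the output contains a single Environment= line along these lines (the values here are illustrative):
Environment=HTTP_PROXY=http://my.proxy.host:3128 HTTPS_PROXY=http://my.proxy.host:3128 NO_PROXY=192.168.99.0/24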
Why the minikube delete? Because without it, the start won't update the Docker environment if you had previously created a cluster (say, without the proxy information). Maybe this is why PMat did not have success passing --docker-env to start (or maybe it was an older version of minikube).
I was able to fix it myself.
I had Docker on my host and there is Docker in Minikube.
Docker in Minikube had issues.
I had to SSH into the minikube VM and follow this post:
Cannot download Docker images behind a proxy
and it all works now.
There should be a better way of doing this. On starting minikube I passed the docker env like below, which did not work:
minikube start --docker-env HTTP_PROXY=http://xxxx:8080 --docker-env HTTPS_PROXY=http://xxxx:8080
--docker-env NO_PROXY=localhost,127.0.0.0/8,192.0.0.0/8 --extra-config=kubelet.PodInfraContainerImage=myhub/pause:3.0
I had to set the same env variables inside the Minikube VM to make it work.
It looks like you need to add the minikube ip to no_proxy:
export NO_PROXY=$no_proxy,$(minikube ip)
see this thread: kubectl behind a proxy
