Nginx service not starting - docker

I am trying to set up Minikube and have run into a problem. My minikube is set up and I started the Nginx pod. I can see that the pod is up, but the service doesn't appear as active. On the dashboard, too, the pod appears but the deployment doesn't show up. Here are my PowerShell command outputs.
I am learning this technology and may have missed something. My understanding is that when using the Docker tools, no explicit configuration is necessary at the Docker level beyond the initial setup. Am I wrong here? If so, where?
Relevant PowerShell output

Let's deploy the hello-nginx deployment
C:\> kubectl.exe run hello-nginx --image=nginx --port=80
deployment "hello-nginx" created
View the list of pods
c:\> kubectl.exe get pods
NAME READY STATUS RESTARTS AGE
hello-nginx-6d66656896-hqz9j 1/1 Running 0 6m
Expose as a Service
c:\> kubectl.exe expose deployment hello-nginx --type=NodePort
service "hello-nginx" exposed
List exposed services using minikube
c:\> minikube.exe service list
|-------------|----------------------|-----------------------------|
| NAMESPACE | NAME | URL |
|-------------|----------------------|-----------------------------|
| default | hello-nginx | http://192.168.99.100:31313 |
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| kube-system | kubernetes-dashboard | http://192.168.99.100:30000 |
|-------------|----------------------|-----------------------------|
Access Nginx from browser http://192.168.99.100:31313
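A quick way to double-check that all three objects actually exist (a sketch, using the same names as above) is:
c:\> kubectl.exe get deployments,services
c:\> kubectl.exe describe service hello-nginx
c:\> minikube.exe service hello-nginx --url
If the deployment or service is missing from that output, the dashboard won't show it either.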

This method can be used; it worked for me on CentOS 7:
$ systemctl enable nginx
$ systemctl restart nginx
or
$ systemctl start nginx
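If the systemd unit still doesn't come up as active, its status and logs usually say why; a quick check, assuming a standard nginx package install:
$ systemctl status nginx
$ journalctl -u nginx -n 50 --no-pager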

Related

How to use `minikube tunnel` over WSL2 and docker-desktop backend?

Context
I have a minikube cluster that runs inside WSL2 with the docker driver (from Docker Desktop).
~ » minikube profile list
|----------|-----------|---------|--------------|------|---------|---------|-------|--------|
| Profile | VM Driver | Runtime | IP | Port | Version | Status | Nodes | Active |
|----------|-----------|---------|--------------|------|---------|---------|-------|--------|
| minikube | docker | docker | 192.168.49.2 | 8443 | v1.24.3 | Running | 1 | * |
|----------|-----------|---------|--------------|------|---------|---------|-------|--------|
I've set up a LoadBalancer service, then I run the minikube tunnel command.
~ » kubectl get svc my-svc-loadbalancer
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-svc-loadbalancer LoadBalancer 10.105.11.182 127.0.0.1 8080:31684/TCP 18h
My problem
When trying to access 127.0.0.1:8080 from my browser, I'm getting an ERR_EMPTY_RESPONSE error.
What I have noticed:
Without the minikube tunnel command, accessing 127.0.0.1:8080 results in ERR_CONNECTION_REFUSED - which makes sense.
The same service configuration exposed as NodePort with the minikube service command works and I can access my deployment, but I would like to access it as a LoadBalancer.
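For reference, the usual LoadBalancer workflow with the docker driver is to keep minikube tunnel running in its own terminal and then hit whatever EXTERNAL-IP kubectl reports; a rough sketch, using the service name above:
~ » minikube tunnel                       # terminal 1, keep it running (may prompt for elevated privileges)
~ » kubectl get svc my-svc-loadbalancer   # terminal 2, note the EXTERNAL-IP and port
~ » curl http://127.0.0.1:8080            # or the EXTERNAL-IP:PORT reported above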

Trying to expose an endpoint from a kubernetes pod to the internet/ browser/ API

What am I trying to do
I am trying to expose an endpoint from a Kubernetes pod to the internet/browser/API on a Windows 11 platform with WSL 2 enabled, using PowerShell, Docker on Windows, kubectl and minikube. This is essential for getting my dev environment working.
What happens
Based on whatever I could find in the docs and online, LoadBalancer seemed to be the option used for <>. The tunneling never seemed to happen. I tested using the browser and using curl.
Environment Information
Windows: Windows 11 Pro
Docker on Windows: Docker Desktop 4.3.2 (72729)
Kubernetes: v1.22.3
Minikube: minikube version: v1.24.0
Commands - executed
Here are the commands that I executed to create the service.
1. Create the deployment
~ kubectl create deployment hello-world3 --image=nginx:mainline-alpine
deployment.apps/hello-world3 created
~ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
hello-world3 1/1 1 1 19s
2. Expose outbound
~ kubectl expose deployment hello-world3 --type=LoadBalancer --port=8080
service/hello-world3 exposed
~ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world3 LoadBalancer 10.103.203.156 127.0.0.1 8080:30300/TCP 14s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d8h
3. Create tunnel service
~ minikube service hello-world3
|-----------|--------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|--------------|-------------|---------------------------|
| default | hello-world3 | 8080 | http://192.168.49.2:30300 |
|-----------|--------------|-------------|---------------------------|
* Starting tunnel for service hello-world3.
|-----------|--------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|--------------|-------------|------------------------|
| default | hello-world3 | | http://127.0.0.1:62864 |
|-----------|--------------|-------------|------------------------|
* Opening service default/hello-world3 in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
I expected to get the “Nginx welcome” page when I connect to http://127.0.0.1:8080
But it was
This site can’t be reached. The connection was reset.
Try:
Checking the connection
Checking the proxy and the firewall
Running Windows Network Diagnostics
ERR_CONNECTION_RESET
Same occurs with:
http://127.0.0.1:62864/
Output when I use curl
~ curl http://127.0.0.1:8080/ -v
VERBOSE: GET with 0-byte payload
curl : The underlying connection was closed: An unexpected error occurred on a receive.
At line:1 char:1
+ curl http://127.0.0.1:8080/ -v
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
~ curl http://127.0.0.1:62864/ -v
VERBOSE: GET with 0-byte payload
curl : The underlying connection was closed: An unexpected error occurred on a receive.
At line:1 char:1
+ curl http://127.0.0.1:62864/ -v
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
kubectl expose deployment hello-world3 --type=LoadBalancer --port=8080
You can check what happened when you ran the command above with kubectl get svc hello-world3 -o yaml and look at the ports section:
ports:
- nodePort: 30514
  port: 8080
  protocol: TCP
  targetPort: 8080
As you can see, targetPort has been set to the same value as port. You can read more here:
Note: A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
You couldn't see the nginx page because targetPort should be set to 80 (the port nginx listens on inside the pod by default) instead of 8080.
To solve your issue, set targetPort to 80 by adding --target-port=80 to your command, as below:
kubectl expose deployment hello-world3 --type=LoadBalancer --port=8080 --target-port=80
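For reference, the same mapping can also be written as a small Service manifest and applied with kubectl; a minimal sketch, assuming the app=hello-world3 label that kubectl create deployment puts on the pods:
# delete the old service first: kubectl delete svc hello-world3
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-world3
spec:
  type: LoadBalancer
  selector:
    app: hello-world3
  ports:
  - port: 8080
    targetPort: 80
EOF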
A more convenient option for using Kubernetes on a Windows machine is to enable the Kubernetes option in Docker Desktop under Settings > Kubernetes. The cluster will be created automatically and you will be able to use kubectl commands in the terminal or PowerShell within a few minutes. If something goes wrong, you can easily reset the cluster by clicking the Reset Kubernetes Cluster button, which is in the same place where you enabled Kubernetes in Docker Desktop.
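If you go that route, it is worth checking which cluster kubectl currently points at, since minikube and Docker Desktop each register their own context; for example:
kubectl config get-contexts
kubectl config use-context docker-desktop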

Kubernetes Deployment dynamic port forwarding

I am moving a Docker image from Docker to a K8s Deployment. I have auto-scale rules on, so it starts with 5 replicas but can go to 12. The Docker image on K8s starts perfectly, with a K8s Service in front of the Deployment.
Now each container has its own JVM which has a Prometheus app retrieving its stats. In Docker, this is no problem because the port that serves Prometheus info is dynamically created with a starting port of 8000, so the docker-compose.yml grows the port by 1 based on how many images are started.
The problem is that I can't find how to do this in a K8s [deployment].yml file. Because Deployment pods are dynamic, I would have thought there would be some way to set a starting HOST port to be incremented based on how many containers are started.
Maybe I am looking at this the wrong way, so any clarification would be helpful; meanwhile I will keep searching Google for any info on such a thing.
Well, after a lot of reading I came to the conclusion that K8s is not responsible for opening ports for a Docker image or providing ingress to your app on some arbitrary port; that is simply not its job. A K8s Deployment just deploys the Pods you requested. You can set the ports option under DEPLOYMENT -> SPEC -> CONTAINERS -> PORTS, which, just like in Docker, is only informational (see the sketch just below). But it allows you to query (for example with JSONPath or jq) for all Pods (containers) that have a Prometheus port available. This lets you rebuild the "targets" value in the prometheus.yml file. Having those targets then makes them available to Grafana to create a dashboard.
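For illustration, that ports entry lives under the container spec of the Deployment; a minimal sketch, where the container name and image are placeholders and the port matches the _MyappPrometheusPort used in the script below:
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        ports:
        - name: prometheus
          containerPort: 8055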
That's it, pretty easy. I was overcomplicating it because I did not understand it. I am including a script I quickly wrote to get something going - USE AT YOUR OWN RISK.
By the way, I use Pod and Container interchangeably.
#!/usr/bin/env bash
#set -x
_MyappPrometheusPort=8055
_finalIpsPortArray=()
_prometheusyamlFile=prometheus.yml
cd /docker/images/prometheus
#######################################################################################################################################################
#One container on the K8s System is weave and it holds the subnet we need to validate against.
#weave-net-lwzrk 2/2 Running 8 (7d3h ago) 9d 192.168.2.16 accl-ffm-srv-006 <none> <none>
_weavenet=$(kubectl get pod -n kube-system -o wide | grep weave | cut -d ' ' -f1 )
echo "_weavenet: $_weavenet"
#The default subnet is the one that lets us know the container is part of the Kubernetes network.
# Range: 10.32.0.0/12
# DefaultSubnet: 10.32.0.0/12
_subnet=$( kubectl exec -n kube-system $_weavenet -c weave -- /home/weave/weave --local status | sed -En "s/^(.*)(DefaultSubnet:\s)(.*)?/\3/p" )
echo "_subnet: $_subnet"
_cidr2=$( echo "$_subnet" | cut -d '/' -f2 )
echo "_cidr2: /$_cidr2"
#######################################################################################################################################################
#This is an array of the currently monitored containers that prometheus was started with.
#We will remove any containers from the array that fit the K8s Weavenet subnet with the myapp prometheus port.
_targetLineFound_array=($( egrep '^\s{1,20}-\s{0,5}targets\s{0,5}:\s{0,5}\[.*\]' $_prometheusyamlFile | sed -En "s/(.*-\stargets:\s\[)(.*)(\]).*/\2/p" | tr "," "\n"))
for index in "${_targetLineFound_array[#]}"
do
_ip="${index//\'/$''}"
_ipTocheck=$( echo $_ip | cut -d ':' -f1 )
_portTocheck=$( echo $_ip | cut -d ':' -f2 )
#We need to check if the IP is within the subnet mask attained from K8s.
#The port must also be the prometheus port in case some other port is used also for Prometheus.
#This means the IP should be removed since we will put the list of IPs from
#K8s currently in production by Deployment/AutoScale rules.
#Network: 10.32.0.0/12
_isIpWithinSubnet=$( ipcalc $_ipTocheck/$_cidr2 | sed -En "s/^(.*)(Network:\s+)([0-9]{1}[0-9]?[0-9]?\.[0-9]{1}[0-9]?[0-9]?\.[0-9]{1}[0-9]?[0-9]?\.[0-9]{1}[0-9]?[0-9]?)(\/[0-9]{1}[0-9]{1}.*)?/\3/p" )
if [[ "$_isIpWithinSubnet/$_cidr2" == "$_subnet" && "$_portTocheck" == "$_MyappPrometheusPort" ]]; then
echo "IP managed by K8s will be deleted: _isIpWithinSubnet: ($_ip) $_isIpWithinSubnet"
else
_finalIpsPortArray+=("$_ip")
fi
done
#######################################################################################################################################################
#This is an array of the currently running myapp app containers with a prometheus port available.
#From this list we will add them to the prometheus file to be available for Grafana monitoring.
readarray -t _currentK8sIpsArr < <( kubectl get pods --all-namespaces --chunk-size=0 -o json | jq '.items[] | select(.spec.containers[].ports != null) | select(.spec.containers[].ports[].containerPort == '$_MyappPrometheusPort' ) | .status.podIP' )
for index in "${!_currentK8sIpsArr[#]}"
do
_addIPToMonitoring=${_currentK8sIpsArr[index]//\"/$''}
echo "IP Managed by K8s as myapp app with prometheus currently running will be added to monitoring: $_addIPToMonitoring"
_finalIpsPortArray+=("$_addIPToMonitoring:$_MyappPrometheusPort")
done
######################################################################################################################################################
#we need to recreate this string and sed it into the file
#- targets: ['192.168.2.13:3201', '192.168.2.13:3202', '10.32.0.7:8055', '10.32.0.8:8055']
_finalPrometheusTargetString="- targets: ["
i=0
# Iterate the loop to read and print each array element
for index in "${!_finalIpsPortArray[#]}"
do
((i=i+1))
_finalPrometheusTargetString="$_finalPrometheusTargetString '${_finalIpsPortArray[index]}'"
if [[ $i != ${#_finalIpsPortArray[@]} ]]; then
_finalPrometheusTargetString="$_finalPrometheusTargetString,"
fi
done
_finalPrometheusTargetString="$_finalPrometheusTargetString]"
echo "$_finalPrometheusTargetString"
sed -i -E "s/(.*)-\stargets:\s\[.*\]/\1$_finalPrometheusTargetString/" ./$_prometheusyamlFile
docker-compose down
sleep 4
docker-compose up -d
echo "All changes were made. Exiting"
exit 0
Ideally, you should be using the average JVM usage across all the replicas. There is no point in creating a different deployment with a different port if you are running the same Docker image across all the replicas.
I think keeping a single deployment, with resource requirements set on the deployment, would be the best practice.
You can get the JVM average across all the running replicas with:
sum(jvm_memory_max_bytes{area="heap", app="app-name",job="my-job"}) / sum(kube_pod_status_phase{phase="Running"})
Since you are running the same Docker image across all replicas and the K8s Service will manage the load balancing by default, average utilization is a reasonable thing to monitor.
Still, if you want to filter and get per-replica values, you could create different deployments (not a good approach at all) or use StatefulSets.
You can also filter the data by hostname (pod name) in Prometheus, so you will get each replica's usage.
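If you do want per-replica numbers, grouping by the pod label works without any extra deployments; a sketch, assuming your scrape config attaches a pod label and your exporter also exposes jvm_memory_used_bytes:
sum by (pod) (jvm_memory_used_bytes{area="heap", app="app-name", job="my-job"})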

Using minikube with --driver=docker fails to forward from localhost to minikube internal address

Edit - this is on OSX
Also, I've tried running minikube service <service-name>, as shown below, and when it tries to open the URL in a browser I get a "connection refused" error because the port is closed.
I have a Kubernetes deployment that works fine when using --driver=virtualbox. I translated this to use --driver=docker, and it almost works, except when I do the following:
$ minikube service websocket-nodeport
|-----------|--------------------|-------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|--------------------|-------------|-------------------------|
| default | websocket-nodeport | 9000 | http://172.17.0.4:30007 |
|-----------|--------------------|-------------|-------------------------|
🏃 Starting tunnel for service websocket-nodeport.
|-----------|--------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|--------------------|-------------|------------------------|
| default | websocket-nodeport | | http://127.0.0.1:62032 |
|-----------|--------------------|-------------|------------------------|
🎉 Opening service default/websocket-nodeport in default browser...
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
But if I go to
$ curl http://127.0.0.1:62032
curl: (7) Failed to connect to 127.0.0.1 port 62032: Connection refused
nothing happens, it isn't a valid address. However, if I do the following
$ minikube ssh
# inside the VM now
docker@minikube:~$ curl http://172.17.0.4:30007
ok!: websocket-frontend-b7c8dc4b9-5jtg6
I get the response I want! So my service is running, and the websocket-nodeport URL that is internal to minikube is correct, but for some reason the local address http://127.0.0.1:62032 isn't being forwarded to the minikube VM.
How do I get this forwarding to work?
$ minikube service <service-name>
This will open up a tunnel to connect to the service; make sure the service is a NodePort service.
If it opens a browser and you get a 404, that is because the URL in the address bar doesn't exist within your API. Changing the URL path to one of the routes you defined in your API should fix this.
To open an exposed service, run the following:
$ minikube service <service-name>
This command will open the specified service in your default browser.
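If keeping that terminal open is inconvenient, kubectl port-forward is a driver-independent alternative; a sketch, assuming the service's target port of 9000 shown above:
$ kubectl port-forward service/websocket-nodeport 9000:9000
$ curl http://127.0.0.1:9000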

Can't Ping a Pod after Ubuntu cluster setup

I have followed the most recent instructions (updated 7th May '15) to set up a cluster on Ubuntu** with etcd and flanneld. But I'm having trouble with the network... it seems to be in some kind of broken state.
**Note: I updated the config script so that it installed 0.16.2. Also, kubectl get minions returned nothing at first, but after a sudo service kube-controller-manager restart they appeared.
This is my setup:
| ServerName | Public IP | Private IP |
------------------------------------------
| KubeMaster | 107.x.x.32 | 10.x.x.54 |
| KubeNode1 | 104.x.x.49 | 10.x.x.55 |
| KubeNode2 | 198.x.x.39 | 10.x.x.241 |
| KubeNode3 | 104.x.x.52 | 10.x.x.190 |
| MongoDev1 | 162.x.x.132 | 10.x.x.59 |
| MongoDev2 | 104.x.x.103 | 10.x.x.60 |
From any machine I can ping any other machine... it's when I create pods and services that I start getting issues.
Pod
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED
auth-dev-ctl-6xah8 172.16.37.7 sis-auth leportlabs/sisauth:latestdev 104.x.x.52/104.x.x.52 environment=dev,name=sis-auth Running 3 hours
So this pod has been spun up on KubeNode3... if I try to ping it from any machine other than its host, KubeNode3, I get a Destination Net Unreachable error. E.g.
# ping 172.16.37.7
PING 172.16.37.7 (172.16.37.7) 56(84) bytes of data.
From 129.250.204.117 icmp_seq=1 Destination Net Unreachable
I can call etcdctl get /coreos.com/network/config on all four and get back {"Network":"172.16.0.0/16"}.
I'm not sure where to look from there. Can anyone help me out here?
Supporting Info
On the master node:
# ps -ef | grep kube
root 4729 1 0 May07 ? 00:06:29 /opt/bin/kube-scheduler --logtostderr=true --master=127.0.0.1:8080
root 4730 1 1 May07 ? 00:21:24 /opt/bin/kube-apiserver --address=0.0.0.0 --port=8080 --etcd_servers=http://127.0.0.1:4001 --logtostderr=true --portal_net=192.168.3.0/24
root 5724 1 0 May07 ? 00:10:25 /opt/bin/kube-controller-manager --master=127.0.0.1:8080 --machines=104.x.x.49,198.x.x.39,104.x.x.52 --logtostderr=true
# ps -ef | grep etcd
root 4723 1 2 May07 ? 00:32:46 /opt/bin/etcd -name infra0 -initial-advertise-peer-urls http://107.x.x.32:2380 -listen-peer-urls http://107.x.x.32:2380 -initial-cluster-token etcd-cluster-1 -initial-cluster infra0=http://107.x.x.32:2380,infra1=http://104.x.x.49:2380,infra2=http://198.x.x.39:2380,infra3=http://104.x.x.52:2380 -initial-cluster-state new
On a node:
# ps -ef | grep kube
root 10878 1 1 May07 ? 00:16:22 /opt/bin/kubelet --address=0.0.0.0 --port=10250 --hostname_override=104.x.x.49 --api_servers=http://107.x.x.32:8080 --logtostderr=true --cluster_dns=192.168.3.10 --cluster_domain=kubernetes.local
root 10882 1 0 May07 ? 00:05:23 /opt/bin/kube-proxy --master=http://107.x.x.32:8080 --logtostderr=true
# ps -ef | grep etcd
root 10873 1 1 May07 ? 00:14:09 /opt/bin/etcd -name infra1 -initial-advertise-peer-urls http://104.x.x.49:2380 -listen-peer-urls http://104.x.x.49:2380 -initial-cluster-token etcd-cluster-1 -initial-cluster infra0=http://107.x.x.32:2380,infra1=http://104.x.x.49:2380,infra2=http://198.x.x.39:2380,infra3=http://104.x.x.52:2380 -initial-cluster-state new
#ps -ef | grep flanneld
root 19560 1 0 May07 ? 00:00:01 /opt/bin/flanneld
So I noticed that the flannel configuration (/run/flannel/subnet.env) was different from what docker was starting up with (I have no idea how they got out of sync).
# ps -ef | grep docker
root 19663 1 0 May07 ? 00:09:20 /usr/bin/docker -d -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.85.1/24 --mtu=1472
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=172.16.60.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
Note that the docker --bip=172.16.85.1/24 was different to the flannel subnet FLANNEL_SUBNET=172.16.60.1/24.
So naturally I changed /etc/default/docker to reflect the new value.
DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.60.1/24 --mtu=1472"
But then sudo service docker restart wasn't bringing docker back up (the command itself didn't error out), so looking at /var/log/upstart/docker.log I could see the following:
FATA[0000] Shutting down daemon due to errors: Bridge ip (172.16.85.1) does not match existing bridge configuration 172.16.60.1
So the final piece to the puzzle was deleting the old bridge and restarting docker...
# sudo brctl delbr docker0
# sudo service docker start
If sudo brctl delbr docker0 returns "bridge docker0 is still up; can't delete it", run ifconfig docker0 down and try again.
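To avoid hard-coding the subnet again next time, you can also rebuild DOCKER_OPTS from whatever flannel hands out; a rough sketch, assuming the same file paths as above:
# read the subnet flannel allocated on this node
. /run/flannel/subnet.env
# point docker at the current flannel values
echo "DOCKER_OPTS=\"-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\"" > /etc/default/docker
# recreate the bridge and restart docker
ifconfig docker0 down
brctl delbr docker0
service docker start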
Please try this:
ip link del docker0
systemctl restart flanneld
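Once flannel is back up, restarting docker (for example with service docker restart) should let it recreate the bridge with the current flannel subnet.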
