Context
I have a minikube cluster running inside WSL2 with the docker driver (from Docker Desktop).
~ » minikube profile list
|----------|-----------|---------|--------------|------|---------|---------|-------|--------|
| Profile  | VM Driver | Runtime |      IP      | Port | Version | Status  | Nodes | Active |
|----------|-----------|---------|--------------|------|---------|---------|-------|--------|
| minikube | docker    | docker  | 192.168.49.2 | 8443 | v1.24.3 | Running |     1 | *      |
|----------|-----------|---------|--------------|------|---------|---------|-------|--------|
I set up a LoadBalancer service, then ran the minikube tunnel command.
~ » kubectl get svc my-svc-loadbalancer
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
my-svc-loadbalancer   LoadBalancer   10.105.11.182   127.0.0.1     8080:31684/TCP   18h
My problem
When trying to access 127.0.0.1:8080 from my browser, I get an ERR_EMPTY_RESPONSE error.
What I have noticed:
Without the minikube tunnel command, accessing 127.0.0.1:8080 results in ERR_CONNECTION_REFUSED, which makes sense.
The same service configured as a NodePort and opened with the minikube service command works, and I can access my deployment, but I would like to access it as a LoadBalancer.
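One check worth recording here is to bypass the tunnel entirely and hit the service's NodePort on the minikube node directly (a diagnostic sketch; the IP and NodePort are taken from the outputs above):
~ » curl http://192.168.49.2:31684
# If this responds, the pod and service are fine and the tunnel is the
# suspect; if it also fails, check the service's targetPort first.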
Related
I'm trying to use resources from other computers via python3-mpi4py, since my research involves a lot of calculations.
My code and data are in the Docker container.
To use MPI I have to be able to SSH directly into the Docker container from other computers on the same network as the host computer, but I cannot SSH into it.
My setup looks like the diagram below:
| Host               |  <- on the same network ->  | Other Computers |
|   port 10000       |                             |                 |
|      ^             |                             |                 |
|      |             |                             |                 |
|      v             |                             |                 |
|   port 10000       |                             |                 |
| docker container <-|------------ ssh ------------|                 |
Can anyone teach me how to do this?
You can run an SSH server on the Host computer, SSH to the Host, and then use a docker command such as docker exec -i -t containerName /bin/bash to get an interactive shell.
example:
# 1. On Other Computers
ssh root@host_ip
>> enter into Host ssh shell
# 2. On Host ssh shell
docker exec -i -t containerName /bin/bash
>> enter into docker interactive shell
# 3. On docker interactive shell
do sth.
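If you really need to SSH straight into the container (as the diagram shows), a common approach is to run sshd inside the container and publish a host port to the container's port 22. This is only a sketch: the image name mpi-image is hypothetical, and it assumes openssh-server is installed and configured (keys or passwords set up) inside that image.
# On the Host: publish host port 10000 to the container's sshd (port 22).
# mpi-image is a placeholder for your own image with openssh-server in it.
docker run -d --name mpi-node -p 10000:22 mpi-image /usr/sbin/sshd -D
# On Other Computers: ssh via the host's LAN IP and the published port
ssh -p 10000 root@host_ip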
What I am trying to do
I am trying to expose an endpoint from a Kubernetes pod to the internet/browser/API on a Windows 11 platform with WSL 2 enabled, using PowerShell, Docker Desktop for Windows, kubectl, and minikube. This is essential for getting my dev environment working.
What happens
Based on what I could find in the docs and online, LoadBalancer looked like the option to use for <>. The tunneling never seemed to happen. I tested using the browser and using curl.
Environment Information
Windows: Windows 11 Pro
Docker on Windows: Docker Desktop 4.3.2 (72729)
Kubernetes: v1.22.3
Minikube: v1.24.0
Commands executed
Here are the commands that I executed to create the service.
1. Create the deployment
~ kubectl create deployment hello-world3 --image=nginx:mainline-alpine
deployment.apps/hello-world3 created
~ kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hello-world3   1/1     1            1           19s
2. Expose the deployment
~ kubectl expose deployment hello-world3 --type=LoadBalancer --port=8080
service/hello-world3 exposed
~ kubectl get svc
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hello-world3   LoadBalancer   10.103.203.156   127.0.0.1     8080:30300/TCP   14s
kubernetes     ClusterIP      10.96.0.1        <none>        443/TCP          6d8h
3. Create tunnel service
~ minikube service hello-world3
|-----------|--------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|--------------|-------------|---------------------------|
| default | hello-world3 | 8080 | http://192.168.49.2:30300 |
|-----------|--------------|-------------|---------------------------|
* Starting tunnel for service hello-world3.
|-----------|--------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|--------------|-------------|------------------------|
| default | hello-world3 | | http://127.0.0.1:62864 |
|-----------|--------------|-------------|------------------------|
* Opening service default/hello-world3 in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
I expected to get the “Nginx welcome” page when I connected to http://127.0.0.1:8080
But it was
This site can’t be reached. The connection was reset.
Try:
Checking the connection
Checking the proxy and the firewall
Running Windows Network Diagnostics
ERR_CONNECTION_RESET
Same occurs with:
http://127.0.0.1:62864/
Output when I use curl
~ curl http://127.0.0.1:8080/ -v
VERBOSE: GET with 0-byte payload
curl : The underlying connection was closed: An unexpected error occurred on a receive.
At line:1 char:1
+ curl http://127.0.0.1:8080/ -v
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
~ curl http://127.0.0.1:62864/ -v
VERBOSE: GET with 0-byte payload
curl : The underlying connection was closed: An unexpected error occurred on a receive.
At line:1 char:1
+ curl http://127.0.0.1:62864/ -v
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
kubectl expose deployment hello-world3 --type=LoadBalancer --port=8080
You can check what happened when you ran the command above with kubectl get svc hello-world3 -o yaml and look at the ports section:
ports:
- nodePort: 30514
  port: 8080
  protocol: TCP
  targetPort: 8080
As you can see, targetPort has been set to the same value as port. You can read more about this in the Kubernetes Service documentation:
Note: A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
You couldn't see the nginx page because targetPort should be set to 80, which is what the nginx pods listen on by default, instead of 8080.
To solve your issue, set targetPort to 80 by adding --target-port=80 to your command, as below:
kubectl expose deployment hello-world3 --type=LoadBalancer --port=8080 --target-port=80
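For reference, a declarative equivalent would be a Service manifest along these lines (a sketch based on the names and ports above; kubectl create deployment labels the pods app=hello-world3):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-world3
spec:
  type: LoadBalancer
  selector:
    app: hello-world3
  ports:
  - port: 8080        # port the Service exposes
    targetPort: 80    # port the nginx container actually listens on
EOF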
A more convenient option for using Kubernetes on a Windows machine is to enable the Kubernetes option in Docker Desktop under Settings > Kubernetes. A cluster will be created automatically, and within a few minutes you will be able to use kubectl commands in the terminal or PowerShell. If something goes wrong, you can easily reset the cluster by clicking the Reset Kubernetes Cluster button, found in the same place where you enabled Kubernetes in Docker Desktop.
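If you go that route, a quick sanity check from the terminal or PowerShell (assuming the default context name docker-desktop that Docker Desktop creates):
kubectl config use-context docker-desktop
kubectl get nodes
# A single node named docker-desktop in Ready state means the cluster is up.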
Edit - this is on OSX
Also, I've tried running minikube service <service-name>; that's shown below, and when it tries to open the service in a browser I get a "connection refused" error because the port is closed.
I have a Kubernetes deployment that works fine when using --driver=virtualbox. I translated this to use --driver=docker, and this almost works, except when I do the following:
$ minikube service websocket-nodeport
|-----------|--------------------|-------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|--------------------|-------------|-------------------------|
| default | websocket-nodeport | 9000 | http://172.17.0.4:30007 |
|-----------|--------------------|-------------|-------------------------|
🏃 Starting tunnel for service websocket-nodeport.
|-----------|--------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|--------------------|-------------|------------------------|
| default | websocket-nodeport | | http://127.0.0.1:62032 |
|-----------|--------------------|-------------|------------------------|
🎉 Opening service default/websocket-nodeport in default browser...
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
But if I go to
$ curl http://127.0.0.1:62032
curl: (7) Failed to connect to 127.0.0.1 port 62032: Connection refused
nothing happens, it isn't a valid address. However, if I do the following
$ minikube ssh
# inside the VM now
docker@minikube:~$ curl http://172.17.0.4:30007
ok!: websocket-frontend-b7c8dc4b9-5jtg6
I get the response I want! So this means that my service is running and the websocket-nodeport URL internal to minikube is correct, but for some reason the local address http://127.0.0.1:62032 isn't being forwarded into the minikube VM.
How do I get this forwarding to work?
$ minikube service <service-name>
This will open up a tunnel to connect to the service; make sure the service is a NodePort service.
If it opens a browser and you get a 404, that is because the URL path in the address bar doesn't exist within your API. Changing the path to one of the routes you defined in your API should fix this.
To open an exposed service, run the following:
$ minikube service <service-name>
This command will open the specified service in your default browser.
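One caveat with the Docker driver, in either case: the forwarded 127.0.0.1 port only exists while the minikube service process is running. The --url flag prints the tunneled address without opening a browser (a sketch reusing the service name from above; the local port is assigned at random):
$ minikube service websocket-nodeport --url
http://127.0.0.1:62032
Keep that terminal open for as long as you need the tunnel.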
I am trying to set up minikube and have a challenge. My minikube is set up, and I started the nginx pod. I can see that the pod is up, but the service doesn't appear as active. On the dashboard, too, although the pod appears, the deployment doesn't show up. Here are my PowerShell command outputs.
I am learning this technology and may have missed something. My understanding is that when using the Docker tools, no explicit configuration is necessary at the Docker level other than the initial setup. Am I wrong here? If so, where?
Relevant PowerShell output
Let's deploy the hello-nginx deployment
C:\> kubectl.exe run hello-nginx --image=nginx --port=80
deployment "hello-nginx" created
View the list of pods
c:\> kubectl.exe get pods
NAME                           READY     STATUS    RESTARTS   AGE
hello-nginx-6d66656896-hqz9j   1/1       Running   0          6m
Expose as a Service
c:\> kubectl.exe expose deployment hello-nginx --type=NodePort
service "hello-nginx" exposed
List exposed services using minikube
c:\> minikube.exe service list
|-------------|----------------------|-----------------------------|
| NAMESPACE | NAME | URL |
|-------------|----------------------|-----------------------------|
| default | hello-nginx | http://192.168.99.100:31313 |
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| kube-system | kubernetes-dashboard | http://192.168.99.100:30000 |
|-------------|----------------------|-----------------------------|
Access nginx from the browser at http://192.168.99.100:31313. This method can be used for any NodePort service in the list above.
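A quick way to verify from PowerShell, assuming a Windows build that ships curl.exe (calling it by its full name bypasses the Invoke-WebRequest alias that plain curl resolves to):
c:\> curl.exe http://192.168.99.100:31313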
This worked for me on CentOS 7:
$ systemctl enable nginx
$ systemctl restart nginx
or
$ systemctl start nginx
I have followed the most recent instructions (updated 7th May '15) to set up a cluster on Ubuntu** with etcd and flanneld, but I'm having trouble with the network... it seems to be in some kind of broken state.
**Note: I updated the config script so that it installed 0.16.2. Also, kubectl get minions returned nothing at first, but after a sudo service kube-controller-manager restart the nodes appeared.
This is my setup:
| ServerName | Public IP   | Private IP |
|------------|-------------|------------|
| KubeMaster | 107.x.x.32  | 10.x.x.54  |
| KubeNode1  | 104.x.x.49  | 10.x.x.55  |
| KubeNode2  | 198.x.x.39  | 10.x.x.241 |
| KubeNode3  | 104.x.x.52  | 10.x.x.190 |
| MongoDev1  | 162.x.x.132 | 10.x.x.59  |
| MongoDev2  | 104.x.x.103 | 10.x.x.60  |
From any machine I can ping any other machine... it's when I create pods and services that I start getting issues.
Pod
POD                  IP            CONTAINER(S)   IMAGE(S)                       HOST                    LABELS                          STATUS    CREATED
auth-dev-ctl-6xah8   172.16.37.7   sis-auth       leportlabs/sisauth:latestdev   104.x.x.52/104.x.x.52   environment=dev,name=sis-auth   Running   3 hours
So this pod has been spun up on KubeNode3... if I try to ping it from any machine other than KubeNode3, I get a Destination Net Unreachable error. E.g.
# ping 172.16.37.7
PING 172.16.37.7 (172.16.37.7) 56(84) bytes of data.
From 129.250.204.117 icmp_seq=1 Destination Net Unreachable
I can call etcdctl get /coreos.com/network/config on all four and get back {"Network":"172.16.0.0/16"}.
I'm not sure where to look from there. Can anyone help me out here?
Supporting Info
On the master node:
# ps -ef | grep kube
root 4729 1 0 May07 ? 00:06:29 /opt/bin/kube-scheduler --logtostderr=true --master=127.0.0.1:8080
root 4730 1 1 May07 ? 00:21:24 /opt/bin/kube-apiserver --address=0.0.0.0 --port=8080 --etcd_servers=http://127.0.0.1:4001 --logtostderr=true --portal_net=192.168.3.0/24
root 5724 1 0 May07 ? 00:10:25 /opt/bin/kube-controller-manager --master=127.0.0.1:8080 --machines=104.x.x.49,198.x.x.39,104.x.x.52 --logtostderr=true
# ps -ef | grep etcd
root 4723 1 2 May07 ? 00:32:46 /opt/bin/etcd -name infra0 -initial-advertise-peer-urls http://107.x.x.32:2380 -listen-peer-urls http://107.x.x.32:2380 -initial-cluster-token etcd-cluster-1 -initial-cluster infra0=http://107.x.x.32:2380,infra1=http://104.x.x.49:2380,infra2=http://198.x.x.39:2380,infra3=http://104.x.x.52:2380 -initial-cluster-state new
On a node:
# ps -ef | grep kube
root 10878 1 1 May07 ? 00:16:22 /opt/bin/kubelet --address=0.0.0.0 --port=10250 --hostname_override=104.x.x.49 --api_servers=http://107.x.x.32:8080 --logtostderr=true --cluster_dns=192.168.3.10 --cluster_domain=kubernetes.local
root 10882 1 0 May07 ? 00:05:23 /opt/bin/kube-proxy --master=http://107.x.x.32:8080 --logtostderr=true
# ps -ef | grep etcd
root 10873 1 1 May07 ? 00:14:09 /opt/bin/etcd -name infra1 -initial-advertise-peer-urls http://104.x.x.49:2380 -listen-peer-urls http://104.x.x.49:2380 -initial-cluster-token etcd-cluster-1 -initial-cluster infra0=http://107.x.x.32:2380,infra1=http://104.x.x.49:2380,infra2=http://198.x.x.39:2380,infra3=http://104.x.x.52:2380 -initial-cluster-state new
#ps -ef | grep flanneld
root 19560 1 0 May07 ? 00:00:01 /opt/bin/flanneld
So I noticed that the flannel configuration (/run/flannel/subnet.env) was different from what Docker was starting up with (I wouldn't have a clue how they got out of sync).
# ps -ef | grep docker
root 19663 1 0 May07 ? 00:09:20 /usr/bin/docker -d -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.85.1/24 --mtu=1472
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=172.16.60.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
Note that the docker --bip=172.16.85.1/24 was different to the flannel subnet FLANNEL_SUBNET=172.16.60.1/24.
So naturally I changed /etc/default/docker to reflect the new value.
DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.60.1/24 --mtu=1472"
But now a sudo service docker restart wasn't working... so looking at /var/log/upstart/docker.log I could see the following:
FATA[0000] Shutting down daemon due to errors: Bridge ip (172.16.85.1) does not match existing bridge configuration 172.16.60.1
So the final piece to the puzzle was deleting the old bridge and restarting docker...
# sudo brctl delbr docker0
# sudo service docker start
If sudo brctl delbr docker0 returns bridge docker0 is still up; can't delete it, run ifconfig docker0 down and try again.
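To keep Docker from drifting away from flannel again, /etc/default/docker can be generated from flannel's environment file rather than hard-coded. This is a sketch, run as root, reusing the same paths and flags shown above:
# Source flannel's subnet settings, then rebuild the Docker defaults from them
. /run/flannel/subnet.env
cat > /etc/default/docker <<EOF
DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
EOF
service docker restart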
Please try this:
ip link del docker0
systemctl restart flanneld
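Presumably (an assumption, following the same reasoning as the answer above) Docker then needs a restart as well, so that docker0 is recreated with the subnet flannel hands out:
systemctl restart docker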