I can't get a shell from localhost to Pod - docker

SSH from the Pod to localhost works well, and ping works in both directions. The Pod runs CentOS 7 with openssh-server installed, but SSH from localhost to the Pod always fails with an error.
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
hadoop-master-pod 1/1 Running 0 39m 10.244.9.25 slave10 <none>
hadoop-secondary-pod 1/1 Running 0 48m 10.244.11.11 slave12 <none>
ssh 10.244.9.25
ssh: connect to host 10.244.9.25 port 22: Connection refused

You should be able to connect to the pod using kubectl exec -it hadoop-master-pod -- /bin/bash
Then you can check whether your pod is listening on port 22 on 0.0.0.0.
Check iptables to make sure nothing is being blocked.
Make sure openssh-server is running and check which port it is listening on.
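For example, a quick check from inside the pod might look like this (a sketch assuming the CentOS 7 image provides ss and ps; if sshd turns out not to be running, start it manually):
kubectl exec -it hadoop-master-pod -- /bin/bash
ss -tlnp | grep ':22'      # sshd should listen on 0.0.0.0:22, not only 127.0.0.1:22
ps aux | grep '[s]shd'     # confirm the sshd process is running at all
/usr/sbin/sshd             # start it manually if it is not (systemd is usually absent in containers)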

10.244.9.25 is an internal IP address assigned to the pod by Kubernetes (you can read more about the Kubernetes networking model here) for use inside the Kubernetes cluster, so you won't be able to SSH to, or even ping, these IPs from outside the cluster. In other words, the network containing 10.244.9.25 is like a private network inside the K8s cluster, and your host machine (localhost) is on a different network.
If you just want to get into the container, you can use kubectl exec -it hadoop-master-pod -- /bin/bash (or /bin/sh, depending on the shell installed in the container) and do anything there that you were trying to do over SSH.
If you really want to SSH into the pod from localhost (outside the cluster), you can create a Kubernetes Service, most likely of type NodePort, which will expose port 22 (the default SSH port) outside the cluster.
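A minimal sketch of such a Service, assuming the pod carries a label such as app: hadoop-master (adjust the selector to whatever labels your pod actually has):
apiVersion: v1
kind: Service
metadata:
  name: hadoop-master-ssh
spec:
  type: NodePort
  selector:
    app: hadoop-master   # hypothetical label - must match your pod's labels
  ports:
    - name: ssh
      port: 22           # port of the Service inside the cluster
      targetPort: 22     # port sshd listens on in the pod
      nodePort: 30022    # optional; must fall in the 30000-32767 range
After applying it, you would connect with something like ssh -p 30022 root@<node-ip>, where <node-ip> is the IP of any cluster node reachable from your localhost.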

Related

Calling an application outside cluster from a pod

There is a web service app running on a Compute Engine and a GKE cluster in the same network.
Is it possible for a pod in the cluster to call the web service app using internal IP address of web service app?
Your answer will be appreciated.
Thanks.
TL;DR
Yes, it's possible.
Assuming that you are talking about the internal IP address of your VM, you will need to create a firewall rule allowing traffic from the pod address range to your VM.
Example
Assuming that:
There is a Compute Engine instance named nginx and it's configured to run on port 80.
There is a Kubernetes Engine cluster within the same network as your GCE instance.
You will need to check the pod IP address range of your GKE cluster. You can do it by either:
Cloud Console (Web UI)
$ gcloud container clusters describe CLUSTER-NAME --zone=ZONE | grep -i "clusterIpv4Cidr"
The firewall rule could be created by either:
Cloud Console (Web UI)
gcloud command like below:
gcloud compute --project=PROJECT-ID firewall-rules create pod-to-vm \
--direction=INGRESS --priority=1000 --network=default \
--action=ALLOW --rules=tcp:80 --source-ranges=clusterIpv4Cidr \
--target-tags=pod-traffic
Disclaimer!
Replace clusterIpv4Cidr with the value returned by the describe cluster command above.
You will need to add the pod-traffic tag to your VM's network tags (see the command below)!
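Adding that tag to the example nginx instance could look like this (the zone is an assumption, use the zone your VM actually runs in):
gcloud compute instances add-tags nginx --tags=pod-traffic --zone=ZONE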
After that you can spawn a pod and check if you can communicate with your VM:
$ kubectl run -it ubuntu --image=ubuntu -- /bin/bash
$ apt update && apt install -y curl dnsutils
You can communicate with your VM from GKE pods by either:
IP address of your VM:
root@ubuntu:/# curl IP_ADDRESS
REDACTED
<p><em>Thank you for using nginx.</em></p>
REDACTED
Name of your VM (nginx):
root@ubuntu:/# curl nginx
REDACTED
<p><em>Thank you for using nginx.</em></p>
REDACTED
You can also check if the name is correctly resolved by running:
root@ubuntu:/# nslookup nginx
Server: DNS-SERVER-IP
Address: DNS-SERVER-IP#53
Non-authoritative answer:
Name: nginx.c.PROJECT_ID.internal
Address: IP_ADDRESS
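If the connection still fails, you can double-check that the firewall rule created earlier actually exists and covers the right source range:
gcloud compute firewall-rules describe pod-to-vm --project=PROJECT-ID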
Additional resources:
Stackoverflow.com: Unable to connect from gke to gce

How to access NodePort in Minikube with docker driver?

I launched minikube with the docker driver on a remote machine and I used a NodePort service for a particular pod. I believe NodePort exposes the port on the minikube Docker container. Running minikube ip gave me the IP of the Docker container in which minikube runs. How can I map the port from the minikube container to a host port so that I can access it remotely? An approach other than using driver=none or restarting minikube is appreciated, as I do not want to restart my Spinnaker cluster.
There is a minikube service <SERVICE_NAME> --url command which will give you a URL where you can access the service. To open the exposed service directly, the minikube service <SERVICE_NAME> command can be used:
$ minikube service example-minikube
Opening kubernetes service default/hello-minikube in default browser...
This command will open the specified service in your default browser.
There is also a --url option for printing the url of the service which is what gets opened in the browser:
$ minikube service example-minikube --url
http://192.168.99.100:31167
You can run minikube service list to get a list of all available services with their corresponding URLs. Also make sure the service points to the correct pod by using the correct selector.
You can also try forwarding the NodePort (30000 here as an example) over SSH from the minikube container:
ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip) -L '*:30000:0.0.0.0:30000'
Take a look: minikube-service-port-forward, expose-port-minikube, minikube-service-documentation.
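Another option that avoids the SSH tunnel is to bind a kubectl port-forward to all interfaces on the remote machine; a sketch using the service name and ports from the example above (adjust them to your own service):
kubectl port-forward --address 0.0.0.0 service/example-minikube 31167:8080
While the port-forward is running, the service is then reachable from other machines at http://<remote-machine-ip>:31167.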

Can't expose port on local Kubernetes cluster (via Docker Desktop)

I am using the local Kubernetes cluster from Docker Desktop on Windows 10. No virtual machines, no minikubes.
I need to expose a port on my localhost for some service.
For example, I take kubernetes-bootcamp image from the official tutorial:
docker pull jocatalin/kubernetes-bootcamp:v2
Put it in the local registry:
docker tag jocatalin/kubernetes-bootcamp:v2 localhost:5000/kubernetes-bootcamp
docker push localhost:5000/kubernetes-bootcamp
Then create a deployment with this image:
kubectl create deployment kubernetes-bootcamp --image=localhost:5000/kubernetes-bootcamp
Then let's expose a port for accessing our deployment:
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
kubectl get services
kubernetes-bootcamp NodePort 10.102.167.98 <none> 8080:32645/TCP 8s
So we found out that the exposed port for our deployment is 32645. Let's try to request it:
curl localhost:32645
Failed to connect to localhost port 32645: Connection refused
And nothing works.
But if I try port-forward, everything works:
kubectl port-forward deployment/kubernetes-bootcamp 7000:8080
Forwarding from 127.0.0.1:7000 -> 8080
Forwarding from [::1]:7000 -> 8080
Handling connection for 7000
Another console:
curl localhost:7000
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-7b5598d7b5-8qf9j | v=2
What am I doing wrong? I have found out several posts like mine, but none of them help me.
Try to run this command:
kubectl get svc | grep kubernetes-bootcamp
After this, expose the pod to your network using:
kubectl expose pod (podname) --type=NodePort
After that, you can check the URL, for example with:
minikube service (service name) --url (on minikube; on other clusters, look up the assigned NodePort with kubectl get svc)
So I have found the root of the problem - the local Kubernetes cluster was somehow working incorrectly.
How I solve the problem:
Remove C:\ProgramData\DockerDesktop\pki
Recreate all pods, services, deployments
Now the same script I use before works great.
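For reference, recreating the example from the question after the reset is just the same commands again, followed by a curl against the newly assigned NodePort (32645 was the value in the original output; yours may differ):
kubectl create deployment kubernetes-bootcamp --image=localhost:5000/kubernetes-bootcamp
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
kubectl get services          # note the new NodePort, e.g. 8080:32645/TCP
curl localhost:32645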

Docker swarm containers can't connect out

I've got a small docker swarm with three nodes.
$ sudo docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
jmsidw84mom3k9m4yoqc7rkj0 ip-172-31-a-x.region.compute.internal Ready Active 19.03.1
qg1njgopzgiainsbl2u9bmux4 * ip-172-31-b-y.region.compute.internal Ready Active Leader 19.03.1
yn9sj3sp5b3sr9a36zxpdt3uw ip-172-31-c-z.region.compute.internal Ready Active 19.03.1
And I'm running three redis containers.
$ sudo docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
6j9mmnpgk5j4 redis replicated 3/3 172.31.m.n:5000/redis
But I can't get Redis Sentinel working between them - reading the logs, it looks as though there are connection failures.
Standing them up as three separate Redis instances, I've been testing connectivity: I can telnet from a shell on any host to the host IP of another node and it connects to the service running in the container. If I do the same from a shell inside the container, it can't connect out.
i.e.
[centos@172.31.a.x ~]$ telnet 172.31.b.y 6379
Trying 172.31.b.y...
Connected to 172.31.b.y.
Escape character is '^]'.
^CConnection closed by foreign host.
[centos@172.31.a.x ~]$ sudo docker exec -it 4d5abad441b8 sh
/ # telnet 172.31.14.12 6379
And then it hangs. Similarly I can't telnet to google.com on 443 from within a container but I can on the host. Curiously though, ping does get out of the container.
Any suggestions?
Ugh.
The Redis side is a red herring; I can debug that now. I was mulling over the fact that telnet isn't on the container (Alpine Linux) by default, so there must be some connectivity, yet I couldn't telnet to the web server port it claimed it was downloading from during the install.
It turns out there's something up with the version of the telnet client that Alpine Linux installs - nmap and curl behave as expected.
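When debugging something like this, it may be safer to test connectivity from inside the container with tools whose behaviour is known to be sane; a sketch, assuming an Alpine-based container where apk can still reach its mirrors (as it did for the poster):
/ # apk add --no-cache curl              # curl and nmap behaved correctly for the poster
/ # curl -v telnet://172.31.b.y:6379     # raw TCP connect to the Redis port on the other node
/ # wget -qO- https://www.google.com >/dev/null && echo "outbound OK"   # general outbound check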

scp to Docker Container that is on Remote Host

I am running two AWS EC2 Ubuntu instances in separate regions (Ireland and London). A Docker container is running on each instance.
An established IPSec connection exists:
root#ip-10-0-1-178:/mnt# ipsec status
Security Associations (1 up, 0 connecting):
Ireland-to-London[2]: ESTABLISHED 37 seconds ago,
172.17.0.1[34.X.X.X]...35.X.X.X[35.X.X.X]
Here are some IP's for each:
Ireland
Public IP: 34.X.X.X
Private IP: 10.0.1.178
VPC CIDR Block: 10.0.0.0/16
London
Public IP: 35.X.X.X
Private IP: 10.10.1.187
VPC CIDR Block: 10.10.0.0/16
Docker (same for both)
Public IP: 172.17.0.1
VPC CIDR Block: 172.17.0.0/16
Ports open: 500 and 4500
I cannot figure out how to transfer files using scp from a Docker container on one instance to the Docker container on the other.
Make sure your Docker container has port 22 published on a host port that the other instance can reach (in your setup only 500 and 4500 are open); then you should be able to use
scp -P YOUR_PORT file USERNAME@AWS_IP:/file
As a workaround, you can also use docker cp to copy the file to the host machine, scp it to the other host, and then use docker cp again to copy it into the container.
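A rough sketch of that workaround, with hypothetical container names and paths (ireland-container and london-container stand for whatever docker ps shows on each host; ubuntu@10.10.1.187 assumes the default Ubuntu user and that the London private IP is reachable over the tunnel):
# on the Ireland host
docker cp ireland-container:/data/file.txt /tmp/file.txt
scp /tmp/file.txt ubuntu@10.10.1.187:/tmp/file.txt
# on the London host
docker cp /tmp/file.txt london-container:/data/file.txt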
List of things you need to do:
Enable the SSH port (e.g. 22) in both containers.
EXPOSE / forward the container port to a host machine port (e.g. 2200:22) on both machines.
Open the forwarded ports in the host machines' firewalls.
Now scp -P 2200 localfile.txt <london_user>@<london_public_ip>:<remote path>.
I have skipped the part where you configure password- or key-based authentication; there are already many resources on that.
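Putting the list together, a minimal end-to-end sketch might look like this (image name, user and paths are placeholders):
# on the London host: publish the container's SSH port on host port 2200
docker run -d -p 2200:22 --name ssh-target your-ssh-enabled-image
# from the Ireland side: copy the file; <london_user> is a user inside the London container,
# since host port 2200 is forwarded to the container's sshd
scp -P 2200 localfile.txt <london_user>@<london_public_ip>:/remote/path/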
